What's New in the eXo Platform 1.0 beta 4


Since our September article, many things have changed. The number of developers and users has dramatically increased. Big companies have already moved their development environment to the eXo platform, and smaller companies are about to use it in production.

By using an open source model we target developers and provide them with many features that make their everyday life easier. Our strategy clearly defines a bottom-up approach.

With this new version, we go even further and prove our compliance with the portlet API specification defined by the Java Community Process, in order to convince and assure managers and decision makers that using the eXo platform is a safe choice.

Furthermore, we maintain our efforts to ease developers' lives, and this new version comes with many new features and services, as well as an Eclipse plug-in.

Non-techie part


As soon as the specifications were final, we contacted Sun Microsystems for the Technology Compatibility Kit (TCK) software. This tool is a test suite composed of 372 tests.

The concept of the TCK is relatively simple. The licensee has to deploy several portlet applications (WARs) in its portal / portlet-container and use the TCK client (an HTTP client) to access them. The portlets interact with the portlet-container through the portlet API, and therefore test the compliance of the implementation.

The test suite documentation was impressive and in two days our team had set up the entire set. After the first test showed 71% compliance, it took us one week to be 100% compliant. We did not encounter any major design problems needing big refactorings, and the fixes were only small details.

The compliance is fundamentally important, as it certifies that portlets developed with the eXo platform can be deployed on any other compliant portal and vice versa. Indeed, as many features of the first versions - like hot deployment - really increase developers' productivity and reduce time to market, several large companies have decided to use the eXo platform in development stages. The certified compliance ensures that this is a good development choice. It is now our challenge to convince you that the eXo platform is fit for production.

Finally, sincere thanks to Sun Microsystems, and notably Adam Abramski, for their support and expedient responses.

Business model : dual licences, support and services

Like our code, our strategy is open : we communicate everything.

The eXo platform SARL is a commercial company distributing open source software (OSS) under the GNU/GPL licence. The company also provides commercial licences to Integration Service Vendors (ISVs) and end users. The "end user" licence adds many common warranties on the product we distribute, whereas any open source licence disclaims all responsibilities with the free software. The ISV licence allows integration companies to distribute their product bundled with the eXo platform without forcing them to use the GPL licence. Of course anyone is free to use the GPL licence and we do encourage it. But, as this sometimes is not possible we also propose other options. We enforce no restrictions on anyone, and truly believe this is what open source is about : Freedom.

We aim to delve deeper into the notion of "derivative work", as defined in the GPL licence. We have had many questions on that topic and have found that it is not well understood by our customers. Distributing the eXo platform with your own portlets that only communicate with the portal/portlet-container through the standard portlet API is possible. You do not have to use the GPL licence for them: this is not a derivative work of our software. But if those portlets use any non-standard extensions implemented by the eXo platform, such as filters, message listeners, services, etc., then this has to be viewed as "derivative work" and your portlets should also be distributed under the GPL licence.

Now, a few essentials about Intellectual Property (IP). Many open source committers are poorly informed on this very important point. In order to provide several licences for the same source code, the eXo platform SARL company requires a copy of the IP from every committer. This is a copy of the developer's rights, and the concept of a copy is important. When a developer commits code to the eXo platform project and provides a copy of his IP, he does not part with it. He may still do anything with his code, and use it in any other project under any other licence. It is his code and IP, and with the copy he grants the company those same rights.

In exchange for these rights we reward credits for all tasks, issues and bugs. Once a task is completed and unit tested the developer's account is credited. Every three months we distribute 75% of the net revenues from licence sales to all committers (individual or company) according to their rewarded credits. The eXo platform project is composed of several modules that each have a credit budget for a three month period. Each module is managed by a leader that defines the amount of points rewarded for each task within his module.

This innovative approach to collaboration has already attracted many developers and companies. There are presently 6 very active developers, 4 of them working full time, including weekends! Please, join the consortium!

The eXo platform SARL provides services and support for its products. For more detailed information please browse the www.exoplatform.com site.


The eXo platform is based on several projects that are also in their final development stages, such as Java Server Faces, Pico Container or jBPM (Java Business Process Management).

The JSF team is expected to release a new version by year's end, and a final one within the first quarter of 2004. Pico Container is in beta 3, with the next, final release expected soon. jBPM 1.0 is almost final and the current version (beta 5.2) is stable.

When all these products are out, we will release our first 1.0 version, probably during the first quarter of 2004. We may release a last beta version in the beginning of February. The next big step is then to support the OASIS WSRP standard.

Our intention is to challenge and compete with commercial solutions providing integrated Portal - Content Management Systems (CMS). There is still work to be done to tightly couple our workflow engine with our CMS repository, but our first version will be a viable open source alternative to very expensive, closed commercial solutions.

Development environment

The goal of this section is to introduce the reader to the portlet API by showing a small tutorial on how to create a Hello World portlet and to test it with the eXo platform. Then, we introduce the eXo platform Eclipse plug-in, a tool that improves developers' productivity while implementing portlets.

Your first Hello World portlet

A basic portlet should extend the GenericPortlet class from the portlet API. It provides several methods, like doView() and doHelp(), that are called by the render() method of GenericPortlet according to the current mode. The Hello World portlet code we will show is quite simple : it has dedicated behaviour for the NORMAL window state.

public class HelloWorldPortlet extends GenericPortlet {

  private static final String HELLO_TEMPLATE = "/WEB-INF/templates/html/HelloWorld.jsp";

  public void init(PortletConfig config) throws PortletException {
    super.init(config);
  }

  public void doView(RenderRequest request, RenderResponse response)
      throws PortletException, IOException {
    WindowState state = request.getWindowState();
    response.setContentType("text/html") ;
    if (state == WindowState.NORMAL) {
      Writer writer = response.getWriter() ;
      writer.write("<center><img src='/HelloWorld/images/hello-world.png'/></center>");
      writer.write("<center>Hello Portal World in View Mode</center>");
    }
    // the jsp template is included in every window state
    PortletContext context = getPortletContext() ;
    PortletRequestDispatcher rd = context.getRequestDispatcher(HELLO_TEMPLATE) ;
    rd.include(request, response);
  }
}

In the NORMAL window state we get a PrintWriter from the RenderResponse object and write directly into it. In every window state (including the normal one), we then use a PortletRequestDispatcher to include the content of a jsp page : HelloWorld.jsp. Obtaining the current state is very easy : request.getWindowState(). As the portlet API leverages the servlet API, the code is quite similar to what you write for a simple servlet. Even the init() method, which receives a PortletConfig object, has almost the same signature as the corresponding servlet one.

The jsp page does not change much either. To obtain the portlet objects you need to either use some tags defined in the API or simply get them as request attributes :

<%@ page import="javax.portlet.RenderRequest"%>
<%
  RenderRequest renderRequest = (RenderRequest) request.getAttribute("javax.portlet.request");
%>

<b>Hello</b> include in jsp in portlet<br>
portlet mode :
<%= renderRequest.getPortletMode().toString() %>

Each portlet application comes with a portlet.xml file located under the WEB-INF/ directory, next to the web.xml. It provides information such as the modes supported per markup language, some init parameters, the portlet class name, etc.

    <portlet>
      <description lang="EN">My First Hello World Portlet</description>
      <portlet-name>HelloWorld</portlet-name>
      <display-name lang="EN">Hello World</display-name>
      <portlet-class>HelloWorldPortlet</portlet-class>
      <portlet-info>
        <description>something to describe</description>
        <title>Hello World</title>
      </portlet-info>
    </portlet>

This XML file is a minimal one, but we do not need more information for our Hello World portlet. The web.xml file should also contain some basic information, such as the portlet application name :

     <display-name>HelloWorld</display-name>
     <description>
       This application is a portlet. It can not be used outside a portal.
     </description>

If you want to use the tag library, you also need to declare the taglib location in the web.xml file.
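For instance, the declaration could look like the following; the tld path below is an assumption, so use the location where you actually place the portlet tld in your WAR:

```xml
<!-- the taglib-location path is illustrative -->
<taglib>
  <taglib-uri>http://java.sun.com/portlet</taglib-uri>
  <taglib-location>/WEB-INF/tld/portlet.tld</taglib-location>
</taglib>
```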

That's all, we have our first portlet. We now need to make a WAR and deploy it into our Tomcat webapps/ directory.

There are several ways to visualize your portlet. The usual way is to access the basic portal page, log in, and finally use the customizer portlet to add the Hello World portlet somewhere in your portal page. The basic URL for this main portal page access is : /exo/faces/public/portal.jsp?_ctx=community (refer to the last section where we define that URL more precisely). Of course, this is quite a long process while you develop; therefore we introduced a new development page where you can visualize a portlet located in a well known portlet application : /exo/faces/public/portlet.jsp?_ctx=anonymous&portletName=HelloWorld/HelloWorld. The portletName parameter is composed like this : portletApplicationName + "/" + portletName.

Figure 1. The Hello World portlet

The Hello World portlet

The Eclipse plugin

The ultimate goal of this plugin is to provide eXo platform application developers with a rich set of tools such as wizards, editors, and views that integrate with the Eclipse platform. This leverages the existing capabilities of the Eclipse platform and its JDT tooling while providing specialized tools within Eclipse to help the eXo platform developer community.

The first beta release of the plugin targets a specific set of eXo platform developers: the portlet developers. Building on our experience with developing the Pluto plugin, we identified a simple development cycle that most portlet developers follow. It starts with using a wizard to create a Java web application that contains one or more portlets. Next, you write the source code using the Eclipse Java editor. Then, you package and deploy the web application using the Deploy Portlets action provided by the eXo plugin. You then start the eXo portal and test your portlet. Finally, you go back to the source code for further editing, and the whole cycle (except the project creation step) is repeated until the portlet functionality is complete.

This release of the plugin comes with three main tools to be used during the development cycle identified above:

  • A portlet project wizard

    This wizard creates a project with all the essential files and directory structures that are common among any portlet project. You also can specify the source folder name, the name of the folder that contains the web content (such as jsp files), and the context root to use when deploying the application. One feature that we particularly like is the ability to start with a sample project. Currently, the wizard comes with one sample application (you guessed it, it is a HelloWorld sample). However, expect to see much more interesting sample projects in future releases of the plugin. The following figures show the three pages that represent the wizard.

    Figure 2. The project settings page (part 1 of 3)

    The project settings page (part 1 of 3)

    Figure 3. The sample projects page (part 2 of 3)

    The sample projects page (part 2 of 3)

    Figure 4. The deployment settings page (part 3 of 3)

    The deployment settings page (part 3 of 3)
  • A web application settings property page

    For each java project, the plugin provides a property page that contains information related to deployment, such as the context root and the deployment directory. The information presented in this page is used later by the Deploy Portlet action. The following figure shows the property page.

    Figure 5. The web application settings property page

    The web application settings property page
  • Deploy Portlet action

    You can access this action either via the menu bar or by using the default key shortcut (Ctrl+Shift+D). This action takes care of packaging the portlet project and copying the result to the deployment directory. Figure 6 shows the action in the menu bar.

    Figure 6. The deploy portlet action

    The deploy portlet action

How to build the platform

Properties file : To build the eXo platform you need to create a build.properties file in the ExoBuild module. You can create this file by following these steps.

  • Copy local.properties.sample in ExoBuild/build-props to local.properties and customize it according to your environment. This file contains information such as the repository of the eXo project and the developer info. If you are not an eXo platform developer, you only need to edit the base.dir and jdk properties. By default, the jdk property is set to use JDK version 1.4 or later.
  • ${platform}.properties file : we currently support the jboss, tomcat and jetty platforms. If your server is not installed in ${base.dir}/exo-tomcat or ${base.dir}/exo-jboss, you need to customize this file. Usually you should change the server.dir property to the location of your jboss, tomcat or jetty installation. Other properties such as deploy.dir or lib.dir are computed based on the server.dir property.
  • common.properties contains the directory structure and database info of the eXo platform. See the comments on each property in this file for more information. We currently support HSQL, MySQL and DB2. Oracle support is on the way.
  • Concatenate the 3 files local.properties, common.properties and ${platform}.properties to create the build.properties file after you have updated the properties. You can also use the ant tasks prepare.jboss, prepare.tomcat or prepare.jetty to create this build.properties file.
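As a sketch, a resulting build.properties might contain entries like the following; all values are illustrative and must be adapted to your environment:

```properties
# illustrative values only; adapt to your own environment
base.dir=/home/dev/exo
jdk=1.4
server.dir=${base.dir}/exo-tomcat
```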

Ant tasks : The eXo platform contains many modules. Each module can depend on other modules, so the build must proceed in a specific order. The current order is ExoCommons, ExoServicesContainer, ExoServicesAPI, ExoServer, ExoServices, ExoPortal and ExoPortlets. Once you have run ant build.exo.portal, you can start modifying code in each module and run ant deploy locally. Note that what you modify may affect other modules, so you may need to recompile and redeploy the dependent modules as well.

  • ant prepare.jboss : this task creates a build.properties file by concatenating the 3 files local.properties, common.properties and jboss.properties
  • ant prepare.tomcat : this task creates a build.properties file by concatenating the 3 files local.properties, common.properties and tomcat.properties
  • ant build.exo.portal: This task will call many other sub tasks in build-script and other modules. Some important sub tasks are:
    • prepare tasks : prepare.tomcat and prepare.jboss, these 2 tasks create the portlet deploy dir and the services deploy dir. They copy the missing jar files and overwrite some configuration files of jboss and tomcat.
      <ant antfile="${exo_build.dir}/build-script/platform-prepare-task.xml"
       target="prepare.tomcat" inheritall="false"/>
      <ant antfile="${exo_build.dir}/build-script/platform-prepare-task.xml"
       target="prepare.jboss" inheritall="false"/>

      Note that prepare.jboss and prepare.tomcat check for the jboss.version and tomcat.version properties in the build.properties file and execute only if the property is present.

    • modules deploy task : you will find many ant calls
      <ant antfile="${exo_commons.dir}/build.xml" target="deploy" inheritall="false"/>
      <ant antfile="${exo_irc.dir}/build.xml" target="deploy" inheritall="false"/>
      <ant antfile="${exo_services_container.dir}/build.xml" target="deploy" inheritall="false"/>
      <ant antfile="${exo_services_api.dir}/build.xml" target="deploy" inheritall="false"/>
      <ant antfile="${exo_services.dir}/build.xml" target="deploy" inheritall="false"/>
      <ant antfile="${exo_portal.dir}/build.xml" target="deploy" inheritall="false"/>
      <ant antfile="${exo_portlets.dir}/build.xml" target="deploy" inheritall="false"/>

      Each module has a build.xml and a deploy target. The deploy target usually compiles the code of the module, packages it and deploys the jar files to one of the following directories : ${exo-core-lib.deploy.dir}, ${exo-portal-lib.deploy.dir}, ${service.deploy.dir} and ${portlet.deploy.dir}.

      In the ExoPortlets and ExoServices modules, you will find many services and portlets. Each service and portlet has its own build.xml. There are also usually two other included files : common-service-build.xml and common-portlet-build.xml. The common build files define common tasks such as clean, compile, package, classpath... In the build.xml, you only need to customize the service or portlet name and the deploy target.

  • ant developer.update : this calls the developer.update task in build-script/cvs-task.xml. You need an ssh client and a bash shell to run this task. It may work in other environments as well, but we have never tested it. You need to run "ssh-agent bash" and "ssh-add Public_key.txt" before running this task. Please refer to the sourceforge cvs documentation for information on how to configure ssh and upload the key to your account.
  • ant test.all : this task calls the test.all target in each module and generates a unit test report in ExoBuild/reports/junit. You can find the test report in html format in ExoBuild/reports/junit/html. To run this task, you need to update build.properties and change the server.type property to server.type=standalone. This tells the eXo modules to use mock objects and mock services during the tests.
  • ant clean.deploy : deletes the portlet deploy dir, temp dir and work dir.
  • ant clean.modules : deletes the build dir, temp dir and dist dir in all the modules.
  • ant clean.all : clean.deploy + clean.modules

Now that's hardcore...

Specifications extensions

In our previous article, we focused on the architecture and choices we had to make while we were building the core of the platform. We also showed that open source software drives innovation and never stops where standards do.

First, the eXo platform supports most of the non-mandatory features and suggestions defined in the specifications, such as :

  • Caching : Each portlet's content can be cached in a per-user map to reduce portal page creation time. The implementation of this feature uses Aspect Oriented Programming (AOP) and the AspectJ language. At build time we weave several aspects, as described in the previous article, into the class that calls the portlet instances; the cache aspect is one of these. When the cache is enabled, the advice checks whether the content has already been generated. If so, we return it directly; if not, we move on to the next aspect. The cache is discarded when the processAction method is called or when the expiration period has elapsed.

    The portlet API specification lets you define caching in the portlet.xml file :
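    For example, the following declares a five-minute cache for the portlet (expiration-cache is the standard element of the portlet API deployment descriptor):

```xml
<portlet>
  <portlet-name>HelloWorld</portlet-name>
  <!-- cache the generated content for 300 seconds -->
  <expiration-cache>300</expiration-cache>
</portlet>
```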


    You can use several values :

    • -1 means that the cache never expires
    • 0 means that the cache is disabled
    • n is the number of seconds after which the cache is discarded.

    The cache can be completely disabled by defining it in the portlet-container.xml configuration file :


    Note that we also cache PortletPreferences objects once they have been extracted from the underlying storage system, to avoid a call to the database on each portlet request.
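    Stripped of the AspectJ weaving, the cache advice described above behaves like the following sketch; the class and method names are ours, not the eXo implementation:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of the cache advice: names are hypothetical,
// the real eXo implementation weaves this logic in with AspectJ.
class PortletContentCache {

  interface PortletRenderer { String render(); }

  private final Map<String, String> cachePerUser = new HashMap<String, String>();

  // Returns the cached markup when present, otherwise renders and caches it.
  public String render(String userId, String portletId, PortletRenderer next) {
    String key = userId + "/" + portletId;
    String cached = cachePerUser.get(key);
    if (cached != null) {
      return cached;                   // cache hit: short-circuit the aspect chain
    }
    String content = next.render();    // cache miss: move on to the next aspect
    cachePerUser.put(key, content);
    return content;
  }

  // The cache entry is discarded when processAction() is called on the portlet.
  public void invalidate(String userId, String portletId) {
    cachePerUser.remove(userId + "/" + portletId);
  }
}
```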

  • Portlet filters : filters are on the list, in the suggestions chapter, for the next version of the portlet specification. We have implemented this feature, reproducing the servlet filter API as closely as possible. The interfaces used are very similar to the servlet ones.

    Figure 7. The Portlet Filters extension

    The Portlet Filters extension

    The main difference is that filters are defined per portlet. To add a filter, the portlet developer must declare it in the portlet.xml file.
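    As an illustration (filters are an eXo extension, so the element names below are assumptions rather than the exact eXo schema), a declaration might look like :

```xml
<!-- hypothetical syntax: the filter element names are assumptions -->
<portlet>
  <portlet-name>HelloWorld</portlet-name>
  <filter>
    <filter-name>LoggerFilter</filter-name>
    <filter-class>LoggerFilter</filter-class>
  </filter>
</portlet>
```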


    For example, it is good practice to use such filters to log access to portlets with non-intrusive code. The class would look like :

    public class LoggerFilter implements PortletFilter {

      public void init(PortletFilterConfig portletFilterConfig) throws PortletException {
        // initialization work goes here; init may throw a PortletException on failure
      }

      public void doFilter(PortletRequest portletRequest,
                           PortletResponse portletResponse,
                           PortletFilterChain filterChain)
          throws IOException, PortletException {
        // do something before the portlet is reached
        filterChain.doFilter(portletRequest, portletResponse) ;
        // do something after the portlet is reached
      }

      public void destroy() {
      }
    }

    The filter chain is created from the list of filters defined in the portlet.xml file. The implementation here is also done with an AspectJ aspect. After every other aspect has been processed, the filter aspect launches the portlet filters in a recursive way.
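    The recursion described above can be sketched like this, with simplified interfaces and illustrative names (the real chain is driven by an AspectJ aspect):

```java
import java.util.List;

// Minimal sketch of a recursive filter chain; interfaces are simplified
// and names are illustrative, not the eXo implementation.
class SimpleFilterChain {

  interface Filter {
    void doFilter(StringBuilder request, SimpleFilterChain chain);
  }

  private final List<Filter> filters;
  private final Runnable portlet;   // stands in for the actual portlet call
  private int position = 0;

  SimpleFilterChain(List<Filter> filters, Runnable portlet) {
    this.filters = filters;
    this.portlet = portlet;
  }

  // Each filter calls chain.doFilter(...) to reach the next one;
  // when the list is exhausted, the portlet itself is invoked.
  void doFilter(StringBuilder request) {
    if (position < filters.size()) {
      filters.get(position++).doFilter(request, this);
    } else {
      portlet.run();
    }
  }
}
```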

  • Portlet inter-communication : this feature is also a suggestion for the next version of the specifications. It lets a portlet send an event to another one, or broadcast the event within the scope of the portlet application. This messaging mechanism may, for instance, be used when a user clicks on a node of a file tree included in a portlet: an event can then be sent to another portlet that modifies its state and renders, for example, the content of the node.

    Here again, we extend the specifications' portlet.xml file and ask the developer to reference a MessageListener class.

        <!-- hypothetical syntax: the listener element names are assumptions -->
        <message-listener>
          <description>a simple example</description>
          <listener-class>SimpleMessageListener</listener-class>
        </message-listener>

    By convention, the portlet that sends the Message object must be aware of the type of the objects the listener can receive.

    Figure 8. The Portlet Inter-Communications extension

    The Portlet Inter-Communications extension

    This mechanism can only occur in a processAction method call, in other words before any render method is called, so that the state of the portlets remains consistent.

    public class SimpleMessageListener implements MessageListener {
      public void messageReceived(MessageEvent messageEvent) throws PortletException {
        DefaultPortletMessage message = (DefaultPortletMessage) messageEvent.getMessage();
        System.out.println("Message received in listener : " + message.getMessage());
      }
    }

    To be able to send a message, you need to cast the PortletContext object to the ExoPortletContext interface, which extends the standard one.

    public class PortletThatSendsMessage extends GenericPortlet {

      public void processAction(ActionRequest actionRequest, ActionResponse actionResponse)
          throws PortletException, IOException {
        ExoPortletContext context =
            (ExoPortletContext) actionRequest.getPortletSession().getPortletContext();
        // the target portlet name and the exact send() signature are illustrative
        context.send("HelloWorld/MessageListener",
                     new DefaultPortletMessage("message sent"), actionRequest);
        actionResponse.setRenderParameter("status", "Everything is ok");
      }

      public void render(RenderRequest renderRequest, RenderResponse renderResponse)
          throws PortletException, IOException {
        ExoPortletContext context =
            (ExoPortletContext) renderRequest.getPortletSession().getPortletContext();
        // sending from render() is not allowed and throws a PortletException
        context.send("HelloWorld/MessageListener",
                     new DefaultPortletMessage("message sent"), renderRequest);
        PrintWriter w = renderResponse.getWriter();
        w.println("Everything is ok");
      }
    }

    Note that this code is extracted from our unit test set. The send performed in the processAction() method executes as expected, while the call made within the render() method throws a PortletException.

  • Declarative security : the specifications only define programmatic security. We have added declarative capabilities within our portlet framework: you can define J2EE roles for a portlet, for each mode, to protect access to its content. Here is a sample of our portlet framework controller XML file :

        <action name="ListUser" class="exo.portal.portlets.user.ListUser">
          <forward name="success" page="user/ListUser.jsp"/>
          <forward name="error" page="Error.jsp"/>
        </action>
        <action name="SaveUserInfo" class="exo.portal.portlets.user.SaveUserInfo">
          <!-- the role name below is illustrative -->
          <require-role>admin</require-role>
          <forward name="success" page="user/UserInfo.jsp"/>
          <forward name="error" page="Error.jsp"/>
        </action>

    The require-role tag lets you define which J2EE roles, declared in the web.xml, are needed to execute that action. If you are accustomed to MVC type 2 web frameworks such as Struts or Webwork, this syntax should be easy to grasp. If you want more information, look up our previous article or read the portlet source code.

    This feature may be added as an extension of the portlet.xml in the near future.

We have built the portlet container as a real open source production choice and have therefore added several features to improve response time and memory management :

  • Pooling : the portlet specification is built on top of the servlet specification. Therefore PortletRequest, PortletResponse and PortletSession objects, and many wrappers, must be created for each portlet call. When you have a large number of portlets within a page, and many concurrent requests, the number of objects to instantiate can be very large. To avoid this overhead we use pooled objects.

    When a request comes to the portlet container, we borrow portlet objects and wrappers from the pool, fill them with the incoming information and call the portlet instance. When the portlet has generated its content and the request goes back to the portal, we release the pooled objects.
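    The borrow/release cycle just described can be sketched with a minimal pool; this is an illustration under assumed names, not the eXo implementation:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch of the borrow/release cycle used for request wrappers.
// Class and method names are illustrative.
class WrapperPool<T> {

  interface Factory<T> { T create(); }

  private final Deque<T> pool = new ArrayDeque<T>();
  private final Factory<T> factory;

  WrapperPool(Factory<T> factory) { this.factory = factory; }

  // Borrow a pooled instance, creating one only when the pool is empty.
  synchronized T borrow() {
    T instance = pool.poll();
    return instance != null ? instance : factory.create();
  }

  // Return the instance so the next request can reuse it.
  synchronized void release(T instance) {
    pool.push(instance);
  }
}
```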

    You can configure the number of objects in the pool using the portlet-container.xml file :


  • Support of shared sessions : some application servers, such as IBM WebSphere or Tomcat (4.1.29) in its default mode, support shared sessions. This simply means that when a request is dispatched to another web or portlet application context (corresponding to a WAR), the session of the first context is propagated to the context the request is dispatched to.

    This behaviour is not the one defined by the servlet specifications, which impose the creation of a new session per context :

    HttpSession objects must be scoped at the application (or servlet context) level. The underlying mechanism, such as the cookie used to establish the session, can be the same for different contexts, but the object referenced, including the attributes in that object, must never be shared between contexts by the container.

    To illustrate this requirement with an example: if a servlet uses the RequestDispatcher to call a servlet in another Web application, any sessions created for and visible to the servlet being called must be different from those visible to the calling servlet.

    To reduce the number of session objects we decided to also support this mode, and even to make it the default one. Of course we tested our implementation against the TCK test suite, and we claim compliance for this mode too.

    This feature required substantial work and imagination, as the portlet API imposes a unique session per portlet application. We therefore had to partition the main session with encoded attributes to simulate independent session objects.

    You may configure the use of shared sessions in the portlet-container.xml file. Note that you also need to configure the underlying servlet container to the same session mode.
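    The attribute-encoding trick described above can be pictured with this sketch; the encoding scheme and names are illustrative, not the actual eXo ones:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of partitioning one shared session into per-application
// namespaces by encoding the context name into the attribute key.
// The encoding scheme shown is illustrative only.
class PartitionedSession {

  private final Map<String, Object> sharedSession = new HashMap<String, Object>();

  private String encode(String contextName, String attribute) {
    return contextName + "#" + attribute;
  }

  void setAttribute(String contextName, String attribute, Object value) {
    sharedSession.put(encode(contextName, attribute), value);
  }

  // Each portlet application only sees the attributes of its own partition.
  Object getAttribute(String contextName, String attribute) {
    return sharedSession.get(encode(contextName, attribute));
  }
}
```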


  • Portlet lazy loading : to manage memory resources, we instantiate and init portlets only when they are called for the first time.

    Here, we leverage pico-container capabilities. When portlets are deployed in the container, we reference them in what we call the ServiceManager. This is a singleton wrapper around a pico container instance.

    Figure 9. The ServiceManager class diagram

    The ServiceManager class diagram

    When the portlet is called, we obtain it through the ServiceManager's getService(portletKey) method, which underneath calls the pico container; the container instantiates the portlet if it is not already cached, and returns it.
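    Stripped of the pico container wiring, the lazy lookup amounts to the following sketch (the names are illustrative, not the eXo API):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of lazy portlet instantiation behind a registry lookup;
// the real ServiceManager delegates to a pico container instance.
class LazyPortletRegistry {

  interface PortletFactory { Object createAndInit(); }

  private final Map<String, PortletFactory> registered = new HashMap<String, PortletFactory>();
  private final Map<String, Object> instantiated = new HashMap<String, Object>();

  // Called at deployment time: only the factory is stored.
  void register(String portletKey, PortletFactory factory) {
    registered.put(portletKey, factory);
  }

  // Called on first access: the portlet is instantiated and cached.
  synchronized Object getService(String portletKey) {
    Object portlet = instantiated.get(portletKey);
    if (portlet == null) {
      portlet = registered.get(portletKey).createAndInit();
      instantiated.put(portletKey, portlet);
    }
    return portlet;
  }
}
```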

Technology bridges

In the previous article we presented our custom portlet framework as a mix of Struts and Webwork, but dedicated to portlets' behaviour, which is quite unique. We developed many portlets with it, and we really recommend its use when you start your portlets from scratch.

We had a number of questions about how to use existing servlet applications within the context of portlets. These requests were quite recurrent for Struts, and significant for Cocoon as well.

Struts 1.2's main goal is to support portlets, but that will not make existing servlets work in any portal without rewriting. That extension of the framework is not the same as the bridge we have developed. By adding a layer between the portal and any existing Struts application within the portlet, we allow any existing Struts application to be embedded in a portlet with a minimal amount of change. Note that this portlet bridge is eXo platform dependent, as it consists of two phases : (1) obtaining the servlet objects from the portlet objects (using custom casts) and (2) rewriting the URLs that the Struts application generates, so that the portal can find the correct portlet application and portlet that embed the Struts framework.

Let's take a simple Struts application deployed as a servlet and make a portlet out of it. As the produced markup is embedded in a portal page, you need to remove all the header and footer tags from the Struts jsp pages. For the same reason, replace all the forward() calls by include() calls. Finally, if you have any hardcoded URL in your jsp pages (shame on you), just use the encodeURL() method if you don't already. As most Struts applications use the html:link tag, this should not be a problem. The rest of the application stays the same.

Now we need to use a custom portlet which we define in the portlet.xml :

<?xml version="1.0" encoding="UTF-8"?>

<portlet-app xmlns="http://java.sun.com/xml/ns/portlet/portlet-app_1_0.xsd" version="1.0">
  <portlet>
    <description lang="EN">Struts application</description>
    <portlet-name>StrutsExample</portlet-name>
    <display-name lang="EN">StrutsExample</display-name>
    <!-- the package name of the bridge class is illustrative -->
    <portlet-class>exo.portal.portlets.ExoStrutsPortlet</portlet-class>
  </portlet>
</portlet-app>


The portlet-class element is the most important one. Indeed, it is the ExoStrutsPortlet object that is the true bridge implementation. Don't worry, we will not get into the code of this class. The next code is the index.jsp page of the Struts basic example. We have not removed the header and footer tags (because it still works), but you should.

<%@ page contentType="text/html;charset=UTF-8" language="java" %>
<%@ taglib uri="/WEB-INF/struts-bean.tld" prefix="bean" %>
<%@ taglib uri="/WEB-INF/struts-html.tld" prefix="html" %>
<%@ taglib uri="/WEB-INF/struts-logic.tld" prefix="logic" %>

<html:html locale="true">
<head>
<title><bean:message key="index.title"/></title>
</head>
<body bgcolor="white">

<logic:notPresent name="database" scope="application">
  <font color="red">
    ERROR:  User database not loaded -- check servlet container logs
    for error messages.
  </font>
</logic:notPresent>

<logic:notPresent name="org.apache.struts.action.MESSAGE" scope="application">
  <font color="red">
    ERROR:  Application resources not loaded -- check servlet container
    logs for error messages.
  </font>
</logic:notPresent>

<h3><bean:message key="index.heading"/></h3>
<ul>
<li><html:link page="/editRegistration.do?action=Create">
 <bean:message key="index.registration"/></html:link></li>
<li><html:link page="/logon.jsp"><bean:message key="index.logon"/></html:link></li>
</ul>

<p>
<html:link page="/tour.do">
<font size="-1"><bean:message key="index.tour"/></font>
</html:link>
</p>

<html:img page="/struts-power.gif" alt="Powered by Struts"/>

</body>
</html:html>



As you can see, nothing was changed from the original index.jsp. We can now deploy the portlet WAR into the portal.

Figure 10. The Struts portlets deployment WAR directories

The Struts portlets deployment WAR directories

Finally, we can launch the portlet using our development page (new in beta 4); note the URL in the browser screenshot:

Figure 11. The Struts portlets

The Struts portlets

The same work can be done for any existing framework. We will now focus on the JavaServer Faces (JSR 127) bridge.

The support of JSF within a portlet is a new feature of the eXo platform. We solved many intriguing problems in the process of building a faces portlet able to host existing JSF-based projects.

There are two well known implementations of JSF: the Sun JSF RI and the open source MyFaces project. The eXo platform supports both of them; you may choose to compile the platform against the one you prefer. Most of our code is independent of the JSF implementation, but there are some differences between the Sun Reference Implementation and MyFaces. By the production release, we aim to make the platform independent of which JSF implementation you want to use.

  • Portlet/Servlet : Similarities and Differences

The Servlet and Portlet interfaces have a lot of similarities. Both sets of interfaces define Request, Response, Session, Config and Context objects. In each case, you can use the request interface to retrieve request parameters and set attributes, and the response interface to send the response back to the client. You can also use the config interface to read the application configuration and the context interface to access the container...

However, servlets and portlets are not the same. The biggest change introduces an MVC design directly into the portlet: the portlet defines 2 phases of execution, processAction(..) and render(..). The Request and Response interfaces are also defined per execution phase: ActionRequest and ActionResponse for the action phase, and RenderRequest and RenderResponse for the render phase. Therefore, the current JSF implementations do not support portlets out of the box. However, JSF technology has a very flexible and well thought out design.
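    The two-phase contract can be illustrated with a self-contained sketch using plain Java stand-ins (NOT the javax.portlet interfaces): state changes belong in the action phase, while the render phase only produces markup and may be invoked any number of times:

```java
import java.util.ArrayList;
import java.util.List;

// Self-contained illustration of the portlet two-phase execution model using
// plain Java stand-ins (NOT the javax.portlet API): processAction(..) mutates
// state once, then render(..) may be called as often as the portal needs.
public class TwoPhaseDemo {

  public static class CounterPortlet {
    private int count = 0;

    // action phase: handle the user interaction and change state
    public void processAction(String action) {
      if ("increment".equals(action)) {
        count++;
      }
    }

    // render phase: produce the markup fragment, no state change
    public String render() {
      return "<p>count = " + count + "</p>";
    }
  }

  public static void main(String[] args) {
    CounterPortlet portlet = new CounterPortlet();
    portlet.processAction("increment");   // action phase runs first...
    List<String> fragments = new ArrayList<>();
    fragments.add(portlet.render());      // ...then the portal may render the
    fragments.add(portlet.render());      // portlet repeatedly (page reloads, etc.)
    System.out.println(fragments);
  }
}
```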

  • The JSF Implementation

As you know, JSF has 7 phases of execution controlled by the FacesServlet. The FacesContext contains all of the per-request state information related to the processing of a single JavaServer Faces request and the rendering of the corresponding response. It is given to, and potentially modified by, each phase of the request processing lifecycle. The ExternalContext contains the context, request and response objects of the request lifecycle; those objects can be of either portlet or servlet types. The following pseudo code illustrates how the FacesServlet works and how the faces context is created:

    class FacesServlet extends HttpServlet {
      public void init(ServletConfig servletConfig) throws ServletException {
        // do some initialization here, such as creating the FacesContext and Lifecycle factory objects
      }

      public void service(HttpServletRequest request, HttpServletResponse response) {
        // create the faces context instance based on the servlet context, request and response.
        // The faces context will create the ExternalContext object and pass the servletContext,
        // request and response to the new ExternalContext. You can access those objects anywhere
        // in the 7 phases of execution by calling:
        //   HttpServletRequest request = (HttpServletRequest) FacesContext.getCurrentInstance().getExternalContext().getRequest();
        //   HttpServletResponse response = (HttpServletResponse) FacesContext.getCurrentInstance().getExternalContext().getResponse();
        FacesContext facesContext = facesContextFactory.getFacesContext(servletContext, request, response) ;
        // get the lifecycle object
        Lifecycle lifecycle = lifecycleFactory.getLifecycle() ;
        // the lifecycle executes 7 phases:
        // 1) Reconstitute Request Tree
        // 2) Apply Request Values (decode)
        // 3) Handle Request Events
        // 4) Process Validations
        // 5) Update Model Values
        // 6) Invoke Application
        // 7) Render Response
        lifecycle.execute(facesContext) ;
        // release resources, such as the request, the response and the current context
        // associated with the current thread
        facesContext.release() ;
      }
    }

You can access the context, request and response objects at any place by obtaining the current faces context object. You just need to get the external context and then extract the request or response objects out of it. However, when you write a JSF application or component, you don't know whether it will be deployed in a servlet or a portlet environment, so you should not call getRequest() and getResponse() and cast the returned objects.

The ExternalContext interface comes with a set of methods that help you abstract the current environment. For example, to obtain a request parameter, you can use externalContext.getRequestParameterMap().get(key) instead of ((HttpServletRequest)externalContext.getRequest()).getParameter(key). Please check the ExternalContext API for more details and the available set of methods.
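    The idea can be demonstrated with a minimal stand-in for ExternalContext (a plain Java sketch, NOT the real javax.faces.context API): code that reads the request parameter map works unchanged whether the map was populated from a servlet or a portlet request:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal stand-in for the ExternalContext idea (NOT the real javax.faces.context
// API): components read request parameters through an environment-neutral map
// instead of casting getRequest() to HttpServletRequest or PortletRequest.
public class ExternalContextSketch {

  public interface RequestContext {
    Map<String, String> getRequestParameterMap();
  }

  // a servlet or portlet bridge would populate this map from its own request type
  public static class SimpleRequestContext implements RequestContext {
    private final Map<String, String> params = new HashMap<>();

    public SimpleRequestContext set(String key, String value) {
      params.put(key, value);
      return this;
    }

    public Map<String, String> getRequestParameterMap() {
      return params;
    }
  }

  // environment-independent component code: no servlet or portlet cast anywhere
  public static String readParameter(RequestContext ctx, String key) {
    return ctx.getRequestParameterMap().get(key);
  }

  public static void main(String[] args) {
    RequestContext ctx = new SimpleRequestContext().set("name", "eXo");
    System.out.println(readParameter(ctx, "name"));
  }
}
```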

  • Our modifications

Now that you have an overview of how JSF works in the servlet environment, you should see that in order to support JSF within portlet technology, we only need to replace the FacesContext, ExternalContext and FacesContext factory implementations with custom portlet implementations.

    Figure 12. The request process in the JSF bridge

    The request process in the JSF bridge

As we mentioned above, the portal controller delegates (1) the request to the portlet container where the faces portlet is located. The portlet replaces (2) the FacesContext instance with a new one and obtains the lifecycle object before executing (3) it. When the new JSF lifecycle has finished its work (4), the portlet restores the previous state (5) and returns (6) an Output object to the portal WAR.

The next JSP page shows how simple it is to introduce JSF code inside a portlet:

    <%@ taglib uri="http://exoplatform.org/jsf/custom" prefix="x" %>
    <%@ taglib uri="http://java.sun.com/jsf/core" prefix="f" %>
    <%@ taglib uri="http://java.sun.com/jsf/html" prefix="h" %>
    <%@ page import="framework.bean.HelloBean"%>
      <div>Faces, Welcome to eXo Platform, this is a test</div>
        <x:form id="facesForm" formName="facesForm" method="POST">
          <h:input_hidden id="actionName" value="HelloFacesAction" />
          <h:output_text id="label_name" value="Please enter your name:<br>" />
          <h:input_text id="input_name" valueRef="hello.name" />
          <h:command_button id="submit" type="submit" commandName="submit" label="Submit" actionRef="hello.submit"/>
        </x:form>

Do not focus on the eXo custom taglib; we have just added several useful reusable tags. The HelloBean is a JSF managed bean defined in the faces-config.xml file:

        <managed-bean>
          <description>Hello Bean</description>
          <managed-bean-name>hello</managed-bean-name>
          <managed-bean-class>framework.bean.HelloBean</managed-bean-class>
          <managed-bean-scope>session</managed-bean-scope> <!-- scope assumed; elided in the original listing -->
          <managed-property>
            <property-name>name</property-name>
            <value>Enter your name here</value>
          </managed-property>
        </managed-bean>

The output of this small portlet can be seen in the next screenshot, or by using the URL /exo/faces/public/page.jsp?_ctx=anonymous&portletName=HelloFacesPortletFramework/HelloFacesPortletFramework with your eXo platform distribution, as this portlet is bundled with it.

    Figure 13. A simple JSF Hello World portlet

    A simple JSF Hello World portlet

Let's finish with the configuration needed in the faces portlet WAR.

  • JSF Factory Objects. The JSF factory objects are configured in 'WEB-INF/classes/faces.properties', which must be included in every portlet WAR so that your web application runs within the eXo platform.
    javax.faces.render.RenderKitFactory=com.sun.faces.renderkit.RenderKitFactoryImpl
    # the other factory entries, including the portlet lifecycle factory
    # (...PortletLifecycleFactoryImpl), are elided in the original listing
  • The faces-config.xml should be configured as usual - there is nothing special to it. In web.xml, however, you must add a servlet context listener.
    When the servlet context is initialized, it configures the JSF factory to use FacesPortletViewHandler. Aside from that, the normal JSF configuration is sufficient.
  • In portlet.xml you must configure the faces portlet class to be exo.portal.portlet.ExoFacesPortlet or your custom descendant of it. In addition, you should declare the faces URL mapping:
      <description>Faces Mapping URL</description>
    If it is NOT specified, the default /faces URL prefix is used for each dispatched request.

Finally, note that we are also making this bridge available for Cocoon, as our previous version required some manual rewriting of URLs using PortletURLs.

The goal of those bridges is to reuse existing frameworks and applications within a portlet of the eXo platform. If you are interested in using a servlet application based on a framework within the scope of a portlet, please contact us.

Discover JavaServer Faces: the portal design

Let's describe how our JavaServer Faces based portal works. This section gives an overview of the eXo portal architecture, how we use JSF, and the way the portal interacts with the portlet container. We focus on the request lifecycle, which goes through the filter phase, the reconstitute request tree phase, the decode phase and, finally, the render phase.

As we mentioned above, the eXo portal is composed of many portlet WARs and a master eXo portal WAR. The eXo portal module is responsible for checking security, decoding the request, loading the user configuration, building the JSF tree, and dispatching the request to the portlet container. When all the portlets located on the requested page have been rendered, the portal returns an aggregated page to the client.

The lifecycle of the eXo portal can be seen as:

Figure 14. The eXo portal lifecycle

The eXo portal lifecycle

Note that the eXo portal does not use the entire JSF lifecycle; therefore, some phases are not shown in the diagram.

Portal initialization

When the web server is started or the eXo web archive (WAR) is deployed, the PortalContextListener (you can find the configuration of this listener in web.xml) catches the start event and runs the code that checks the eXo tables, the default groups and the default users. If the anonymous and admin users are missing, the listener looks up the organization service and the cms service to create the missing users and a home directory for each of them in the cms. Also note that the default configuration XML file for the page layout (exo/WEB-INF/conf/user-pages.xml) is copied to the user's home directory in the cms repository.

Portlet Initialization

When the web server is started or a portlet WAR is dropped into the portlet deployment directory, the PortletApplicationListener (you can find the configuration of this listener in portlet.war/WEB-INF/web.xml or default-web.xml) catches the start event, looks up the portlet container service and registers the application with the portlet container if a portlet.xml file is present.

Request processing

After the server has started and all the initialization is done, the portal is ready to receive its first request. There is a very simple rule for handling requests: any dynamic request must go through the portal and be treated by the filter, the JSF servlet controller and the portlet container. A static resource request, such as an image file, can be handled by the default servlet of the portal or of the portlet, depending only on the context path of the request.

Before getting deeper into the request processing mechanism, let us define the notion of “user context”. The eXo portal defines 2 types of users for a session: the user context and the remote user (request.getRemoteUser()). The user context is the page configuration and profile of a user; the remote user is the user who visits the page. In most cases, when a remote user logs in, the remote user and the user context are the same and the remote user uses the private link. In this case, the remote user has all the admin rights on the user context: he can view all the pages and customize pages.xml using the customizer portlet. But a remote user can also be logged in and visit another user context using public links; in that case, the remote user can only view the public pages of that user context. Finally, note that with the eXo portal you can turn any user context into the default home page by making some pages of that user public and mapping the home page to that user's URL in web.xml.

  • Filter phase. When a request is sent to the portal, it is first handled by either a PublicRequestFilter or a PrivateRequestFilter (you can find the configuration of those 2 filters in exo/WEB-INF/web.xml) depending on the URL type. We currently define 2 types of URL, public and private. The public URL has the form /exo/faces/public/portal.jsp?_ctx=user... and the private URL has the form /exo/faces/private/portal.jsp?_ctx=user... The public and private paths you see in the 2 URLs are virtual paths defined in web.xml; both are mapped to the same portal.jsp file in exo/portal.jsp. The role of the filter is to check the user context. If the user context, _ctx=user, does not match the current user context, or if no user context exists in the session, the filter destroys the JSF tree in the session, reloads the user context according to the request, and stores the user context in the session.
  • Reconstitute JSF tree phase. After the filter phase, the request is forwarded to the FacesServlet. The FacesServlet gets the Lifecycle instance and executes the faces lifecycle. One important phase of the faces lifecycle is the reconstitute part. It checks for a tree id in the session - the id associated with the request tree - and reconstructs the component tree if the two trees are not the same. With eXo, you always make the request to /portal.jsp, so the JSF tree is always cached in the session. The JSF tree is destroyed and reconstructed only when the request user context does not match the one in the session (this is done in the filter phase). Note that the eXo JSF tree is constructed from the pages.xml configuration file of the user: basically, you get a UI component tree that reflects the XML file.
  • JSF decode phase. The next phase of the faces cycle is the decode phase, or apply request values phase. In this phase, the JSF implementation iterates over the components in the component tree and calls each component's decode() method. That method extracts information from the request and stores it in the component. In the eXo implementation, the UIPage component checks for the portal action parameter; if the action is “change page”, it goes on and checks the page id. If the page id matches the current UIPage component id, it raises an action event. The PageActionListener catches the event and sets the selected page flag of the UIPage object to true. Finally, it asks the parent UI component (UIPortal) to set the selected property of the other UIPage components to false.
      public void decode(FacesContext facesContext, UIComponent uiComponent) throws IOException {
        UIPage uiPage = (UIPage) uiComponent ;
        HttpServletRequest request = (HttpServletRequest) facesContext.getExternalContext().getRequest();
        String portalAction = request.getParameter(Constants.PORTAL_ACTION);
        // check for the change page action
        if (portalAction != null && "changePage".equals(portalAction)) {
          String pageId = request.getParameter(Constants.PAGE_ID) ;
          // check the request page id against the current uiPage instance
          if (pageId.equals(uiPage.getId())) {
            // pass the event to the PageActionListener
            ActionEvent event = new ActionEvent(uiComponent, portalAction);
            uiComponent.queueEvent(event) ; // event queueing elided in the original listing
          }
        }
      }
    The UITab decode() method does the same thing as the UIPage decode() method: it checks for the portal change tab action and sets the selected tab property to true and the property of the other tabs to false.
      public void decode(FacesContext facesContext, UIComponent uiComponent) throws IOException {
        UITab uiTab = (UITab) uiComponent ;
        HttpServletRequest request = (HttpServletRequest) facesContext.getExternalContext().getRequest();
        String portalAction = request.getParameter(Constants.PORTAL_ACTION);
        // check if the portal action is the change tab action
        if (portalAction != null && "changeTab".equals(portalAction)) {
          String tabId = request.getParameter(Constants.TAB_ID) ;
          // check if the change tab action is addressed to the current UITab instance
          if (tabId.equals(uiTab.getId())) {
            // create the action event and pass it to the TabActionListener
            ActionEvent event = new ActionEvent(uiTab, portalAction);
            uiTab.queueEvent(event) ; // event queueing elided in the original listing
          }
        }
      }
    The UIPortlet decode() method performs 3 main tasks:
    • Check for the “change mode” event: if the event is detected and the request component id matches the current UIPortlet component id, it raises an event and delegates it to the PortletActionListener class. The listener resets the mode in the UIPortlet component.
    • Check for the “change window state” event: if the event is detected and the request component id matches the current UIPortlet component id, it raises an event and delegates it to the PortletActionListener class. The listener resets the window state in the UIPortlet component.
    • Check the portlet action type: according to the portlet spec, there are 2 types of request, action and render. If the type is action, the processAction(..) method of the portlet is called and then the render(..) methods are called. If the type is render, only the render(..) methods are called. It is mandatory that processAction(..) be called before any render(..) method. The reason for this requirement is that a portlet can process an action and send a message to another portlet; it would not make any sense if the render(..) method of the other portlet had already been called. Once again, we can see how well JSF technology fits portlet technology: by defining several processing phases, it ensures that the processAction(..) of the targeted portlet is called first and that the render(..) method of each portlet is called during the render phase.
      public void decode(FacesContext facesContext, UIComponent uiComponent) throws IOException {
        UIPortlet uiPortlet = (UIPortlet) uiComponent ;
        HttpServletRequest request = (HttpServletRequest) facesContext.getExternalContext().getRequest();
        String type = request.getParameter(Constants.TYPE_PARAMETER);
        String portletMode = request.getParameter(Constants.PORTLET_MODE_PARAMETER);
        String windowState = request.getParameter(Constants.WINDOW_STATE_PARAMETER);
        String componentId = request.getParameter(Constants.COMPONENT_PARAMETER) ;
        // check for the portlet mode and, if a mode change is detected, raise a change mode event
        if (portletMode != null && componentId.equals(uiPortlet.getWindowId())) {
          ChangePortletMode event = new ChangePortletMode(uiPortlet, "portletMode", portletMode);
          uiPortlet.queueEvent(event) ; // event queueing elided in the original listing
        }
        // check for the window state and, if a change is detected, raise a change window state event
        if (windowState != null && componentId.equals(uiPortlet.getWindowId())) {
          ChangeWindowState event = new ChangeWindowState(uiPortlet, "windowState", windowState);
          uiPortlet.queueEvent(event) ; // event queueing elided in the original listing
        }
        // check for the action type and component id
        if (type != null && componentId.equals(uiPortlet.getWindowId())) {
          // if type = action, raise a PortletAction event and pass it to the PortletActionListener;
          // the listener will create the input/output objects and pass the request to the portlet container
          if (type.equals("action")) {
            HttpServletResponse response = (HttpServletResponse) facesContext.getExternalContext().getResponse();
            PortletAction event = new PortletAction(uiPortlet, "portletAction", facesContext, request, response );
            uiPortlet.queueEvent(event) ; // event queueing elided in the original listing
          } else {
            // else type = render: simply copy the parameter map and store it in the UIPortlet
            // component; in the render phase, this parameter map is used for the request
            Map renderParams = request.getParameterMap() ;
            Map temp = new HashMap(10) ;
            Iterator keys = renderParams.keySet().iterator() ;
            while (keys.hasNext()) {
              String key = (String) keys.next() ;
              temp.put(key, renderParams.get(key)) ;
            }
            renderParams = temp ; // storing in the component elided in the original listing
          }
        }
      }
  • JSF render phase. Finally, the render phase creates the HTML page by calling the encodeBegin(..), encodeChildren(..) and encodeEnd(..) methods of the root component. A parent UIComponent controls the render phase of its children. In the eXo portal JSF tree, the root component is the UIPortal component. As you can see in the code below, the UI portal renderer renders a table, then the header portlet, then the currently selected page and finally the footer portlet.
      public void encodeBegin(FacesContext facesContext, UIComponent uiComponent) throws IOException {
        ResponseWriter writer = facesContext.getResponseWriter();
        writer.write("<table class='portal' cellspacing='0' cellpadding='0' border='0' width='100%' height='100%'>");
      }

      public void encodeChildren(FacesContext facesContext, UIComponent uiComponent) throws IOException {
        ResponseWriter writer = facesContext.getResponseWriter();
        Renderer pageRenderer = renderKit_.getRenderer(UIPage.RENDERER_TYPE);
        UIPortalPages uiPortalPages = (UIPortalPages) uiComponent ;
        Iterator iterator = uiPortalPages.getChildren();
        while (iterator.hasNext()) {
          Object component = iterator.next() ;
          // if the instance is a portlet, it is either the header or the footer portlet
          if (component instanceof UIPortlet) {
            UIPortlet uiPortlet = (UIPortlet) component ;
            Renderer portletRenderer = renderKit_.getRenderer(uiPortlet.getRendererType());
            writer.write("<tr><td valign='top' style='padding: 0px'>");
            portletRenderer.encodeBegin(facesContext, uiPortlet);
            portletRenderer.encodeChildren(facesContext, uiPortlet);
            portletRenderer.encodeEnd(facesContext, uiPortlet);
          } else if (component instanceof UIPage) {
            // if the component is a UIPage instance and the page is selected,
            // then render the page
            UIPage uiPage = (UIPage) component ;
            if (uiPage.isSelectedPage()) {
              writer.write("<tr><td width='100%' height='100%' valign='top'>");
              pageRenderer.encodeBegin(facesContext, uiPage);
              pageRenderer.encodeChildren(facesContext, uiPage);
              pageRenderer.encodeEnd(facesContext, uiPage);
            }
          }
        }
      }
    The UI page renderer renders all of its children that are UIMainColumn instances.
      public void encodeBegin(FacesContext facesContext, UIComponent uiComponent) throws IOException {
        UIPage uiPage = (UIPage) uiComponent ;
        ResponseWriter writer = facesContext.getResponseWriter();
        writer.write("<table width='100%' height='100%' class='") ;
        writer.write(uiPage.getStyle()) ;
        writer.write("' cellspacing='0' cellpadding='0' border='0'>\n");
      }

      public void encodeChildren(FacesContext facesContext, UIComponent uiComponent) throws IOException {
        ResponseWriter writer = facesContext.getResponseWriter();
        Renderer mainColumnRenderer = renderKit_.getRenderer(UIMainColumn.RENDERER_TYPE);
        UIPage uiPage = (UIPage) uiComponent ;
        Iterator iterator = uiPage.getChildren();
        while (iterator.hasNext()) {
          UIMainColumn uiMainColumn = (UIMainColumn) iterator.next() ;
          writer.write("<td class='");
          writer.write(uiMainColumn.getStyle()) ;
          writer.write("' width='");
          // the width value write is elided in the original listing
          writer.write("' height='100%' valign='top'>");
          mainColumnRenderer.encodeBegin(facesContext, uiMainColumn);
          mainColumnRenderer.encodeChildren(facesContext, uiMainColumn);
          mainColumnRenderer.encodeEnd(facesContext, uiMainColumn);
        }
      }

      public void encodeEnd(FacesContext facesContext, UIComponent uiComponent) throws IOException {
        ResponseWriter writer = facesContext.getResponseWriter();
        writer.write("</table>"); // closing tag assumed; elided in the original listing
      }
    In the UI portlet renderer, since the UIPortlet component has no children, only encodeBegin(..) needs to be called. In the code below, you can see that we create the RenderInput object, which contains all the information of the request, and delegate it to the portlet container. The portlet container then invokes the render(..) method and returns an output object. The portal then renders the portlet header and the portlet body using the content returned by the portlet container in the output object. Note that a portal page contains many portlets but each request only targets one portlet. Therefore, the parameter map sent to the container is cached as a parameter map in the associated UIPortlet object. Only the portlet the request targets uses the HttpServletRequest parameter map or a parameter map produced by the processAction() method. All of those steps are processed in the decode phase.
      public void encodeBegin(FacesContext facesContext, UIComponent uiComponent) throws IOException {
        // service lookup elided in the original listing:
        // portletContainer_ = (PortletContainerService) ...
        HttpServletRequest request = (HttpServletRequest)
            facesContext.getExternalContext().getRequest() ;
        HttpServletResponse response = (HttpServletResponse)
            facesContext.getExternalContext().getResponse() ;
        UserProfile up = (UserProfile) request.getSession().getAttribute(Constants.USER_BEAN) ;
        StringBuffer baseUrlBuf = new StringBuffer() ;
        // the beginning of the base URL construction is elided in the original listing
        baseUrlBuf.append('?').append(Constants.PORTAL_CONTEXT).append('=').append(up.getUserName()) ;
        String baseUrl = baseUrlBuf.toString();
        UIPortlet uiPortlet = (UIPortlet) uiComponent ;
        Map renderParams = uiPortlet.getRenderParameters() ;
        log_.debug("map from uiportlet = " + renderParams) ;
        if (renderParams == null) {
          renderParams = new HashMap() ;
        }
        RenderInput input = new RenderInput(baseUrl, uiPortlet.getWindowId(),
                                            up.getUserName(), up.getUserInfoMap(), //user map
                                            uiPortlet.getWindowState(), "text/html"
                                            /* remaining arguments elided in the original listing */ ) ;
        RenderOutput output = null;
        String portletContent = "There is an error" ;
        try {
          output = portletContainer_.render(request, response, input);
          portletContent = output.getContent() ;
        } catch (Throwable ex) {
          // error handling elided in the original listing
        }
        String portletTitle = uiPortlet.getTitle() ;
        if (portletTitle == null) {
          portletTitle = output.getTitle() ;
        }
        String portletHeight = uiPortlet.getHeight() ;
        if (uiPortlet.getWindowState() == WindowState.MINIMIZED) {
          portletHeight = null;
        }
        ResponseWriter writer = facesContext.getResponseWriter();
        writer.write("<table class='");
        writer.write(uiPortlet.getStyle()) ;
        writer.write("' cellspacing='0' cellpadding='0' border='0' width='100%'");
        if (portletHeight != null) {
          writer.write(" height='");
          // the height value write and the else branch are elided in the original listing
        }
        renderPortletHeaderBar(writer, uiPortlet, portletTitle, baseUrl, up) ;
        if (uiPortlet.getWindowState() != WindowState.MINIMIZED) {
          renderPortletBody(writer, uiPortlet, portletContent) ;
        }
        renderPortletFooterBar(writer, uiPortlet, portletTitle, baseUrl, up) ;
      }
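The action-before-render rule described in the decode phase above can be sketched with plain Java (hypothetical names, not the eXo container code): the container dispatches the single targeted action first, lets it publish a message to a sibling portlet, and only then renders every portlet on the page:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch (NOT the eXo container code) of the mandatory ordering:
// the targeted portlet's processAction(..) runs first and may send a message to
// another portlet; only afterwards is render(..) called on every portlet.
public class ActionBeforeRender {

  public static class Portlet {
    final String name;
    String message = "";

    public Portlet(String name) { this.name = name; }

    public void processAction(Map<String, Portlet> page) {
      // send a message to a sibling portlet before anything is rendered;
      // the "weather" target is purely illustrative
      Portlet target = page.get("weather");
      if (target != null) { target.message = "city=Paris"; }
    }

    public String render() { return name + "[" + message + "]"; }
  }

  public static List<String> handleRequest(Map<String, Portlet> page, String targetedPortlet) {
    page.get(targetedPortlet).processAction(page);  // 1) action phase, one portlet only
    List<String> markup = new ArrayList<>();
    for (Portlet p : page.values()) {               // 2) render phase, all portlets
      markup.add(p.render());
    }
    return markup;
  }

  public static void main(String[] args) {
    Map<String, Portlet> page = new LinkedHashMap<>();
    page.put("city", new Portlet("city"));
    page.put("weather", new Portlet("weather"));
    System.out.println(handleRequest(page, "city"));
  }
}
```

If render ran before the action, the "weather" portlet would be drawn without the message the "city" portlet sent it, which is exactly the situation the spec forbids.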

Multi application server support: the universal deployer

In our previous article, we presented a JMX based deployer for the JBoss application server (beta 2). That coupled us quite tightly to this specific server. In beta 3, we released an express version based on Tomcat; there too, we modified the Tomcat code to allow portlet hot deployment. Unfortunately, this approach was only possible with open source application servers.

The previous deployers were used to acquire the ClassLoader and ServletContext objects of the portlet application deployed as a WAR archive. Those objects were then registered with the portlet container, which directly instantiated the portlets using the correct class loader. The new universal deployer is based on the RequestDispatcher object. When a portlet application is deployed, only the portlet.xml and web.xml files are registered with the container (actually, we register the object representations of those XML files using JAXB). This phase simply uses a ServletListener object that registers and unregisters the files when the context of the application is deployed and undeployed.

Figure 15. The Portlet Container architecture

The Portlet Container architecture

Therefore, when a user sends a request to the portal web application (1), the portal decodes the incoming parameters to extract the portlet application name and portlet name (2), and redirects the request using the RequestDispatcher include() method (4). What is important to understand here is that the request dispatcher is obtained from the portlet application context, which is itself obtained from the portal ServletContext.

ServletContext portletContext = portalContext.getContext("/" + windowInfos.getPortletApplicationName());
RequestDispatcher dispatcher = portletContext.getRequestDispatcher(SERVLET_MAPPING);
try {
  dispatcher.include(request, response);
} catch (ServletException e) {
  throw new PortletContainerException(e);
} catch (IOException e) {
  throw new PortletContainerException(e);
} finally {
  // cleanup elided in the original listing
}


In the portlet application, a servlet used to wrap portlets is then invoked. It extracts the information about which portlet to invoke with which incoming data and then delegates the work to the PortletApplicationHandler class. This object obtains the portlet instance from the PortletApplicationProxy, which instantiates it and calls the init() method if this is the first request reaching the portlet. Then the handler calls either the processAction() or the render() method of the portlet.
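The lazy-initialization pattern described here can be sketched in a few lines of plain Java (hypothetical names, not the actual handler/proxy classes): the portlet is instantiated and initialized only on the first request, then reused:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the handler/proxy lazy-initialization pattern described
// above (NOT the actual eXo classes): the portlet is created and init() is called
// only on the first request, and the same instance serves later requests.
public class LazyPortletProxy {

  public static class Portlet {
    public int initCalls = 0;
    public void init() { initCalls++; }
    public String render() { return "content"; }
  }

  private final Map<String, Portlet> instances = new HashMap<>();

  public Portlet getPortlet(String name) {
    Portlet portlet = instances.get(name);
    if (portlet == null) {          // first request: create and initialize
      portlet = new Portlet();
      portlet.init();
      instances.put(name, portlet);
    }
    return portlet;                 // later requests reuse the same instance
  }
}
```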

Note that we split the ServletWrapper and the PortletApplicationHandler in two in order to be able to unit test the portlet container without launching an application server. This design, which implied some more work, was really a good choice, as we relied heavily on unit tests to develop the container. We are almost sure that without unit tests our first TCK score would have been much lower. The following code is not that important as such; it shows that we wrote custom code to avoid request dispatching, and consequently the use of a servlet engine, in stand-alone mode.

if (Environment.getInstance().getPlatform() == Environment.STAND_ALONE) {
  try {
    URLClassLoader oldCL = (URLClassLoader) Thread.currentThread().getContextClassLoader();
    URL[] urls = {new URL(PORTLET_APP_PATH + "WEB-INF/classes/"),
                  new URL("file:./lib/portlet-api.jar"),
                  new URL(PORTLET_APP_PATH + "WEB-INF/lib/")};
    Thread.currentThread().setContextClassLoader(new URLClassLoader(urls));
    try {
      return standAloneHandler.process(portalContext, request, response, input, output, windowInfos, isAction);
    } finally {
      // restore the previous context class loader
      Thread.currentThread().setContextClassLoader(oldCL);
    }
  } catch (MalformedURLException e) {
    throw new PortletContainerException(e);
  }
}

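The wrapper/handler split described above can be sketched in plain Java. The names below (HandlerSplitDemo, PortletHandler, wrap) are hypothetical stand-ins, not the actual eXo classes: the servlet-facing wrapper only parses the request, while the handler works on plain values and can therefore be exercised by a unit test with no servlet engine running.

```java
// Illustrative sketch only: shows why splitting the servlet wrapper from
// the handler makes the handler testable without an application server.
public class HandlerSplitDemo {

  // All container logic lives here, behind plain-Java arguments.
  static class PortletHandler {
    String process(String portletName, boolean isAction) {
      return (isAction ? "processAction:" : "render:") + portletName;
    }
  }

  // The servlet wrapper would be a thin adapter: parse the incoming
  // HttpServletRequest parameters, then delegate. A plain Map stands in
  // for the request here to keep the sketch self-contained.
  static String wrap(java.util.Map<String, String> params, PortletHandler handler) {
    return handler.process(params.get("portlet"), Boolean.parseBoolean(params.get("action")));
  }

  public static void main(String[] args) {
    PortletHandler handler = new PortletHandler();
    // "Unit test" the handler directly, no server needed.
    System.out.println(handler.process("HelloPortlet", false)); // prints "render:HelloPortlet"
  }
}
```

Because the handler never touches servlet types, a JUnit test can instantiate and drive it directly, which is exactly what made the high TCK score cheap to reach.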

We insist again: eXtreme Programming (XP) and Test Driven Development (TDD) are more than good practices, they are a way of life.

After days of brainstorming and a week of work refactoring some important parts of the portlet container, we succeeded in building a universal deployer. We now use the same code to deploy portlets on Tomcat, Jetty and JBoss, and we are about to make it work on IBM's WebSphere Application Server (WAS). We will then be able to distribute the enterprise version of the portal as an EAR. Other application servers will be targeted after that.

IoC everywhere

The Inversion of Control design pattern really makes the developer's life simpler and forces you to program against interfaces instead of concrete classes, which produces much cleaner and more maintainable code. As we explained in our previous article, the first aim of the pattern is that an object should not itself create the instances of the objects it references.

IoC type 3, as used in pico-container and now supported in the Spring framework, hands an object the other objects it references through its constructor arguments.

public class PortletToTestIoC extends GenericPortlet {

  private LogService logService;
  private Log log;

  public PortletToTestIoC(LogService logService) {
    this.logService = logService;
    log = logService.getLog("exo.portal.container");
  }

  public void processAction(ActionRequest actionRequest, ActionResponse actionResponse)
      throws PortletException, IOException {
    log.debug("Portlet is an IoC type 3 component");
    actionResponse.setRenderParameter("status", "Everything is ok");
  }

  public void render(RenderRequest renderRequest, RenderResponse renderResponse)
      throws PortletException, IOException {
    log.debug("Portlet is an IoC type 3 component");
    PrintWriter w = renderResponse.getWriter();
    w.println("Everything is ok");
  }
}


It is highly advisable that the constructor use interface types instead of implementation classes. You can register the implementation class in any pico-container and then, when the object is instantiated, pico resolves the implementation of each interface that the constructor needs. Therefore, just by changing the implementation registered in pico, you can completely modify the behaviour of your object without changing the object itself.
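To make the type 3 mechanism concrete, here is a minimal, hand-rolled sketch of constructor resolution in plain Java. TinyContainer, SimpleLogService and GreetingComponent are hypothetical names used for illustration only; a real pico-container does much more (dependency graphs, caching, lifecycle).

```java
import java.lang.reflect.Constructor;
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of type 3 (constructor) injection: the container maps
// interfaces to implementation instances and satisfies a component's
// constructor arguments by type, the way pico-container does.
public class TinyIoCDemo {
  interface Log { String debug(String msg); }
  interface LogService { Log getLog(String name); }

  static class SimpleLogService implements LogService {
    public Log getLog(final String name) {
      return new Log() {
        public String debug(String msg) { return name + " DEBUG " + msg; }
      };
    }
  }

  // The component never creates its dependency: it receives it.
  public static class GreetingComponent {
    final Log log;
    public GreetingComponent(LogService logService) {
      this.log = logService.getLog("exo.demo");
    }
  }

  static class TinyContainer {
    private final Map<Class<?>, Object> impls = new HashMap<Class<?>, Object>();

    void register(Class<?> iface, Object impl) { impls.put(iface, impl); }

    // Resolve every constructor parameter by its declared interface type.
    <T> T instantiate(Class<T> cls) throws Exception {
      Constructor<?> ctor = cls.getConstructors()[0];
      Class<?>[] types = ctor.getParameterTypes();
      Object[] args = new Object[types.length];
      for (int i = 0; i < types.length; i++) args[i] = impls.get(types[i]);
      return cls.cast(ctor.newInstance(args));
    }
  }

  public static void main(String[] args) throws Exception {
    TinyContainer container = new TinyContainer();
    container.register(LogService.class, new SimpleLogService());
    GreetingComponent c = container.instantiate(GreetingComponent.class);
    System.out.println(c.log.debug("component wired by the container"));
  }
}
```

Registering a different LogService implementation is all it takes to change the component's behaviour; the component itself is never modified.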

One of the main advantages of IoC containers is that they provide a simple way to unit test your components. By just registering a mock component implementation in pico in the setUp() method of a JUnit test, you can test the object easily outside the scope of any framework, server or anything else that makes unit testing complex.

public void setUp() throws Exception {
  try {
    if (initService_) {
      ServicesUtil.addService("LogService", null, "exo.services.log.impl.LogServiceImpl",
        // ... the rest of this listing was truncated in the original


The way we use pico-container is quite interesting. We first define a set of service APIs composed of WorkflowServices, DatabaseService, CMSService, HibernateService, CacheService, PortletContainerService, OrganizationService, XMLProcessingService, EcommerceService, MonitorService, CommunicationService... We try to extract abstract behaviours using interfaces and value objects, to completely decouple the API from the implementation and be able to change the concrete behaviour of a service without any change in the other services or Java objects that use it.

The implementation of each service is packaged as an independent JAR archive with an exo-service.xml file bundled with it. This XML file defines the classes to be registered in our ServiceContainer object:

<?xml version="1.0" encoding="ISO-8859-1"?>
<services>
  <service>
    <description>Database service</description>
    <!-- class names and other elements elided in the original listing -->
  </service>
  <service>
    <description>Hibernate service</description>
    <!-- ... -->
  </service>
</services>


Here is the schema of this XML file:

Figure 16. The eXo services scheme


Each class name given is simply the implementation of one of the service API interfaces. When the ServiceManager singleton is called for the first time, it searches the classpath for all the exo-service.xml files and registers each service implementation class into a pico-container instance. With this automated discovery mechanism you only have to replace the implementation JAR to change the concrete implementation; the ServiceManager and pico-container take care of the rest.
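The discovery loop relies on nothing more than ClassLoader.getResources(), which returns every classpath entry that bundles a resource at the same well-known path. The following self-contained sketch (DescriptorScanner is a hypothetical name, not eXo code) shows the idea; it scans for META-INF/MANIFEST.MF simply because that file exists in most JARs, while the article's descriptor lives at exo/services/exo-service.xml.

```java
import java.io.IOException;
import java.net.URL;
import java.util.Enumeration;

// Sketch of automated service discovery: every JAR that ships a descriptor
// at the same well-known classpath location is found with getResources().
public class DescriptorScanner {

  public static int countDescriptors(String resourceName) throws IOException {
    ClassLoader cl = Thread.currentThread().getContextClassLoader();
    int count = 0;
    // getResources() returns one URL per classpath entry containing the resource.
    Enumeration<URL> e = cl.getResources(resourceName);
    while (e.hasMoreElements()) {
      System.out.println("Found descriptor: " + e.nextElement());
      count++;
    }
    return count;
  }

  public static void main(String[] args) throws IOException {
    System.out.println(countDescriptors("META-INF/MANIFEST.MF") + " descriptor(s) found");
  }
}
```

In the real ServiceManager each URL found this way is opened and unmarshalled with JAXB, and the classes it declares are registered into pico-container.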

public class ServicesManager {

  private static ServicesManager ourInstance;
  private DefaultPicoContainer container;
  private Map servicesContext;

  public static ServicesManager getInstance() {
    if (ourInstance == null) {
      synchronized (ServicesManager.class) {
        if (ourInstance == null) {
          ourInstance = new ServicesManager();
          if (Environment.getInstance().getPlatform() != Environment.STAND_ALONE) {
            ourInstance.installServices();
          }
        }
      }
    }
    return ourInstance;
  }

  private ServicesManager() {
    servicesContext = new HashMap();
    container = new DefaultPicoContainer();
  }

  private void installServices() {
    try {
      ClassLoader cl = Thread.currentThread().getContextClassLoader();
      JAXBContext jc = JAXBContext.newInstance("exo.services.model");
      Unmarshaller u = jc.createUnmarshaller();
      Enumeration e = cl.getResources("exo/services/exo-service.xml");
      while (e.hasMoreElements()) {
        URL url = (URL) e.nextElement();
        InputStream serviceDescriptor = url.openStream();
        Services services = (Services) u.unmarshal(serviceDescriptor);
        ServiceContext serviceContext = new ServiceContext(cl, services);
        // ... register the services described by the descriptor
      }
    } catch (Exception ex) {
      ex.printStackTrace();
    }
  }
}


For example, our PortletContainerService is based on the CMSService, which is itself based on the DatabaseService; all of them are also based on the LogService. All these services have well-defined API interfaces which are used by the dependent classes. Therefore, if you have a log4j implementation of the LogService and would like to use a commons-logging one, you just have to change the LogService implementation, bundle it in a JAR archive with an XML file that defines the new class to register in the ServiceManager, and finally deploy it in the application server instead of the previous JAR. No modifications are necessary in any of the services that depend on the log one. And this is the case for all the other services!
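This decoupling can be illustrated with a small sketch. SwapDemo and its two toy log services are hypothetical stand-ins for the log4j and commons-logging implementations; the point is that the dependent class is written once against the interface and never changes.

```java
// Sketch of implementation swapping: PortletContainer depends only on the
// LogService interface, so changing the logging backend requires no change
// to the dependent class. All names here are illustrative.
public class SwapDemo {
  interface LogService { String log(String msg); }

  // One backend: writes to stdout (stands in for a log4j impl).
  static class StdoutLogService implements LogService {
    public String log(String msg) { System.out.println(msg); return "stdout:" + msg; }
  }

  // Another backend: buffers in memory (stands in for a commons-logging impl).
  static class BufferLogService implements LogService {
    private final StringBuilder buf = new StringBuilder();
    public String log(String msg) { buf.append(msg); return "buffer:" + msg; }
  }

  // The dependent "service": written once against the interface.
  static class PortletContainer {
    private final LogService logService;
    PortletContainer(LogService logService) { this.logService = logService; }
    String start() { return logService.log("container started"); }
  }

  public static void main(String[] args) {
    // Swapping behaviour means nothing more than supplying a different impl.
    System.out.println(new PortletContainer(new StdoutLogService()).start());
    System.out.println(new PortletContainer(new BufferLogService()).start());
  }
}
```

In the platform the swap happens at deployment time, by replacing the implementation JAR, rather than in code; the principle is the same.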

The ServiceManager object is implemented as a singleton that wraps pico-container. This lets us define a single IoC component repository, allowing non-IoC components, like AspectJ aspects, to also look up the services. Each time we want a component to use the services in a type 3 fashion, we just need to add it to the ServiceManager. This is what we have done for the Portlet, PortletFilter, MessageListener and ActionHandler classes: they are all IoC type 3 components and can look up the services from the unique repository within their constructors. Here is the already shown PortletFilter, this time given the LogService:

public class LoggerFilter implements PortletFilter {

  private LogService logService;
  private Log log;

  public LoggerFilter(LogService logService) {
    this.logService = logService;
    log = logService.getLog("exo.portal.container");
  }

  public void doFilter(PortletRequest portletRequest,
                       PortletResponse portletResponse,
                       PortletFilterChain filterChain)
      throws IOException, PortletException {
    log.debug("------------->LOG FILTER  PRE");
    filterChain.doFilter(portletRequest, portletResponse);
    log.debug("------------->LOG FILTER  POST");
  }

  public void destroy() {
    log.debug("------------->LOG FILTER DESTROY");
  }
}


AspectJ aspects cannot be registered in the ServiceManager, so looking up a service from an aspect requires a little more work:

aspect PortletCacheAspect extends PortletBaseAspect {

  private PortletContainerConf conf;

  public PortletCacheAspect() {
    conf = (PortletContainerConf) ServicesManager.getInstance().getService(PortletContainerConf.class);
  }
}



We had quite a lot of feedback after our previous article, most of it about the complexity of the concepts we used, mainly AOP and IoC. We think these ideas are now better understood, but we have included more examples this time, especially in the IoC part. You can really only grasp the power and ease of use of these paradigms after having played a little with real-life applications.

More than a simple marketing presentation, we hope to promote the best practices that IoC, TDD, XP and even AOP represent, by showing an open source application that tries to apply all these concepts. Total quality is our ambitious aim!

We have split the article into several sections that may interest different readers: managers, portlet developers, and platform architects/developers. Of course, we hope that all parts can be read by everybody! Do not hesitate to contact us about the article, to comment on the TSS thread, and please feel free to ask any questions on our forum on the www.exoplatform.org community site.

Let's finish with an overview of the eXo platform functionalities:

Figure 17. The eXo platform overview



JSR 168: the portlet API. First, if we had only one introductory article to suggest, we would advise you to read chapter 4 (Concepts) of the portlet API specification. It is a very clear presentation of what a Portal and a Portlet Container are and of the way they interact.

JSR 127: Java Server Faces

Inversion of Control resources

Related projects:
