Discussions

News: Guy Pardon: Transactional J2EE Apps with Spring

  1. In this multimedia presentation from BeJUG, Guy Pardon details how to write a transactional J2EE application with Spring, covering the Spring framework itself and how transactional applications can be configured and deployed.

    View "Transactional J2EE Apps with Spring"

    Threaded Messages (34)

  2. Nice presentation[ Go to top ]

    Nice presentation, Guy Pardon :) BTW Are there any other standalone TPMs like Atomikos except those provided by the application servers ?
  3. Nice presentation[ Go to top ]

    Nice presentation, Guy Pardon :) BTW Are there any other standalone TPMs like Atomikos except those provided by the application servers ?

    In theory you can use JOTM with Enhydra's XA DataSource from ObjectWeb.org, but I haven't been able to make it work. Has anyone had success with this?
  4. Nice presentation[ Go to top ]

    Nice presentation, Guy Pardon :) BTW Are there any other standalone TPMs like Atomikos except those provided by the application servers ?
    In theory you can use JOTM with Enhydra's XA DataSource from ObjectWeb.org, but I haven't been able to make it work. Anyone had success with this?

    Yes, it works well for me. The tricky part, IIRC, was to use StandardXADataSource wrapped in a StandardXAPoolDataSource, like below:


      <bean id="jotm" class="org.springframework.transaction.jta.JotmFactoryBean"/>

      <bean id="transactionManager" class="org.springframework.transaction.jta.JtaTransactionManager">
        <property name="userTransaction"><ref local="jotm"/></property>
      </bean>

      <bean id="innerDataSource" class="org.enhydra.jdbc.standard.StandardXADataSource" destroy-method="shutdown">
        <property name="transactionManager" ref="jotm"/>
        <property name="driverName" value="oracle.jdbc.driver.OracleDriver"/>
        <property name="url" value="jdbc:oracle:thin:@host:1521:sid"/>
      </bean>

      <bean id="dataSource"
            class="org.enhydra.jdbc.pool.StandardXAPoolDataSource"
            destroy-method="shutdown">
        <property name="dataSource" ref="innerDataSource"/>
        <property name="user" value="user"/>
        <property name="password" value="passwd"/>
        <property name="maxSize" value="2"/>
      </bean>


    Artur
  5. Nice presentation[ Go to top ]

    Nice presentation, Guy Pardon :) BTW Are there any other standalone TPMs like Atomikos except those provided by the application servers ?

    Sure. You can use Geronimo's Transaction Manager in any Spring application easily using Jencks.

    e.g. here's how to use the Transaction Manager
    http://jencks.codehaus.org/Transaction+Manager

    to use pooled XA DataSources...
    http://jencks.codehaus.org/Outbound+JDBC

    outbound JMS...
    http://jencks.codehaus.org/Outbound+JMS

    or Message Driven POJOs...
    http://jencks.codehaus.org/Message+Driven+POJOs

    James
    LogicBlaze
  6. Nice presentation[ Go to top ]

    You can use Geronimo's Transaction Manager in any Spring application easily using Jencks. e.g. here's how to use the Transaction Manager (http://jencks.codehaus.org/Transaction+Manager), pooled XA DataSources (http://jencks.codehaus.org/Outbound+JDBC), outbound JMS (http://jencks.codehaus.org/Outbound+JMS), or Message Driven POJOs (http://jencks.codehaus.org/Message+Driven+POJOs). James, LogicBlaze

    Thanks, I didn't know about Jencks... Does it (or Geronimo) do restart/crash recovery? Correct me if I'm wrong, but I don't know of any open source project that does this correctly, though I admit I haven't followed up on them as much as I used to.

    Guy
  7. Nice presentation[ Go to top ]

    Thanks, I didn't know Jencks... Does it (or Geronimo) do restart/crash recovery?

    Yes; it's using the Geronimo transaction manager under the covers, with HOWL for the transaction log, so recovery is implemented; which is kinda crucial - XA without recovery isn't really XA.

    Note, though, that you have to explicitly configure the Transaction Manager with a HOWL journal to enable recovery, since you need to specify where on the file system to store the journal; the default is a RAM journal if one is not specified, which will not survive a process restart.

    James
    LogicBlaze
  8. And what about database isolation levels[ Go to top ]

    Nice presentation, Guy.

    Could you also explain whether it is possible, and if so how, to specify a specific database isolation level for different data access methods? For example, for many select queries a "dirty read" (fast!) is fine, while other methods need a much more restrictive (hence slower) isolation level.

    thank you in advance
  9. Nice presentation, Guy. Could you also explain whether it is possible, and if so how, to specify a specific database isolation level for different data access methods? For example, for many select queries a "dirty read" (fast!) is fine, while other methods need a much more restrictive (hence slower) isolation level. Thank you in advance.

    Hi,

    Isolation levels are set at the individual connection level; we (Atomikos) don't add any extra support for it otherwise. Maybe that's a good suggestion for the future; something to look into.

    Thanks for the tip,
    Guy
  10. Nice presentation. One minor nit from slides 27 & 28 on using JMS with JDBC or 2 JMS providers...
    Avoiding JMS message loss and duplicates requires JTA!

    This is not quite true. You can avoid JTA/XA entirely and still not lose a message, though you may get duplicates.

    http://activemq.org/Should+I+use+XA

    XA is really slow, so folks often use application-level duplicate detection to avoid paying the XA performance cost. Duplicate detection is an efficient solution as well, since it avoids all those slow sync-to-disks of XA, and you can perform the duplicate detection only when the JMS message has Message.getJMSRedelivered() returning true.

    i.e. for normal operation, there is no added performance hit - you only pay the hit of duplicate detection if a message is being redelivered. The downside is that you need to add application level duplicate detection - the upside is a massive performance boost.
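
    The redelivered-only check described above can be sketched in plain Java. This is a hypothetical helper, not code from any of the projects mentioned: the boolean parameter stands in for Message.getJMSRedelivered(), and a real system would keep the processed IDs in a database table rather than an in-memory set.

```java
import java.util.HashSet;
import java.util.Set;

class DuplicateDetector {
    // In a real system this would be a database table, not an in-memory set.
    private final Set<String> processedIds = new HashSet<>();

    // Returns true if the message should be processed, false if it is a
    // duplicate. 'redelivered' stands in for Message.getJMSRedelivered().
    public boolean shouldProcess(String messageId, boolean redelivered) {
        if (redelivered && processedIds.contains(messageId)) {
            return false; // already handled once; skip the duplicate
        }
        processedIds.add(messageId); // record as part of processing
        return true;
    }
}
```

    Only the redelivery path pays the lookup cost, which is the point: normal deliveries proceed without any extra work.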

    James
    LogicBlaze
  11. XA overhead[ Go to top ]

    Nice presentation. One minor nit from slides 27 & 28 on using JMS with JDBC or 2 JMS providers...
    Avoiding JMS message loss and duplicates requires JTA!
    XA is really slow so often folks use application level duplicate detection to avoid paying the XA performance slowdown cost. Duplicate detection is an efficient solution as well since it avoids all those slow sync-to-disks of XA and you can perform the duplicate detection if the JMS message has Message.getJMSRedelivered() returning true. i.e. for normal operation, there is no added performance hit - you only pay the hit of duplicate detection if a message is being redelivered. The downside is that you need to add application level duplicate detection - the upside is a massive performance boost. James, LogicBlaze

    Indeed, duplicate detection may be one way; idempotence could be another (assuming that you are receiving and not sending). Idempotence means: the effect of repeated message consumption is the same as consuming it just once. Both are application-specific solutions.
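
    The idempotence option can be illustrated with a hypothetical consumer whose handler performs an absolute write rather than an increment, so consuming the same event N times leaves the same state as consuming it once. All names here are made up for illustration:

```java
import java.util.HashMap;
import java.util.Map;

class OrderStatusConsumer {
    private final Map<String, String> statusById = new HashMap<>();

    // Idempotent handler: the write is absolute ("set status to SHIPPED"),
    // not relative ("increment counter"), so redelivery is harmless.
    public void onShippedEvent(String orderId) {
        statusById.put(orderId, "SHIPPED");
    }

    public String statusOf(String orderId) {
        return statusById.get(orderId);
    }
}
```

    A counter-increment handler would not have this property; whether a given operation can be made absolute is exactly the application-specific question.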

    Concerning XA overhead: I agree that XA has more overhead than regular database access, but still modern systems are often very fast in XA. Oracle uses write-ahead logging and frankly surprises me every time. The same holds for our JTA: we have added a lot of optimization in there. As an illustrative example: we typically process transactions (and XA) faster than Tomcat can service HTTP requests. So even if XA is slower than non-XA, I would say that we're unlikely to slow you down.

    Guy
  12. XA overhead[ Go to top ]

    Nice presentation. One minor nit from slides 27 & 28 on using JMS with JDBC or 2 JMS providers...
    Avoiding JMS message loss and duplicates requires JTA!
    XA is really slow so often folks use application level duplicate detection to avoid paying the XA performance slowdown cost. Duplicate detection is an efficient solution as well since it avoids all those slow sync-to-disks of XA and you can perform the duplicate detection if the JMS message has Message.getJMSRedelivered() returning true. i.e. for normal operation, there is no added performance hit - you only pay the hit of duplicate detection if a message is being redelivered. The downside is that you need to add application level duplicate detection - the upside is a massive performance boost. James, LogicBlaze
    Indeed, duplicate detection may be one way, idempotence could be another way (assuming that you are receiving and not sending). Idempotence meaning: the effect of repeated message consumption is the same as just consuming it once. Both ways are application-specific solutions.

    Good point. I guess both options come down to: can the application deal with duplicates? If it can, you don't need XA.
    Concerning XA overhead: I agree that XA has more overhead than regular database access, but still modern systems are often very fast in XA.

    Like most things in IT, it all depends on your requirements. In high performance environments where you need to process thousands of transactions a second, the XA overhead is huge - since it involves many sync-to-disks, which add considerable latency, which can increase database lock time and so increase contention.

    The Transaction Manager itself must sync its state to disk at the prepare and commit stages; plus each resource must sync to disk at each stage so it can recover. One of the slowest things you can do these days is wait for a sync to disk - streaming lots of data to disk gets faster and faster, but the latency of a single sync seems near constant these days.

    Note that you really need to sync to disk, not just write into your file system's RAM buffer; the whole point of XA is that you can pull the plug on your computer at any point in the protocol and things will work themselves out at recovery time on startup.

    It's quite easy to think you're syncing to disk when in reality you're just writing into an operating system RAM buffer that will at some point be asynchronously sync'd to disk - fine in general, but not good if you really want XA recovery.

    Having said all that, the performance of XA does go up considerably the more concurrent you get, as transaction logs get much faster the more threads use them, since threads piggyback on each other's syncs to disk.

    Oracle uses write-ahead logging and frankly surprises me every time. The same holds for our JTA: we have added a lot of optimization in there.

    You've got your own Transaction Manager? :)

    As an illustrative example: we typically process transactions (and XA) faster than Tomcat can service HTTP requests. So even if XA is slower than non-XA, I would say that we're unlikely to slow you down. Guy

    Well maybe Tomcat's being really slow? :)

    BTW are you sure you really are syncing to disk at each stage of the XA protocol (in the Transaction Manager and in your 2 XA resources)?

    To really measure XA performance you've really got to ensure stuff is sync'd to disk properly at each stage of the XA protocol. If you miss that step out, XA should be crazy fast, since it's just a few method calls that don't do very much :)

    James
    LogicBlaze
  13. XA overhead[ Go to top ]

    Like most things in IT, it all depends on your requirements. In high performance environments where you need to process thousands of transactions a second the XA overhead is huge - since it involves many sync-to-disks which add considerable latency which can increase database lock time and so increase contention.

    BTW one trick to improve throughput if you must use XA with JMS and a database is to use batching; since the XA protocol adds a relatively fixed overhead, you can process 100 or 1000 messages in a single XA transaction, which greatly improves throughput.

    Some JMS Resource Adapters support batching which helps too.
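
    A rough sketch of the batching idea, with a hypothetical consumer that commits once per N messages; commit() here stands in for one full (expensive) XA prepare/commit round trip:

```java
class BatchingConsumer {
    private final int batchSize;
    private int inBatch = 0;
    private int commits = 0;

    BatchingConsumer(int batchSize) {
        this.batchSize = batchSize;
    }

    // Called once per incoming message; commits only when the batch fills up.
    public void onMessage() {
        inBatch++;
        if (inBatch == batchSize) {
            commit();
        }
    }

    // Flushes a partial batch, e.g. on shutdown or an idle timeout.
    public void flush() {
        if (inBatch > 0) {
            commit();
        }
    }

    private void commit() {
        commits++;   // one XA prepare/commit cycle covers the whole batch
        inBatch = 0;
    }

    public int commits() {
        return commits;
    }
}
```

    With a batch size of 100, a thousand messages cost 10 XA commits instead of 1000, which is where the throughput gain comes from.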

    James
    LogicBlaze
  14. XA overhead[ Go to top ]

    BTW one trick to improve throughput if you must use XA with JMS and a database is to use batching; since the XA protocol adds a relatively fixed overhead, you can process 100 or 1000 messages in a single XA transaction, which greatly improves throughput. Some JMS Resource Adapters support batching which helps too. James, LogicBlaze

    Along the same lines, WebLogic provides transaction batch settings for MDBs and for its messaging bridges (both work with pretty much any JMS-compliant vendor that implements the javax.jms.XA* interfaces).

    It is interesting to note that transaction-batching applications can actually far outperform non-transactional applications - this is because well-written transaction logs, databases, and JMS servers efficiently combine the multiple operations in a single transaction into a minimum number of disk writes.

    And for exactly-once forwarding between WebLogic destinations, WebLogic uses another trick which has already been mentioned elsewhere in this discussion: WebLogic's configurable "SAF" agents use an efficient internal dup-elimination algorithm rather than XA.

    Tom Barnes
    WebLogic Performance Team, BEA
  15. XA overhead[ Go to top ]

    I understand your point about XA recovery. What about using the "last resource" trick? Don't treat the last resource as XA-compliant: prepare all TXs, write the status of all TXs to the last resource (e.g. a database), and then commit the last resource. There are some intermediate steps; however, if you don't find your prepared TX in the database at recovery time, just roll back.
    It's useful only for situations where everything-or-nothing has to be committed and the database is available at recovery. But that covers 99% of cases, don't you think?
  16. XA overhead[ Go to top ]

    Nice presentation. One minor nit from slides 27 & 28 on using JMS with JDBC or 2 JMS providers...
    Avoiding JMS message loss and duplicates requires JTA!
    XA is really slow so often folks use application level duplicate detection to avoid paying the XA performance slowdown cost. Duplicate detection is an efficient solution as well since it avoids all those slow sync-to-disks of XA and you can perform the duplicate detection if the JMS message has Message.getJMSRedelivered() returning true. i.e. for normal operation, there is no added performance hit - you only pay the hit of duplicate detection if a message is being redelivered. The downside is that you need to add application level duplicate detection - the upside is a massive performance boost. James, LogicBlaze
    Indeed, duplicate detection may be one way, idempotence could be another way (assuming that you are receiving and not sending). Idempotence meaning: the effect of repeated message consumption is the same as just consuming it once. Both ways are application-specific solutions.
    Good point. I guess both options come down to, can the application deal with duplicates; if it can you don't need XA.
    Concerning XA overhead: I agree that XA has more overhead than regular database access, but still modern systems are often very fast in XA.
    Like most things in IT, it all depends on your requirements. In high performance environments where you need to process thousands of transactions a second the XA overhead is huge - since it involves many sync-to-disks which add considerable latency which can increase database lock time and so increase contention. The Transaction Manager itself must sync state to disk at the prepare and commit stages; plus each resource must sync to disk at each stage so they can recover. One of the slowest things you can do these days is wait for a sync to disk - streaming lots of data to disk gets faster and faster, but syncing to disk seems near constant these days. Note that you really need to sync to disk, not just write into your file system's RAM buffer; since the whole point of XA is that you can pull the plug from your computer at any point in the protocol and things will work themselves out at recovery time on startup. It's quite easy to think you're syncing to disk but in reality you're just writing into an operating system RAM buffer which will at some point in the future be asynchronously sync'd to disk, which is fine in general but not good if you really want XA recovery. Having said all that, the performance of XA does go up considerably the more concurrent you get as transaction logs get way faster the more threads use them, since threads piggyback on each other's syncs to disk.
    Oracle uses write-ahead logging and frankly surprises me every time. The same holds for our JTA: we have added a lot of optimization in there.
    You've got your own Transaction Manager? :)
    As an illustrative example: we typically process transactions (and XA) faster than Tomcat can service HTTP requests. So even if XA is slower than non-XA, I would say that we're unlikely to slow you down. Guy
    Well maybe Tomcat's being really slow? :) BTW are you sure you really are syncing to disk at each stage of the XA protocol (in the Transaction Manager and in your 2 XA resources)? To really measure XA performance you've really got to ensure stuff is sync'd to disk properly at each stage of the XA protocol. If you miss that step out, XA should be crazy fast, since it's just a few method calls that don't do very much :) James, LogicBlaze

    We definitely test whether XA is synced out. Positively so, unlike many other JTA implementations. No release goes out without recovery being in place and persistent.

    We even sync out compensation-related (and application-specific) information these days, to improve web service transaction site autonomy.

    Concerning Tomcat's performance: maybe it's slow, but I have seen many tests where it came out better than most other web containers. Of course, web containers are not JMS containers - I will be the first to admit :-)

    Best,
    Guy
  17. Duplicate detection is an efficient solution as well since it avoids all those slow sync-to-disks of XA and you can perform the duplicate detection if the JMS message has Message.getJMSRedelivered() returning true. i.e. for normal operation, there is no added performance hit - you only pay the hit of duplicate detection if a message is being redelivered. The downside is that you need to add application level duplicate detection - the upside is a massive performance boost.

    While I completely agree that XA is slow and sometimes quite dangerous (I remember Oracle blocking a whole table because of an XA transaction that wasn't committed), duplicate detection is not always an alternative.

    There are a few catches:

    1. The way it is proposed (roll back JMS when you can't update JDBC) will result in the JMS message being redelivered. But in 99% of cases the JDBC update will fail again (nothing has changed since the previous try, and the chances of failure are quite high anyway), and the JMS message will end up in the dead-letter queue. Unless you provide some applicative recovery for the DLQ, it won't work as expected.

    2. There are scenarios where JDBC and JMS are used on the opposite side of the queue - when messages are _sent_. In this scenario you shouldn't send the JMS message if JDBC wasn't updated properly. Without XA, you need to design your app so that JDBC gets updated first and _all_ JMS messages are sent afterwards. But if it is JMS that fails, then your JDBC is updated and the JMS messages are lost - unless, again, you provide some recovery mechanism at the level of your application. For example, you might consider something as simple as storing JMS messages in an intermediate (backup) store in the same JDBC database, with a separate thread sending them. Or some way to "re-generate" JMS messages from the last known JDBC state. Or just accept the risks and provide a manual recovery tool to the application administrators.
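
    The "store JMS messages in an intermediate backup store in the same JDBC database" idea from point 2 can be sketched as an in-memory simulation. All names are illustrative; the two lists stand in for two tables written in one local transaction, with a separate sender thread draining the outbox:

```java
import java.util.ArrayList;
import java.util.List;

class OutboxSketch {
    private final List<String> businessRows = new ArrayList<>();
    private final List<String> outbox = new ArrayList<>();

    // One local transaction: the business update and the pending message are
    // stored together, so they cannot diverge even if the process dies
    // immediately afterwards.
    public synchronized void updateAndEnqueue(String row, String message) {
        businessRows.add(row);
        outbox.add(message);
    }

    // A separate sender thread drains the outbox; if a send crashes, the
    // message is still in the table and is retried (at-least-once delivery).
    public synchronized List<String> drain() {
        List<String> toSend = new ArrayList<>(outbox);
        outbox.clear();
        return toSend;
    }
}
```

    Note the trade-off this buys: no XA, but the receiver may now see duplicates, which loops back to the duplicate-detection discussion above.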

    Anyway, there's no easy way around simultaneous consistent updates of heterogeneous datasources. If the cost of losing a JMS message is high, it must be XA, with all its negative sides. Otherwise you'll end up writing your own transaction coordinator :)
  18. When you don't need it[ Go to top ]

    Would you agree that in the majority of cases just a DAO would do?

    How often do you have 2 DBs? (or a DB and an MQ?)

    .V
  19. When you don't need it[ Go to top ]

    Well, using two DBs in a project may not be a very common scenario (although I do use 3 databases in my current J2EE project, and many large financial apps do), but I do believe using JMS and a database together is quite common. At least in the projects that use JMS, it's pretty common to have MDBs process the messages and make updates to the DB. I always knew that's a good candidate for XA. Thanks to James Strachan, now I know better.

    Bijan
  20. When you don't need it[ Go to top ]

    Well, using two DBs in a project may not be a very common scenario (although I do use 3 databases in my current J2EE project, and many large financial apps do), but I do believe using JMS and a database together is quite common. At least in the projects that use JMS, it's pretty common to have MDBs process the messages and make updates to the DB. I always knew that's a good candidate for XA. Thanks to James Strachan, now I know better. Bijan

    Agreed - lots of companies have many systems connected together using some MOM/JMS. Then each system typically has its own database; so there's a need for a process to consume/send JMS messages and perform work on a database in a reliable way - so the 1 DB and 1 JMS scenario is quite common.

    Of course, if you have a simple system all in a single database which does not communicate reliably with any other system, then sure, use a single DAO with JDBC transactions and that'll work fine.

    James
    LogicBlaze
  21. When you don't need it[ Go to top ]

    Would you agree that in the majority of cases just a DAO would do? How often do you have 2 DBs? (or a DB and an MQ?) .V

    The DAO will work if you only have one database (for instance, if you can decide on that kind of architecture).

    Bigger companies often end up having multiple data sources that are spread throughout the company at different locations/departments. In that case, integration of two or more databases is a fact of life.

    In addition, the adoption of SOA for intranet integration may very well invalidate the DAO as being sufficient, in favor of web service transactions. If the DAO is a service and the client is somewhere else, you may very well want transactions after all.

    Best,
    Guy
  22. Why not use EJBs??[ Go to top ]

    IMHO EJBs provide a very developer-friendly solution. Having used EJBs in my last project for updating a database, I feel EJBs provide an elegant solution to transaction management.
  23. Why not use EJBs??[ Go to top ]

    EJBs - which kind? To say "EJB" is like saying "universe". Stateful session, stateless session, entity, message-driven; bean-managed, container-managed.
    Good practices are always welcome, but please be more specific.
  24. Why not use EJBs??[ Go to top ]

    Using EJBs is certainly an option, but in my experience they have the following problems (at least when I use them):

    -The (pre-3.0) EJB programming model doesn't match sound domain-related OO modeling; in addition, there is too much plumbing code for my taste (of course, that is personal and subjective). I am sure I am not the first one to complain about this.

    -What's worse: EJB persistence (at least in the pre-3.0 editions) can be a pain to configure properly. The deployment descriptor doesn't specify everything, leaving many server-dependent and cumbersome mappings to be defined. Even the standard XML in the deployment descriptor can give server-specific parsing problems. If you use Spring, you get one XML format and one parser for all platforms. That is less error-prone. That single argument was enough to make me go for Spring.

    But if EJB works for you then that's great... In the end, everyone should use the tools that work for them.

    Best,
    Guy
  25. Why not use EJBs??[ Go to top ]

    I used stateless session beans for transaction management when updating 2 different databases, with failover recovery. Certainly with EJB 3.0 things will be a lot better. But transaction management with Spring is surely worth trying.

    Are we ever going to have one best option for everything in the years to come... for persistence, transaction management, and other things?
  26. Why not use EJBs??[ Go to top ]

    EJB session beans (stateless and stateful) provide a good programming model for transactional boundary control and packaging of business logic.

    As someone mentioned before, it is important to be clear on which type of "EJB" one is referring to... I've been amazed at the frequency with which "EJB" is used when people really mean entity beans. Fortunately, the EJB 3.0 spec makes this separation much clearer than in the past -- for Java EE 5 the persistence API was actually moved into a spec of its own: Java Persistence API version 1.0, or JPA.

    The EJB 3.0 spec gives stateless and stateful session beans the POJO and dependency injection features that have made Spring popular, while providing a standards-based solution. Likewise, the JPA 1.0 spec gives entity beans the POJO and mapping features that have made Hibernate and TopLink popular, while providing a standards-based solution.
  27. More information on JTA use cases[ Go to top ]

    Hi,

    First of all: thank you all for the feedback!

    I have had so many email requests for more information on the JTA use cases that I have prepared some entries in our FAQ; see the entries on Transactions and Reliable JMS for more.

    Best,
    Guy
  28. More information on JTA use cases[ Go to top ]

    If you store the message number somewhere in your database, you can easily avoid duplicate message processing: when the number in an incoming message is already in your database, you can discard the message, because it was certainly processed before.
    A bigger problem is duplicate responses, because you cannot tell whether the response was sent before the crash. But you can use the same message number inside the response; to the receiver it will look like a duplicate message. However, you then also have to store the response message in the database, in the same TX in which the incoming message was processed.
    The golden rule is: duplicates in messaging should be avoided by application code, without TX.
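
    A minimal sketch of the message-number scheme above, including the stored response. All names are hypothetical; a real system would keep the map in a database table written in the same TX as the business update:

```java
import java.util.HashMap;
import java.util.Map;

class RequestDeduplicator {
    // Maps message number -> stored response. In a real system this is a
    // database table, written in the same transaction as the business work.
    private final Map<Long, String> responses = new HashMap<>();

    public String handle(long messageNumber) {
        String previous = responses.get(messageNumber);
        if (previous != null) {
            return previous; // duplicate request: replay the stored response
        }
        String response = doWork(messageNumber);
        responses.put(messageNumber, response); // stored with the work itself
        return response;
    }

    // Placeholder for the real business processing.
    private String doWork(long n) {
        return "processed-" + n;
    }
}
```

    Storing the response with the work is what handles the duplicate-response problem: the receiver of a replayed response sees the same bytes either way.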
  29. Duplicate elimination by application[ Go to top ]

    The golden rule is: duplicates in messaging should be avoided by application code, without TX.

    Hi,

    Thanks, I like your point about duplicate responses.

    Putting duplicate detection in the application is possible, but it is easy to make mistakes. In the end, everything JTA does could also be hard-coded into each application, but then you lose the advantage of having a tested and reusable system that takes care of things for you. A transaction manager is like an insurance on your data correctness. Whether you want to pay for one depends on how much you have to lose if things go wrong, and how high that risk is...

    Guy
  30. Another scenario[ Go to top ]

    I am not sure if this is transaction related, but I ran into lots of problems related to the following:

    Thread 1:
    1) Starts a new XA Transaction
    2) Inserts a record into DB
    3) Sends a JMS message
    4) Commits the XA transaction

    Thread 2:
    1) Receives message from the above
    2) Queries the db for the record just inserted

    In the second thread, there are times when the message is received before the new DB record can be retrieved (i.e. JDBC returns no record), even though the JTA transaction manager has already committed in thread 1. I am using WebLogic 8.1 JMS and Oracle 9i with XA transactions enabled.

    What I do is implement retry logic and continue to query the DB in the second thread until the record is found.
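
    That retry could be sketched with a hypothetical helper like this (not a WebLogic API; the Supplier stands in for the JDBC query, returning null while the row is not yet visible):

```java
import java.util.function.Supplier;

class QueryRetry {
    // Polls the query until it returns a row or the attempts run out - a
    // stand-in for "keep querying the DB until the record from thread 1
    // shows up".
    public static <T> T retry(Supplier<T> query, int maxAttempts, long sleepMs) {
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            T result = query.get();
            if (result != null) {
                return result; // the committed row finally became visible
            }
            try {
                Thread.sleep(sleepMs); // give the DB commit time to land
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return null;
            }
        }
        return null; // caller decides: give up, alert, or dead-letter
    }
}
```

    Bounding the attempts matters: if the row never appears (e.g. the DB branch rolled back), the consumer must eventually give up rather than spin forever.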

    Has anyone run into a similar problem, and how did you solve it?

    Vincent
  31. Re: Another scenario[ Go to top ]

    What you are running into is a race condition in the 2nd phase of the 2PC. WLS can send the 2nd-phase commit messages in parallel, and often, especially if you're using a JMS file store, JMS commits much faster while the DB is still chugging along. The MDB finds the message a bit too soon, as your new record hasn't shown up in the DB yet.

    Other than the retry you're doing, there is not really much of an option, given that this is an insert. This was a very common scenario in the internal BPM implementation in WebLogic Integration 7.0, and retry was the only option.

    If this were a data update, you could have had the 2nd thread's MDB issue a read-for-update by turning on a special flag in the WLS deployment descriptor.

    Apparently there are TMs which allow you to impose ordering on the commits...
  32. Re: Another scenario[ Go to top ]

    I know of a couple of other options to avoid the race condition.

    WebLogic 9.0 provides a JDBC option that safely avoids XA protocol overhead with the database and also happens to guarantee that the database's portion of the transaction is always committed first. The Logging Last Resource option (http://e-docs.bea.com/wls/docs90/jta/llr.html) works with non-XA drivers and generally improves performance of most transactional database applications. It is fully ACID -- unlike other commonly available transaction optimization schemes (such as the last participant optimization).

    In addition, I think that some app server vendors (not WebLogic) provide the option of transparently combining JMS message persistence to a database and application SQL into single local transactions. WebSphere in particular has the option, although I'm given to understand it is restricted to use with entity beans.

    Tom Barnes
    WebLogic Performance Team, BEA
  33. Another scenario[ Go to top ]

    The problem is that you are assuming the database is committed before JMS when committing the JTA global transaction; I don't think it's valid to assume a specific ordering of XA resource commits within a global transaction. Instead of thread 1 inserting a record into the DB to be read by thread 2 when it is dispatched by the JMS message, you could pass the record in the JMS message to thread 2. Thread 2 could insert the record into the DB if it needs to be persisted.
  34. Another scenario[ Go to top ]

    Thread 1: 1) Starts a new XA Transaction 2) Inserts a record into DB 3) Sends a JMS message 4) Commits the XA transaction. Thread 2: 1) Receives message from the above 2)

    It just means your JMS session doesn't work in XA mode, or isn't transacted at all.
    In XA mode, your thread 2 just can't get anything from JMS until the XA transaction is committed by thread 1.
  35. What I don't like too much is the fact that Spring forces me to build a lot of code *around* the basic EJB model to make applications run outside an application server. That's not what I would call "easy to use". Check out the CUBA project at SourceForge (http://cuba.sourceforge.net), which allows you to create components that can run in managed and unmanaged environments without the need to code anything around them. And of course including CMT!