Vignette and Trilogy, two heavyweight e-business solution providers, have chosen sides in the app server wars. Vignette allied itself with BEA and Trilogy with IBM. These alliances will affect the market shares of WebLogic and WebSphere, since solution providers deploy a large portion of sites.
The BEA press release states that Vignette will be "providing a best-of-breed suite of customer-driven Internet applications optimized for the BEA WebLogic platform".
The IBM press release states "Trilogy is continuing to work with IBM to develop version 3.0 of the Trilogy MultiChannel Commerce solution optimized for the soon-to-be-released version 4.0 WebSphere".
Read the IBM press release
Read the BEA press release
I think that as the software stacks become more and more proprietary, this trend will continue. Supporting multiple software stacks will be too expensive. This really puts the smaller J2EE vendors (Persistence, Sybase, iPlanet, Iona, etc.) at a disadvantage, as solution providers will want to bet on the market leaders, i.e. BEA or IBM.
Solution providers don't want to be in the middleware business. They want to build on someone else's middleware and focus on what gives their solution business value.
J2EE provides a basis for these software stacks, but it is not sufficient for building these complex solutions. Vendors are adding a lot of software on top of basic J2EE (personalization, process automation, B2B, etc.), and all of this extra software in the stack is proprietary. The myth that J2EE provides an open solution is only true for very simple systems; you will be locked in to a vendor if you take advantage of the extra software in the stack. J2EE vendors such as BEA, Sybase, Iona and, to a lesser extent, IBM are all building software on top of their J2EE servers but are not saying that you can run it on another vendor's J2EE server. I think that vendors such as Sybase and Iona should port their suites to run on other J2EE servers: 70% of the market doesn't use their J2EE servers, so trying to sell a software stack based on a different J2EE server will be a difficult proposition, given interoperability problems with the existing deployed 'company standard' application servers.
I think it's becoming clear that the main advantage of J2EE versus Microsoft is that it gives you a choice of platform. You can run it on Solaris, NT, Linux, etc., but cross-vendor portability is becoming elusive as the software stacks develop much faster than the J2EE standards can keep up.
I think you're right -- it takes a lot of care (that is, cost) to develop software which is vendor-neutral. It is, though, something which is pretty high-priority when developing.
One thing I've done on my J2EE projects is to be **very** wary of programming directly to a weblogic.* or com.ibm.* or other vendor-specific class or interface. I get the most comfort if there's a java.* or javax.* interface I can use instead; doing this at least tells me that I've got a hope of **compiling** my application under another vendor.
The next best thing is to wrap it in my own code, so I isolate my exposure to whatever BEA or IBM is up to. The main code will recompile, but I'm accepting responsibility for keeping my wrapper up to date when my vendor ships new versions or when I switch vendors. That doesn't fix the problem, but at least it limits the exposure.
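To make the wrapper idea concrete, here's a minimal sketch in plain Java. Every name in it is hypothetical (there is no real vendor class called VendorLogService; it stands in for something under weblogic.* or com.ibm.*). The point is that exactly one adapter class ever touches the vendor API, so switching vendors means rewriting that one class:

```java
// Hypothetical stand-in for a vendor-specific class (think weblogic.*
// or com.ibm.*) that we don't want spread throughout the code base.
class VendorLogService {
    String vendorLog(String msg) {
        return "[vendor] " + msg;
    }
}

// Our own narrow interface: application code depends only on this.
interface AppLogger {
    String log(String msg);
}

// The single place that touches the vendor class. Switching vendors
// means rewriting this one adapter, not the whole application.
class VendorLoggerAdapter implements AppLogger {
    private final VendorLogService delegate = new VendorLogService();
    public String log(String msg) {
        return delegate.vendorLog(msg);
    }
}

public class WrapperDemo {
    public static void main(String[] args) {
        AppLogger logger = new VendorLoggerAdapter();
        System.out.println(logger.log("deployment started")); // prints "[vendor] deployment started"
    }
}
```

The application only ever imports AppLogger, so the vendor dependency is confined to one file.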
Sun would be doing everyone a favor and keeping J2EE implementations vaguely clean-feeling if they would produce standard interfaces for the vendors to implement.
Why is Sybase a "smaller" J2EE vendor when the polls on your own site have twice shown that Sybase has the same number of votes as IBM? This is further borne out by the GIGA market share report showing Sybase within 1% of IBM's market share.
Internet Applications Division
Nobody should take the numbers on my site seriously. My focus on WebSphere definitely skews the numbers towards an IBM audience, and equally Sybase's lobbying of its developers through its newsletter to vote for Sybase helped inflate Sybase's vote count. Both of these influences may affect the mix of voters, but BEA still manages to come out on top regardless, and I think this is testament to their current domination of this market.
I listed Sybase as a smaller vendor based on previous "professional" reports on market share. Of course, it's hard to know what reports to believe as they seem to vary widely. The only thing that seems clear from the reports is that BEA is consistently the leader. As far as the number two and three vendors, I've no idea as the reports are not consistent. The only thing I will say is that when analysts ask BEA during earnings calls who their biggest competitor is that BEA consistently answer IBM. I think that has gotta say something about who the number 2 vendor really is.
My comment should in no way be perceived as a slight against EAServer, which I believe to be one of the stronger J2EE servers on the market. The fact that I think you should port Event Broker to other J2EE servers can only be taken as praise for the EB product, which is a very useful tool to have in the bag. But if the customer has standardized on a non-Sybase J2EE server, then I think it will be a difficult sell, which is a shame.
Good observations Billy.
"Component stacking" leads to no good and is always bad news for us users. Hopefully recent Sun developments like Sun ONE and JCA will promote interoperability rather than bulky software stacks.
Sun ONE makes things worse, as far as I can see. They are just playing me-too and building their own component stack on Forte and iPlanet, but I don't see where Sun has said that J2EE will accommodate the standards necessary to avoid this component-stacking lock-in.
Billy, I both agree and disagree with you. You're right that J2EE currently only covers the very basic applications, and if you want to develop an enterprise app that fulfills modern business requirements, you're probably going to have to use a thin layer of components on top of J2EE that are specific to each app server, or write your own. But remember that J2EE is a constantly evolving standard. As time goes on, I expect Personalization, Commerce, Fulfillment, etc. to become part of the standard platform APIs. And this means that the app server vendors will have to continue to innovate. They have to move fast or be crushed by the competition. That's what really differentiates this from the Microsoft stack. The open source implementations like JBoss will push the commercial vendors to continually add value or risk losing out to the free solution. Already, JBoss integrates SOAP, XML data binding, and other advanced features.
I don't see where you disagree. I said merely "that the app server vendors are adding functionality faster than the J2EE standard can keep up."
I didn't say that the J2EE standard wouldn't expand, but if customers buy into a vendor stack now, it's an expensive proposition for them to rewrite later when the APIs all change as they get standardized. This was OK in the past because the stack was small, but as the stacks get more complex this isn't feasible any more. How willing the vendors will be to rewrite is also a question worth asking.
This is leading to vendor lockin, plain and simple.
There is no vendor lock-in if vendors submit their APIs as JSRs that are later ratified through Sun's Java Community Process. That's exactly what the JDOM team proposes to do with JDOM once it's out of beta; JDOM will probably become a JSR feeding into JAXP 2. So there's a concrete example of how the process can incorporate "proprietary" APIs into the Java platform over time.
I disagreed with the statement "that the app server vendors are adding functionality faster than the J2EE standard can keep up", by the way, because I don't believe it's 100% accurate. I can think of numerous examples where the Sun spec is ahead of most vendors. How about the Servlet 2.3 APIs? The J2EE Connector Architecture? Full EJB 2.0 compliance? Congratulations to WebLogic, Orion and JBoss, by the way, for their lightning-fast adoption of new standards.
Another thing I don't understand about your post is why additional functionality has to come at the expense of vendor lock-in. At a fine-grained level, can't we (meaning vendors) just build JavaBeans and EJBs that do everything we need as we climb up the stack? New APIs and components can be layered on top of J2EE and be completely portable across app servers. If a new "official" API comes out that does the same thing, it should be a matter of changing a few interfaces around and recompiling, right? I don't see the vendor lock-in.
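Here's a rough sketch of what I mean, with entirely made-up names (ProfileStore, VendorProfileStore, etc. are illustrations, not real APIs). If the calling code only ever sees your own interface, adopting a future standard API is a matter of writing one new implementation class and recompiling:

```java
// Hypothetical portable layer: the app codes against this interface.
interface ProfileStore {
    String favouriteCategory(String userId);
}

// Today's implementation, backed (in a real system) by a vendor's
// proprietary personalization engine.
class VendorProfileStore implements ProfileStore {
    public String favouriteCategory(String userId) {
        return "books"; // would delegate to the vendor API here
    }
}

// Tomorrow's implementation, backed by a hypothetical standard API.
// Note that the calling code below does not change at all.
class StandardProfileStore implements ProfileStore {
    public String favouriteCategory(String userId) {
        return "books"; // would delegate to the standardized API instead
    }
}

public class PortableClient {
    static String recommend(ProfileStore store, String userId) {
        return "Recommended: " + store.favouriteCategory(userId);
    }
    public static void main(String[] args) {
        // Swapping implementations = changing this one constructor call.
        System.out.println(recommend(new VendorProfileStore(), "u42"));
    }
}
```

Whether real vendor stacks are actually this loosely coupled is, of course, the point under debate.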
I think the real question you're asking is, will vendors continue to stick to the J2EE standard as it evolves? I think they will. Why shouldn't they?
But adding support for the things you mention is all well and good, and can be and was done reasonably quickly. The other components from, say, BEA's suite (Process Integrator, Process Collaborator, Personalization, Commerce Server, etc.) can be more difficult. Other groups such as the CFM and the OMG have tried in the past and had limited success or adoption rates for these larger, more complex components.
This is where the lock-in is currently: J2EE vendors could, BUT ARE NOT (yet), cross-selling this stuff on other vendors' J2EE servers.
My original post said that I thought J2EE vendors SHOULD and probably COULD do this but were NOT, for whatever perceived commercial advantage.
As for the question "will vendors stick to J2EE?", there is no doubt in my mind that they will. That's not the 'real' question I was asking.
The 'real' question is simply this: given the size of the stack needed to implement these B2B/B2C/integration-portal type components, and given the reluctance of (mainly) the J2EE vendors to cross-port this layered J2EE middleware, the result is effective vendor lock-in.
This isn't necessarily a bad thing. So long as these component stacks use open standards for their external network interfaces to the corresponding stack from another vendor (and this looks like it is indeed the case), then the whole Web Services thing still works. Web Services, SOAP, UDDI, WSDL, ebXML and PKI are the standards that will make them interoperable. Everything from one vendor means less integration work, and you negotiate a single support contract.
But I still don't see why you disagree with my lock-in statement. If you deploy all of these BEA, IBM or Oracle components and integrate them into your backend legacy systems, then you won't be able to switch to a different vendor's stack without a big investment, because a lot of the integration interfaces to these components won't be standard.
There is a lot more to integrating these legacy systems than simply writing a JCA wrapper for them. The glue between JCA and the rest of the stack is where the majority of the work will be, if a JCA adapter is even available. Interactions between the stack and the JCA system will probably be controlled by workflow/process engines (again, outside the spec).
JCA adapters will need to be customised for a particular application server (the security implementation is probably the most obvious example here: it is outside the JCA spec, where the interfaces are specified but the details are left to the J2EE server and the JCA provider). This means that, depending on partnerships etc., you may not even get a JCA adapter for your backend system on a different J2EE server. Maybe you'll get JCA SDKs from backend vendors, allowing you or some consulting group to do the final customization for your J2EE server.
Now, the question of how much of this type of glue code and customization there is, is probably where we have our disagreement. Maybe I'm exaggerating the amount, maybe you're minimizing it, maybe it's really somewhere in the middle, maybe someone will figure out a way to generate all of it from a meta-model; who knows.
But I do think you may be too optimistic about simply changing a couple of interfaces and recompiling.
Just a quick clarification: the JCA security spec isn't always dependent on the J2EE server's security implementation. For simple security scenarios (container-provided or component-managed) it works like JDBC pools do today, but for more advanced security implementations, such as impersonation, some customisation will likely be needed.
Excuse me for not making this clear in the original comment.
Billy, I have to apologize for assuming that I knew the "real" question you were asking. That was rude of me.
I think you've hit the nail on the head. The exact point we disagree on is the amount of porting required. And I would argue that if you're a good software designer, the amount of porting is minimal. Let's break the problem down into two scenarios:
1) One J2EE vendor has a great non-standard API that does X really well, and no others have it. It's also not part of the current J2EE spec. No other J2EE vendors are including X in their stack, and customers are crying that they need X or they can't live another minute. Every engineer at this J2EE vendor gets rich beyond their wildest dreams selling X-powered app servers. Eventually, enough customers demand X, it gets incorporated into the J2EE standard, and suddenly it's a commodity. Everyone has to implement it or be labelled a non-compliant rogue. The only way the original vendor can continue to survive is to provide the absolute best, most cost-effective implementation of X and/or go on to develop Y. Ain't progress a beautiful thing? Looking up and using connection pools is a good example. How many Java app servers do you know that don't use JNDI as the interface to a connection pool? Not many. This used to be a proprietary feature that was implemented in a thousand different ways (and still is - the implementations didn't change much, I bet; only the APIs did).
2) Multiple J2EE vendors deliver functionality X to the market with vastly different APIs. There is no J2EE standard yet. A standard set of APIs evolves that does the same thing, or close to it, and Sun releases a reference implementation. All the J2EE vendors need to provide new interfaces that support the new standard or be labelled non-compliant rogues. Their back-end code doesn't change; only a new thin layer is added that hooks the new J2EE interfaces back into the existing code base. It is up to the app server vendor to make sure code written to their now-proprietary APIs is still supported going forward. But, assuming the old interfaces are still in place, this is a no-brainer. A good example of this scenario is the ATG Dynamo Nucleus concept. It was and still is a great paradigm, but they now fully support the J2EE paradigm as well. So the developer has a choice, for now. I expect this to change as some of the excellent concepts developed by the ATG folks get incorporated into the standard J2EE platform. As J2EE evolves, it feeds off the great ideas of its participants.
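Scenario 2 can be sketched in a few lines of plain Java. All the names here are hypothetical (LegacyCacheEngine and StandardCache are stand-ins, not real APIs): the vendor's existing back-end code doesn't change, a thin adapter hooks the new "standard" interface into the existing code base, and old callers keep working against the original API:

```java
import java.util.HashMap;
import java.util.Map;

// The vendor's existing back-end code with its proprietary API.
// In scenario 2, this class is NOT rewritten.
class LegacyCacheEngine {
    private final Map<String, String> data = new HashMap<>();
    void put(String key, String value) { data.put(key, value); }
    String fetch(String key) { return data.get(key); }
}

// A newly standardized interface (hypothetical) that every compliant
// vendor must now support.
interface StandardCache {
    void store(String key, String value);
    String lookup(String key);
}

// The thin compliance layer: pure delegation, no back-end changes.
class LegacyCacheAdapter implements StandardCache {
    private final LegacyCacheEngine engine;
    LegacyCacheAdapter(LegacyCacheEngine engine) { this.engine = engine; }
    public void store(String key, String value) { engine.put(key, value); }
    public String lookup(String key) { return engine.fetch(key); }
}

public class ComplianceDemo {
    public static void main(String[] args) {
        LegacyCacheEngine engine = new LegacyCacheEngine();
        engine.put("old-api", "still works");            // existing callers unaffected
        StandardCache cache = new LegacyCacheAdapter(engine);
        cache.store("new-api", "also works");            // new standard callers supported
        System.out.println(cache.lookup("old-api"));     // prints "still works"
    }
}
```

The open question from the thread is whether real vendor components are this cleanly layered, or whether the glue runs deeper.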
It all starts and ends with good design, in my opinion. As long as you have interface-based, loosely coupled, object-oriented systems, modifying them is generally easy. And from the code I've seen so far, I think that J2EE vendors generally design very modular, pluggable architectures. This is part of how they differentiate themselves. Notice how WebLogic supports most new J2EE standards before they even come out of Sun's beta process. I bet they have some competent architects working over there.
I think this process has worked, works now, and will continue to work in the future.