Oracle Rebundles AppServer Offerings with new Java Edition


  1. Oracle has announced Oracle9i Application Server Java Edition, a rebundling of all of Oracle's enterprise Java offerings in a single product at a $5,000 price point. Java Edition includes Oracle9iAS Containers for J2EE, the HTTP Server, the TopLink O/R mapper, the JDeveloper IDE, and Oracle Enterprise Manager.

    Read the press release.


  2. More Information:

    The new Oracle9iAS Java Edition combines all of Oracle's enterprise Java offerings in a single product: Oracle9iAS Containers for J2EE + HTTP Server + TopLink + JDeveloper + Oracle Enterprise Manager.
    More information on Java Edition

    Where can I find out more about the new "Switch and Save" program, which provides free migration from BEA WebLogic to Oracle9iAS Java Edition?
    Switch and Save program

    I am a developer. Where can I download free copies of Oracle products?
    http://otn.oracle.com

    Thanks,
    Sudhakar
  3. Great joy for all Oracle shops!
  4. What's new?

    What's new here?

    The only change I see here is the $5,000 price tag for Oracle's Orion application server; Orion itself is still less expensive.

    But now you get JDeveloper in the package; no thanks. Also TopLink: I'm one of those fortunate few who know how to use entity beans, so no thanks to that as well. And Oracle Enterprise Manager? Wow, what a deal. Sorry guys, one hour of using Oracle Enterprise Manager will make a grown man cry for hours.

    Larry, you are just a database company, and every other product you peddle just creates more solid evidence that you are just a database company.

    I'm just sick and tired of Oracle advertising their software as the best without making it work first.

    Am I the only person who can see this? Is someone out there using the above-mentioned tools in a "REAL" high-intensity environment? Something like over 200 entity beans with session bean facades producing JDOM for XSLT processing.

    I have an open mind, so please tell me how Oracle really has value outside of the database?
  5. What's new?

    I'm about to embark on an OC4J project. I'll let you know how it goes. I'm doing everything the way Orion is supposed to be used (not using any bizarre Oracle recommendations), so I don't expect many more problems than with a standard Tomcat or JBoss deployment.

    Two years ago I would have killed for TopLink. Now I use Hibernate, and I'll never need to deal with another O/R tool (or even CMP) ever again.
  6. What's new?

    This is very good news. Now we perhaps have more flexibility. In the present state of the economy you never know how the small J2EE vendors, who may have a price advantage, will sustain themselves and survive the test of time.

    We were evaluating Pramati and JRun. Oracle's new bundling and pricing seems attractive, although JRun is much cheaper. Another thing: only the large vendors and very few small J2EE vendors have good local support offices/teams in Asia and Europe for responsive support.

    --Rob
  7. What's new?

    Oracle "non-database products" lead all ECperf benchmarks (and wait for the latest SPECjAppServer2002 results :-)), and that includes best price/BBops. I challenge you to find any J2EE implementation that gives you a two-tier cluster, a persistence framework as solid as TopLink, and all the high-availability features included with Oracle Process Management and Notification (like restarting dead processes, including the HTTP server), all for that price (I hate to sound so commercial, but... it is just the truth :-)). Plus... a development tool with support for web services, profiling, one-click deployment, UML class modeling, Struts integration and... I don't have enough space...

    That's value (objectively :-)) outside of the database.

    As for the references you ask about, I would never recommend using entity beans with a session facade for XSL processing (ever heard of something called the XML database? :-)). Even if I didn't have an XML database, I wouldn't use a session facade with entities to generate XSL transformations... That's maybe why you will not get many answers there :-) I can tell you about an entity bean application doing over 100,000 tx per hour (already up and running), if that's serious enough. An all-Oracle-non-database-products implementation, by the way :-)

    One more and last thing: OC4J is not Orion. They are quite different products: Oracle simply licensed the Orion source code two years ago, and nowadays they are very, very different: the JMS implementation, the cluster implementation, the LI implementation... Is Orion auto-restartable in case of a crash? :-)

    Hope it clarifies things to ya...
  8. What's new?

    Fermin,

    "One more and last thing: OC4J is not Orion. They are quite different products: Oracle simply licensed Orion source code 2 years ago, nowadays they are very very different: jms implementation, cluster implementation, LI implementation ... is Orion auto-restartable in case of crash? :-)"

    I think you are mixing up OC4J and 9iAS. OC4J *is* Orion 1.5.x with up to 10 new Oracle classes. 9iAS is the framework that adds the functionality you're talking about, like clusters. I think the auto-restart is OPMN, not OC4J.

    And honestly, I would recommend that you not blindly believe ECperf results. Do your own benchmarks in your own environment and you will probably get different results. I did.

    9iAS is a nice package, though, and comes with many nice products and features, but I would not really say performance is its best asset, and I would not say it is bug-free either :). 100,000 tx/h with an entity bean-based application... This is what I get with iAS too (150,000). But on another application server, I get 2,600,000 tx/h on the same machine with the same CMP-based application, and it can support 10 times more simultaneous users.

    But I would not use CMP in production anyway... :)

                    Yann
  9. What's new?

    "I think you are mixing up OC4J and 9iAS. OC4J *is* Orion 1.5.x with up to 10 new Oracle classes. 9iAS is the framework that adds the functionality you're talking about, like clusters. I think the auto-restart is OPMN, not OC4J."

    The Java Edition we are talking about here includes OPMN, so that is what I am trying to compare: what is included in the product and what is not (and definitively: Orion doesn't include any shadow monitoring by different development groups for two years already). I could point out a couple of differences in almost every single J2EE API.

    "And honestly, I would recommend that you not blindly believe ECperf results. Do your own benchmarks in your own environment and you will probably get different results. I did."

    The fact is that I have run quite a few of those... :-) And I mean a few :-) Guess what the results were? :-)

    "But on another application server, I get 2,600,000 tx/h on the same machine with the same CMP-based application, and it can support 10 times more simultaneous users.

    But I would not use CMP in production anyway... :) "

    So you ran a benchmark and got 2,600,000 tx an hour???... Hmmm, interesting... And you didn't go to production??? Wow... even more interesting... :-)

    Of course Oracle 9iAS is not bug-free (I haven't found a product yet that has no bugs), but I can assure you that it is faster than the others. When someone shows me a single real test with the opposite results, under the same tuning conditions and a comparable configuration, I will change my mind. Meanwhile, the data I deal with and the benchmarks that I have run myself show me a different thing.

    As for the matter of this thread, which should be the subject of our discussion also :-), I was just pointing out that I would like anyone to show me a similar bundle of enterprise-level products for that price...

    Cheers
  10. What's new?

    Sorry all, I accidentally deleted a couple of lines there:

    Where I said:
    "(and definitively: Orion doesn't include any shadow monitoring by different development groups for two years already)"

    I meant, that

    "(and definitively: Orion doesn't include any shadow monitoring procs.
    OC4J and Orion have been modified by different development groups for two years already)"

    Once again: OC4J and Orion share the same code base (version 1.4, by the way, and not 1.5), but have evolved quite differently in two years.

    cheers again
  11. What's new?

    Fermin,

    "Once again: OC4J and Orion share the same code base (version 1.4, by the way, and not 1.5), but have evolved quite differently in two years."

    All right... I really thought that it was 1.5.x, as that was the current Orion version when Orion was bought out by Oracle (which also hired the Orion team for quite some time, I think). The only useful documentation I could find about CMP, proprietary EJB-QL and deployment descriptors (apart from Oracle support, which is very good by the way) was on the Orion website, which suggests that they are not that different, even after two years.

                    Yann
  12. What's new?

    Fermin,

    "So you ran a benchmark and got 2,600,000 tx an hour???... Hmmm, interesting... And you didn't go to production??? Wow... even more interesting... :-)"

    The reason is simple: the application that reached this performance was written only for a benchmark to compare application servers. I actually used the RUBiS benchmark application and seriously refactored it. These are the figures I truly get. And this is not in read-only mode, but with optimistic locking and a systematic check for data staleness (as there would be in a real-life environment). Also, without CMPs and using DAOs only, I get 4,500,000 tx/hour on the same machine. Finally, it is interesting to see that OC4J alone performs better than iAS (single instance) with Web Cache.
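    The optimistic locking with a staleness check mentioned here can be pictured with a small concept sketch in plain Java. The names below are hypothetical and this is not the actual benchmark code: a row carries a version number, and an update is rejected as stale when another commit happened after the read.

```java
// Concept sketch of version-based optimistic locking (hypothetical names).
class VersionedRow {
    int version;   // incremented on every successful update
    String data;
}

class OptimisticStore {
    private final VersionedRow row = new VersionedRow();

    // Hand back a snapshot; the caller remembers the version it read.
    synchronized VersionedRow read() {
        VersionedRow copy = new VersionedRow();
        copy.version = row.version;
        copy.data = row.data;
        return copy;
    }

    // Commit only if nobody else committed since the snapshot was taken.
    synchronized boolean update(int expectedVersion, String newData) {
        if (row.version != expectedVersion) {
            return false;   // stale data: caller must re-read and retry
        }
        row.data = newData;
        row.version++;
        return true;
    }
}
```

    In a database-backed application the same check is typically done in SQL with a guard on the version column (`UPDATE ... SET version = version + 1 WHERE id = ? AND version = ?`).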

    "The fact is that I have run quite a few of those... :-) And I mean a few :-) Guess what the results were? :-)"

    I don't know what your results were: I can only assume you know how to tune iAS but perhaps you are less skilled at tuning other application servers (no offence, this is perfectly normal). As far as I'm concerned, the benchmarks were conducted here with benchmarking experts from the companies that develop the application servers, and all I can say is that all of them are extremely skilled in that respect (in other words, they do benchmarks and tuning for a living). I believe the application servers were tuned to their best in all cases.

    "Of course Oracle 9iAS is not bug-free (I haven't found a product yet that has no bugs), but I can assure you that it is faster than the others. When someone shows me a single real test with the opposite results, under the same tuning conditions and a comparable configuration, I will change my mind."

    This is exactly what we have done here: same environment, detached network, same machines, unbiased conditions, bottleneck checking, tuning experts from the application server companies, as we are simply fed up with hype and ECperf results, which are not very useful in that they compare results obtained under totally different conditions. Of course, the applications used the same code (except for CMP, where dates are not supported in standard EJB-QL and where local interfaces cannot be looked up other than through EJB references, java:comp/ejb/... I had to tweak this a little) and were implemented using two different design patterns (DAO, and stateless SB with CMP). The objective was to have *real* benchmark results with a typical database-driven web application. We did our tests using single instances and clusters.

    9iAS and OC4J are extremely fast as far as deployment and cluster configuration are concerned (really impressive). For pure performance, though (throughput, tx/s, average response time and number of simultaneous users), 9iAS and OC4J are neither the fastest nor get the best figures in our tests, and I can assure you that we did our best to keep everything unbiased. We tested several different JDBC drivers, several JVMs and several configurations with and without Apache (one or many instances), Web Cache and OPMN (for iAS), over 5 days for each vendor. I know for a fact that our benchmark conditions were not skewed.

    I would be very interested in continuing this discussion with you in private. Please contact me on caroffy at hotmail dot com.

                    Yann
  13. What's new?

    Sorry for the delay in my answer, George and Yann. I had to run to the doctor's after seeing TSS bought by MS :-).

    ---"Also, without CMPs and using DAOs only, I get 4,500,000 tx/hour on the same machine. Finally, it is interesting to see that OC4J alone performs better than iAS (single instance) with Web Cache."

    Well, it is a quite well-known fact that a DAO implementation can be quite fast but will always be LESS DISTRIBUTABLE, SCALABLE, AND "TRANSACTIONALLY" CONTROLLABLE than CMP. And above all, it will require much more code from you. Trying to implement over 200 business objects over DAO can become a real development nightmare. If 9iAS with Web Cache is slower than just OC4J alone, it means that your application is much more transactional than read-and-show. Web Cache is applicable only in read-mostly applications or modules, and in those cases where you don't have much bandwidth to reach the app (in those cases you can use Web Cache's compression; for low-bandwidth access to apps I have seen over 30% improvement in performance from the compression alone).

    ----"I don't know what your results were: I can only assume you know how to tune iAS but perhaps you are less skilled at tuning other application servers (no offence, this is perfectly normal)."

    I tune Oracle 9iAS, and BEA's or Sun's guys tune the other app servers :-)

    ----"9iAS and OC4J are extremely fast as far as deployment and cluster configuration are concerned (really impressive). For pure performance, though (throughput, tx/s, average response time and number of simultaneous users), 9iAS and OC4J are neither the fastest nor get the best figures in our tests, and I can assure you that we did our best to keep everything unbiased."

    Well, if you want to, we can send each other the results we have outside of this thread. You can use this address: quiniq at yahoo dot com (I have already sent you an email from there). I can share with you quite a few different real-production benchmarked applications where Oracle 9iAS proved to be faster and, which is more important, more scalable.

    ---"You and Fermin answered my original question about the capabilities of OC4J 9.0.3 for high transactions. I would love to hear more about your knowledge of that XSLT application. Soon, I can tell you what kind of performance we're achieving."

    George, what database are you using? I mean, I have seen projects where people struggle to get things out of the database, then create business objects, then create XML objects, then use XSL processing, and then show something on the screen. Almost every single database allows you to process relational data directly as XML and perform the XSL transformation directly from SQL; no application-level development will go faster than that.
  14. What's new?

    Fermin,

    "Well, it is a quite well-known fact that a DAO implementation can be quite fast but will always be LESS DISTRIBUTABLE, SCALABLE, AND "TRANSACTIONALLY" CONTROLLABLE than CMP. And above all, it will require much more code from you. Trying to implement over 200 business objects over DAO can become a real development nightmare."

    I don't agree with your assertions. First of all, I would not put money on a "well-known fact" without making sure myself, hence the benchmark.

    Now, as far as truly distributed applications are concerned (meaning tiers are distributed across different machines and are accessible through an RPC protocol such as IIOP), this is almost never a requirement. We did our benchmarks here with a typical web application running atop a database, which represents 95% of our applications and which does not need to be distributed. Also, in distributed architectures, there is rarely a need to distribute the data layer; you usually end up distributing functionality above that data layer. Therefore, in order to distribute your DAO functionality if you really need to, just add a layer of stateless session beans and you get the same level of distribution as with CMP. By the way, it is bad practice to use CMP with remote interfaces (which is the only way to distribute CMPs, AFAIK). If you want to use declarative transactions, apply them to these session beans and again you reach the same level of transaction control.
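    The layering described here, plain DAOs with a thin stateless facade on top as the distribution and transaction boundary, can be sketched in a few lines. This is a minimal plain-Java sketch with hypothetical names; in a real EJB deployment, AccountFacade would be a stateless session bean and the container would wrap its methods in transactions declaratively (e.g. with the "Required" attribute):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical DAO interface: all persistence logic lives behind it.
interface AccountDao {
    double balance(String id);
    void setBalance(String id, double amount);
}

// Trivial in-memory stand-in for what would be a JDBC-backed implementation.
class InMemoryAccountDao implements AccountDao {
    private final Map<String, Double> table = new HashMap<>();
    public double balance(String id) { return table.getOrDefault(id, 0.0); }
    public void setBalance(String id, double amount) { table.put(id, amount); }
}

// The facade is the unit of distribution and transaction demarcation:
// clients call it, and it coordinates one or more DAO calls per use case.
class AccountFacade {
    private final AccountDao dao;
    AccountFacade(AccountDao dao) { this.dao = dao; }

    // In an EJB container this whole method would run in one transaction.
    public void transfer(String from, String to, double amount) {
        dao.setBalance(from, dao.balance(from) - amount);
        dao.setBalance(to, dao.balance(to) + amount);
    }
}
```

    The point of the design is that distribution and transactions attach to the facade, so the DAOs underneath stay plain, testable classes.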

    Scalability is a different beast. Upward scalability is the ability to keep performance levels acceptable with a greater number of users by upgrading hardware (through the use of clusters, for instance). Note that the results we achieved should be more than enough for, again, 95% of our applications, so we would not need to scale up. And in practice, all CMP-based applications I have seen so far resulted in extremely poor performance on huge servers (due to CMPs, as confirmed by various thread dumps taken while the servers were going slow).

    And programming DAOs for databases can be very fast, as SQL is a known standard, and they can easily benefit from proprietary database features (not always supported in proprietary EJB-QL). Entity beans are fast to work with using IDEs, but then I would tend to prefer object-relational mapping tools because they offer much better functionality. Honestly, any application with 200 business objects can become a nightmare with bad programming practices. In such a case, <advertising>use the Spring framework at http://sourceforge.net/projects/springframework</advertising>. :)

    "If 9iAS with Web Cache is slower than just OC4J alone, it means that your application is much more transactional than read-and-show."

    Odd. It is exactly the contrary in our situation... Yet OC4J proved to be very fast on its own.

                    Yann
  15. What's new?

    Regarding Web Cache, I agree with Fermin. Web Cache will markedly improve the performance of a dynamically generated page when the ratio of reads to writes on that page is greater than 1. It will also reduce the load on the rest of your infrastructure. The larger the ratio, the better the gain. Compression can also be used to improve response times for both cached and non-cached responses.

    But it all depends on the nature of the application...

    Yann, I would be interested in learning more about the application, architecture, and test methodology that you used to compare OC4J with Web Cache. If you want to discuss this with me offline, my email id is jesse dot anton at oracle dot com. BTW, I'm the product manager for Web Cache.
  16. What's new?

    Hey Jesse,

    I was just about to post something like "If the ratio of reads to writes is greater than 1 during a given time interval, then you benefit from using Web Cache. So even if the results are cached for only 15 or 30 seconds but the page is accessed more than once, there can be a significant benefit. If you want to get fancy, you can use partial-page caching to cache the portions of a page that change infrequently, and then set very short expirations or even no-store policies for the fragments that change often."

    :-)
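    The short-expiration idea is easy to picture with a toy time-to-live cache. This is a concept sketch in plain Java with hypothetical names, not Web Cache's API: a value is reused until its TTL elapses, after which the loader (here standing in for page rendering) runs again.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

// A toy TTL cache. Web Cache applies the same principle to whole pages
// or page fragments: serve the cached copy until it expires, then rebuild.
public class TtlCache<K, V> {
    private static final class Entry<V> {
        final V value;
        final long expiresAt;
        Entry(V value, long expiresAt) { this.value = value; this.expiresAt = expiresAt; }
    }

    private final Map<K, Entry<V>> entries = new ConcurrentHashMap<>();
    private final long ttlMillis;

    public TtlCache(long ttlMillis) { this.ttlMillis = ttlMillis; }

    public V get(K key, Supplier<V> loader) {
        long now = System.currentTimeMillis();
        Entry<V> e = entries.get(key);
        if (e == null || now >= e.expiresAt) {
            // Missing or expired: run the expensive loader and cache the result.
            e = new Entry<>(loader.get(), now + ttlMillis);
            entries.put(key, e);
        }
        return e.value;
    }
}
```

    Even a TTL as short as 15-30 seconds pays off as soon as a page is requested more than once per interval, which is the point being made above.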

    Yann, with Jesse here you now have the chance to get as much as possible out of Web Cache :-) Take advantage of it :-)
  17. What's new?

    Fermin,

    "As for the references you ask about, I would never recommend using entity beans with a session facade for XSL processing (ever heard of something called the XML database? :-))"

    True, I should have said highly transactional CMP with over 200 entity beans behind a session facade *or* something like XSLT processing with JDOM using the DAO pattern without entity beans. Of course, our (your) business requirements drive the appropriate solution.

    My J2EE project provides the common framework for different types of applications (Web, Windows clients, Unix, etc.). These applications all access a common database. The Web-based applications are expected to have 500 concurrent users performing CRUD and analytics. The class diagram has over 200 entities and growing; thus the case for entity beans is made. The entity beans act somewhat like database views, with common methods for all domains. The stateless session beans implement the specific use cases for each particular domain.

    So, you've pointed out the 100k tx per hour entity bean application with Oracle non-database products. Good for you...

    I forgot to mention I'm a Borland bigot :-)

    How are you using the entity beans? For instance, how are the entity beans accessed (facade, local, JMS, etc.), and do the entity beans reference each other?

    Also, where is the transaction managed, session bean or entity bean?

    Thank you, I appreciate the time you took to answer me.
  18. Okay, now that we've established that Orion and OC4J have a lot in common, what exactly is new here? Is this just a rebundling of the existing product line?
  19. What's new?

    George,

    "Is someone out there using the above-mentioned tools in a "REAL" high-intensity environment? Something like over 200 entity beans with session bean facades producing JDOM for XSLT processing."

    Your environment sounds pretty scary! :) Just a few questions then... What kind of machine do you use to support that design? And how many transactions/s do you achieve? How many simultaneous users do you support with an average response time of 3-5 seconds? Why did you choose this design in the first place?

    Cheers,

                    Yann
  20. What's new?

    Hello Yann,

    Actually, my previous reply was directed to you as well.

    "Your environment sounds pretty scary! :)"
    Yes, but I love it this way. No fear, I'm well aware of the dangers. I'm not mapping entity beans to database tables one to one. I'm not producing a single XML tag for inter-processing.

    "What kind of machine do you use to support that design?"

    A cluster of Sun servers: currently we have two V480s, two 450s and two E250s, all well equipped.

    "And how many transactions/s do you achieve?"

    Don't know yet for this task, but we use OpenSTA for load testing, so I'll be running the load test as part of the build process.

    "How many simultaneous users do you support with an average response time of 3-5 seconds?"

    The 3-5 second requirement is correct; we can't imagine more than 500 concurrent users at one time across all domains.

    "Why did you choose this design in the first place?"

    Earlier I mentioned using JDOM and entity beans to supply XSLT processing. I didn't mean to imply all EJBs with facades cranking out XSLT.

    However, I do like JDOM instead of Data Access Objects (DAO) for passing data between layers, and I prefer XSLT instead of JSP for Web pages.
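    As a small, self-contained illustration of XSLT as a view layer: the standard JAXP API (javax.xml.transform, shipped with the JDK) can apply a stylesheet to a document in a few lines. The XML and stylesheet below are made up for the example:

```java
import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

// Minimal XSLT-as-view example using the JDK's built-in JAXP API.
public class XsltDemo {
    // Apply an XSLT stylesheet to an XML document, both given as strings.
    public static String transform(String xml, String xslt) {
        try {
            Transformer t = TransformerFactory.newInstance()
                    .newTransformer(new StreamSource(new StringReader(xslt)));
            StringWriter out = new StringWriter();
            t.transform(new StreamSource(new StringReader(xml)), new StreamResult(out));
            return out.toString();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        String xml = "<user><name>George</name></user>";
        String xslt =
            "<xsl:stylesheet version='1.0' xmlns:xsl='http://www.w3.org/1999/XSL/Transform'>"
          + "<xsl:output method='text'/>"
          + "<xsl:template match='/user'>Hello, <xsl:value-of select='name'/></xsl:template>"
          + "</xsl:stylesheet>";
        System.out.println(transform(xml, xslt)); // prints: Hello, George
    }
}
```

    In a web tier the Source would typically wrap the JDOM (or DOM) tree produced by the business layer rather than a string, but the transformation call is the same.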

    My design approach is to provide an elegant plug-and-play architecture. For example, a requirement for a rules engine should be extensible to all aspects: applying a rule to a use case, a batch process or an external system should go through the same interface. XSLT and SOAP are everywhere, so imagine a core J2EE business engine serving all aspects.

    Another requirement of a "distributed" system is to provide for the unknown as well. This is what my requirements are telling me; perhaps other systems already know everything...
  21. OT: What's new?

    George,

    May I recommend "J2EE Design and Development" by Rod Johnson? I don't know whether it is still available, as it is a Wrox book, but it's probably worth it...

    With such a cluster, I think you will reach your performance objectives, but it sure sounds like an awful lot of overengineering, and a little bird tells me that you could perhaps meet your elegant componentisation objectives without resorting to entity beans and JDOM. But then, I don't know what your requirements are... I simply already know of such an environment in production (many entity beans + JDOM for inter-tier communication) and I can tell you that even though it is tuned properly and runs on a monster machine with gigs of memory, it can hardly reach the required performance objectives and pisses every user off. Just be careful with your design. :) But this is getting off-topic now.

    Good luck with your implementation. In addition to OpenSTA (which is a nice little OSS product), you should also use a profiler: that's a very useful tool in case of performance issues.

    If you haven't started implementation yet, may I also recommend that you implement a vertical slice of your application (from the XSLT presentation layer down to the entity beans) and do performance testing as early as possible?

    Good luck again.

                    Yann
  22. Expert One-on-One J2EE

    Expert One-on-One J2EE is still available. One way or another it will continue to be available even if the current print run (the second) is exhausted, as several other publishers are interested in picking it up.

    Regards,
    Rod Johnson
  23. What's new?

    Yann,

    "If you haven't started implementation yet, may I also recommend that you implement a vertical slice of your application (from the XSLT presentation layer down to the entity beans) and do performance testing as early as possible?"

    Exactly; performance testing is part of the development cycle. I can't count how many times I've heard of development teams trying to track down performance problems at the end of the project.

    I've looked at Rod's book (Borders Cafe reading, sorry Rod) and I'm tempted to buy yet another J2EE how-to book. XSLT is very important for our requirements and Rod's book just doesn't go there. I think Rod suggests staying away from XML processing if at all possible. But I recommend the book to anyone starting to work with J2EE.

    I've been developing large distributed systems for a long time. I remember being told the same things about geographically distributed Windows clients accessing centralized, very large database servers. Did it, long ago, with Delphi 1 for hundreds of users.

    My point:
    Understand the cost and limitations of what we are trying to achieve to satisfy the requirements. By cost, I don't necessarily mean just money; for example, what is the cost (processing, memory, performance, etc.) of XSLT functionality in terms of feasibility and benefits? Limitations would include experience, J2EE knowledge, resources, time and money. These are obvious things to consider, so why do so few people ever think about them?

    You and Fermin answered my original question about the capabilities of OC4J 9.0.3 for high transactions. I would love to hear more about your knowledge of that XSLT application. Soon, I can tell you what kind of performance we're achieving.

    I had to switch from Borland Enterprise Server to Oracle9iAS 9.0.2 about six months ago because of the client's standards, and that's why I was concerned.
  24. What's new?

    What one of our customers found was that the best way to do XML transformations was on the client, if the browser supported it. The reason is that transformation can use a lot of CPU cycles, and even increase the bandwidth utilization of the "pipe" (on both the client side, which is dial-up or DSL, and the data center side, which pays by the gigabyte).

    If that isn't possible, it's much better to put up a farm or cluster of servers doing the transformations and other expensive XML / data processing, instead of choking a few high-powered, very reliable IBM servers, for example. So at $3-4k per box, they could build a very effective cluster of dual-processor 1U boxes to do the "heavy chewing", and keep their Power4 servers for their transactional processing loads.

    The net result is much faster page delivery (a cheap P4 Xeon is much faster at XML transforms than a Power4, and easy to scale out) and a cost ($) per page that dropped by an order of magnitude, without losing the highly reliable infrastructure managing the "24x7" transactional part of the app.

    Peace,

    Cameron Purdy
    Tangosol, Inc.
    Coherence: Easily share live data across a cluster!
  25. What's new?

    Hi Cameron,

    Client-side processing of the XSLT would be sweet; we have a captive audience, so it is possible.

    We have clustering capabilities with 6 mid-range Sun servers and a bunch of decommissioned ProLiant 1 GHz dual-processors.
     
    I'll be testing Oracle's application server capabilities for a lot of the scenarios you mentioned.

    Perhaps separating the Web container from the EJB container, then processing the XSLT on the Web tier?

    I'd like to chat with you more on the subject. I'm in Hartford, CT; I know you're not that far away, so let me know when the next BEA user group meeting is.
  26. Expert One-on-One J2EE

    <quote>
    I've looked at Rod's book (Borders Cafe reading, sorry Rod) and I'm tempted to buy yet another J2EE how-to book. XSLT is very important for our requirements and Rod's book just doesn't go there. I think Rod suggests staying away from XML processing if at all possible. But I recommend the book to anyone starting to work with J2EE.
    </quote>

    It's not "yet another J2EE how-to" book: it questions a lot of the ideas that are trotted out in most J2EE books but don't really deliver.

    I don't cover XSLT in great detail, but I don't disparage XML processing in general. I suggest that if you can live with the performance hit, XSLT is an excellent view technology.

    Regards,
    Rod