Sonic Software Demonstrates "Massive Scalability"


  1. Sonic Software announced the publication of results from the OSS through Java(TM) Initiative (OSS/J) Massive Scalability focus group.

    They demonstrated a self-service telecommunications portal handling 16.2 million customer queries per hour and over 270,000 new order requests per hour.

    The J2EE/XML system used products such as:

    - Sonic XIS & SonicMQ
    - BEA WebLogic Enterprise Server
    - Sun ONE Application Server 7
    - IP Value's OSS/J-compliant service activation system, premioss

    Sonic now claims to "have proven that we are far past the days when scalability was an issue for Java and XML".

    Read the press release

    Download the scalability whitepaper

    Go to the OSS scalability home page

    Are we at a stage where Java is "fast enough"?

    Threaded Messages (14)

  2. No EJB[ Go to top ]

    I don't see any mention of EJB in the PDF. Looks like they didn't use it.
  3. MDB =[ Go to top ]

    Message Driven Bean.
  4. MDB, but nothing else[ Go to top ]

    Yes, I saw the use of MDB. But I meant there were no EJBs or session beans.
  5. MDB is an EJB[ Go to top ]

    Message-driven beans are a type of EJB, added in J2EE 1.3 (EJB 2.0).
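
    For reference, a minimal EJB 2.0-style MDB looks something like the sketch below. The class name and payload handling are made up for illustration; the whitepaper doesn't publish its bean code.

    import javax.ejb.MessageDrivenBean;
    import javax.ejb.MessageDrivenContext;
    import javax.jms.Message;
    import javax.jms.MessageListener;
    import javax.jms.TextMessage;

    // Minimal EJB 2.0-style message-driven bean. Unlike session or
    // entity beans it has no home or remote interface; the container
    // pulls messages off a JMS destination and invokes onMessage().
    public class OrderStatusMDB implements MessageDrivenBean, MessageListener {

        private MessageDrivenContext ctx;

        public void setMessageDrivenContext(MessageDrivenContext ctx) {
            this.ctx = ctx;
        }

        public void ejbCreate() {
        }

        public void ejbRemove() {
            ctx = null;
        }

        public void onMessage(Message msg) {
            try {
                if (msg instanceof TextMessage) {
                    String body = ((TextMessage) msg).getText();
                    // handle the order/query payload here
                }
            } catch (Exception e) {
                // with container-managed transactions, force redelivery
                ctx.setRollbackOnly();
            }
        }
    }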
  6. No RDBMS either[ Go to top ]

    The "application" in this exercise appears to have been largely mocked up. They describe their use of (interchangeably) ObjectStore and SonicXIS to cache data and prevent round trips to the "data cluster", but there is no mention anywhere of an actual RDBMS. I understand the limitations that Oracle and others place on third-party publication of benchmark results, but they could at least still indicate the use of "a major enterprise RDBMS" in the appendix.

    The application here seems to exist solely to generate calls to the service activation tier, simulating a volume of OSS/J traffic (and, in the process, highlighting the scalability of the BEA and Sonic offerings). An actual application would have required far more hardware in the servlet and data cluster tiers than was actually used here.
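
    Just to make concrete what "cache data and prevent round trips" amounts to, here is a rough read-through cache sketch in plain Java. The names and the CustomerStore interface are hypothetical stand-ins for the XIS/ObjectStore access the paper describes; it publishes no code.

    import java.util.Collections;
    import java.util.HashMap;
    import java.util.Map;

    // Rough read-through cache: repeat queries are served from memory,
    // and only a miss costs a round trip to the backing store.
    public class CustomerCache {

        public interface CustomerStore {
            String load(String phoneNumber); // e.g. an XML customer document
        }

        private final Map cache = Collections.synchronizedMap(new HashMap());
        private final CustomerStore store;

        public CustomerCache(CustomerStore store) {
            this.store = store;
        }

        public String getCustomer(String phoneNumber) {
            String doc = (String) cache.get(phoneNumber);
            if (doc == null) {
                doc = store.load(phoneNumber); // round trip only on a miss
                cache.put(phoneNumber, doc);
            }
            return doc;
        }
    }

    With a read-mostly workload like the benchmark's status queries, nearly everything is a cache hit, which is exactly why the numbers can look so good without an RDBMS in the picture.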
  7. Disk sync off or what?[ Go to top ]

    This tx rate stinks of disk syncs being turned off -- and yes, they state that they used SonicMQ in its default config (sync off) with no changes.
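
    For context, the sync setting isn't something a JMS client controls. A producer can only ask for PERSISTENT delivery; whether the broker actually forces each persistent message to disk before acknowledging is broker-side configuration. A sketch of the client side (class and method names are illustrative):

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.DeliveryMode;
    import javax.jms.MessageProducer;
    import javax.jms.Queue;
    import javax.jms.Session;

    // The client requests PERSISTENT delivery, but whether the broker
    // syncs each persistent message to disk is a broker setting
    // (off by default in SonicMQ, as noted above).
    public class OrderSender {

        public static void send(ConnectionFactory cf, Queue orders, String xml) throws Exception {
            Connection conn = cf.createConnection();
            try {
                Session session = conn.createSession(true, Session.SESSION_TRANSACTED);
                MessageProducer producer = session.createProducer(orders);
                producer.setDeliveryMode(DeliveryMode.PERSISTENT);
                producer.send(session.createTextMessage(xml));
                session.commit(); // a sync-on-commit broker pays the disk cost here
            } finally {
                conn.close();
            }
        }
    }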

    Mike Spille, your turn... ;-)

    -- Andreas
  8. Disk sync off or what?[ Go to top ]

    After skimming through this, it seems the entire point of the exercise is to show scaling of J2EE components. Things like failover/recovery are purposely glossed over. From the scope section (1.1):

    \1.1 Scope:\
    Produced deployment strategies and best practices concentrate on addressing scalability and performance issues within a typical OSS/J environment only. Issues such as high availability, while supported by the techniques described within this paper, are outside the scope of this work.
    \1.1 Scope\

    Further, in discussing requirements:

    \Business Application Environment\
    · A constant stream of orders must be processed at all costs. That is, a service provider needs to know that it can provision subscribers at a predictable rate and with a predictable response time.
    · The systems must also accommodate a spike in activity without affecting the constant stream of orders. Trouble tickets raised through violations in the network will affect a subset of customers, affecting either orders in progress or services that are already provisioned. This has the effect of aggressively increasing the number of affected customer queries on trouble ticket or order status, therefore introducing an unpredictable spike of activity into both systems.
    \Business Application Environment\

    And finally, storage is keyed by the customer's area code (?):

    \4.4.2 Data Partitioning\
    To remove as much contention as possible the goal again is to have a virtual customer store that is physically distributed across a number of disks. Customer information is segmented and partitioned across the available physical disks by a scheme that makes sense to the data set.
    In our case customers are segmented into groups based on their area code. The unique reference to a customer is his/her phone number, and the first 3 digits denote the customer's area code, the next 3 digits denote the exchange within the area code that the customer is associated with, and the remaining digits are unique to the customer within the exchange.

    [...]

    Servlet requests to the customer store first access a routing table that maps area codes (representing physical data stores) to disks attached to specific machines. The servlet requests are then routed to the appropriate machine/store and the customer's details retrieved as described above.

    \4.4.2\

    Note the last piece - they're using the area code to find out the "physical data store" where the data is located!
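
    In code, the scheme they describe boils down to something like this hypothetical sketch (the paper shows no source):

    import java.util.HashMap;
    import java.util.Map;

    // Sketch of the section 4.4.2 scheme: the first 3 digits of the
    // customer's phone number (the area code) pick the physical store.
    public class AreaCodeRouter {

        // routing table: area code -> machine/disk holding that partition
        private final Map routes = new HashMap();

        public void addRoute(String areaCode, String storeHost) {
            routes.put(areaCode, storeHost);
        }

        public String storeFor(String phoneNumber) {
            String digits = phoneNumber.replaceAll("\\D", ""); // strip formatting
            String areaCode = digits.substring(0, 3);
            String host = (String) routes.get(areaCode);
            if (host == null) {
                throw new IllegalArgumentException("no store for area code " + areaCode);
            }
            return host;
        }
    }

    Notice that every customer in a given area code lands on the same store, so a regional trouble-ticket spike of the kind their own requirements describe would hammer exactly one partition.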

    In general, Sonic et al. have gone all out to push performance as high as they can, with a very strong emphasis on scalability - and none on component failure scenarios, failover, and recovery.

    Anyone who had the requirements stated in that document's requirements section would be a fool to deploy this system verbatim. When you're concerned about such high transaction rates and scalability, and response under load is so critical, you _have_ to pay attention to how you're going to recover from internal failures just as much as to how well you're going to scale.

    To quote an old buddy of mine working on a system similar to this: "This sucker's blazingly fast when it works. Emphasis on 'when it works'...."

        -Mike
  9. OK, but bad generalization[ Go to top ]

    Just a note, Sonic in particular has addressed failure and recovery in their core components quite nicely.
  10. OK, but bad generalization[ Go to top ]

    \Chun\
    Just a note, Sonic in particular has addressed failure and recovery in their core components quite nicely.
    \Chun\

    I don't doubt that. But proper error handling, failover, and recovery tend to permeate all layers of an enterprise application, and need to be thought out just as carefully as design issues around scalability. And, inevitably, some tradeoffs are often required. The system shown here can be misleading because it only highlights efforts around scalability and performance, and doesn't show what compromises, tradeoffs, or other decisions might come out of adding high availability into the mix.

        -Mike
  11. No RDBMS?[ Go to top ]

    As far as I can tell, they are not using an RDBMS. It looks like ObjectStore and Sonic XIS are storing the data. Since all the reads are done from in-memory caches, this probably performs well.
  12. Next with DotNet[ Go to top ]

    I looked at the jobs section of Sonic's site. Requisition no. 346 for a Sr. Architect mentions the need for an engineer with MS skills, and talks about better integration of SonicMQ with MS products and .NET.
    http://careers.peopleclick.com/Client40_ProgressSoftware/BU1/External_Pages_Sonic/jobsearch.htm

    Will they next take up the same test with .NET and Win2k3?
    Some more acrimony to follow?
    It's very odd that benchmark results always seem too good to be real. One possible explanation is that most benchmark applications are unrealistic, as they only perform lightweight activities. Is there any way we can quantify the complexity (and thus the resource intensiveness) of an application?

    I was wondering whether it would be helpful if benchmarks also published the resource consumption of a single user on a low-configuration server. That might give some idea of the resource intensiveness of the application. Also, sometimes I think it would be better for benchmarks to show how many users they can run on 2- or 4-CPU machines rather than on E2Ks or E10Ks (I guess those are 64-CPU machines).

    A few thousand concurrent users doesn't seem to titillate anyone any more. That is, unless you are asked to do it for your own application with a very large database.

    regards,
    Vikram
  13. Go to www.tpc.org[ Go to top ]

    This seems to be the most representative benchmark, in that they tightly control reporting, and the transactions are designed to be a spread of lightweight calls, relatively heavyweight calls, and very heavyweight transactional calls that require deep server resources.

    tristan at webtec dot co.za
  14. Benchmarks are meaningless[ Go to top ]

    Period. They mean whatever the people who run them want them to mean. You can never compare them scientifically.

    Meanwhile, the rest of us get on with the real world...
  15. From an OSS/J perspective, Java/J2EE is not a goal in itself but a means to an end[ Go to top ]

    OSS/J makes some strong technology, architecture, and implementation choices. These choices are mostly driven by business criteria applied to the entire value chain involved in designing, implementing, deploying, operating, and maintaining OSS solutions.

    OSS/J then verifies these choices technically from several angles, such as the usefulness of the functional APIs, the ease of integration with legacy systems, the interchangeability of certified products, the manageability and security of deployed solutions, etc. To that end, OSS/J develops verification systems. These are always built from real products and real business scenarios and data, to ensure usefulness in real life. But OSS/J does not try to build one gigantic test case that addresses all the problems from all the different angles. That would be unrealistic and would probably not provide any valuable result on any particular aspect. Therefore OSS/J builds several different verification systems, each focusing on one aspect.

    The massive scalability white paper is the output of one of these verification systems, clearly focused on the ability of solutions built with OSS/J-certified products to handle massive deployments and predictable growth. This verification system does not pretend to do more than that, and moreover has been carefully designed to avoid addressing more than that.

    Therefore the purpose of the Massive Scalability effort is not to compare technologies, but to validate that OSS/J choices meet market requirements for performance and scalability. OSS/J is interested in the numbers (customers, transactions per second, etc.) and, just as importantly, in the ability to scale incrementally in a predictable manner.

    The scenario and products used are typical of this kind of telco application. In a real deployment there are obviously many surrounding tradeoffs for availability, security, etc. There are also other elements in the picture, including more legacy systems and human operators or manual tasks that could obviously slow down the process. Everybody is fully aware of that.

    The results summarized in the white paper demonstrate that the OSS/J APIs and the underlying J2EE platform do not introduce performance overhead or scalability bottlenecks, and that they provide a convenient infrastructure for end users to accurately predict resource requirements against deployment and development plans.

    OSS/J plans additional verification systems to further address scalability aspects with new business scenarios, as needed to identify best practices covering OSS market requirements.
    These will be announced at http://java.sun.com/products/oss, and new contributors are welcome.

    Thanks,

    Philippe Lalande
    Sun Microsystems, Inc.
    OSS through Java Initiative - Program Manager