Discussions

News: SPECjAppServer2001 (formerly ECperf 1.1) Launched

  1. The benchmark formerly known as ECperf 1.1 has just been released as SPECjAppServer2001, part of www.spec.org. It is a modified version of the ECperf 1.1 benchmark spec and toolkit, changed to comply with SPEC run and reporting rules. All future SPECjAppServer2001 results will be announced on TheServerSide but posted on SPEC's site.

    The new SPEC results will also include a price/performance metric, like ECperf did.

    Check out the SPECjAppServer2001 homepage.

    See the original ECperf homepage on TheServerSide.com.

    Read the Press Release.

    Developers wishing to use ECperf for their own tests, free of charge, can still get ECperf 1.1 from here.

    Threaded Messages (11)

  2. Sun ONE app server

    I am eager to see benchmark results for the Sun ONE app server.

    When will Sun publish some results?

  3. Has SPEC made the test more meaningful? The bang-per-buck numbers were total fiction, based on "special" pricing for the various bits of software.

    Yes, Lord Ellison of Barking, I'm looking at you.
  4. The SPEC committee modified the pricing rules to require a minimum 3-year cost of ownership for all software.

    This will help significantly to address the imbalance that was seen in ECperf 1.0.
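    To make the 3-year rule concrete, here is a minimal sketch (the figures, the cost breakdown, and the "ops/sec" metric are invented for illustration; this is not SPEC's actual pricing formula) of how a minimum 3-year cost of ownership changes the price/performance number compared with one-year "special" pricing:

        // Hypothetical illustration of a 3-year cost-of-ownership pricing rule.
        // All figures and the "ops/sec" metric are assumptions, not SPEC data.
        public class PricePerformanceSketch {

            // Hardware is a one-time cost; software is licence plus yearly
            // support for the required ownership period.
            static double totalCost(double hardware, double licence,
                                    double yearlySupport, int years) {
                return hardware + licence + yearlySupport * years;
            }

            public static void main(String[] args) {
                double throughput = 1000.0;   // benchmark ops/sec (made up)
                double oneYear   = totalCost(50000, 40000, 10000, 1);
                double threeYear = totalCost(50000, 40000, 10000, 3);
                System.out.printf("1-year $/ops/sec: %.2f%n", oneYear / throughput);
                System.out.printf("3-year $/ops/sec: %.2f%n", threeYear / throughput);
            }
        }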
  5. new name, same worthless test

    Since no one ever tests two app servers on the same hardware/OS/database, the results are utterly meaningless. This leaves us in the situation we already had: run your own tests if you want to know. None of the vendors are going to help you.
  6. new name, same worthless test

    I think this 'test' is hardly worthless.

    Price/performance or "# of app. server CPUs" can be very helpful for making reasonable comparisons. In the #CPUs case, you may want to 'adjust' results using some other CPU metric.

    If you are going to run your own tests, you could always consider running a 'standard' benchmark like ECperf 1.1 or SPECjAppServer2001.
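    As a rough sketch of the 'adjust results using some other CPU metric' idea (the throughput figures and the per-CPU index below are invented placeholders, not real benchmark results), one could normalize throughput per app-server CPU and then divide by a per-CPU performance index:

        // Sketch: normalize benchmark throughput by app-server CPU count, then
        // by a per-CPU performance index taken from some other benchmark.
        // All numbers are made up for illustration.
        public class PerCpuComparison {

            static double adjusted(double throughput, int cpus, double cpuIndex) {
                double perCpu = throughput / cpus;   // raw ops per app-server CPU
                return perCpu / cpuIndex;            // correct for CPU strength
            }

            public static void main(String[] args) {
                System.out.printf("Server A: %.1f adjusted ops/CPU%n",
                        adjusted(1200.0, 4, 1.0));
                System.out.printf("Server B: %.1f adjusted ops/CPU%n",
                        adjusted(1800.0, 8, 1.3));
            }
        }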
  7. new name, same worthless test

    How can price/performance possibly be useful when multiple parameters are varying in every test? There is no way to compare. You don't know whether a system got a good score because the app server is good, or because the app server is terrible but the hardware is cheap, etc. This is basic stuff, and unless they start doing tests with only one parameter varying, these numbers will always be meaningless.
  8. new name, same worthless test

    Hi Perrin,
    You're right about the "need" to deploy on the same OS/HW/DB when comparing different application servers. I believe academic and commercial research can help here, is already helping, and will continue to.

    Feel free to look at my thesis.

    Kind regards,
    Pieter Van Gorp.
  9. new name, same worthless test

    <snip>
    This is basic stuff, and unless they start doing tests with only one parameter varying, these numbers will always be meaningless.
    </snip>
    Price/perf is just one measure; overall performance is the more important one. While it is probably easier to interpret the results if the tests are performed on the same hardware, the key point is that this is a serious workload that represents a real-world application and a real-world load quite well.

    Given the strong support for SPECjAppServer2001 from all the vendors, it is a serious and credible tool for assessing performance. Any customer wanting to evaluate performance can either use the publicly available results or, using the information disclosed in them, re-run the benchmarks on a given set of hardware and assess for themselves.

    Cheers,
    Ramesh
    - Pramati
  10. new name, same worthless test

    If performance were more important than bang-per-buck, then we'd all be running our applications on Starcat clusters.

    The price-performance ratio is _the_ most important measure. But having it measured by the vendors themselves is never going to yield real-world numbers.
  11. new name, same worthless test

    Again, the best aspect of this benchmark is that it is very easy for anyone to set up and run. And if a vendor has made at least one submission, the optimal config/tuning for that vendor can be obtained from the submission disclosures. The serious participation from all leading vendors will ensure that submissions are available (on some platform).

    Using the above, anyone interested in a head-on comparison (for a purchasing decision) can very easily run one. This offsets the fact that not all vendors have submissions on a given platform.

    Strong support from all vendors is the biggest USP of this benchmark (in terms of utility to J2EE server users/developers/deployers).

    Cheers,
    Ramesh
  12. Are these guys planning to publish numbers for Tomcat, Jasper, and JBoss? In this economy a lot of IT shops are looking into open source; numbers and pricing for BEA and IBM would not help us.

    L8r.

    -M-