How to Improve J2EE Performance and Reliability

As performance and geographic reach requirements expand, it becomes increasingly difficult to scale Web site infrastructure. IT managers must continually revisit capacity plans to keep pace with expected peak demand, and planning must account for marketing promotions, news events, and other occurrences that inevitably add uncertainty. Errors in planning can result in overloads that crash sites or cause unacceptably slow response times, leading to lost revenue.
Pre-provisioning extra capacity as insurance against overload is financially unacceptable for most enterprises. Ideally, enterprises want the needed resources when, and only when, they are needed; they do not want to buy extra resources that sit idle the rest of the time. "On-demand" computing addresses this: it is a model in which computing resources are brought into service as needed, providing better utilization of those resources.
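As a back-of-the-envelope illustration of the utilization argument (not from the article; all numbers, and the class name OnDemandSketch, are invented), this toy Java sketch compares a pool sized permanently for peak demand against one sized per period:

```java
// Toy illustration: utilization of a fixed, peak-provisioned server pool
// versus an on-demand pool resized each period. Hypothetical numbers only.
public class OnDemandSketch {

    // Servers needed to cover 'demand' requests when each server handles 'perServer'.
    static int serversFor(int demand, int perServer) {
        return (demand + perServer - 1) / perServer; // ceiling division
    }

    // Fraction of provisioned capacity actually used over all periods.
    static double utilization(int[] demand, int[] servers, int perServer) {
        long used = 0, available = 0;
        for (int i = 0; i < demand.length; i++) {
            used += demand[i];
            available += (long) servers[i] * perServer;
        }
        return (double) used / available;
    }

    public static void main(String[] args) {
        int perServer = 100;                  // requests per server per period
        int[] demand = {100, 200, 1000, 150}; // hourly demand with one spike

        // Pre-provisioned: size for the peak, all the time.
        int peak = serversFor(1000, perServer);
        int[] fixedPool = {peak, peak, peak, peak};

        // On-demand: size each period for that period's demand.
        int[] onDemandPool = new int[demand.length];
        for (int i = 0; i < demand.length; i++) {
            onDemandPool[i] = serversFor(demand[i], perServer);
        }

        System.out.printf("fixed utilization:     %.1f%%%n",
                100 * utilization(demand, fixedPool, perServer));
        System.out.printf("on-demand utilization: %.1f%%%n",
                100 * utilization(demand, onDemandPool, perServer));
    }
}
```

With this made-up traffic shape the fixed pool sits mostly idle between spikes, while the on-demand pool stays close to full utilization, which is the gap the article's model is meant to close.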
Kieran Taylor of Akamai Technologies has written about the Akamai view of building for performance and reliability, joining the ongoing discussion about building "on-demand" applications and presentation components at the Edge.
- Posted by: Dion Almaer
- Posted on: July 16 2004 13:55 EDT
- How to Improve Akamai Sales by Cameron Purdy on July 16 2004 18:00 EDT
- How to Improve J2EE Performance and Reliability by Juozas Baliuka on July 19 2004 03:39 EDT
- Simplistic to say the least.... by Karl Banke on July 19 2004 04:17 EDT
Nice article, and Akamai stuff _is_ useful. I think the title should have been "how to handle stupid amounts of concurrent web requests without having to personally build your own Akamai." The answer is "use Akamai." ;-)
On the other hand, most of the sites that need Akamai already use it.
Coherence: Clustered JCache for Grid Computing!
I thought this was basic common sense? Basically, spread your web tier across your enterprise.
I am not too certain that the claim about UNIX servers being 10% utilised has any basis either.
It kind of feels like some words that sound kewl ('grid', 'distributed') have been bundled together and shoved randomly into an article.
As I understand from Figure 1, the bottlenecks are the "Web Servers" and the "Application Servers", so the best way to improve J2EE performance and reliability is to remove those bottlenecks.
Bottlenecks can be due to networks, databases, operating systems, web servers, and/or application servers: anything in the path from the client at tier 0 to the databases at tier 3 in a simple 3-tier architecture.
I was recently tuning a system and was assured that the network was fine. We could not reach our target performance, until some employees left the building for someone's birthday celebration at their local bar.
All of a sudden the performance of the system jumped and we met our target.
So it can be, and usually is, more than just the web servers and application servers that are to blame ;)
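To make the "measure before you blame a tier" point concrete, here is a minimal sketch (not from the thread; TierTimer, busyWait, and the tier names are invented) of per-tier timing instrumentation in Java. Real tuning would of course use a profiler or monitoring tool rather than hand-rolled timers:

```java
// Hypothetical sketch: record elapsed time per tier of a request path,
// then report which tier consumed the most time (the likely bottleneck).
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Supplier;

public class TierTimer {
    private final Map<String, Long> elapsedNanos = new LinkedHashMap<>();

    // Run a tier's work and accumulate how long it took.
    public <T> T time(String tier, Supplier<T> work) {
        long start = System.nanoTime();
        try {
            return work.get();
        } finally {
            elapsedNanos.merge(tier, System.nanoTime() - start, Long::sum);
        }
    }

    // Name of the tier with the largest accumulated time.
    public String slowestTier() {
        return elapsedNanos.entrySet().stream()
                .max(Map.Entry.comparingByValue())
                .map(Map.Entry::getKey)
                .orElse("none");
    }

    // Simulated work: spin for roughly the given number of milliseconds.
    static Object busyWait(long millis) {
        long end = System.nanoTime() + millis * 1_000_000L;
        while (System.nanoTime() < end) { /* spin */ }
        return null;
    }

    public static void main(String[] args) {
        TierTimer t = new TierTimer();
        t.time("web", () -> busyWait(5));
        t.time("app", () -> busyWait(10));
        t.time("db",  () -> busyWait(40)); // simulated slow database tier
        System.out.println("slowest tier: " + t.slowestTier());
    }
}
```

The point of the anecdote above is exactly this: without numbers per tier (including the network), the bottleneck can hide anywhere between tier 0 and tier 3.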
...well, sure, the Akamai stuff is good for certain things. But let's have a quick look at the other implications of the article. So we are supposed to use web services for backend communication? Not bad, but how will they scale? What about transaction requirements?
Not to mention the myriad interdependencies such an architecture will create for application deployment and versioning. And what about using shrink-wrapped products like IBM WebSphere Commerce Server, BEA WebLogic, Oracle, and so on?
Finally, can one justify re-engineering the overall architecture of a web layer just to allow Akamai to earn some more money?