I run into this problem all the time. A customer doesn't know how much bandwidth or how many CPU cycles he needs; instead he'll say something like "We expect the new application to get about X users using it about Y frequently, and the application has Z features. What hardware do we need to buy?" This is especially frustrating because all too often they want to order the hardware concurrently with our developing the software, so we can't build the app and _then_ measure what the needs will be.
Are there any rough rules of thumb for making estimates of this sort? At what point is a servlet-and-database application going to need changing to an EJB solution? At what point will my EJB solution need clustering? And so on.
1. Java apps are memory-hungry - we get 1 GB DDR RAM
2. Java apps are processor-hungry - we get a 2.0-2.5 GHz Pentium 4
3. Java apps are not disk-hungry - we get 2x20 GB HDs in RAID 0
4. Java server apps are not network-hungry - we get 2 100 Mbps network cards (1 internet / 1 database).
5. The server should be reliable - we get a 350 W power supply and a well-cooled case
6. Motherboard - any known to be reliable and fast.
Then pick any brand of your choice, supply those requirements, and go with the one that fits into the budget.
I'm not entirely sure that the above post is always applicable (e.g. what if the customer is an international bank trying to expose an online banking system: I don't think 1 GB of DDR RAM is going to hack it somehow, and they may well be reluctant to buy Intel-based processors).
I don't think there is a general function you can apply to obtain the figures you are looking for. The closest thing I know of is that there are functions for scaling from one processor to distributed machines, so if you can load test on a small-scale system first, you can then ramp up the numbers from there.
On the other hand, I guess that is one of the real advantages of J2EE. Assuming you write it to be distributed across machines, adding more hardware later is easy.
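To illustrate why the "write it to be distributed" advice works: if the middle tier holds no per-user state, any request can be sent to any node, so scaling out is just adding an entry to the balancer's node list. This is only a toy sketch of that idea (the class and method names are hypothetical, not any real J2EE API):

```java
import java.util.ArrayList;
import java.util.List;

// Toy illustration of stateless scale-out: because no node holds
// per-user state, a trivial round-robin balancer can send any
// request to any node, and adding capacity is just addNode().
class RoundRobinBalancer {
    private final List<String> nodes = new ArrayList<>();
    private int next = 0;

    void addNode(String host) {
        nodes.add(host);
    }

    // Pick the next node in rotation; correct only because the
    // application tier is stateless.
    String pick() {
        String node = nodes.get(next % nodes.size());
        next++;
        return node;
    }
}
```

A real cluster would do this in the load balancer or the app server's clustering layer, of course; the point is that statelessness is what makes "just add a node" possible.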
Performance requirements are rarely useful in my experience. They say, as mentioned above, that X users need to do Y things at the same time, but more often than not they don't say what "Y things" actually are, or whether that is the average load or the peak load, etc.
I think the only real thing you can do is prototype the parts you are expecting to cause the most performance problems and see how they perform and then scale from there.
That's good advice from both of you; thank you very much for the pointers.
Forgot to say that for a J2EE server you actually need at least 2 processors to get better scalability.
Quite interesting, but J2EE people more often prefer to deal with Sun equipment... :)
James, of course, everything depends on how fast your application is going to be. Anyway, unless you already have a very powerful Enterprise 10000 server at your disposal, you should develop your application with clustering in mind. That way, you will always have the option of adding more nodes to your cluster if its scalability limits are reached.
Very roughly, for an average-sized J2EE application that follows common J2EE design patterns, you'll need at least 1 MB of RAM and 5 MHz of Intel CPU speed for every concurrent user (for Sun CPUs it's usually 500 users per 400 MHz UltraSPARC II CPU, or 1000 users per 900 MHz UltraSPARC III). So, for example, if you have a requirement for 1000 concurrent users at once, you should have at least 1 GB of DDR RAM (2 GB is better) and 5 GHz of CPU power (which can be handled as 2x2.5 GHz Pentiums; if you can't increase the number of CPUs, add nodes to your cluster).
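The arithmetic in that paragraph can be written down directly. The per-user constants below are just the ballpark figures from the post (~1 MB RAM and ~5 MHz of Intel CPU per concurrent user), not measured values:

```java
// Back-of-the-envelope sizing using the rough per-user figures
// quoted above. These constants are ballpark assumptions, not
// measurements -- calibrate them against your own load tests.
class SizingEstimate {
    static final int RAM_MB_PER_USER = 1;  // ~1 MB RAM per concurrent user
    static final int CPU_MHZ_PER_USER = 5; // ~5 MHz Intel CPU per concurrent user

    static int ramMbNeeded(int concurrentUsers) {
        return concurrentUsers * RAM_MB_PER_USER;
    }

    static int cpuMhzNeeded(int concurrentUsers) {
        return concurrentUsers * CPU_MHZ_PER_USER;
    }

    // How many CPUs of a given clock speed cover the estimate
    // (rounded up); if the box maxes out, the remainder becomes
    // extra cluster nodes instead.
    static int cpusNeeded(int concurrentUsers, int mhzPerCpu) {
        return (cpuMhzNeeded(concurrentUsers) + mhzPerCpu - 1) / mhzPerCpu;
    }
}
```

For the worked example of 1000 concurrent users this reproduces the numbers above: ~1 GB of RAM and 5000 MHz of CPU, i.e. two 2.5 GHz Pentiums.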
Again, these are very rough approximations; you'll only get the optimal figures after testing your application. For that purpose I recommend implementing the main frameworks you intend to use and creating a simple application that realizes 1-3 simple use cases, in order to test it on the hardware.