Capacity planning for high availability - Need help


  1. Capacity planning for high availability - Need help (3 messages)

    I need to plan the hardware configuration for deploying our J2EE-based solution. I need to come up with a minimum configuration for the server machine (mainly for the application server). Does anyone have suggestions, or can you point me to some useful links on the net? Suggestions for good books on this topic would also be helpful.

    Currently I am playing with the following different parameters:

    1. Application requirements - Number of simultaneous users, number of simultaneous transactions, amount of data, probable growth factor for the next 1 year, 2 years (and maybe 5 years)

    2. Hardware configuration - Server machine, # of CPUs, memory, disk, etc.

    3. Database server configuration.

    I understand this is a very broad question, but any guidance would be greatly appreciated.

    Regards and Thanks in advance,
  2. It depends on which platform you are talking about - I mean the operating system, the database server, and the application server. If you can provide more details, that will help a bit.
  3. This is indeed a broad question, but here are some quick tips on how to proceed.

    First, capacity planning is simply the exercise of estimating the amount of hardware required to service a projected workload. In order to accomplish this objective, you need to know a few things:

    1) The nature of the workload. Can it be expressed in terms of concurrent users and the frequency with which they perform their operations? (For example, CSRs in a call center handling one customer account query every two minutes.) Note that in a web application, concurrent users is a fairly meaningless metric, since it indicates nothing about how many actual HTTP requests they issue per unit of time. Also, can the workload be boiled down to just a few primary functions (the old 80-20 rule: 80% of the load is a result of 20% of the functionality)?
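To make that concrete, here's a back-of-the-envelope sketch in Java (all numbers and names are hypothetical, not from any real system) that converts a user-based workload description into HTTP requests per second, which is the metric that actually matters for a web app:

```java
public class WorkloadEstimate {
    // Convert a user-centric workload description into HTTP requests/sec.
    // users: concurrent users; opsPerUserPerMinute: business operations
    // each user performs per minute; httpRequestsPerOp: HTTP requests
    // generated by one business operation (pages, images, AJAX calls...).
    public static double requestsPerSecond(int users,
                                           double opsPerUserPerMinute,
                                           double httpRequestsPerOp) {
        return users * opsPerUserPerMinute * httpRequestsPerOp / 60.0;
    }

    public static void main(String[] args) {
        // Hypothetical: 200 CSRs, one account query every 2 minutes,
        // each query driving roughly 5 HTTP requests.
        double rps = requestsPerSecond(200, 0.5, 5.0);
        System.out.printf("Estimated load: %.2f HTTP requests/sec%n", rps);
    }
}
```

The point of the exercise is that "200 concurrent users" by itself tells you almost nothing; 200 users at one query per two minutes is a very different load from 200 users clicking once a second.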

    2) The cost of the workload. How much memory/CPU/disk/IO is consumed to process a single unit of work in the workload? What platform did you measure this on (e.g., number and speed of CPUs, disk configuration, etc.)? You don't have to perform a load test to get an accurate measurement here: simply execute the given function repeatedly over a measured duration, divide the busy time (total # of CPU seconds consumed to process the work) by the clock time, and then divide that by the number of iterations tested. That gives you your estimate of cost per unit of work.
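A minimal sketch of that measurement loop, using the standard ThreadMXBean to read per-thread CPU time (the "unit of work" here is a hypothetical stand-in for your real function):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

public class CostPerUnit {
    // Hypothetical stand-in for the real unit of work being sized.
    static long doUnitOfWork() {
        long x = 0;
        for (int i = 0; i < 100_000; i++) x += i;
        return x;
    }

    // Returns estimated CPU seconds consumed per unit of work.
    public static double measure(int iterations) {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        long startCpu = mx.getCurrentThreadCpuTime(); // CPU nanos for this thread
        long sink = 0;
        for (int i = 0; i < iterations; i++) sink += doUnitOfWork();
        long busyNanos = mx.getCurrentThreadCpuTime() - startCpu;
        if (sink == 42) System.out.println(); // keep the JIT from eliding the loop
        return (busyNanos / 1e9) / iterations;
    }

    public static void main(String[] args) {
        System.out.printf("CPU cost per unit of work: %.6f s%n", measure(1_000));
    }
}
```

Note that CPU time is what matters for sizing, not wall-clock time, since wall-clock includes time the thread spends waiting on I/O or other processes.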

    3) The nature of the target platform. What's the CPU speed? What's the disk configuration (RAID, etc.)? You may have to play some what-if games here.

    If you have the answers for #1 and #2 above, you can calculate the amount of hardware needed based on the system used for measuring #2. If the target platform is different (e.g., faster CPUs, etc.), you'll need to do some extrapolation. If the platform is drastically different (say, the target is Sun but the test system is Wintel), you may need to refer to some third-party comparison between the processors, such as the SPECint benchmark. It's imperfect, but at least it provides some means to compare different CPUs.
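The extrapolation step is simple arithmetic; here's one way to sketch it (all inputs below - the request rate, measured CPU cost, SPECint-style speed ratio, and target utilization - are hypothetical illustrations):

```java
public class SizingEstimate {
    // requestsPerSecond: projected load (from #1).
    // cpuSecondsPerRequest: measured cost on the test box (from #2).
    // speedRatio: target CPU speed relative to the test box
    //   (e.g., from a SPECint comparison; 1.5 = 50% faster).
    // targetUtilization: headroom, e.g. 0.6 to keep CPUs under 60% busy.
    public static double cpusNeeded(double requestsPerSecond,
                                    double cpuSecondsPerRequest,
                                    double speedRatio,
                                    double targetUtilization) {
        return requestsPerSecond * cpuSecondsPerRequest
                / (speedRatio * targetUtilization);
    }

    public static void main(String[] args) {
        // Hypothetical: 8.3 req/s, 0.25 CPU-s per request on the test box,
        // target CPUs 1.5x faster, utilization capped at 60%.
        double cpus = cpusNeeded(8.3, 0.25, 1.5, 0.6);
        System.out.println("CPUs required (round up): " + Math.ceil(cpus));
    }
}
```

The utilization cap is important: a box sized to run at 100% CPU has no headroom for load spikes or growth, which defeats the "high availability" goal of the original question.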

    With regard to recommending software configurations (e.g., database, application server, etc.), that comes only after careful testing. There are recommended starting points published by most vendors in their documentation, but the optimal settings for tuning parameters are highly specific to the application and platform. Most applications will run well enough with the initial settings, so start with these for the initial sizing, but plan for a thorough performance test to prove it all before you go live!

    Hope that helps!

  4. Capacity planning books

    Look at these books: