What's the maximum number of sessions a Web server can handle?

Discussions: Performance and scalability

  1. Does anyone have experience with the maximum number of sessions a Web server/application server can handle without a noticeable impact on the performance of the service?

    Thanks
  2. Does anyone have experience with the maximum number of sessions a Web server/application server can handle without a noticeable impact on the performance of the service? Thanks

    We use WebSphere in a cluster with a lot of instances and a lot of machines, and we have not experienced any performance problems. Note that the session information is actually stored in a relational database, so it does not matter whether you have 2 sessions or 10 thousand sessions; they are merely rows in a database, and only a fraction are loaded into memory when they are needed.
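
    As a rough illustration of that model (the class and attribute names below are hypothetical, not taken from anyone's system): the only requirement on the application side is that what you put in the session is small and Serializable, so the container is free to write it to the session database and load it back on demand.

      import java.io.Serializable;

      // A small, Serializable value object that can live in the HTTP session.
      // Because it is Serializable, the application server can persist it to
      // the session database and load it back only when it is needed.
      public class UserProfile implements Serializable {
          private final String userId;       // hypothetical fields
          private final String displayName;

          public UserProfile(String userId, String displayName) {
              this.userId = userId;
              this.displayName = displayName;
          }

          public String getUserId()      { return userId; }
          public String getDisplayName() { return displayName; }
      }

      // In a servlet you would simply do:
      //   request.getSession().setAttribute("profile", new UserProfile("u123", "Jane Doe"));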


    Jose Ramon Huerga
    http://www.terra.es/personal/jrhuerga
  3. Thanks for the reply.

    The solution you mentioned is good for handling applications with a large user base where the number of online users is relatively low compared to the total number of registered users. It's also an easier solution for a cluster, because a cluster needs to share the session data across machines.

    But in a mission-critical application with a relatively high number of in-session users, the database round-trip would impose some overhead and affect performance to a certain degree.

    What I'd like to have are some real-life numbers to make an assessment of our plan.

    Thanks
  4. What about Mission-Critical Applications?

    Thanks for the reply. The solution you mentioned is good for handling applications with a large user base where the number of online users is relatively low compared to the total number of registered users. It's also an easier solution for a cluster, because a cluster needs to share the session data across machines.

    But in a mission-critical application with a relatively high number of in-session users, the database round-trip would impose some overhead and affect performance to a certain degree.

    What I'd like to have are some real-life numbers to make an assessment of our plan. Thanks

    It really depends on the level of service you are trying to achieve:

    - If your sessions are small and can be lost without undue impact on the users, then consider sticky load balancing with no session failover, and you can support an "infinite" number of sessions.

    - If your sessions are small and/or you only have a few concurrent users, then storing them in the database can work, and it usually does provide for failover of application servers (depending on the app server), but you are correct that it isn't very fast or scalable.

    - Some application servers have in-cluster session management built in. WebSphere 5 now supports it, for example; WebLogic has had it for many years. How well the built-in solutions work depends on a number of factors, including session size, number of sessions, amount of load, number of servers, etc.

    For high-performance, reliable, and very scalable session management in clusters, you should definitely take a look at our Coherence*Web product. We have customers using Coherence*Web to support hundreds of thousands of concurrent sessions.

    Peace,

    Cameron Purdy
    Tangosol, Inc.
    Coherence: Shared Memories for J2EE Clusters
  5. It depends...

    The maximum number of concurrent sessions depends on a number of factors, but the big one is memory. If you are asking about Apache serving static web content, then the number depends on the amount of memory on the server: Apache will start a new process or thread (depending on the version of Apache you run) for each session that gets started.

    If you are talking about an application server like WebSphere, then the number depends on the size of the session and of the JVM. Session size depends on how much user-specific data your application stores in memory while the user is logged in. The bigger the session, the larger the JVM must be. But the bigger the JVM, the longer the full garbage collection cycles will be, so you need to find a JVM size whose GC pauses you can live with. If that doesn't meet your target for concurrent sessions, then you need to cluster.

    Performance testing is the only way to find these limits, but if I were in the planning stages and had to estimate, I would say 300-500 concurrent sessions per JVM.
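
    A back-of-the-envelope way to turn those factors into a number; all figures below are hypothetical, not measurements:

      // Rough sizing sketch: how many concurrent sessions fit in one JVM,
      // given an average session size and the share of the heap you are
      // willing to spend on sessions. All numbers are hypothetical.
      public class SessionCapacityEstimate {
          public static void main(String[] args) {
              long   heapBytes       = 1024L * 1024 * 1024; // 1 GB heap
              double sessionShare    = 0.5;                 // half the heap for sessions
              long   avgSessionBytes = 1024L * 1024;        // 1 MB per session

              long maxSessions = (long) (heapBytes * sessionShare) / avgSessionBytes;
              System.out.println("Approx. concurrent sessions per JVM: " + maxSessions);
              // With these inputs: ~512 sessions, in the same ballpark as the
              // 300-500 per JVM suggested above.
          }
      }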

    Mike
  6. It depends...

    If you are really interested in gauging the size of a standard HTTP session, you could try the following:

    1) Serialize the HTTP session to a file or a database.

    2) Retrieve the size of the serialized HTTP session.

    3) If you used Java serialization for the session object, the size of the actual session is <= the size of the serialized object.

    Of course, that is if you are really interested.
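
    A minimal sketch of those steps in Java; the attribute map here stands in for a real HttpSession, whose attributes you would copy out via getAttributeNames():

      import java.io.ByteArrayOutputStream;
      import java.io.IOException;
      import java.io.ObjectOutputStream;
      import java.io.Serializable;
      import java.util.HashMap;

      // Serialize the session's attributes and report the byte count as a
      // rough estimate of the session's size.
      public class SessionSizeProbe {
          public static int serializedSize(Serializable obj) throws IOException {
              ByteArrayOutputStream bytes = new ByteArrayOutputStream();
              try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
                  out.writeObject(obj);
              }
              return bytes.size();
          }

          public static void main(String[] args) throws IOException {
              // Hypothetical session contents.
              HashMap<String, Serializable> attributes = new HashMap<>();
              attributes.put("userId", "u123");
              attributes.put("lastQuery", "invoice 2003");
              System.out.println("Serialized session size (bytes): " + serializedSize(attributes));
          }
      }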

    cheers,
    Raymond Tay
  7. But in a mission-critical application with a relatively high number of in-session users, the database round-trip would impose some overhead and affect performance to a certain degree.

    WebSphere 4 only uses the database as a backup of the in-memory HTTP session information, so if the JVM crashes, another JVM can resume the work. Also, if there is a large number of active HTTP sessions, the server can offload some of them to the database in order to free memory.

    If you are going to have a large number of live users and the application is mission-critical, then you will probably need a lot of JVMs to make sure the HTTP sessions always stay in memory.

    On the other hand, maybe the cost of restoring the information from the database is not as high as you expect. With typical sessions (without a lot of information), in our installation (Sun machines with 4 processors and Oracle as the database) we are able to read and restore the HTTP session information from the database in merely 2 or 3 milliseconds.
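
    If you want to observe that offloading from the application, the standard Servlet API has a listener for exactly that; a minimal sketch (the class name is mine, not part of any product):

      import java.io.Serializable;
      import javax.servlet.http.HttpSessionActivationListener;
      import javax.servlet.http.HttpSessionEvent;

      // Any Serializable attribute stored in the session can observe
      // passivation (being written out, e.g. to the session database) and
      // activation (being read back into a JVM, possibly a different one).
      public class PassivationAwareAttribute implements Serializable, HttpSessionActivationListener {

          public void sessionWillPassivate(HttpSessionEvent event) {
              // Called before the session is offloaded; release anything
              // that cannot or should not be serialized.
              System.out.println("Session " + event.getSession().getId() + " passivating");
          }

          public void sessionDidActivate(HttpSessionEvent event) {
              // Called after the session has been restored in a JVM.
              System.out.println("Session " + event.getSession().getId() + " activated");
          }
      }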


    Jose R.

    http://www.terra.es/personal/jrhuerga
  8. Max Sessions

    My figures are for a three-tier split:
    - SSL terminates at CSS Load balancer
    - Apache on Linux handles the plugins (BEA/Netegrity)
    - Cluster of BEA application servers with dedicated Oracle box.

    At 2,000+ users the four web server blades are 90%+ idle. We use Stronghold, which is Apache 1.3 based, and have had to run multiple instances per blade (four Apache instances per blade) to get the concurrency (250-user max in 1.3). Each blade is a dual PIII 1.4 GHz with 2 GB of memory.

    At 2,000+ users the BEA cluster is running at around 20-25% idle. We have 12 CPUs (1.3 GHz SPARC 64) across three servers with 8 GB each.

    At 2,000+ users the Oracle server is also at 25% idle. It has 4 CPUs and 8 GB, also 1.3 GHz SPARC 64. We have an EMC array and replicate to a standby using Data Guard.

    The transaction workload is generated by LoadRunner, and with a 45-second think time we are doing in excess of 60 pages per second (pages, not hits).
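
    As a sanity check, those figures line up reasonably well with Little's Law; a small sketch using the numbers from this post (and assuming, on my part, that response time is negligible next to the 45-second think time):

      // Little's Law: concurrent users ~= throughput x (think time + response time).
      public class LittlesLawCheck {
          public static void main(String[] args) {
              double pagesPerSecond = 60.0;   // reported throughput
              double thinkTimeSec   = 45.0;   // reported think time

              double impliedUsers = pagesPerSecond * thinkTimeSec;
              System.out.println("Implied users in the system: " + impliedUsers);
              // ~2,700, consistent with the reported "2,000+ users".
          }
      }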

    Regards
    Keith
  9. How do eBay and Amazon do it?

    We have about 100,000 registered users and about 5,000 concurrent users at peak. Our application is an archival front-end system. All the index information and objects are stored in an archive server with API access. We send queries to the archive server for index records and list the returned results for users so they can pick documents for viewing. Currently we use only servlets and JSP, no EJB at all. If we cache the query results, we make fewer accesses to the archive server and give users faster response times. But as the cache builds up to a certain size, it may have a negative impact on performance. I don't want to get blamed for introducing something that causes trouble.

    Each query returns up to 50 records with an average of 1,000 bytes per record. If each user has 5 queries cached, that takes 250 KB of memory per user, over 1 GB for all the users. Besides the memory, as the number of entries grows, the management of the data structure also adds overhead.
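
    If the worry is the cache growing without bound, one common approach is to cap it and evict the least recently used entries; a minimal sketch using LinkedHashMap (the key and value types are placeholders for your query string and result records):

      import java.util.LinkedHashMap;
      import java.util.List;
      import java.util.Map;

      // A size-bounded LRU cache for query results: once maxEntries is
      // reached, the least recently used entry is evicted, so memory use
      // stays predictable instead of growing with every cached query.
      public class QueryResultCache extends LinkedHashMap<String, List<byte[]>> {
          private final int maxEntries;

          public QueryResultCache(int maxEntries) {
              super(16, 0.75f, true);   // accessOrder = true gives LRU behavior
              this.maxEntries = maxEntries;
          }

          protected boolean removeEldestEntry(Map.Entry<String, List<byte[]>> eldest) {
              return size() > maxEntries;
          }
      }

      // Example: 5,000 users x 5 queries each = a 25,000-entry ceiling;
      // at ~50 KB per cached query that is roughly the 1+ GB figure above.
      //   QueryResultCache cache = new QueryResultCache(25000);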

    Another concern is the re-engineering of the project: I want to move it to EJB. There is no doubt that EJB adds more overhead and latency than just servlets and JSP, but servlets and JSP are hard to manage as the application grows. Again, I need some real numbers to assess the performance impact. I really appreciate all of your information and opinions.

    eBay and Amazon certainly have much larger user bases and higher transaction volumes. Does anybody know how they do it?
  10. How do eBay and Amazon do it?

    Hi Z,
    We have about 100,000 registered users and about 5,000 concurrent users at peak. Our application is an archival front-end system. All the index information and objects are stored in an archive server with API access. We send queries to the archive server for index records and list the returned results for users so they can pick documents for viewing. Currently we use only servlets and JSP, no EJB at all. If we cache the query results, we make fewer accesses to the archive server and give users faster response times. But as the cache builds up to a certain size, it may have a negative impact on performance. I don't want to get blamed for introducing something that causes trouble.

    When using Coherence's partitioned cache there is no limit to the amount of data that can be cached. The entire cache is partitioned across all of the nodes participating in the cluster, which alleviates the negative impact on performance. We have taken this one step further by providing the ability to "partially realize" the MFU/MRU data on each individual node, giving the application a greater chance of retrieving the data from the JVM's heap; this is what we call Near Cache technology.
    Each query returns up to 50 records with an average of 1,000 bytes per record. If each user has 5 queries cached, that takes 250 KB of memory per user, over 1 GB for all the users. Besides the memory, as the number of entries grows, the management of the data structure also adds overhead.

    By combining the partitioned cache, near cache, and distributed query technologies, you could effectively store all your data objects in Coherence _and_ be able to query that cache. This approach increases the scalable performance of your application because, thanks to the partitioning, as you increase the size of the dataset or the number of concurrent users you need only add more nodes to the cluster, thus increasing the overall processing power of the cluster.
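
    For what it's worth, here is a rough sketch of what using such a partitioned cache for the query results could look like; the cache name is made up and the usage is only approximate, so check the Coherence documentation rather than treating this as the exact interface:

      import com.tangosol.net.CacheFactory;
      import com.tangosol.net.NamedCache;

      // Illustrative only: a partitioned NamedCache behaves like a Map that
      // is spread across the cluster, so the cached query results are not
      // limited by any single JVM's heap.
      public class ArchiveQueryCache {
          private final NamedCache cache = CacheFactory.getCache("query-results");

          public void put(String queryKey, Object resultRecords) {
              cache.put(queryKey, resultRecords);   // stored on whichever node owns the key
          }

          public Object get(String queryKey) {
              return cache.get(queryKey);           // fetched from the owning node (or a near cache)
          }
      }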

    Later,
    Rob Misek
    Tangosol, Inc.
    Coherence: It just works.
  11. What could be the maximum number of concurrent users that can be handled without a server crash or a performance bottleneck on a single machine with a single clone?
    Let's say the application server is WebSphere 5.x and the HTTP Web Server is 2.x.
  12. What could be the maximum number of concurrent users that can be handled without a server crash or a performance bottleneck on a single machine with a single clone?
    Let's say the application server is WebSphere 5.x and the HTTP Web Server is 2.x.

    Regards,
    SS
  13. What could be the maximum number of concurrent users that can be handled without a server crash or a performance bottleneck on a single machine with a single clone? Let's say the application server is WebSphere 5.x and the HTTP Web Server is 2.x.

    It totally depends on the application (e.g. what requests go to WebSphere and how expensive those requests are to process) and on the users (how often they click), but I've seen J2EE apps on a single server supporting up to ten thousand concurrent users or so.
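
    As a rough illustration of why it depends so heavily on those two things, here is a back-of-the-envelope sketch; every number in it is hypothetical:

      // Rough capacity sketch: request rate implied by users and click rate,
      // versus the CPU time available per second. All inputs are hypothetical.
      public class ServerLoadEstimate {
          public static void main(String[] args) {
              int    concurrentUsers  = 10000;
              double secondsPerClick  = 30.0;    // how often each user clicks
              double cpuSecPerRequest = 0.010;   // CPU cost of one request
              int    cpus             = 4;

              double requestsPerSecond = concurrentUsers / secondsPerClick;    // ~333 req/s
              double cpuDemand         = requestsPerSecond * cpuSecPerRequest; // ~3.3 CPU-seconds/s
              double utilization       = cpuDemand / cpus;                     // ~83% of 4 CPUs

              System.out.printf("Requests/s: %.0f, CPU utilization: %.0f%%%n",
                      requestsPerSecond, utilization * 100);
          }
      }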

    Peace,

    Cameron Purdy
    Tangosol Coherence: Clustered Shared Memory for Java