OS vs Application Server Clustering

  1. OS vs Application Server Clustering (3 messages)

    What different objectives do we achieve by clustering at the OS level versus at the application server level (though both will improve the availability and scalability of the application)? What extra benefits do we get by clustering at the OS level rather than at the application server level?
  2. In general, operating-system-level clustering (aka hardware clustering) is designed to manage hardware and OS-level failures. These clusters typically work by starting a backup server when a primary fails, in such a way that the backup fully assumes the role of the primary. Failover generally involves reassigning the failed server's IP address to the backup (IP takeover), re-permissioning file system access to the backup (if you are using a shared file system instead of replication), and then running a script that you set up yourself to start all your applications. This technology is older, takes more time to perform a failover, and is less able to fully utilize all of your hardware resources.

     Application server clustering, or more generally software clustering, is far more capable and dynamic. First of all, the backup server is usually in at least a warm-standby mode, and hopefully hot, meaning that it can assume the primary's responsibilities almost immediately. Second, advanced software-level clustering also supports load balancing, so you never have "backup" hardware sitting idle. Instead of reassigning IP addresses, applications that connect to your clustered environment must already be designed to check for service failure/availability on more than one destination host (see the first sketch after this thread). Alternatively, you can use some kind of load balancer/traffic router that exposes the cluster as a single IP address. By focusing on the availability of the application service, any lower-level problems in the OS or hardware are automatically covered as well (assuming, of course, that they cause the software to crash).

     A final difference is the dynamic nature of newer software clustering techniques: you can generally add or lose capacity on the fly with little or no visible impact to dependent applications. Hardware-level clustering, by contrast, is quite difficult to set up and modify correctly, and it requires fairly painful, regular testing. Data Fabric technologies such as GemStone's GemFire are designed to cluster-enable most types of applications with all of the positive benefits listed above.

     Cheers,
     Gideon
     GemFire - The Enterprise Data Fabric
     http://www.gemstone.com/downloads
  3. Hi Naveen,

     Do have a look at my small writeup on scheduling jobs on a web farm, which answers most of your queries. Essentially, OS clustering gives you the following advantages:

     1) If you have a bunch of applications which must run on the same machine, OS clustering can ensure that they all run on the primary node in the cluster.
     2) If your applications depend on a "local file system" (for example, databases that need to manage their files locally), the OS cluster can ensure that this file system fails over with the primary node of the cluster.
     3) If you don't have a NAS or a file server and you need a shared file store, you can create one on the OS cluster for use by machines outside the cluster. Of course, if you already have a file server or are storing the information in a database, this does not apply.

     In the typical architectures I have seen and designed, we have OS-level clustering ONLY for the DB server (see the second sketch after this thread), while the app servers are load balanced using external hardware or software load balancers.
  4. Naveen,

     Even the simplest application server clustering will still force you to build your application more carefully. For example, with just a load balancer and a bunch of servers (that are not even "clustered" together per se), the application programming model is going to be different from what many developers are used to: the use of singletons or the "synchronized" keyword will only have an effect within a particular server. The result is that applications that work well on a single server can have trouble when they are deployed in a multi-server environment.

     After reviewing your application and determining that it will indeed be able to run correctly in a multi-server environment, you need to evaluate the business requirements for availability, scalability, etc. While there are easy ways to cluster applications (e.g. Tangosol Coherence), you should first determine whether any server-to-server clustering is even needed, and understand exactly what your requirements and goals are for clustering.

     Common approaches include things like HTTP session clustering and failover. Again, even if your application server provides this for you (or if you use something like Coherence*Web), you should still review your application and determine whether it will indeed run correctly in a clustered environment. For example, in order to support the failover of sessions, session attributes likely need to be serializable; but even if they are serializable, are you certain that you want all of that information being updated (sent over the network) on every change? (See the third sketch after this thread.)

     Regarding "OS clustering", it is safe to say that it does none of these things. As the other poster suggested, it is valuable for providing HA assurances for database servers.

     Peace,
     Cameron Purdy
     Tangosol Coherence: Clustered Shared Memory for Java
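
A minimal sketch of the failover-aware client idea from message 2: the client tries each cluster member in turn and uses the first one that accepts a connection. The host names and port below are hypothetical placeholders (they do not come from the thread); a real deployment would usually read the member list from configuration or hide it behind a load balancer.

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.net.Socket;
    import java.util.List;

    public class FailoverClient {

        // Hypothetical cluster members; a real deployment would read these from
        // configuration or hide them behind a single load-balanced address.
        private static final List<InetSocketAddress> MEMBERS = List.of(
                new InetSocketAddress("app-node-1.example.com", 8080),
                new InetSocketAddress("app-node-2.example.com", 8080));

        /** Try each member in turn and return a socket to the first one that is up. */
        public static Socket connectToAnyMember(int timeoutMillis) throws IOException {
            IOException lastFailure = null;
            for (InetSocketAddress member : MEMBERS) {
                Socket socket = new Socket();
                try {
                    socket.connect(member, timeoutMillis);
                    return socket;           // this member answered; use it
                } catch (IOException e) {
                    lastFailure = e;         // member unreachable; try the next one
                }
            }
            throw new IOException("No cluster member reachable", lastFailure);
        }
    }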
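
A minimal sketch of the DB-only OS clustering arrangement from message 3: the application connects to a floating (virtual) hostname owned by whichever node is currently primary, so an IP takeover during failover only shows up as dropped connections that have to be retried. The hostname, database name, and retry policy below are assumptions for illustration, and PostgreSQL with its JDBC driver is used only as an example.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;

    public class ClusteredDbConnection {

        // Hypothetical virtual hostname; the OS cluster moves this address to the
        // backup node on failover (IP takeover), so the URL never changes.
        private static final String URL =
                "jdbc:postgresql://db-cluster-vip.example.com:5432/appdb";

        /** Connect, retrying a few times to ride out an OS-cluster failover window. */
        public static Connection connectWithRetry(String user, String password)
                throws SQLException, InterruptedException {
            SQLException lastFailure = null;
            for (int attempt = 1; attempt <= 5; attempt++) {
                try {
                    return DriverManager.getConnection(URL, user, password);
                } catch (SQLException e) {
                    lastFailure = e;                  // primary may be failing over; back off and retry
                    Thread.sleep(2_000L * attempt);
                }
            }
            throw lastFailure;
        }
    }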
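
A minimal sketch of the session-failover caveat from message 4: anything placed in the HttpSession should be serializable so the container can replicate or fail it over, and every mutation may cause the attribute to be shipped across the network. The ShoppingCart class below is a made-up example rather than anything from the thread; a servlet would store it with session.setAttribute("cart", cart).

    import java.io.Serializable;
    import java.util.ArrayList;
    import java.util.List;

    /**
     * A session attribute designed with clustering in mind: the class and all of
     * its fields are Serializable, so the container can ship it to another node
     * when the session is replicated or failed over.
     */
    public class ShoppingCart implements Serializable {

        private static final long serialVersionUID = 1L;

        private final List<String> itemSkus = new ArrayList<>();

        public void addItem(String sku) {
            itemSkus.add(sku);
            // Depending on the container, the whole attribute (not just this
            // change) may be re-sent over the network after each update, which
            // is exactly the cost Cameron's post warns about.
        }

        public List<String> items() {
            return List.copyOf(itemSkus);
        }
    }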