Application scalability and Java HPC

On-demand computing models support Web application scalability

By Jason Tee

TheServerSide.com

The on-demand nature of cloud computing has put scalable Web applications within easy reach for businesses of all sizes. There’s little to no barrier to entry for even the smallest companies to access the kind of computing power and data storage that was once only available to enterprise customers. It’s not just cheaper than ever to get all the server space you want – it’s also easier. The cloud has made ordering up more resources as simple as ordering a fast food meal at a drive-through.

However, just because a business can now “supersize” its infrastructure in the cloud doesn’t mean IT should just keep hitting the button to order more resources. Low-cost isn’t no-cost. Many businesses find out the hard way that simply making the switch to a cloud-based infrastructure doesn’t save them nearly as much as they anticipated.

“On-demand” billing that is usage-based may seem fair and easy to understand. But this approach still requires careful planning about how and why to use resources. Otherwise, a business can get stuck in a cycle of using far more resources than necessary without even realizing it. Scaling up or out isn’t the answer to every system slowdown or failure caused by a heavy workload. Most performance bottlenecks can be addressed in other ways.

Questions to ask before scaling with the cloud

  • Do you really need more cloud server space to achieve greater computing power, or do you really need to streamline your existing server use with appropriate load balancing?
  • Do you need a larger or different platform, or just ways to make better use of your existing databases?
  • Do you need to deploy more instances of your application, or should you redesign your app to use fewer resources?

The ‘cloudy’ financial mindset

One of the pitfalls of cloud computing is the ease with which decision makers get in the habit of using extra resources when they are billed as a service. Compared to the cost of building out more on-site infrastructure, those monthly charges look tiny – and each incremental increase seems so reasonable and affordable. This kind of recurring business expense becomes more and more palatable over time, even when it’s bleeding the budget dry.

It may seem more difficult to justify the up-front expense of a full analysis and revamp of how existing resources are employed. But this approach can actually lead to immediate and ongoing savings. For example, if you find ways to increase the effectiveness of your existing resources by 20%, that’s not just 20% fewer cloud resources you’ll need to add to your monthly IaaS or PaaS bill right now. When you do scale up to add more cloud resources, you could be using those added resources at 20% greater efficiency as well.
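To make that 20% figure concrete, here is a minimal sketch of the arithmetic. The hourly rate, instance count, and hours per month below are hypothetical assumptions, not figures from any particular provider:

```java
// Illustrative sketch: how a 20% efficiency gain translates into cloud savings.
// All rates and usage figures are hypothetical assumptions for the example.
public class CloudSavings {
    // Monthly cost = hourly rate * number of instances * hours in a month.
    static double monthlyCost(double hourlyRate, int instances, int hoursPerMonth) {
        return hourlyRate * instances * hoursPerMonth;
    }

    // After an efficiency gain, the same workload needs proportionally
    // fewer instance-hours (e.g. gain = 0.20 means 20% fewer).
    static double costAfterEfficiencyGain(double baselineCost, double gain) {
        return baselineCost * (1.0 - gain);
    }

    public static void main(String[] args) {
        double baseline = monthlyCost(0.50, 10, 720); // $0.50/hr, 10 instances, 720 hrs
        double optimized = costAfterEfficiencyGain(baseline, 0.20);
        System.out.printf("Baseline: $%.2f  Optimized: $%.2f  Saved: $%.2f%n",
                baseline, optimized, baseline - optimized);
    }
}
```

On these assumed numbers, a $3,600 monthly bill drops to $2,880 – and, as the paragraph above notes, the same 20% discount applies to every instance you add later.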

In contrast, consider what happens when applications and databases are not designed to scale up well. The amount of computing resources required to maintain acceptable performance may rise much faster than expected when scaling to meet a substantial increase in demand. That’s fine with the cloud providers. It’s just more money in their pocket when applications eat up more resources than necessary. It’s up to the in-house developers to take control of the efficiency of business applications by fixing code and optimizing database queries before deployment in the cloud.

Let’s talk about scaling back

On-demand scalability theoretically implies the ability to cut back usage of cloud resources during times of low demand. Sadly, this doesn’t always happen. Load balancing is designed to spread workload among many different nodes in the most efficient manner possible. But during low traffic times there will be some servers (or instances) that don’t need to be used at all.

The drop in traffic should trigger the process of “bleeding off” connections so that no additional requests are distributed to these servers/instances. The load balancer would simply allow current users to finish up any activity and then automatically de-provision the servers or instances that are no longer required.
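The draining behavior described above can be sketched as a small state machine. This is an illustrative model, not any vendor’s API – the class, method, and state names are assumptions chosen for the example:

```java
// Hypothetical sketch of "bleeding off" connections: a DRAINING node accepts
// no new requests and is de-provisioned once its in-flight work finishes.
public class DrainingNode {
    enum State { ACTIVE, DRAINING, DEPROVISIONED }

    private State state = State.ACTIVE;
    private int activeConnections = 0;

    // The load balancer calls this before routing a request to the node.
    boolean tryAccept() {
        if (state != State.ACTIVE) return false; // no new work while draining
        activeConnections++;
        return true;
    }

    // Triggered when traffic drops and this node is no longer needed.
    void startDraining() {
        if (state == State.ACTIVE) state = State.DRAINING;
        maybeDeprovision();
    }

    // Called as each in-flight request completes.
    void connectionFinished() {
        activeConnections--;
        maybeDeprovision();
    }

    private void maybeDeprovision() {
        if (state == State.DRAINING && activeConnections == 0) {
            state = State.DEPROVISIONED; // safe to release the instance
        }
    }

    State state() { return state; }
}
```

The key design point is that de-provisioning happens only when both conditions hold: the node has been marked for removal and no user is still mid-request.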

This is an important topic to discuss with potential cloud providers. Is the de-provisioning process manual or automated? Can the de-provisioning protocol be configured to meet custom requirements – or is there just one out-of-the-box solution for automation? The more control over how and when resources are automatically scaled back stays in-house, the better.

With either automatic or manual scaling, there can still be costly surprises if an organization does not monitor cloud usage daily. For example, IT staff might provision additional resources for a particular high volume event and then forget to de-provision afterward. Or, a DDoS attack could spike cloud costs by ramping up usage very suddenly. It may be worth considering automated cloud usage tracking software to help control spending and keep cloud service charges from spiraling out of control.
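One simple form of the automated usage tracking mentioned above is a spike check: compare today’s spend against the recent daily average and alert when it exceeds some multiple. The threshold factor and cost figures here are assumptions for illustration:

```java
// Hypothetical sketch of automated cloud-spend monitoring: flag a day whose
// cost exceeds a configurable multiple of the recent daily average.
public class UsageMonitor {
    static boolean isSpike(double[] recentDailyCosts, double todayCost, double factor) {
        double sum = 0.0;
        for (double c : recentDailyCosts) {
            sum += c;
        }
        double average = sum / recentDailyCosts.length;
        return todayCost > average * factor; // e.g. factor = 3.0 → alert at 3x normal
    }
}
```

A sudden DDoS-driven surge or a forgotten post-event instance would both show up as days well above the rolling average, prompting a human to investigate before the monthly bill arrives.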

It’s definitely possible to save money in the cloud, but doing so requires careful planning and thorough research.

10 May 2012
