One benefit of so many organizations moving their data centers to the cloud is that it forces IT professionals to take a much closer look at how their applications work and how those applications consume resources. It has been observed several times that monitoring solutions are proliferating in the cloud computing space, as organizations have to take a new approach to performance monitoring. "There's a demand for visibility. People want to know what's going on in their production environments," said Java Champion and performance expert Kirk Pepperdine. "As organizations move to the cloud, there's a bigger need to see what's going on and know that the cloud is really working—that it's doing what it's advertised to do." Of course, just because you look doesn't mean you see.
Some hardware-related aspects of application behavior become even harder to conceptualize when you couple cloud deployment with the way today's apps are actually accessed and used. Pepperdine points out that even if the hardware serving up these applications sits in a faraway set of widely distributed data centers, it still exists. And it's feeling the pain. "We're using hardware in a way that it wasn't designed to be used. You have to do things differently." As he's been telling clients for years, "You can't virtualize yourself into more hardware. You need a certain amount of hardware to support whatever it is you're doing."
Virtual machines still need concrete hardware
As a very simple example, let's say you have a calculation that requires a certain number of CPU cycles to run. There's no getting around the fact that you can only run a finite number of cycles on your available CPU in a fixed period of time. Adding more virtual machines doesn't mean there is more CPU. You've got to either add more CPU or rewrite your algorithm to be more efficient.
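The arithmetic behind that point can be sketched in a few lines. This is a minimal illustration with made-up numbers (a single 3 GHz core and a 9-billion-cycle job are assumptions, not figures from the article): however many VMs share the host, the total cycles per second on the box stay the same, so the job finishes no sooner.

```python
# Minimal sketch with hypothetical numbers: a host's CPU capacity is
# fixed, so spreading a job across more VMs doesn't shorten its runtime.

HOST_CYCLES_PER_SEC = 3_000_000_000   # one 3 GHz core (illustrative)
JOB_CYCLES = 9_000_000_000            # a job needing 9 billion cycles

def total_seconds(num_vms: int) -> float:
    """Time to finish the job when num_vms VMs share one physical core."""
    per_vm_share = HOST_CYCLES_PER_SEC / num_vms   # each VM gets a slice
    # The job is split evenly across VMs that each run that much slower,
    # so the division by num_vms cancels out.
    return (JOB_CYCLES / num_vms) / per_vm_share

print(total_seconds(1))   # 3.0 seconds
print(total_seconds(4))   # still 3.0 seconds — more VMs, same hardware
```

The `num_vms` terms cancel exactly, which is the point: only faster hardware or a cheaper algorithm changes the answer.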
In the past, CPU was the bigger problem. Today it rarely poses a challenge when building out virtual environments. There's a fresh hurdle to overcome—the dreaded network bottleneck. Today's users have an attention span that can be measured in fractions of a second. They may wait in nail-biting agony for hours or days to log on to a certain government-run website that shall remain nameless. But they won't give a commercial or enterprise app more than a couple of moments before they start champing at the bit. Having enough network connectivity to support consistently lightning-fast response times, regardless of the number of simultaneous users, is everything.
Kirk described one client that was very savvy about prepping their hardware for high volume apps. They were building out their servers on site and making sure every machine came with 10 network cards on it. This company was asking the right questions that apply whether deploying on-site or in the cloud: "What is your limiting hardware resource? What's causing you grief with performance issues or preventing you from achieving your performance goals?"
Identifying what limits your ability to scale or manage more work on a server environment is critical. That brings us back around to the topic of monitoring. When you monitor a network and see trouble there, that's when you know things are about to come crashing down. The network is the new canary in the coalmine. But what good is monitoring if you can't do anything about the problem? If the hardware is in the cloud vendor's hands, how can you add more network bandwidth when you want it?
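The "canary" idea above can be reduced to a simple utilization check. The sketch below is hypothetical—the 10 GbE link capacity, the 80% threshold, and the `network_canary` helper are all illustrative assumptions, not anything from a real monitoring product—but it shows the shape of the signal: sustained throughput approaching link capacity is the warning, well before requests start failing.

```python
# Hypothetical sketch: flag a host when its measured NIC throughput
# approaches link capacity — the "canary in the coalmine" signal.

NIC_CAPACITY_MBPS = 10_000   # e.g. a 10 GbE link (illustrative)
ALERT_THRESHOLD = 0.8        # alert at 80% sustained utilization (assumed)

def network_canary(samples_mbps: list[float]) -> bool:
    """Return True if average measured utilization crosses the threshold."""
    avg = sum(samples_mbps) / len(samples_mbps)
    return avg / NIC_CAPACITY_MBPS > ALERT_THRESHOLD

print(network_canary([9200, 8700, 9500]))  # True — link nearly saturated
print(network_canary([1200, 900, 1500]))   # False — plenty of headroom
```

Averaging a window of samples rather than alerting on a single spike is a deliberate choice here: transient bursts are normal, while sustained saturation is the condition you can't fix without more bandwidth.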
This is far from a new problem. Researchers at Syracuse University toyed with the questions surrounding the economy of dynamic bandwidth provisioning back in 2000. Specialists in Europe were developing algorithms to resolve Quality of Service problems in 2006. Today, we may be looking at an actual workable solution from IBM. In October, the computing giant received a patent for the concept of "Dynamically Provisioning Virtual Machines". The company hopes to improve system performance, efficiency, and economy with this new tool to dynamically manage network bandwidth in the cloud. It's initially being targeted at sites like eBay, search engines, news media, and government clients. But enterprise users won't be far behind—and neither will other cloud vendors offering their own solutions.
How is your enterprise managing network bottlenecks in virtual environments? Let us know.