Maximize system scalability with "Scalability Rules"

Special Report:

By James Denman

TheServerSide.com

The current software elite are obsessed with hyper growth. By hyper growth I mean sites like Facebook.com, which, according to Wikipedia, started out with fewer than a million users in 2004 and by 2008 had over 100 million active users. These systems require scalability on a level some of us don't even dare to dream about. Growing too much too quickly is certainly a good problem to have, but it's still a problem, and potentially a fatal one for the business. Even so, I'm willing to bet that if you asked the Mark Zuckerbergs of the world, they wouldn't trade it for anything.

In Silicon Valley, internet startups are still a dime a dozen (and by a dime I mean a million dollars, which is about what it would cost to start up a dozen new ecommerce endeavors). Many of these new companies are Johnny-come-latelies trying to compete with existing internet phenomena like Twitter or YouTube, either by targeting a niche market or by providing some gimmicky extra. Others are providing new services that haven't yet hit the mainstream.

Most of them will die on the vine, but once in a while, probably less than one percent of the time, a new startup will explode into the mainstream and grow beyond its founders' wildest dreams. These are the big jackpots that keep the rest of us throwing our dimes into the ephemeral one-armed bandit. Success of this magnitude means either the system was built for massive scalability from the start, or else the team figured out how to make it scale in a hurry. On the other hand, who knows how big Friendster could have been if it had known how to scale up and meet big-time demand?

Scalability Rules: 50 Principles for Scaling Web Sites, by Martin Abbott and Michael Fisher, is a concise primer on the basics of building scalable Web applications that can accommodate incredible growth rates. Packed into a paperback about the size of a Stephen King novel are 50 simple rules for dealing with complicated systems. Abbott and Fisher's approach provides the quick, high-level overview that managers need, the depth that developers want, and everything in between.

When I first read the title, Scalability Rules, my mind tricked me into thinking it ended with an exclamation point (Scalability Rules!). While it's certainly true that scalable systems are pretty awesome, this book actually provides solid rules for building them. Abbott and Fisher offer carefully worded explanations of what each rule is, why it makes sense, when to use it, and how to go about applying it.

Then they take the time to explain in plain prose, with clear graphics where appropriate, how the rule works in real-world scenarios. They neither hand down mandates from on high nor overload the reader with technical details; instead, they have taken the time to convince us that we should follow their rules.

But don't take my word for it. Read this short sample taken straight out of Chapter One:

Rule 6—Use Homogenous Networks

Rule 6: What, When, How, and Why

What: Don’t mix networking gear from different vendors.

When to use: When designing or expanding your network.

How to use:

  • Do not mix different vendors’ networking gear (switches and routers).
  • Buy best of breed for other networking gear (firewalls, load balancers, and so on).

Why: Intermittent interoperability and availability issues simply aren’t worth the potential cost savings.

Key takeaways: Heterogeneous networking gear tends to cause availability and scalability problems. Choose a single provider.


We are technology agnostic, meaning that we believe almost any technology can be made to scale when architected and deployed correctly. This agnosticism ranges from programming language preference to database vendors to hardware. The one caveat to this is with network gear such as routers and switches. Almost all the vendors claim that they implement standard protocols (for example, Internet Control Message Protocol RFC 792, Routing Information Protocol RFC 1058, Border Gateway Protocol RFC 4271) that allow for devices from different vendors to communicate, but many also implement proprietary protocols such as Cisco’s Enhanced Interior Gateway Routing Protocol (EIGRP). What we’ve found in our own practice, as well as with many of our customers, is that each vendor’s interpretation of how to implement a standard is often different. As an analogy, if you’ve ever developed the user interface for a Web page and tested it in a couple of different browsers such as Internet Explorer, Firefox, and Chrome, you’ve seen firsthand how different implementations of standards can be. Now, imagine that going on inside your network. Mixing Vendor A’s network devices with Vendor B’s network devices is asking for trouble.

This is not to say that we prefer one vendor over another; we don’t. As long as they are a “referenceable” standard utilized by customers larger than you, in terms of network traffic volume, we don’t have a preference. This rule does not apply to networking gear such as hubs, load balancers, and firewalls. The network devices that we care about in terms of homogeneity are the ones that must communicate to route communication. For all the other network devices that may or may not be included in your network, such as intrusion detection systems (IDS), firewalls, load balancers, and distributed denial of service (DDOS) protection appliances, we recommend best of breed choices. For these devices choose the vendor that best serves your needs in terms of features, reliability, cost, and service.
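Neither the excerpt nor the book prescribes any code for this, but Rule 6 lends itself to a simple automated audit. The sketch below is my own illustration, using a hypothetical device inventory and role names I made up: it collects the vendors used by routing and switching gear (which must be homogeneous) while ignoring the "best of breed" devices such as firewalls and load balancers (which may differ).

```python
# Hypothetical audit of a device inventory against Rule 6.
# Only routing/switching gear must share a single vendor;
# firewalls, load balancers, IDS, and DDOS appliances may vary.
ROUTING_ROLES = {"router", "switch"}

def check_homogeneity(devices):
    """Return the set of vendors found on routing/switching gear.

    Rule 6 is satisfied when this set has at most one member.
    """
    return {d["vendor"] for d in devices if d["role"] in ROUTING_ROLES}

inventory = [
    {"role": "router", "vendor": "VendorA"},
    {"role": "switch", "vendor": "VendorA"},
    {"role": "firewall", "vendor": "VendorB"},       # best of breed: may differ
    {"role": "load_balancer", "vendor": "VendorC"},  # best of breed: may differ
]

core_vendors = check_homogeneity(inventory)
print("Rule 6 satisfied:", len(core_vendors) <= 1)  # → True
```

Adding a single router from a second vendor to this inventory would make the check fail, which is exactly the mixed-gear situation the rule warns against.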

12 Sep 2011
