Getting performance right is easier than it looks, especially if you follow best practices. Allow me to explain why preparation means everything before a big online event like the Super Bowl.

We analyzed the performance of advertisers' web sites during the Super Bowl and posted both the MVPs and the underperformers. This blog post highlights the best practices we can derive when looking at those results through the lens of Application Performance.

It doesn't matter whether the slowness originates in the page structure, the network, the cloud, application logic, database queries, or any other area that compromises an otherwise fast, functional, scalable web site. What matters is that the application gets tuned; only then will your customers usually reciprocate with the desired objective: higher conversion.

Looking at the right metrics, and using tools that provide them, feels like reading the other team's playbook: it gives you both leverage and a significant advantage. Performance specialists have been preaching this playbook for years: fewer connections, fewer hosts, fewer bytes transferred, combined and minified CSS and JavaScript, caching rules for static content, compression headers on every response, load tests beyond peak traffic levels, method-level code analysis for redundancies, and so on. Let's take a closer look at three of these best practices, how GoDaddy and company implemented them, and how you can follow them as well.
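To make two of those playbook items concrete, here is a minimal sketch in Python using the requests library that checks whether a page answers with compression and caching headers. It is illustrative only, not the tooling behind our analysis, and the URL is just a placeholder.

import requests

def audit_headers(url: str) -> None:
    # Advertise gzip/deflate support so a well-configured server can compress the response.
    resp = requests.get(url, headers={"Accept-Encoding": "gzip, deflate"}, timeout=10)

    # Compression check: a Content-Encoding header should come back when the
    # client accepts gzip/deflate.
    encoding = resp.headers.get("Content-Encoding", "none")

    # Caching check: static content should carry an explicit Cache-Control policy
    # (for example, a max-age) so repeat visitors can skip the download entirely.
    cache_control = resp.headers.get("Cache-Control", "not set")

    print(url)
    print(f"  status:            {resp.status_code}")
    print(f"  content-encoding:  {encoding}")
    print(f"  cache-control:     {cache_control}")
    print(f"  decoded body size: {len(resp.content)} bytes")

if __name__ == "__main__":
    audit_headers("https://www.example.com/")  # placeholder URL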

Best Practice #1: Keep it Slim

One Super Bowl advertiser that followed the performance formula was GoDaddy. Its strategy this year was agile across the board. For web site deployment, agile means the ability to direct visitors to specific landing pages based on the lead-generation channel that brought them in (ads, commercials, social media, and so on) and on attributes of the incoming request, such as geography.
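As a hypothetical illustration of that definition (the channels, countries, and page paths below are made up, not GoDaddy's actual configuration), the routing decision can be as simple as a lookup keyed on lead source and visitor geography:

# Map (lead-generation channel, visitor country) to a landing-page variant.
LANDING_PAGES = {
    ("tv-commercial", "US"): "/landing/gametime-us",
    ("tv-commercial", "CA"): "/landing/gametime-ca",
    ("social-media", "US"): "/landing/social-us",
}
DEFAULT_PAGE = "/landing/default"

def pick_landing_page(lead_source: str, country: str) -> str:
    """Return the page variant for this visitor, falling back to a default."""
    return LANDING_PAGES.get((lead_source, country), DEFAULT_PAGE)

# A visitor arriving from the TV spot, geolocated to the US:
print(pick_landing_page("tv-commercial", "US"))  # /landing/gametime-us
# An unknown combination falls back safely:
print(pick_landing_page("banner-ad", "DE"))      # /landing/default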

In a quick and well-coordinated move, GoDaddy deployed a game-time version of the godaddy.com landing page, which reduced average response time by 60.2%, average bytes transferred by 11.8%, average number of objects by 30.0%, average number of connections by 60.6%, and average number of hosts by 69.2%. At the same time, it maintained perfect availability of 100%. This is no small feat: keeping uptime at 100% while swapping out the page is remarkable and speaks volumes about the importance of careful engineering.
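If you want a rough, outside-in feel for two of those numbers on your own site, the object and host counts can be approximated with a short script. This is a sketch using only the Python standard library, not the synthetic monitoring behind the figures above, and the URL is a placeholder:

from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse
from urllib.request import urlopen

class AssetCollector(HTMLParser):
    """Collect the URLs of objects (images, scripts, stylesheets) referenced by a page."""

    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.assets = set()

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            # img/script use src; link (stylesheets and the like) uses href.
            if value and ((tag in ("img", "script") and name == "src")
                          or (tag == "link" and name == "href")):
                self.assets.add(urljoin(self.base_url, value))

def summarize(url):
    html = urlopen(url, timeout=10).read().decode("utf-8", errors="replace")
    collector = AssetCollector(url)
    collector.feed(html)
    hosts = {urlparse(asset).netloc for asset in collector.assets}
    print(f"objects referenced: {len(collector.assets)}")
    print(f"unique hosts:       {len(hosts)}")

if __name__ == "__main__":
    summarize("https://www.example.com/")  # placeholder URL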
