
News: Diagnosing Obamacare Website Performance Issues

  1. Most of my friends in the US are looking forward to the new healthcare website that lets them select the health insurance best suited to their needs. As with any new website anticipated by so many people, it was no big surprise that there were glitches when millions of US citizens tried to use the new portal after its launch.

    Now, there are many different reasons why websites that need to scale to that many users don't deliver on the promise of a good end-user experience. A general "cultural" problem is that performance and scalability are pushed towards the end of a project in favor of more functionality, resulting in problems that prevent end users from consuming all those great features. Changing this culture, with the support of tools that integrate into your continuous delivery process, is mandatory to avoid these types of problems. We have blogged about this in the past, based on discussions we had with companies that made that transition.

    Let's put the spotlight back on Obamacare: unfortunately we can't look behind the scenes, but we can do a quick 101 session on Web Performance Analysis and Optimization using our freely available tools and highlight the top problem patterns responsible for the bad user experience reported by users and heavily discussed in both US and global media.

    The Analysis

    One of my US colleagues walked through different use-case scenarios on healthcare.gov and sent me his AJAX Edition session files for analysis. Here is an overview showing that most pages lack basic WPO (Web Performance Optimization) aspects (a minimal sketch of two such checks appears at the end of this comment).

    Continue with the rest of the blog ...
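    As a quick, hand-rolled illustration of two of the basic WPO aspects referred to above (response compression and cache headers), here is a minimal Java sketch; the target URL and the checks are illustrative only and are not the analysis AJAX Edition performs:

    ```java
    import java.net.HttpURLConnection;
    import java.net.URL;

    // Minimal sketch: probe one resource for two basic WPO aspects,
    // response compression and cacheability. Illustrative only.
    public class WpoCheck {
        public static void main(String[] args) throws Exception {
            URL url = new URL("https://www.healthcare.gov/"); // example target
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            // Advertise gzip support; an optimized server should honor it.
            conn.setRequestProperty("Accept-Encoding", "gzip");
            conn.connect();

            String encoding = conn.getContentEncoding();          // e.g. "gzip"
            String cacheControl = conn.getHeaderField("Cache-Control");

            System.out.println("Compressed:    " + "gzip".equalsIgnoreCase(encoding));
            System.out.println("Cache-Control: " + cacheControl); // null = no caching policy
            conn.disconnect();
        }
    }
    ```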

  2. The major problem with every newly established and heavily advertised service is the initial overcrowding. The same thing happened to the BlackBerry messaging service for Apple and Android phones.

    You design a website for 100,000 concurrent users (which, based on the estimates, is a very good number), but at the initial launch 2 million concurrent users start hammering the website. After the initial registrations and so on are finished, demand returns to what you expected (i.e. 100,000).


    The fact is that we cannot design the service for 2,000,000 concurrent users, a load that occurs only a few times a year. In exactly the same way, we cannot afford to build city roads for the burst traffic of the morning and afternoon rush, since that would cost 10 times more (plus maintenance) and the excess capacity would remain dormant the rest of the time.

    BlackBerry applied a queueing method: you would enter your email address, your name would be placed in a queue, and users would be let in gradually, as server load allowed (a minimal sketch of the idea follows).
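    Such a "waiting room" might look roughly like the Java sketch below; the class, the capacity numbers, and the admission rate are all made up for illustration:

    ```java
    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    // Minimal sketch of a waiting-room queue: sign-ups are parked in a
    // bounded queue and admitted at a fixed rate the backend can absorb.
    public class WaitingRoom {
        // Users who asked for access but have not been admitted yet.
        private final BlockingQueue<String> waitingList = new ArrayBlockingQueue<>(2_000_000);

        // Called when a user enters an email on the landing page.
        public boolean register(String email) {
            return waitingList.offer(email); // false = even the waiting list is full
        }

        // Background drainer: admits users at a pace the servers allow.
        public void drain(int admitsPerSecond) throws InterruptedException {
            while (true) {
                for (int i = 0; i < admitsPerSecond; i++) {
                    String email = waitingList.poll();
                    if (email != null) {
                        System.out.println("Admitting " + email); // e.g. send a "your turn" mail
                    }
                }
                Thread.sleep(1_000);
            }
        }
    }
    ```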


  3. I can understand such a practical viewpoint, but I would argue that some steps could have been taken to mitigate the damage caused by an excessive surge in demand, which is not directly addressed by optimizing as outlined in the article. I doubt that all those who logged on during the early days interacted with transactional intent; that non-transactional traffic could indeed have been handled better... at least that is what I would have expected.

    Queueing, as you indicated, would have addressed some of the issues, and this did not necessarily need to manifest itself at the user interaction layer. The site could have implemented this within the software using adaptive control valves (http://www.jinspired.com/research/adaptive-control-in-execution) as well as QoS (http://www.jinspired.com/research/quality-of-service-for-apps); a sketch of the core valve idea follows this comment.

    Monitoring is NOT management, and humans are ill-equipped to manage the complexity and speed of change these days. We need to move to more self-managed runtimes and services.
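    The adaptive valves linked above resize themselves based on measured behavior; the minimal Java sketch below shows only the non-adaptive core of the idea (a fixed-size permit pool that sheds load instead of letting requests queue without bound). The permit count, timeout, and method names are illustrative:

    ```java
    import java.util.concurrent.Semaphore;
    import java.util.concurrent.TimeUnit;

    // Minimal sketch of a software "control valve": a semaphore caps how many
    // requests may run a protected operation at once; callers that cannot get
    // a permit quickly are turned away rather than piling up in the app server.
    public class ControlValve {
        private final Semaphore permits = new Semaphore(500); // max concurrent transactions

        public String handleEnrollment(Runnable enrollment) throws InterruptedException {
            // Fail fast instead of letting requests queue unboundedly.
            if (!permits.tryAcquire(100, TimeUnit.MILLISECONDS)) {
                return "BUSY - please try again shortly";
            }
            try {
                enrollment.run();
                return "OK";
            } finally {
                permits.release();
            }
        }
    }
    ```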


  4. Amazon EC2

    I don't know the exact reason, but if what you say is true, then they should have used Amazon EC2, where capacity can be rented elastically for the launch burst and released once demand settles back down.