
Will a lack of hardware knowledge lead to a DevOps doomsday?

DevOps professionals are highly detached from the big iron that drives their cloud-based data centers. Will this lack of hardware knowledge lead to a DevOps doomsday?

A while back, I was developing an event-driven application that relies on AWS SNS/SQS as the underlying messaging architecture. The application was built around the popular Pub-Sub design pattern: a client raises an event, which in turn generates a message describing the event. The message is then handed to a publisher, which sends it to a message broker in the cloud. Subscribers consume the message from the broker and act accordingly.
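For readers who haven't wired this up before, the sketch below shows roughly what that flow looks like with boto3. The topic ARN, queue URL and handler are hypothetical placeholders, and the SNS-to-SQS subscription is assumed to already exist; it's a minimal sketch of the pattern, not the application's actual code.

```python
# Minimal Pub-Sub sketch with boto3. The ARN, URL and handler below are
# hypothetical placeholders; an SNS-to-SQS subscription is assumed to exist.
import json

import boto3

sns = boto3.client("sns", region_name="us-east-1")
sqs = boto3.client("sqs", region_name="us-east-1")

TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:order-events"  # hypothetical topic
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/order-worker"  # hypothetical queue


def publish_event(event_type: str, payload: dict) -> None:
    """Client side: turn an event into a message and hand it to the publisher."""
    sns.publish(
        TopicArn=TOPIC_ARN,
        Message=json.dumps({"type": event_type, "payload": payload}),
    )


def handle(event: dict) -> None:
    """Placeholder for the application-specific subscriber logic."""
    print("handling", event["type"], event["payload"])


def consume_events() -> None:
    """Subscriber side: pull messages off the queue and act on them."""
    response = sqs.receive_message(
        QueueUrl=QUEUE_URL,
        MaxNumberOfMessages=10,
        WaitTimeSeconds=20,  # long polling
    )
    for message in response.get("Messages", []):
        envelope = json.loads(message["Body"])
        event = json.loads(envelope["Message"])  # SNS wraps the published message in an envelope
        handle(event)
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=message["ReceiptHandle"])
```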

It's a vanilla undertaking that's been done hundreds of times before. The only problem was that when I tested the message queues, they didn't work. The test sent the message to the publisher all right, but when the test checked subscriber behavior, nothing was happening.

Turns out the problem was one of latency, one the team would have recognized faster with more hardware knowledge about the cloud-based deployment.

The test was running fast, but the queue was running slow. The subscribers were getting the messages after the test had done the assertion check. So, we moved the SNS/SQS services over from region US-WEST-1 to US-EAST-1. The problem was solved. Messages arrived when expected. The test passed. We had no idea why.
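In hindsight, the failure mode looks something like the sketch below: the test asserted on a fixed schedule while delivery time varied. A poll-until-deadline check, shown for comparison, is one common way to make that kind of test tolerant of queue latency; the helper names here are illustrative, not our actual test code.

```python
# Illustrative only: the brittle pattern we effectively had, and a
# latency-tolerant alternative. fetch_messages is any callable that
# returns the messages the subscriber has seen so far.
import time


def brittle_check(fetch_messages):
    """Publish, wait a fixed beat, assert -- fails whenever delivery runs long."""
    time.sleep(0.5)  # the queue in US-WEST-1 was slower than this
    assert len(fetch_messages()) > 0


def poll_until(fetch_messages, deadline_seconds=30, interval=1.0):
    """Poll the subscriber side until a message shows up or the deadline passes."""
    deadline = time.monotonic() + deadline_seconds
    while time.monotonic() < deadline:
        messages = fetch_messages()
        if messages:
            return messages
        time.sleep(interval)
    raise AssertionError(f"no message arrived within {deadline_seconds}s")
```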

Maybe US-EAST-1 runs on Dell boxes and US-WEST-1 uses HP. Maybe the server rooms in US-EAST-1 are cooler by a few degrees. Maybe the electricity is better. We'll never know. But, the whole situation got me thinking: Have we really abstracted hardware away from the consumer so much that we no longer really understand what's going on under the hood?

Has it really come to the point where we think of the cloud -- by some magic -- as a place that provides the services we need? Maybe the magnetic field of the Earth differs by region. Maybe electricity doesn't move the same everywhere. Without knowing for sure, we can only guess.

The Pub-Sub design pattern is a common architecture for an event-driven application.

The build-a-box days are gone

Back before the turn of the century, I worked as a developer/writer for a hulking computer company in Iowa. The first day on the job I was told that I needed to build my own machine. I went down to the assembly floor to get all the parts I needed to build my new workstation. I spent the rest of the day putting the unit together. It wasn't the first time I'd built a box, but it was good to know that I was expected to be able to. That's how we rolled.

These days I wonder how many developers, other than gamers and hardware wonks, build their own machines. Considering that most are using laptops, I'll wager not many. How many cellphone users really have the hardware knowledge to understand what's getting them through the day? Maybe the same number that understands how cars work. Jeepers, in 10 years we may not own cars. Transportation will be a driverless service. In fact, there's a good possibility that soon we'll live in a world in which everything will be abstracted into a service. We might even end up living in a house that has no kitchen. The food will just show up using the same magic that brings us Uber and Lyft.

The world as a service is a plausible eventuality, one that Amazon, Microsoft, Google and IBM are more than happy to have us believe. But, there's a problem.

Hardware vs. the cloud

Conventional thinking is that hardware is a commodity, that one box is not that different from another. This is the fundamental justification for going with cloud services: "Just put your code in a cloud function and we'll take care of the rest."

That's the value proposition. It might even be true if all the code in the world were just some variant of a WordPress site. But, it's not. Code is becoming more complex and more distinct. The computing that goes into AI or into supporting an autonomous container ship on the high seas is a whole lot different from serving up webpages or viewing cat videos on YouTube. For example, a GPU (graphics processing unit) made by Nvidia is much better suited to crunching numbers than an off-the-shelf CPU from Intel or AMD. In other words, hardware knowledge does matter.
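As a rough illustration of that point, the sketch below times the same matrix multiplication on the CPU with NumPy and on the GPU with CuPy. It assumes an Nvidia GPU, CUDA and the cupy package are available; on this kind of number crunching the GPU typically wins by a wide margin.

```python
# Rough illustration: the same matrix multiply on a CPU (NumPy) and a GPU (CuPy).
# Assumes an Nvidia GPU with CUDA and the cupy package installed.
import time

import numpy as np
import cupy as cp

N = 4096
a_cpu = np.random.rand(N, N).astype(np.float32)
b_cpu = np.random.rand(N, N).astype(np.float32)

start = time.perf_counter()
np.matmul(a_cpu, b_cpu)
print(f"CPU matmul: {time.perf_counter() - start:.3f}s")

a_gpu = cp.asarray(a_cpu)  # copy the operands to GPU memory
b_gpu = cp.asarray(b_cpu)
start = time.perf_counter()
cp.matmul(a_gpu, b_gpu)
cp.cuda.Stream.null.synchronize()  # wait for the GPU kernel to finish before stopping the clock
print(f"GPU matmul: {time.perf_counter() - start:.3f}s")
```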

To quote Zac Smith, CEO of Packet, a company that specializes in providing bare-metal infrastructure in the cloud:


"In terms of macro trends, special silicon counts. IoT and machine autonomy such as driverless cars are going to require services that are based on specialized hardware. Think about it. Are you really going to trust your kid's safety to a driverless vehicle that uses cloud services supported by generic hardware? Or are you going to want the hardware supporting that cloud service to be maximized to the needs of the application?

"We've analyzed the long term trends and we're confident that in the future we're going to see a lot more hardware wrapped around software."

So, what does this have to do with DevOps?

Over the last 10 years, we in DevOps have become enraptured with the notion of infrastructure as code. We've pushed hardware out of sight in order to meet enormous production demands and the constant pressure to cut costs. Every day we need to get more code out faster and cheaper. And, the code we get out has got to work, no matter what.

We've had no choice but to abstract away and automate as much as possible. But, as IoT becomes the predominant consumer of computing services moving forward, the importance of the hardware running those services is going to return to the forefront. We're going to be wrapping a lot more hardware around a lot more software. At that point, having the right chipset for the job will matter, as will all the other physical assets that will be required to make the given application run efficiently and safely.
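To make that concrete: in much day-to-day infrastructure as code, the whole hardware decision collapses into a single instance-type string, as in the hedged boto3 sketch below. The AMI ID is a placeholder, and swapping a general-purpose type for a GPU-backed one such as p3.2xlarge is about as close to the silicon as most of us currently get.

```python
# A sketch of how little hardware detail typical infrastructure as code exposes.
# The AMI ID is a placeholder; the instance_type string is effectively the
# entire "hardware spec" the code expresses.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")


def provision(instance_type: str = "c5.large"):
    """Launch one instance; pass a GPU-backed type such as "p3.2xlarge" for Nvidia-accelerated workloads."""
    return ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # placeholder AMI
        InstanceType=instance_type,
        MinCount=1,
        MaxCount=1,
    )
```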

The danger at hand is that we very well might have lost a whole generation of hardware wonks who actually know how this stuff works. Most developers are under the age of 35, with an average age of 31. These folks have always had the internet and cellphones. And, they've always had laptops, but many of those laptops don't even allow you to change the battery. Few, if any, know how to change a burned-out power supply in a desktop computer.

As far as enterprise computing goes, the data center is in a land far, far away. But, this is not going to be the case forever. Soon hardware will be front and center in the application space. Then, we're going to need a whole lot of DevOps people who really understand the details of complex computer hardware. If they're not around, we're going to have a very big problem, one that no amount of infrastructure as code is going to solve.
