In our series on cloud native computing, TheServerSide spoke with a number of experts in the field, including several members of the Cloud Native Computing Foundation. The following is a transcription of the interview between Cameron McKenzie and Apprenda’s Sinclair Schuller.
An interview with Apprenda’s Sinclair Schuller
How do you define cloud native computing?
Cameron McKenzie: In TheServerSide’s quest to find out more about how traditional enterprise Java development fits in with this new world of cloud native computing that uses microservices, Docker containers and Kubernetes, we tracked down Sinclair Schuller. Sinclair Schuller is the CEO of Apprenda. He’s also a Kubernetes advocate, and he sits on the governing board of the Cloud Native Computing Foundation.
So the first thing we wanted to know from Sinclair was, well, how do you define cloud native computing?
Sinclair Schuller: Great question. I guess I’ll give you an architecturally-rooted definition. To me, a cloud native application is an application that has an either implicit or explicit capability to exercise the elastic resources that it lives on, and that has a level of portability that makes it easy to run in various scenarios, whether it be on cloud A or cloud B or cloud C. But in each of those instances, its ability to understand and/or exercise the underlying resources in an elastic way is probably the most fundamental definition I would use. That’s different from traditional, non-cloud native applications that might be deployed on a server. Those have no idea that the resources below them are replaceable or elastic, so they can never take advantage of those sorts of infrastructure properties. And that’s what makes them inherently not scalable, inherently not elastic and so on.
Cameron McKenzie: Now, there’s a lot of talk about how the application server is dead, but whenever I look at these environments that people are creating to manage containers and microservices, they’re bringing in all of these tools together to do things like monitoring and logging and orchestration. And eventually, all this is going to lead to some sort of dashboard that gives people a view into what’s happening in their cloud native architecture. I mean, is the application server really dead or is this simply going to redefine what the application server is?
Sinclair Schuller: I think you’re actually spot-on. This whole “the application server is dead” thing is, unfortunately, a consequence of cheesy marketing, where new vendors try to reposition old vendors. And it’s fair, right? That’s just how these things go. But it’s not even true that the old app server is dead. In many cases people will still deploy their apps to something like Tomcat sitting in a container. So that happens, so that hasn’t gone away per se, even if you’re building new microservices.
Has the traditional heavyweight app server gone away? Yes. So I think that has fallen out of favor. Take the older products like a WebLogic or something to that effect, you don’t see them used in new cloud native development anymore. But you’re right, what’s going to happen, and we’re seeing this for sure, is there’s a collection of fairly loosely coupled tooling that’s starting to surround the new model and it’s looking a lot like an app server in that regard.
This is a spot where I would disagree with you: I don’t know if there’s going to be consolidation. But certainly, if there is, then what you end up with is a vendor that has all the tools to effectively deliver a cloud native app server. If there isn’t consolidation, and this tooling stays fairly loosely coupled and fragmented, then we have a slightly better outcome than the traditional app server model: we have the ability to assemble best-of-breed tooling piecemeal, the way we need it.
Cloud native computing and UI development
Cameron McKenzie: I can develop a “hello world” application. I can write it as a microservice. I can package it in a Docker container and I can deploy it to some sort of hosting environment. What I can’t do is go into the data center of an airline manufacturer, break down their monolith, turn it into microservices, and figure out which ones should be coarse-grained microservices and which ones should be fine-grained microservices. I have no idea how many microservices I should end up with, and once I’ve done all that, I wouldn’t know how to orchestrate it all live in production.
How do you do that? How do organizations take this cloud native architecture and this cloud native infrastructure and scale?
Apprenda’s role in advancing cloud native computing
Sinclair Schuller: Yeah, so you actually just described our business. Effectively, what we notice in the market is that if you take the world’s largest companies that have been built around these monolithic applications, they have the challenge of “how do we decompose them, how do we move our message forward into a modern era? And if we can, of course, some applications don’t need that, how do we at least run them in a better way in a cloud environment so that we can get additional efficiencies, right?”
So what we focused on is providing a platform for cloud native and also ensuring that it provides, for lack of a better term, era-bridging capabilities. Now, the way we built Apprenda, our IP will allow you to run a monolith on the platform, and we’ll actually instrument changes into the app and have it behave differently so that it can do well on a cloud-based infrastructure environment, giving that monolith some cloud architecture elements, if you will.
Now, why is that important? If you can do that and you can quickly welcome a bunch of these applications onto a cloud native platform like this and remove that abrupt requirement that you have to change the monolith and decompose it quickly into who knows how many microservices, to your point, it affords a little bit of slack so the development teams in those enterprises can be more thoughtful about that decision. And they can choose one part of the monolith, cleave it off, leverage that as an actual pure microservice, and still have that running on the same platform and working with the now remaining portion of the monolith.
By doing that, we actually encourage enterprises to accelerate the adoption of a cloud native architecture, since it’s not such an abrupt required decision and such an abrupt change to the architecture itself. So for us, we’re pretty passionate about that step. And then second, there’s the question of how do I manage tons of these all at once? The goal of a good cloud abstraction like Apprenda, in my opinion, a good platform that’s based on Kubernetes, is to make managing 10, 1,000 or N number of microservices feel as easy as running 1 or 2.
And if you can do that, if you can make running 1 or 2 kind of the M.O. for running just tons of microservices, you remove that cost from the equation and make it easier for enterprises to digest this whole problem. So we really put a lot of emphasis on those two things: how do we provide that bridging capability, so that you don’t have to make such an abrupt transition and can do so on a timeline that’s more comfortable to you and fits what you need, and how do we deal with the scale problem?
Ultimately, the only way that scale problem gets solved, however, is that a cloud platform has to truly abstract the underlying infrastructure resources and act as a broker between the microservices tier and those infrastructure resources. If it can do that, then scaling actually becomes almost trivial as a consequence of a properly architected platform.
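The “make running N microservices feel like running 1 or 2” idea above is essentially what a declarative reconcile loop gives you: operators only edit desired state, and the platform computes the per-service work. The sketch below is a toy illustration of that pattern in Python; the function and service names are hypothetical and not Apprenda’s or Kubernetes’ actual API.

```python
# A toy reconcile loop illustrating the declarative-scaling idea:
# the operator declares desired state once, and the loop derives the
# start/stop work needed, whether there are 2 services or 2,000.
# All names here are illustrative, not any real platform's API.
def reconcile(desired, actual):
    """Return the actions needed to make `actual` match `desired`."""
    actions = []
    for service, want in desired.items():
        have = actual.get(service, 0)
        if have < want:
            actions.append(("start", service, want - have))
        elif have > want:
            actions.append(("stop", service, have - want))
    return actions

desired = {"catalog": 3, "checkout": 2}   # declared once, per service
actual = {"catalog": 1, "checkout": 4}    # what is actually running

for verb, service, count in reconcile(desired, actual):
    print(verb, service, count)
# start catalog 2
# stop checkout 2
```

The point of the design is that adding a thousandth service changes only the `desired` map, not the operational procedure.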
Cameron McKenzie: Now, you talked about abstractions. Can you speak to the technical aspects of abstracting that layer out?
Sinclair Schuller: Yeah, absolutely. So there are a couple of things. One is if you’re going to abstract resources, you typically need to find some layer that lets you project a new standard or a new type of resource profile off the stack. Now, what do I mean by that? Let’s look at containers as an example.
Typically I would have the rigid structure of my infrastructure: this VM or this machine has this much memory, it has one OS instance, it has this specific networking layout. Now, if I want to actually abstract it, I need to come up with a model that can sit on top of that, give the app some sort of meaningful capacity, and divvy it up in a way that is no longer specifically tied to that piece of infrastructure. So containers take care of that, and we all understand that now.
But let’s look at subsystems. Let’s say that I’m building an application and we’ll start with actually a really trivial one. I have something like logging, right? My application logs data to disk, usually dumping something into a file someplace. If I have an existing app that’s already doing that, I’m probably writing a bunch of log information to a single file. And if I have 50 copies of my application across 50 infrastructure instances, I now have 50 log files sitting around in the infrastructure who knows where. And as a developer, if I wanted to debug my app, I have to go find all of that.
With Apprenda, what we focus on is doing things like, “Well, how can we abstract that specific subsystem? How can we intervene in the logging process in a way that actually allows us to capture log information and route it to something like a decentralized store that can aggregate logs and let you parse through them later?” So for us, whenever we think about doing this as a practical matter, it’s identifying the subsystems in an app architecture, like logging, like compute consumption, which containers take care of, like identity management, and actually intercepting and enhancing those capabilities so that they can be dealt with in a more granular way and in a more affordable way.
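The logging interception Schuller describes can be pictured with Python’s standard `logging` module: instead of each instance writing its own file, every instance routes records through a handler pointed at a shared store. The `CentralLogStore` class below is a hypothetical in-memory stand-in for a real aggregator (a Fluentd or Elasticsearch endpoint, say), not Apprenda’s actual mechanism.

```python
import logging

# Hypothetical central store standing in for a real log aggregator;
# the class name and in-memory list are illustrative only.
class CentralLogStore(logging.Handler):
    def __init__(self):
        super().__init__()
        self.records = []

    def emit(self, record):
        # Tag each record with the instance it came from, so 50 copies
        # of the app produce one searchable stream, not 50 stray files.
        self.records.append((record.name, self.format(record)))

store = CentralLogStore()
store.setFormatter(logging.Formatter("%(name)s %(levelname)s %(message)s"))

# Simulate three application instances, each with its own logger,
# all routing through the same central handler.
for instance in ("app-1", "app-2", "app-3"):
    log = logging.getLogger(instance)
    log.setLevel(logging.INFO)
    log.addHandler(store)
    log.info("request handled")

# Every instance's log lines are now queryable in one place.
print(len(store.records))  # 3
```

The design point is that the application code still just calls `log.info(...)`; the routing decision lives in the intercepted subsystem, not in the app.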
Preparing for cloud computing failures
Cameron McKenzie: Earlier this year we saw the Chernobyl-esque downfall of the Amazon S3 cloud, and it had the ability to pretty much take out the internet. What type of advice do you give to clients to ensure that, if their cloud provider goes down, their applications don’t go down completely?
Sinclair Schuller: I think the first part of that is culturally understanding what cloud actually is and making sure that all the staff and the architects know what it is. In many cases, we think of cloud as some sort of, like, literally nebulous and decentralized thing. Cloud is actually a very centralized thing, right? We centralize resources among a few big providers like Amazon.
Now, what does that mean? Well, if you’re depending on one provider, like Amazon, for storage through something like S3, you can imagine that something could happen to that company or to that infrastructure that would render that capability unavailable, right? Instead, what I think happens is that culturally people have started to believe that cloud is foolproof, that it has five 9s or nine 9s, pick whatever SLA you want, and they rely on that as their exclusive means for guaranteeing availability.
So I think number one, it’s just encouraging a culture that understands that when you’re building on a cloud, you are building on some sort of centralized capacity, some centralized capability, and that it can still fail. As soon as you bring that into the light and people understand it, then the next question is, “How do I get around that?” And we’ve done this in computing many, many times. To get around it, you have to come up with architecture patterns that can properly deal with things like segregation of data and failures, right?
So could I build an application architecture that maybe uses S3 and potentially stripes data across something like Azure, or across multiple regions in S3? If you have a mentality that something like S3 can fail, you suddenly push that concern into the app architecture itself. And it does require that a developer starts to think that way, where they say, “Yeah, I’m going to start striping data across multiple providers or multiple regions.” And that gets rid of these sorts of situations.
I think part of the reason that we saw the S3 failure happen and affect so many different properties is that people weren’t thinking that way and they saw the number of 9s in the SLA and said, “Oh, I’ll be fine,” but it’s just not the case. So you have to take that into consideration in the app architecture itself.
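The multi-provider pattern described above can be sketched as a write-to-all, read-with-failover wrapper. In this minimal Python sketch the dict-backed `MemoryStore` objects are hypothetical stand-ins for real object stores like S3 and Azure Blob Storage; none of these names are a real SDK’s API.

```python
# Hypothetical multi-provider store: writes go to every backend, and
# reads fall over to the next backend when one is unavailable.
class ProviderDown(Exception):
    pass

class MemoryStore:
    """In-memory stand-in for an object store such as S3 or Azure Blob."""
    def __init__(self, name):
        self.name = name
        self.objects = {}
        self.available = True

    def put(self, key, value):
        if not self.available:
            raise ProviderDown(self.name)
        self.objects[key] = value

    def get(self, key):
        if not self.available:
            raise ProviderDown(self.name)
        return self.objects[key]

def replicated_put(stores, key, value):
    # Write to every provider so no single outage loses the object.
    for s in stores:
        s.put(key, value)

def failover_get(stores, key):
    # Try providers in order; an S3-style outage just shifts reads.
    for s in stores:
        try:
            return s.get(key)
        except ProviderDown:
            continue
    raise ProviderDown("all providers")

s3 = MemoryStore("s3-us-east-1")
azure = MemoryStore("azure-east-us")
replicated_put([s3, azure], "invoice-42", b"pdf bytes")

s3.available = False                             # simulate the S3 outage
print(failover_get([s3, azure], "invoice-42"))   # b'pdf bytes'
```

As Schuller notes, the key move is that availability becomes a concern of the application architecture, not something delegated entirely to one provider’s SLA.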
Cameron McKenzie: Apprenda is both a leader and an advocate in the world of Kubernetes. What is the role that Kubernetes currently plays in the world of orchestrating Docker containers and cloud native architectures?
Sinclair Schuller: So when we look at the world around containers, a couple of things became very clear. Configuration, scheduling of containers, making sure that we can do container placement, these all became important things that people cared about, and certain projects evolved, like Docker Swarm and Kubernetes, to deal with them in a common way.
So when we look at something like Kubernetes as part of our architecture, the goal was let’s make sure that we’re picking a project and working against a project that we believe has the best foundational primitives for things like scheduling, for things like orchestration. And if we can do that, then we can look at the set of concerns that surround that and move up the application stack to provide additional value.
Now in our case, by adopting Kubernetes as the core scheduler for all the cloud native workloads, we then looked at that and said, “Well, what’s our mission as a company?” To bring cloud into the enterprise, right? Or to the enterprise. And what’s the gap between Kubernetes and that mission? The gap is dealing with existing applications, dealing with things like Windows because that wasn’t something that was native to Kubernetes. And we said, “Could we build IP or attach IP around Kubernetes that can solve those key concerns that exist in the world’s biggest companies as they move to cloud?”
So for us, we went down a couple of very specific paths. One, we took our Windows expertise, built Windows container and Windows node support in Kubernetes, and contributed that back to the community, something I would like to see getting into production sometime soon. Number two, we surrounded Kubernetes with a bunch of our IP that focuses on dealing with the monolith problem, decomposing monoliths into microservices and having them run on a cloud native platform. So for us, it was extending the Kubernetes vision beyond container orchestration, container scheduling and placement, and tackling those very specific architectural challenges across platform support and the ability to run and support existing applications side by side with cloud native.
Cameron McKenzie: To hear more about Sinclair’s take on the current state of cloud native computing, you can follow him on Twitter @sschuller. You can also follow Apprenda and if you’re looking to find out more information on cloud native computing, you can always go over to the Cloud Native Computing Foundation’s website, cncf.io.
You can follow Cameron McKenzie on Twitter: @cameronmcnz