How to move a microservices architecture off of AWS

Although many companies use AWS to cut costs and improve scalability, one software company found that moving its microservices architecture off of AWS was the right choice.

Most organizations move to Amazon Web Services (AWS) to trim costs and improve scalability. But at least one company found that moving away from AWS reduced costs and improved control while maintaining a robust server infrastructure. At KubeCon in San Francisco, Paolo Kinney, CTO of Vinli, explained what the company learned while automating the deployment of a Kubernetes infrastructure to make the move.

Vinli is a connected-car platform that streams data from smart, connected cars into the cloud. The company gives developers access to this data via a set of APIs that makes it easy to create new applications for consumers. Developers can work with low-level APIs around telemetry and higher-level APIs around safety, trips, automobile diagnostics and behavioral services.

Kinney said Vinli decided to deploy the entire back-end infrastructure as microservices hosted in Docker containers to make it easy to customize. All the back-end logic and front-end building blocks are released on top of containers. As a result, the company spins up and tears down hundreds of Docker containers daily across its service infrastructure.

Moving off AWS

The microservices architecture was originally deployed on top of AWS Elastic Beanstalk. It worked well, but Kinney said it can get expensive as the back end grows past 15 to 20 services that need load balancing. Costs started spiraling out of control because Vinli had to load balance close to 200 services.

Kinney decided to experiment with Kubernetes as a way of orchestrating Vinli's service infrastructure. Over the course of a weekend, he and a small team refactored the entire microservices architecture to run on top of Kubernetes. The infrastructure consisted of about 45 applications, services and workers running across four software stacks.

A key part of this transition lay in building the last layer of application management on top of Kubernetes primitives. The strategy for keeping life simple for Vinli's internal developers was to keep the requirements minimal. Developers package their code into Docker containers, which can be named and launched quickly. The only internal requirements are that developers and designers notify the ops team about new services and expose a health check at the application level.

Thinking like a farmer

Vinli built a handful of simple tools on top of Kubernetes primitives to tailor the cluster manager to its infrastructure. These were named after farm operations: Shepherd, Foreman, Farmer, Burn and Butcher. Kinney said the whole orchestration layer is rather simple: all of these separate processes together run in about 200 lines of code. The team started by observing what its services had in common and built the tooling around that.

Shepherd makes it easy to roll existing services over to new updates quickly. Kinney said the standard kubectl cluster control tool took 30 seconds to a minute to spin up new services; Shepherd, optimized for Vinli's infrastructure, executes a service rollover in about five seconds. This lets Vinli move containers along without worrying about configuration management from stack to stack.
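Shepherd's internals aren't described in any more detail, but the core of a rollover tool like it is deciding which running containers still need to move to the new image. Purely as an illustration (the `Pod` type and function names below are assumptions, not Vinli's code), that planning step might look like:

```python
from dataclasses import dataclass

@dataclass
class Pod:
    name: str
    image: str

def plan_rollover(pods, new_image):
    """Return the ordered replace actions needed to move every pod
    to new_image. Pods already on the target image are skipped, so
    re-running the planner after a completed rollover plans no work."""
    actions = []
    for pod in pods:
        if pod.image != new_image:
            actions.append(("replace", pod.name, new_image))
    return actions
```

The executor would then apply each action against the cluster API; keeping the plan separate from the execution is what makes a rollover safe to retry.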

Workers are the microservice processes inside Kubernetes that help orchestrate the spinning up and down of application containers. These need to scale up and down at different paces to match changing application loads. Vinli created another internal tool called Foreman for managing the lifecycle of worker containers that orchestrate other Docker processes. It works in conjunction with another tool called Farmer to increase and decrease the replica count of Docker containers in response to service loads.
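The article doesn't say how Farmer picks a replica count, but the usual shape of such a decision is a small pure function: estimate total load, divide by the load one replica should carry, and clamp the result. A hedged sketch of that idea (the parameter names and the 1..50 band are invented for illustration):

```python
import math

def scale_replicas(current_replicas, load_per_replica, target_load,
                   min_replicas=1, max_replicas=50):
    """Choose a replica count that brings per-replica load back near
    target_load. Total load is assumed constant across the resize, so
    desired = ceil(total / target), clamped so a load spike can't make
    the count run away and a lull can't scale a service to zero."""
    total_load = current_replicas * load_per_replica
    desired = math.ceil(total_load / target_load)
    return max(min_replicas, min(max_replicas, desired))
```

A tool in Farmer's position would feed measured service load into a function like this, then patch the replica count on the corresponding Kubernetes controller.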

Vinli deletes pods of Kubernetes containers all the time. "We throw away pods just to keep new pods coming up," Kinney said, "which helps us avoid long running logs or memory issues." A tool called Burn blows away all the pods in a group on command. It works in conjunction with Butcher to kill one or more pods, handle cron jobs, and post updates to the internal Slack channel used by the developer and operations teams.
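Deleting every pod in a group on command is, at its core, a label-selection problem: pick the pods whose group label matches, delete them, and let the replication controller respawn fresh copies. A minimal sketch of that selection step (the label scheme here is an assumption, not Vinli's actual one):

```python
def pods_to_burn(pods, group):
    """Select every pod whose 'group' label matches the target group.
    A Burn-style tool would delete the returned pods and rely on the
    replication controller to bring fresh replacements back up."""
    return [name for name, labels in pods.items()
            if labels.get("group") == group]
```

This mirrors how kubectl's own label selectors work: the tool never tracks pods by name, only by the labels that describe their role.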

Regularly cull the herd to keep things fresh

Once a day Vinli blows all the pods away, and within a few seconds the entire infrastructure is back up and running. Kinney said, "With Kubernetes, everything is extremely disposable. We want to be able to allow things to refresh themselves."

Another key to making the application infrastructure as disposable as possible is eliminating persistence inside the services themselves. All the persistent data is stored outside the services using Redis and Elasticsearch. This ensures that no data is lost when new containers or new configurations are pushed into production. "This lets us stay agile," Kinney said.
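The pattern behind keeping services disposable is injecting the data store rather than holding state in the process. As a hedged sketch (the `TripCounter` service and key scheme are invented for illustration; in production the injected store would be a Redis client rather than the dict used here):

```python
class TripCounter:
    """A stateless service: all state lives in an injected store
    (e.g. a redis.Redis client in production, a plain dict in this
    sketch), so the process can be killed and replaced at any time
    without losing a single count."""

    def __init__(self, store):
        self.store = store

    def record_trip(self, vehicle_id):
        key = f"trips:{vehicle_id}"
        self.store[key] = self.store.get(key, 0) + 1
        return self.store[key]
```

Because the counter survives in the external store, a replacement container picks up exactly where the discarded one left off, which is what makes the daily pod purge safe.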

