Choosing the right DevOps tool to tame your polyglot programming

When applications are built using a polyglot mix of languages and components, organizations need good DevOps tools to manage that integration.

It's not unusual for organizations to use a variety of programming languages, whether Ruby, Java, Scala or C#, to solve their software development challenges. But in a world of polyglot programming, organizations need to start thinking about which DevOps tools might help them tame the software complexity beast.

It is sometimes called duct tape development when the majority of your application is assembled from the vast population of available packages. Modern applications lean heavily on Ruby gems, npm packages, NuGet packages and more. And while duct tape development is somewhat of a derogatory term, it is undeniable that the percentage of application code based on open source components, artifacts and frameworks is increasing rapidly. This accelerates delivery and optimizes developer time, but it also introduces more risk and more things to manage. The good news is that wrangling these components can be done automatically and with minimal effort.

Developers can introduce a new package to their application on a whim and in minutes. And with Docker container technology, they can grab not just a new component but an entire stack. The speed and flexibility this affords development teams is obvious; the risks that come with it, however, are often overlooked until it is too late. The risks include:

  • Visibility: Do you know who built the component and what code it includes? It is impractical to review the source code of every OSS component, and very difficult to scour the web thoroughly enough during vetting to ensure a package is free of known issues. With Docker images the problem gets even worse: an image is effectively a component of components. Whether accidentally or maliciously, popular public base images can easily carry risky configurations or even planted spyware. When developers add their own ingredients (layers) and make the image core to your application, they inadvertently introduce those issues into your environment as well.
  • Known vulnerabilities: Even with a single component, it is hard to keep up with known issues and updates. Remember Heartbleed? Resolution only happened after there was an incident, with IT teams scrambling to identify every piece of infrastructure running the risky component and trying to update or remove it. As you add components, the problem compounds, and it simply is not possible to keep up with known vulnerabilities by hand; the default response has been to wait until something bad happens. (A minimal automation sketch follows this list.)
  • Consistency: Developers do not spend the time to review every version of every component in their development environment. It is often hard to know whether developers are building and testing against the same stack, or what exact stack the application is known to work on and should therefore run in production. And when developers use their local machines, or repeatedly deploy code to the same long-lived VM, they introduce variables that have accumulated over time, leaving virtually no consistency from development to production. This makes addressing any issue more time-consuming than it needs to be.
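
To make the known-vulnerabilities point concrete, here is a minimal sketch of automating that lookup against the public OSV database (osv.dev). The package name and version below are illustrative; this is a sketch under those assumptions, not a substitute for a full artifact management tool.

```python
# Minimal sketch: query the public OSV vulnerability database for one
# component version. The package and version below are illustrative.
import json
import urllib.request

def known_vulnerabilities(name: str, version: str, ecosystem: str = "npm") -> list:
    """Return OSV vulnerability records for a single component version."""
    query = json.dumps({
        "version": version,
        "package": {"name": name, "ecosystem": ecosystem},
    }).encode("utf-8")
    request = urllib.request.Request(
        "https://api.osv.dev/v1/query",
        data=query,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response).get("vulns", [])

if __name__ == "__main__":
    # An old lodash release with published advisories, so the list is
    # typically non-empty.
    for vuln in known_vulnerabilities("lodash", "4.17.15"):
        print(vuln["id"], vuln.get("summary", ""))
```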

As if the number of components and containers did not compound the problem enough, an organization leveraging microservices amplifies it tremendously: each individual service has its own configuration, and the number of deployed service instances multiplies quickly.

The only real problem with the fear I've just instilled is if an organization chooses to do nothing. The tooling to mitigate all of these issues exists and is widely adopted: artifact management tools can manage and monitor components automatically, across the entire delivery chain.

The most popular of these tools are Sonatype Nexus, Rogue Wave OpenLogic and Artifactory, and there are others, both open source and commercial. DevOps tools in this category leverage something developers are all too familiar with: a repository. Except in this case the repository is not for code; it is for packaged components and container images, with some smarts behind it.
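
As a concrete illustration, here is a minimal sketch of asking such a repository whether a component version is already in the approved internal library. It assumes a Sonatype Nexus 3 instance at a hypothetical internal address; the /service/rest/v1/search endpoint is taken from the Nexus 3 REST API, so check your own tool's documentation for the equivalent call.

```python
# Minimal sketch: ask a Nexus 3-style repository manager whether an exact
# component version exists in an approved repository. The host name and
# repository name are hypothetical.
import json
import urllib.parse
import urllib.request

NEXUS_URL = "https://nexus.example.internal"  # hypothetical internal host

def in_approved_library(repository: str, name: str, version: str) -> bool:
    """True if the exact component version is present in the repository."""
    params = urllib.parse.urlencode(
        {"repository": repository, "name": name, "version": version}
    )
    with urllib.request.urlopen(
        f"{NEXUS_URL}/service/rest/v1/search?{params}"
    ) as response:
        return bool(json.load(response).get("items"))

if __name__ == "__main__":
    print(in_approved_library("npm-approved", "lodash", "4.17.21"))
```

A build that resolves all of its dependencies through a proxy repository like this gets policy enforcement and local caching in one place.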

The ideal DevOps tools will:

  • Have rules and roles for who can and cannot use components from an approved library.
  • Have the ability to integrate with release automation tools and prevent, or at minimum warn about, risky components trying to get into production during a release (see the sketch after this list).
  • Automatically update components.
  • For on-premises instances, have the ability to keep local copies of components to improve build performance.
  • Automatically check components against known vulnerability databases.
  • Perform analytics and reporting on components and all metadata associated with them.
  • Have the ability to organize and search on components.
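
To illustrate the release-gate item above, here is a minimal sketch of a check that release automation could run before a deploy: it walks the build's component manifest and blocks the release if anything has known issues. The manifest format and the lookup function are illustrative assumptions, not any specific vendor's interface; in practice the lookup could be the OSV helper sketched earlier or your repository manager's own scanning interface.

```python
# Minimal sketch: fail a release when any component in the build manifest
# has known issues. The manifest and the canned lookup are illustrative.
import sys
from typing import Callable, List, Tuple

def gate_release(
    manifest: List[Tuple[str, str]],
    lookup: Callable[[str, str], list],
) -> int:
    """Return a non-zero exit code if any component has known issues."""
    blocked = False
    for name, version in manifest:
        issues = lookup(name, version)
        if issues:
            blocked = True
            print(f"BLOCKED: {name}@{version} has {len(issues)} known issue(s)")
    return 1 if blocked else 0

if __name__ == "__main__":
    # Illustrative manifest and canned results; a real run would read the
    # manifest from the build and call a vulnerability database.
    demo = [("lodash", "4.17.15"), ("express", "4.18.2")]
    canned = {("lodash", "4.17.15"): ["CVE-2020-8203"]}
    sys.exit(gate_release(demo, lambda n, v: canned.get((n, v), [])))
```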

But these DevOps tools will only do the job if you already have a strategy for how you will use them: make sure the tool is actually used, and used properly, and establish a steward for it. Someone needs to ensure proper adoption, see that reports are reviewed, and keep the tools themselves periodically updated and vetted.

The vetting process and the operation of the tool can be handled by different people, and these duties often fall outside typical development roles. QA/QE teams can make good administrators of such a tool or, in some organizations, a good DevOps team; in some shops, even the legal department manages it. Everyone contributes to what should be in the library, with the vast majority of requests coming from developers.

The other aspect of such a tool is how it fits into the delivery chain. It must integrate with your release automation processes; otherwise, its benefits will not be fully realized. And it should fit into all phases -- development, integration and production -- not just development.

There is no way to avoid the onslaught of application components, and it is not worth a developer's time to rewrite components that are well known and proven; developers should focus on the functionality that makes your application unique. While it might seem scary, the administration of all these new components is relatively straightforward. All these monitoring tools need is a little love and proper adoption, and they will handle the risk and complexity automatically.

What DevOps tools do you use to manage your polyglot systems? Let us know.

Next Steps:
DevOps tools and how they impact engineers
Continuous delivery with Chef's DevOps tool
Getting on the same page with DevOps tools
Key sessions at the DevOps Enterprise Summit conference
