DevOps tooling only a small part of the enterprise security puzzle

Tools, culture, and other popular topics drew much of the attention at the DevOps Enterprise Summit this year, yet security remained the undercurrent of concern running just below the surface. Fortunately, a number of speakers addressed the issue and offered insights and best practices for large-scale organizations that want to mobilize DevOps across teams without losing sight of security and risk management objectives.

One massive organization takes two big leaps—securely

Phil Lerner, Senior IT Executive of UnitedHealth Group's Optum arm, offered a unique perspective on continuous security monitoring from down in the trenches. UHG recently made the decision to adopt cloud and DevOps simultaneously, a bold move that made sense because of the synergy between the platform and the methodology. As part of a highly regulated, compliance-conscious industry, the organization put security first in both initiatives.

“We’re bringing good, solid security infrastructure practices into the cloud and surrounding the pipeline with as few tools as possible to make management easy. That brings security to the forefront where it’s transparent for DevOps folks. But we’re constantly looking for risks, determining what the threat levels are, logging, and monitoring. We have gates we’ve built between zones and really took a network security approach to what surrounds the pipeline.”

In Lerner’s view, the tendency to think about DevOps as a set of tools is not necessarily the best approach. Instead of trying to completely retool the enterprise, the UHG approach focuses on optimizing processes and adding specific capabilities as needed. “To me, it’s more about the culture and using the tools we know in the enterprise and leveraging them end to end. We know how to manage them very well. We innovate around them and push our vendors to build APIs to do things we would like to do to innovate in the virtual security space.” With a staff of about a thousand IT security specialists in a team of about ten thousand total IT professionals at UHG, it certainly makes sense to use DevOps with the tools that Dev, Ops, and Sec already know.

Some standards persist, but fresh challenges have appeared

Akamai’s Director of Engineering Matthew Barr pointed to several standard best practices that organizations of all sizes should follow. Architecting applications to prevent unauthorized access is a no-brainer. “We don’t send a password to anything that is not one of the active directory servers. You don’t want to use LDAP on the application side, because then you have to worry about having credentials that might be reused.” He spoke further about Atlassian’s newest options for SSO and how they enable greater security for the enterprise across the application stack.

But with the increasing popularity of virtual and remote teams across the enterprise, there are new concerns that sometimes fly under the radar. “Some people may not realize, when you look at the Git logs, you see the committer username and email which are actually set on the laptop. You can change that any time you like. The server doesn’t authenticate that information. Without using GPG keys to sign your commits, there’s no proof who actually wrote something.” This is a change from Subversion or Perforce, where it was reasonably safe to assume that the person listed as the committer actually committed the code. Barr painted a scenario in which a backdoor is discovered in the codebase. When security goes looking for the culprit, they will find a name in the Git repository—but they have no way to determine whether that person actually inserted the malicious code. It would be far too easy to set up a patsy to take the fall. This is just one of the ways risk management is changing as DevOps teams become more distributed.
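Barr's point can be demonstrated in a few commands. The sketch below (using a hypothetical throwaway repository; the names and messages are invented for illustration) shows that the committer identity in a Git log is nothing more than local configuration, and the trailing comments note how GPG-signed commits close the gap.

```shell
# Git records whatever identity is configured locally —
# the server never verifies it.
repo=$(mktemp -d)
cd "$repo"
git init -q .

# Anyone can claim to be anyone (hypothetical identity):
git config user.name "Some Other Developer"
git config user.email "colleague@example.com"

echo 'innocent-looking change' > app.txt
git add app.txt
git commit -q -m "routine maintenance"

# The log reports the claimed identity, with no proof behind it:
git log -1 --format='author: %an <%ae>'

# Signed commits address this. With a GPG key generated and
# registered (git config user.signingkey <KEYID>), you would use:
#   git commit -S -m "routine maintenance"
#   git log --show-signature   # verifies the signature server-side or locally
```

Because nothing in the plain commit is cryptographically bound to a person, only a signature policy (for example, rejecting unsigned pushes on the server) actually establishes authorship.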

Open source continues to pose problems for enterprise security

The Heartbleed incident will likely go down in history as one of the greatest open source debacles of all time. This massive security hole in the OpenSSL cryptographic software library went unnoticed for roughly two years, putting the lie to the idea that having many eyes on open source effectively alleviates the risk of serious vulnerabilities. This is one reason Scott Wilson, Automic's Product Marketing Director for Release Automation, argues that enterprises should not overuse open source.

“You have to ask yourself what you are really in business to do.” For most companies outside the technology space, from banks to healthcare, transportation, and insurance, the goal is not to create software. It is to generate revenue by selling other products and services. Open source should only be used insofar as it enables that objective. This decision entails weighing the risk of undetected vulnerabilities as well as all the ongoing maintenance and customization that open source brings along with it.

What’s the solution? According to Wilson, in many cases it’s good to bring on third-party vendors to balance things out. These vendors are devoted full-time to maintaining and tweaking software for their clients, providing support on a continual basis. It’s simple: “They make money supporting you.” And they may be able to do it more cost-effectively than a DIY approach. Even if it’s true that ‘every company is a software company’, not every company needs to do it all in-house. It takes internal teams, the open source community, and the vendor ecosystem working together to build a more secure enterprise IT. Perhaps DOES itself will one day morph into the DevSecOps Enterprise Summit to take things one step further.
