Implementing cloud-native security means going back to your secure coding basics

There’s really nothing new under the sun when it comes to addressing security vulnerabilities in code. While there has been a great shift in how server-side applications are architected, including the move to the cloud and the increased use of containers and microservices, the sad reality is that the biggest security vulnerabilities found in code are typically caused by the most common, well-known and mundane of issues, namely:

  1. SQL injection and other interpolation attack opportunities
  2. The use of outdated software libraries
  3. Direct exposure of back-end resources to clients
  4. Overly permissive security
  5. Plain text passwords waiting to be hacked

SQL injection and other interpolation attacks

SQL injections are the easiest way for a hacker to do the most damage.

Performing an SQL injection is simple. The hacker simply writes something just a tad more complicated than DROP DATABASE or DELETE FROM TABLE into an online form. If the input isn’t validated thoroughly, and the application allows the unvalidated input to become embedded in an otherwise harmless SQL statement, the results can be disastrous. With an SQL injection vulnerability, an attacker may be able to read private or personal data, update existing data with erroneous information, or outright delete data, tables and even databases.

Proper input validation and checking for escape characters or suspicious phrases can eliminate this risk. Sadly, busy project managers too often push unvalidated code into production, and the opportunity for SQL injection attacks to succeed persists.
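
To make the distinction concrete, here is a minimal JDBC sketch of the difference between concatenating user input into a query and binding it through a PreparedStatement. The table and column names are purely illustrative:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class CustomerLookup {

    // Vulnerable: user input is concatenated directly into the SQL statement, so input
    // such as "' OR '1'='1" changes the meaning of the query itself.
    public ResultSet findByNameUnsafe(Connection conn, String name) throws SQLException {
        String sql = "SELECT id, name FROM customers WHERE name = '" + name + "'";
        return conn.createStatement().executeQuery(sql);
    }

    // Safer: the input is bound as a parameter, so the JDBC driver treats it as data, not SQL.
    public ResultSet findByName(Connection conn, String name) throws SQLException {
        PreparedStatement ps = conn.prepareStatement(
                "SELECT id, name FROM customers WHERE name = ?");
        ps.setString(1, name);
        return ps.executeQuery();
    }
}
```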

The use of outdated software libraries

Enterprises aren’t buying their developers laptops running Windows XP. And when updates to the modern operating systems they are using do become available, normal software governance policies demand applying a given patch or fix pack as soon as it comes along. But how often do software developers check the status of the software libraries their production systems are currently using?

When a software project kicks off, a decision is made about which open source libraries and projects will be used, and which versions of those projects will be deployed with the application. Once those decisions are made, though, it’s rare for a project to revisit them. Yet there are reasons why new versions of logging APIs or UI frameworks are released, and it’s not just about feature enhancements. Sometimes an old software library contains a well-known bug that gets addressed in subsequent updates.

Every organization should employ a software governance policy that includes revisiting the various frameworks and libraries that production applications link to. Otherwise, they face the prospect of a hidden threat residing in their runtime systems, and the only way they’ll find out about it is if a hacker finds the vulnerability first.

Direct exposure of back-end resources to clients

When it comes to performance, layers are bad. The more hoops a request-response cycle has to jump through in order to access the underlying resource it needs, the slower the program will be. But the desire to save clock cycles should never come at the expense of keeping back-end resources secure.

The exposed-resources problem shows up most often when doing penetration testing against RESTful APIs. With so many RESTful APIs trying to provide clients an efficient service that accesses back-end data, the API itself is often little more than a wrapper for direct calls into a database, message queue, user registry or software container. When implementing a RESTful API that provides access to a back-end resource, make sure the REST calls access and retrieve only the specific data they require, and never hand out a reference to the back-end resource itself.
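
Here is a hypothetical JAX-RS sketch of that principle: the endpoint returns a narrow, purpose-built summary object rather than the JPA entity, the EntityManager or any other handle to the underlying data store. The CustomerService class and its findSummaryById() method are assumed application code, not part of any standard API.

```java
import javax.inject.Inject;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

@Path("/customers")
public class CustomerResource {

    // Assumed application-level service; the client never receives a DataSource,
    // EntityManager or raw table rows.
    @Inject
    private CustomerService customerService;

    /** Only the fields this endpoint is meant to expose. */
    public static class CustomerSummary {
        public String name;
        public String accountStatus;
    }

    @GET
    @Path("/{id}/summary")
    @Produces(MediaType.APPLICATION_JSON)
    public CustomerSummary getSummary(@PathParam("id") long id) {
        // The service maps the entity to the DTO internally; the raw resource stays on the server.
        return customerService.findSummaryById(id);
    }
}
```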

Overly permissive security

Nobody ever sets out intending to lower their shields in such a way that they’re vulnerable to an attack. But there’s always some point in the application lifecycle at which a new feature, or connectivity to a new service, doesn’t work in production the way it does in the pre-production or testing environments. Suspecting the problem might be access related, the team incrementally relaxes security permissions until the code in production works. After a victory dance, the well-intentioned DevOps personnel who temporarily lowered the shields to get things working get sidetracked and never figure out how to keep things running at the originally mandated security levels. The next thing you know, ne’er-do-wells are hacking in, private data is being exposed, and the system is being breached.
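
To put the contrast in code, here is a minimal sketch assuming a Spring Security 5 style configuration; the endpoint paths and roles are illustrative, not prescriptive. The commented-out line is the "temporary" fix that never gets reverted, while the active configuration grants only the access each path actually needs.

```java
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;
import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;

@Configuration
@EnableWebSecurity
public class SecurityConfig extends WebSecurityConfigurerAdapter {

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        // The overly permissive "just get it working" fix that tends to stick around:
        // http.authorizeRequests().anyRequest().permitAll();

        // Least privilege: only what must be public is public, everything else requires auth.
        http.authorizeRequests()
                .antMatchers("/public/**").permitAll()
                .antMatchers("/admin/**").hasRole("ADMIN")
                .anyRequest().authenticated()
                .and()
            .httpBasic();
    }
}
```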

Plain text passwords waiting to be hacked

Developers are still coding plain text passwords into their applications. Sometimes plain text passwords appear in the source code. Sometimes they’re stored in a property file or XML document. But regardless of their format, usernames and passwords for resources should never appear anywhere in plain text.

Some might argue that the plain-text password problem is overblown as a security threat. After all, if it’s stored on the server, and only trusted resources have server access, there’s no way it’s going to fall into the wrong hands. That argument may be valid in a perfect world, but the world isn’t perfect. A real problem arises when another common attack, such as source code exposure or a directory traversal, occurs and the hands holding the plain text passwords are no longer trusted. In such an instance, the hacker has been given an all-access pass to the back-end resource in question.

At the very least, passwords should be encrypted when stored on the file system and decrypted when accessed by the application. Of course, most middleware platforms provide tools, such as IBM WebSphere’s credential vault, for securely storing passwords. These not only simplify the art of password management, but also relieve the developer of responsibility should source code ever be exposed or a directory traversal occur.
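
For teams without a credential vault at hand, the bare minimum looks something like the following sketch: an AES-GCM encrypted credential is read and decrypted at runtime, with the key supplied from somewhere other than the file that holds the ciphertext, such as an environment variable or a secrets manager. The class and method names are illustrative.

```java
import javax.crypto.Cipher;
import javax.crypto.spec.GCMParameterSpec;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class CredentialReader {

    // Decrypts a Base64-encoded, AES-GCM encrypted password read from a property file.
    // The key and IV must never live in the same file as the ciphertext.
    public static String decryptPassword(byte[] key, byte[] iv, String base64Ciphertext) throws Exception {
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.DECRYPT_MODE,
                new SecretKeySpec(key, "AES"),
                new GCMParameterSpec(128, iv)); // 128-bit authentication tag
        byte[] plaintext = cipher.doFinal(Base64.getDecoder().decode(base64Ciphertext));
        return new String(plaintext, StandardCharsets.UTF_8);
    }
}
```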

The truth of the matter is, a large number of vulnerabilities exist in production code not because hackers are coming up with new ways to penetrate systems, but because developers and DevOps personnel simply aren’t diligent enough about addressing well-known security vulnerabilities. If best practices were observed, and software security governance rules were properly implemented and maintained, most of these security violations would never happen.

You can follow Cameron McKenzie on Twitter: @cameronmcnz