Virtualization is the term given to the compartmentalization of physical resources to facilitate sharing among applications or users, including in the cloud. Service-oriented architecture (SOA) provides highly componentized applications whose features can be mixed and matched to fit worker needs exactly; this is, in effect, application-level virtualization. With applications and resources virtualizing at the same time, the question is how application lifecycle management (ALM) principles can hit two simultaneously moving targets and achieve operational stability with one set of sound practices.
Achieving stability in ALM principles
Both the SOA model and the virtualization/cloud model aim at flexibility and agility, and both break the traditional monolithic application-and-server structure of the past. But no application deploys in a vacuum; even virtual or cloud-based applications have a basic model that defines the relationship between their components and the resource pool expected to host them. SOA and ALM in the virtual world begin with a new virtual application deployment model, and the major difference between the real-server and virtual-server models is network connectivity. It's that issue that SOA ALM has to accommodate.
In traditional applications, deployment is a matter of assigning an application to a server, whose IP address then allows that application to be accessed. Servers are typically networked on a data center LAN and accessed through a VPN or the Internet. When SOA applications are deployed, the major difference is that the SOA componentization will create horizontal traffic between components. In the data center, even in a virtualization-based data center, this new horizontal traffic is still carried on the data center LAN and network performance is unlikely to be a major factor. That simplifies testing and staging of applications.
The challenge of managing horizontal traffic
When virtualization is extended over multiple data centers, as it is with the cloud, the problem is that the horizontal traffic created by componentization has a much more variable network connection. In true cloud applications, where the hosting of components can be widely dispersed, the performance differences can be large enough to impact worker quality of experience (QoE) and even cause application failures. This means that work distribution has to be optimized first in development and then tested at every stage of ALM where software versions are validated and staged for advance toward production.
When the cloud is created using high-speed connections between data centers (fiber, for example, or 100G Ethernet), the risk of performance differences due to where components are hosted will be small. Where the cloud is hybridized between public and private, involves multiple public clouds, or is hosted on geographically diverse resources, the testing process will have to reflect a potentially large variation in performance. The greatest differences will be found where cloud data centers are not connected into a common virtual LAN but are each independent IP subnetworks, each with different WAN routings.
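One way to exercise that variation during staging is to inject artificial connection delay around component calls. The sketch below is a minimal illustration of the idea; the profile names, delay ranges, and the `price_lookup` component are all hypothetical, and real figures should come from measurements of your own inter-data-center links.

```python
import random
import time

# Hypothetical delay ranges (seconds) for different cloud topologies.
# Replace with measured figures from your own WAN connections.
DELAY_PROFILES = {
    "same_datacenter": (0.0005, 0.002),
    "fiber_linked": (0.002, 0.010),
    "multi_cloud_wan": (0.030, 0.250),
}

def call_with_simulated_delay(component_fn, profile, *args, **kwargs):
    """Invoke a component call with an injected network delay drawn
    from the chosen topology profile, for use in staging tests."""
    low, high = DELAY_PROFILES[profile]
    time.sleep(random.uniform(low, high))
    return component_fn(*args, **kwargs)

# A stand-in for a remote SOA component.
def price_lookup(item):
    return {"item": item, "price": 9.99}

result = call_with_simulated_delay(price_lookup, "multi_cloud_wan", "widget")
```

Running the same functional test suite under each profile establishes how far the application's behavior degrades as horizontal connections slow down.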
Besides the nature of the cloud network, the extent to which SOA will create variables in horizontal traffic is related to the extent to which work flows between components. Applications that make extensive use of message or service buses are more likely to be impacted by differences in where components are hosted than those which have few components and are more integrated than orchestrated.
When SOA applications are likely to be sensitive to horizontal traffic performance, the first goal is to reduce that sensitivity at the network level as part of the project design. This can be done by ensuring that SOA components are hosted locally to each other, meaning in a common data center. Where two SOA applications are integrated and common hosting isn't practical, try to assign components to data centers in a way that avoids high-volume horizontal traffic between the disparate locations. This will have to be done using the provisioning tools and policies available, whether from a commercial scripting package or a DevOps tool.
Validating policy restrictions
Once a hosting plan for components is completed, the ALM processes must validate the policy restrictions in that plan at each level, but that won't be enough to ensure application stability. It's important that the applications be tested with variable connection delay to establish a reasonable upper bound on performance issues, and that the actual application conditions be tested against this boundary as a part of the ALM cycle. This is particularly important as the application moves into final staging prior to production release. Network QoS can be tested with a variety of tools, and in some cases applications or management systems in place may allow it to be monitored. It's also possible to use simple management tools to parse the directories that record component addresses to search for cases where the IP subnetworks don't match expectations. This would be an indication that the cloud had moved the component's host to a new location where performance might be an issue.
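That directory-parsing check can be scripted with nothing more than the standard library. In this sketch the component names, addresses, and expected subnets are invented for illustration; in practice they would come from the service directory and the hosting plan:

```python
import ipaddress

# Hypothetical directory of component addresses versus the subnets
# their planned data centers are expected to use.
EXPECTED_SUBNETS = {
    "orders": ipaddress.ip_network("10.1.0.0/16"),
    "billing": ipaddress.ip_network("10.2.0.0/16"),
}
DIRECTORY = {"orders": "10.1.4.17", "billing": "10.9.0.3"}

def subnet_mismatches(directory, expected):
    """Flag components whose recorded address falls outside the subnet
    of their planned data center, which suggests the cloud has moved
    the component's host to a new location."""
    return [name for name, addr in directory.items()
            if ipaddress.ip_address(addr) not in expected[name]]

print(subnet_mismatches(DIRECTORY, EXPECTED_SUBNETS))  # → ['billing']
```

Any component the check flags is a candidate for a performance review before the version advances toward production.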
Widely distributed SOA applications may also be exposed to wide variations in the performance of the user-to-application connection. This is particularly true if the application is hosted on geographically diverse public cloud resources. To ensure that transitioning hosting from one geography to another through the day, for example, doesn't impact performance by stranding some components in one geography while moving others to a new one, ALM should also test the transitioning process, again focusing the testing on pre-production staging. This will ensure that the conditions that could arise in production are adequately addressed before they become a problem.
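A transition test can include a simple post-move consistency check: after the hosting shift, no component should be left behind in a minority region. The snapshot and region names below are hypothetical:

```python
from collections import Counter

# Hypothetical snapshot of per-component hosting regions taken
# after a scheduled geography transition.
AFTER = {"orders": "eu-west", "inventory": "us-east", "billing": "eu-west"}

def stranded_components(snapshot):
    """Return components left outside the majority region after a
    transition -- i.e., potentially stranded away from their peers,
    with horizontal traffic now crossing geographies."""
    counts = Counter(snapshot.values())
    majority_region, _ = counts.most_common(1)[0]
    return sorted(c for c, r in snapshot.items() if r != majority_region)

print(stranded_components(AFTER))  # → ['inventory']
```

Running this as an assertion in pre-production staging catches an incomplete transition before it turns into a production QoE problem.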