AI, RPA offerings still stymied by security, governance concerns
As the COVID-19 pandemic has progressed, it has shifted the workforce from downtown office towers to a collaborative online workplace. Organizations have taken a hard look at how to integrate AI-based and automated systems into their environments to reduce the need for on-site workers and improve the online work experience.
However, this shift has come at a cost. Employees are overwhelmed by major adjustments to how they work and interact with others, and they aren't always in the mood for more change. As a result, organizations that plan to introduce automated and intelligent systems into their environments must be prepared to deal with the associated challenges as they arise.
Let’s explore some potential concern areas for AI and RPA offerings that organizations need to consider before moving forward on a work-from-home approach.
Intelligent automation concerns
Many organizations have sought out AI-based or robotic process automation (RPA) offerings, such as bots that can motivate workers when they seem disengaged, or augmented development systems that can simulate a paired programming environment when it’s impossible to have two developers next to each other.
“When you no longer have a colleague sitting next to you to ask for suggestions or to learn from, AI can become the solution to augment your teams’ skills and their collaboration,” said Diego Tartara, CTO at Globant, an IT consultancy. “In the current pandemic scenario, this becomes critical.”
Tartara found several cultural and technical issues that had to be addressed before he added automated and intelligent augmentation systems to the Globant workforce. On the cultural side, there was a need to create a collaborative and accepting environment between humans and AI-based systems. Workers often become adversarial when they see AI or RPA offerings introduced into the workplace, so managers and decision makers need to ensure that developers see AI systems and RPA projects as collaborative peers rather than adversaries.
As part of the organization’s AI and RPA offering integration, Tartara launched an internal AI course for all developers. The course helped create a mindset that was directed toward how developers could understand the strengths and limitations of machines and humans and how they could best complement each other.
On the technical side, Tartara saw a need to reduce developers' reluctance to use the AI enhancements integrated into their existing development tools. For example, this included thinking about how to extend AI and robotic processes for application publication and deployment across all aspects of the organization's infrastructure.
AI integration only works if it's used throughout the entire application lifecycle. However, that integration needs to be natural and unintrusive; otherwise, DevOps teams will avoid it. Tartara found that the AI and RPA offerings had to be a simple extension of what developers were used to doing, not an intrusion into their normal development and deployment processes.
Security, compliance and governance
When an AI-based or RPA offering is introduced into commonly used DevOps tools, special consideration must be given to security, governance and compliance. RPA and AI augmentation introduce an additional threat vector into any DevOps environment, since any vulnerability in these processes could expose sensitive code to hackers. Consequently, it's important for security teams to continuously analyze and monitor these processes to identify and rectify any security or compliance issues that arise.
The key to successfully bringing AI-based and RPA offerings into the enterprise fold is to address potential objections before implementation begins. If human fears are allayed, and technical concerns about compliance, governance and security are taken seriously beforehand, a rollout of autonomous robots and artificial systems is far more likely to succeed.