Software development managers may want to focus on how their organizations develop AI rather than just chasing the latest algorithms. Enthusiasm will only get organizations so far. In the long run, software development managers will need to improve the way their organizations build AI applications. There also needs to be a shift in communication between app developers and executives to build trust with regulators and customers.
It’s up to management teams to identify the proper channels for integrating AI culture into their development teams, and to bring all elements of an IT organization into a productive working relationship.
Focus on tasks, not technology
IT executives are deluged with messages that AI will change everything, yet they feel they aren’t moving fast enough, said Kristian Hammond, professor of computer science and journalism at Northwestern University and chief scientist at Narrative Science, which builds tools to transform data into natural language narratives. This can lead to fear, urgency and embarrassment as executives struggle to keep up. As a result, many executives hold unhelpful conversations about what their AI culture and strategy should be.
In parallel to this, engineering teams want to make a shift toward machine learning, and vendors pitch new AI features in their products. While Hammond believes that AI will have a profound impact in the long run, many AI projects will fail along the way.
Instead, IT executives should take a step back and focus on the functionality the business needs rather than a shiny new technology. “Ask ‘What does this technology do, and what does it do for me?’” Hammond said. “The functionality is driven by business needs.”
One big takeaway managers need to think about is how they might benefit from AI technologies — such as analytics, deep learning or text processing — when they try to improve or create an application. “Always go back to the business problem we are trying to solve, and the task associated with it,” Hammond said. It’s also important to recognize what’s required to achieve a particular result and what functionality is instrumental for the business.
Success requires the right algorithm to work with business-accessible data. It’s not always clear that the data is clean enough for a particular use case, or that it can be used in a new algorithm that meets regulatory requirements.
Executives also need to investigate whether they have useful data. “My guess is that for more than 80% of the problems, you don’t have the data,” Hammond said. On the surface, it may appear that the data exists, but there is a big difference between having data and having it programmatically available.
Scale is another important consideration. If the task is only executed once a quarter, there is no point in bringing in engineers to solve the problem with AI when a couple of people can complete the task in an afternoon.
Bring agile culture to AI
Scaling machine learning and deep learning properly requires a well-built AI application development pipeline, said Jason Knight, head of software product for Intel AI Products. Such an approach is what allows Facebook to execute 300 trillion AI inferences per day. The good news is there are a lot of tools available, but it can be a challenge for software development managers to determine how to organize a subset of these into an effective AI pipeline. It’s also important to consider ways to track data and model provenance to ensure that AI models are explainable when required.
In some respects, AI development can draw on the enterprises’ existing experience in agile application development pipelines. Engineers may already have some background with versioning, testing and deploying these applications. But, enterprises need to figure out how to extend these existing practices to data as well.
For example, engineers are well versed in how to version application code with repositories like GitHub. CIOs, on the other hand, need to figure out how to version data. It’s also important to determine how to track data provenance, which requires knowing where the data came from and when it might include biases that could affect the AI models built from it. These insights are still relatively new, though. “Don’t couple yourself too tightly to any vendor too soon,” Knight said.
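Extending code-style versioning to data can start small. One approach is to content-address each dataset snapshot and record where it came from, so any model can later be traced back to the exact data it was trained on. A minimal sketch in Python (the registry file name and record fields are illustrative, not from the article):

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def version_dataset(path, source, registry="data_registry.json"):
    """Record a content hash plus provenance for a dataset file.

    The SHA-256 digest identifies this exact snapshot of the data,
    much like a commit hash identifies a version of source code.
    """
    digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    entry = {
        "file": str(path),
        "sha256": digest,          # identifies this exact snapshot
        "source": source,          # where the data came from
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }
    registry_path = Path(registry)
    entries = json.loads(registry_path.read_text()) if registry_path.exists() else []
    entries.append(entry)
    registry_path.write_text(json.dumps(entries, indent=2))
    return digest
```

Production teams would typically reach for a purpose-built tool rather than a hand-rolled registry, but the principle is the same: every dataset version gets a stable identifier and a recorded origin.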
Build out AI infrastructure
All of the aforementioned requirements can limit AI adoption to a single department or business unit. “AI will impact every industry,” said Philip Carnelley, area vice president for IDC’s European software group. “But at the moment, only a few percent of companies have AI in production enterprise-wide.”
Carnelley pointed to Heathrow Airport’s efforts as a model to emulate for successfully implementing AI culture in a business. The airport runs at maximum capacity, which can cause significant problems when weather results in flight delays and cancellations. One specific challenge is that it can take about a half-hour to staff border controls with new people when a plane lands.
The airport started putting all of its data feeds into a data lake to create a more responsive business process. “They had all these sources for years, but never brought them together,” Carnelley said. “What they are going to do is use that integrated data and use the AI on top to transform other areas like baggage handling.”
Watch out for new hires
The main issues that hinder AI growth have shifted from policy and strategy to data concerns, a lack of employee skills and employee resistance. The number one skills issue revolves around the recruitment of data scientists. Only a limited number of executives surveyed by IDC said they employ experienced data scientists with a deep knowledge of statistics. As an alternative, other businesses instead turn to software engineers with hands-on knowledge of popular machine learning and deep learning frameworks to build their AI applications. “It is kind of a make do and mend approach,” Carnelley said.
Enterprises might want to consider staffing from their existing workforce rather than recruiting recent college graduates. Carnelley said one CIO he talked with tried to train fresh college graduates to build AI applications, but they would leave for better jobs once they had a little experience under their belt.
As a result, this CIO now takes his older staff with the right background and cross trains them, which has led to better staff retention and better payback from the investments in those people.
“We think that the people that are going to thrive are the ones that can turn information into competitive advantage,” Carnelley said. Building a data pipeline includes process, organizational and technical challenges. “You have to address it on all fronts,” he said.
Keep executives in the loop
Another emerging challenge that enterprises face with AI culture lies in the strained lines of communication between AI scientists and executives, said Max Gadney, founder of After the Flood, an AI design firm in London. The underlying algorithms used in AI can be difficult to decipher, which makes it hard to explain how an AI model arrived at a particular decision or how it may promote systemic bias. Addressing this issue requires better communication practices and conversations, from the data scientists who build the models all the way up to the boardroom. “The business and boardrooms need to ask these questions, but now don’t have the levers,” Gadney said.
One way to approach this challenge involves building trust as a design problem. Businesses need to think about how they show data provenance used to create algorithms in a way that surfaces imperfections. If they do this, it will make it easier to determine where data and the algorithms built on them work well and where they fail the business, regulatory requirements or consumers.
A business can do this by creating better internal labels for the reliability and usefulness of its data sets. It’s important that executives understand when a new AI application is built on imperfect data. “People up high need to understand these systems are built on imperfect data,” Gadney said.
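Such a label can be as simple as a structured record attached to each data set. The sketch below shows one possible shape in Python; the field names and the example values are illustrative assumptions, not taken from the article:

```python
from dataclasses import dataclass

@dataclass
class DatasetLabel:
    """A lightweight reliability label attached to a data set."""
    name: str
    source: str          # system or team the data came from
    last_audited: str    # date of the last quality review
    known_gaps: str      # documented imperfections, stated plainly
    reliability: str     # e.g. "high", "medium" or "low"

# Hypothetical label for a loan-application data set.
loans = DatasetLabel(
    name="loan_applications",
    source="core banking export",
    last_audited="2019-06-01",
    known_gaps="postcode missing for ~8% of rows",
    reliability="medium",
)
```

The value is less in the format than in the habit: every model proposal that reaches an executive carries the label, so the "imperfect data" caveat travels with the work.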
A company might notice, for example, that the algorithms used to approve loans generate bias against particular races based on post codes, even though race was never used to create the algorithms. If a graphic is shared with the board, it should also include the name of the AI researcher involved in its creation.
A step like this will make it easier for executives to get a direct explanation about problems in the data that removes filters from middle management. “Those unsung heroes are toiling away and need to connect with bosses who often don’t know where they sit,” Gadney said. “At some point, the AI is too complicated to explain in graphs. It is important to get the voice of the experts into the boardroom.”
These suggestions won’t solve all AI culture problems, but it is helpful to do due diligence and identify places where developers have difficulties with the data. It won’t be easy, especially if the office culture looks to hang the blame for problems on an individual. “Someone has to be made responsible, but they need to know they will not lose their job if they are honest about stuff,” Gadney said.