From chatbots to IBM's Watson: How software deals with conversational language

The next big thing in software development is conquering the conversational language development hurdle. Here's how the big players are currently doing it.

As the web begins to evolve beyond flashy front ends, enterprises should consider new patterns of engaging users and driving business processes by pushing more of the grunt work into the background. This is happening through a new style of conversational user interface (CUI) for individuals and teams, across both consumer and business applications.

Major chat infrastructure providers have been working on toolkits and interfaces for creating advanced chatbots for years. At the same time, leading IT players including Google, Amazon, and IBM are building platforms for conversation-driven applications. These different approaches are opening the door to a conversational style of application delivery more in line with our experience of the physical world.

Different flavors of conversations

Conversational UIs and chatbots are not just for consumer apps, and they are not new. But new integration tools from Facebook, Microsoft, Amazon, Google, and IBM are pushing chat integration to the forefront and promise to make these tools more mainstream. Business applications have been leveraging chatbots to improve the development lifecycle and better coordinate team communications. Jason Hand, author of Chatbots and DevOps evangelist at VictorOps, said, “Chatbots used in the tech & software industry like Hubot, Lita, and Errbot have been leveraged to help development and operations teams begin to take action and share context within persistent group chat services and tools. These include Slack, HipChat, and Flowdock.”
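
To make the ChatOps pattern Hand describes concrete, here is a minimal sketch of a bot that watches group chat for a deployment command and triggers the corresponding job. It does not use the actual Hubot, Lita, or Errbot APIs; the class, the "deploy ... to ..." command format, and the triggerDeployment method are hypothetical stand-ins.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical ChatOps-style handler: the bot sees every message in a group
// chat channel and acts only on ones matching "deploy <app> to <env>".
public class DeployBot {

    private static final Pattern DEPLOY_CMD =
            Pattern.compile("^deploy\\s+(\\S+)\\s+to\\s+(\\S+)$");

    // Called for every message the bot observes in the channel.
    public String onMessage(String user, String text) {
        Matcher m = DEPLOY_CMD.matcher(text.trim());
        if (!m.matches()) {
            return null; // not a command for this bot
        }
        String app = m.group(1);
        String env = m.group(2);
        triggerDeployment(app, env);
        // Replying in the same channel keeps the whole team aware of what happened.
        return String.format("%s: deploying %s to %s", user, app, env);
    }

    private void triggerDeployment(String app, String env) {
        // A real bot would call a CI/CD or orchestration API here.
        System.out.printf("Triggering deployment of %s to %s%n", app, env);
    }

    public static void main(String[] args) {
        DeployBot bot = new DeployBot();
        System.out.println(bot.onMessage("alice", "deploy billing-service to staging"));
    }
}
```

The point of the pattern is less the regular expression than the shared context: the request, the action, and the response all live in the persistent chat history where the whole team can see them.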

These bots are generally used only by specific people running very specific commands, and security is a concern for each organization that uses them to do increasingly powerful things from the comfort of the "chat" user interface. These aren't B2B or B2C applications; they are used internally within an organization to assist with day-to-day tasks. Hand said, “It's very much an off-shoot of the DevOps movement and its attempts to place extreme collaboration, situational awareness, and transparency as a very high priority within teams and companies.”

Another family of chatbot interfaces is emerging, like the one announced for Facebook. These are more B2C bots, where a business can provide "chat as a service" to end users as an access point to other services. For example, typing a request into Facebook's Messenger app could be recognized as an order to purchase an item from Amazon. Hand noted, “Other AI services or bots (including Siri) might fall into this family as well. Through the use of APIs just about any bot or voice recognition software can be leveraged to take further (authenticated) action on behalf of the user.”
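
A rough sketch of that B2C flow, under stated assumptions: the message has already arrived from a messenger platform, the user is already authenticated, and the OrderBot class and placeOrder stub below are invented for illustration rather than taken from any retailer's real API.

```java
import java.util.Optional;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical "chat as a service" bot: a message such as
// "order a phone charger" is parsed into a purchase request and forwarded
// to a commerce back end on behalf of the already-authenticated user.
public class OrderBot {

    private static final Pattern ORDER_CMD =
            Pattern.compile("^order (an? )?(.+)$", Pattern.CASE_INSENSITIVE);

    public Optional<String> handle(String userId, String message) {
        Matcher m = ORDER_CMD.matcher(message.trim());
        if (!m.matches()) {
            return Optional.empty(); // let some other handler, or a human, respond
        }
        String item = m.group(2);
        String orderId = placeOrder(userId, item);
        return Optional.of("Order " + orderId + " placed for: " + item);
    }

    private String placeOrder(String userId, String item) {
        // Stub standing in for an authenticated call to the retailer's order API.
        return "ORD-" + Math.abs((userId + item).hashCode());
    }

    public static void main(String[] args) {
        System.out.println(new OrderBot()
                .handle("user-42", "order a phone charger")
                .orElse("no match"));
    }
}
```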

CUI versus GUI

Conversational UIs can be easier to use than a command-line interface, especially for non-technical people. They can be easier because interacting with them is as simple as making a request or having a conversation; the underlying app handles the complexity of formatting the command in the appropriate syntax and grammar. Tasks can also be completed faster, because a user can ask a question or make a request more quickly than navigating through a GUI and clicking on the appropriate fields and buttons.
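
One way to picture "the app handles the syntax" is a small translation layer between a plain-language request and the exact command the underlying tooling expects. The sketch below uses a keyword lookup purely for illustration; the command templates are examples, and a production bot would lean on a natural-language understanding service rather than string matching.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative only: the bot, not the user, owns the command syntax.
// A plain-language request is matched to an intent and expanded into the
// precise command the underlying tool expects.
public class CommandTranslator {

    private final Map<String, String> intents = new LinkedHashMap<>();

    public CommandTranslator() {
        // Keyword -> command template (illustrative entries).
        intents.put("build status", "ci status --branch main");
        intents.put("disk space", "df -h");
        intents.put("restart web", "systemctl restart nginx");
    }

    public String translate(String request) {
        String normalized = request.toLowerCase();
        for (Map.Entry<String, String> intent : intents.entrySet()) {
            if (normalized.contains(intent.getKey())) {
                return intent.getValue();
            }
        }
        return null; // a real bot would ask a clarifying question instead of guessing
    }

    public static void main(String[] args) {
        CommandTranslator translator = new CommandTranslator();
        System.out.println(translator.translate("Hey bot, what's the build status?"));
    }
}
```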

In essence, a persistent group chat becomes the UI. Not only is it easier, but the shared nature of group chat also makes it easier to track the progress of projects. Business and sales teams can be alerted when new versions are released, while dev and ops teams can easily dive into the exact code and configurations of new releases.
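
The release alerts mentioned above typically arrive in chat through an incoming webhook. Here is a minimal sketch of that idea; the webhook URL is a placeholder, and the JSON payload shape is the simple "text" message format used by common incoming-webhook integrations, not any one vendor's full API.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Sketch of "chat as the UI" for release visibility: post a release note to a
// group chat channel via an incoming webhook so business, dev, and ops teams
// all see the same event.
public class ReleaseNotifier {

    private static final String WEBHOOK_URL =
            "https://hooks.example.com/services/REPLACE_ME"; // placeholder

    public static void notifyRelease(String service, String version) throws Exception {
        String payload = String.format(
                "{\"text\": \"%s %s has been released to production\"}", service, version);

        if (WEBHOOK_URL.contains("REPLACE_ME")) {
            // No real webhook configured; just show what would be sent.
            System.out.println("Would POST to webhook: " + payload);
            return;
        }

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(WEBHOOK_URL))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(payload))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("Webhook responded with HTTP " + response.statusCode());
    }

    public static void main(String[] args) throws Exception {
        notifyRelease("billing-service", "v2.4.1");
    }
}
```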

Be aware of new security challenges

Developers also need to think through the security implications of chat interfaces on several fronts. First, it is important to make sure the person requesting an action is authorized to make the request. Organizations may not want the business team pushing untested code with new features into production. At the same time, consumers might not be happy about their kids making unauthorized purchases.
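
A minimal sketch of that first check, assuming a simple role-based policy: before a chat command executes, the bot verifies the requesting user holds a role allowed to perform it. The role names and the policy map are illustrative, not drawn from any particular chat platform.

```java
import java.util.Map;
import java.util.Set;

// Illustrative authorization gate for chat commands: unknown actions and
// unauthorized roles are denied by default.
public class ChatAuthorizer {

    private static final Map<String, Set<String>> POLICY = Map.of(
            "deploy-to-production", Set.of("release-manager", "ops"),
            "view-build-status", Set.of("ops", "dev", "sales"));

    public boolean isAllowed(Set<String> userRoles, String action) {
        Set<String> allowedRoles = POLICY.get(action);
        if (allowedRoles == null) {
            return false; // unknown actions are denied by default
        }
        return userRoles.stream().anyMatch(allowedRoles::contains);
    }

    public static void main(String[] args) {
        ChatAuthorizer auth = new ChatAuthorizer();
        System.out.println(auth.isAllowed(Set.of("sales"), "deploy-to-production")); // false
        System.out.println(auth.isAllowed(Set.of("ops"), "deploy-to-production"));   // true
    }
}
```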

Second, it is important to think through the implications of pocket dialing. On the one hand, chat can make it easier to launch new code, initiate a business process, or drive a consumer purchase. But this could create problems when a question is misinterpreted as a request. It could become even more of an issue with verbal interfaces that initiate actions based on noise that sounds like "OK Google" and then buy products or turn on the lights.
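
One common mitigation is a confirmation step for side-effecting requests. The sketch below is one possible design, not a standard API: the bot parks the interpreted action and only executes it when the user explicitly echoes back a short confirmation code, so a misheard phrase never buys anything on its own.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

// Hypothetical two-step confirmation gate: propose() records the interpreted
// action and returns a code; only an explicit "confirm <code>" reply releases it.
public class ConfirmationGate {

    private final Map<String, String> pendingActions = new HashMap<>();

    // Step 1: register the interpreted action and hand the user a code to confirm.
    public String propose(String action) {
        String code = UUID.randomUUID().toString().substring(0, 6);
        pendingActions.put(code, action);
        return "To confirm \"" + action + "\", reply: confirm " + code;
    }

    // Step 2: execute only when the user sends back the matching code.
    public String confirm(String code) {
        String action = pendingActions.remove(code);
        return action == null ? "Nothing to confirm." : "Executing: " + action;
    }

    public static void main(String[] args) {
        ConfirmationGate gate = new ConfirmationGate();
        String prompt = gate.propose("buy 1x light bulb");
        System.out.println(prompt);
        String code = prompt.substring(prompt.lastIndexOf(' ') + 1);
        System.out.println(gate.confirm(code));
    }
}
```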

A third consideration is the potential for hackers to take advantage of these interfaces. Enterprises need to be careful about making it easier for hackers to retrieve personal or corporate data, drive purchases, or transfer funds. It's not entirely clear how hackers might compromise vulnerable phones and Microsoft gadgets, but history has shown that when there is money involved, hackers can be sneaky. There is also the additional concern about the privacy and security protections of the data hosted on the back end.

When bots go wild

Another good practice is to monitor bot behavior after deployment. In some respects, advanced chatbots need to be treated like human workers, with governance, risk management, and compliance safeguards to protect the enterprise's reputation and keep it out of hot water. For example, Microsoft had to yank its Tay chatbot, which learned how to say racist things as a result of user input. Companies in regulated industries like finance and healthcare could run into legal jeopardy if bots start violating rules about disclosure, giving inappropriate advice, or breaking physical devices connected through the IoT.
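
A very simple version of that kind of safeguard, sketched here for illustration only: every message a bot wants to post is screened and logged for audit before it reaches the channel. Real deployments would use far richer moderation and compliance tooling; the blocklist terms and logging format below are placeholders.

```java
import java.util.List;

// Illustrative governance check: screen outgoing bot messages against a
// blocklist and write an audit line regardless of the outcome.
public class BotOutputFilter {

    private static final List<String> BLOCKED_TERMS = List.of("blocked-term-1", "blocked-term-2");

    public boolean approve(String proposedMessage) {
        String lower = proposedMessage.toLowerCase();
        boolean blocked = BLOCKED_TERMS.stream().anyMatch(lower::contains);
        // Audit trail for later compliance review.
        System.out.printf("AUDIT bot-message approved=%b text=%s%n", !blocked, proposedMessage);
        return !blocked;
    }

    public static void main(String[] args) {
        BotOutputFilter filter = new BotOutputFilter();
        if (filter.approve("Deploy finished successfully")) {
            System.out.println("Posting message to channel");
        }
    }
}
```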

Within 24 hours of bringing up Microsoft's Tay AI on Twitter, a specific subset of users started priming the AI with reprehensible words and images. Peter Lee, corporate VP at Microsoft Research, said, “Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack. As a result, Tay tweeted wildly inappropriate and reprehensible words and images. We take full responsibility for not seeing this possibility ahead of time.”

Addressing this kind of issue will require some patience and oversight to work out all the bugs. Lee noted, “To do AI right, one needs to iterate with many people and often in public forums. We must enter each one with great caution and ultimately learn and improve, step by step, and to do this without offending people in the process. We will remain steadfast in our efforts to learn from this and other experiences as we work toward contributing to an Internet that represents the best, not the worst, of humanity.”

Preparing for infrastructure as conversation

All of these considerations can make it more difficult to understand all the use cases, and edge cases can make these applications more difficult to test. On the one hand, testing these applications in different environments is not that difficult. But if the bots are expected to have secure access to specific environments, then some kind of firewall or VPN rules need to be put in place.

In the long run, Hand believes these improvements are leading to Infrastructure as Conversation, a pun on Infrastructure as Code, since new business IT infrastructure can now be built and managed via chat. He said, “I can certainly see a world where Ops (or DevOps) minded efforts of building, configuring, deploying, managing, and a growing number of efforts related to infrastructure and software development are all handled via chat, but inputted via voice command. It is hard to say how far off that is, but it doesn't feel far.”

 

