What I learned from the Google I/O 2019 keynote address

Before the start of the Google I/O 2019 keynote address, I wondered what I’d learn in my role as an application developer. But once the keynote gets under way, I find myself thinking more like a consumer than a developer. Instead of asking, “What new tools will help me create better apps?” I’m thinking about the new features that will make my life easier as a user of all these apps.

The keynote starts with a demo of a restaurant menu app. You point your phone at a menu, and the Lens app shows you an augmented reality version of it. The app indicates the most popular items on the menu and, if you’re willing to share some data, highlights items that you might particularly like. As a strict junketarian (someone who eats junk food), I’d expect the app to highlight cheeseburgers and chocolate desserts for me.

If you tap on a menu item, the app shows you pictures of it along with pricing and dietary information. When you’ve finished your meal, the app calculates the tip and can even split the bill among friends.
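The tip-and-split step is plain arithmetic. Here is a minimal Kotlin sketch of that calculation, using hypothetical helper names of my own rather than anything from the demoed app, with amounts kept in cents to avoid floating-point rounding surprises:

```kotlin
// Hypothetical sketch of the tip-and-split arithmetic from the demo.
fun totalWithTip(billCents: Long, tipPercent: Int): Long =
    billCents + billCents * tipPercent / 100

fun splitEvenly(totalCents: Long, diners: Int): Long {
    require(diners > 0) { "Need at least one diner" }
    // Round up so the individual shares always cover the bill.
    return (totalCents + diners - 1) / diners
}

fun main() {
    val total = totalWithTip(billCents = 8450, tipPercent = 18) // 9971 cents
    println("Total with tip: $total cents")
    println("Each of 4 diners pays: ${splitEvenly(total, 4)} cents") // 2493 cents
}
```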

Lens has always been a fascinating app, but now it offers translation capabilities. You can point your phone at text in a foreign language, and the app overlays the text with a translated version in your own language that mimics the size, color and font of the original. In addition to the visual translation, Lens can also read the text out loud in your native language.
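Lens itself isn’t a developer API, but Google exposes comparable on-device translation to Android developers through the ML Kit Translation library. The Kotlin sketch below assumes com.google.mlkit:translate is on the classpath and a French-to-English menu; it illustrates the capability, not the code behind Lens:

```kotlin
// Sketch: on-device text translation with ML Kit's Translation API.
import com.google.mlkit.nl.translate.TranslateLanguage
import com.google.mlkit.nl.translate.Translation
import com.google.mlkit.nl.translate.TranslatorOptions

fun translateMenuLine(line: String, onResult: (String) -> Unit) {
    val options = TranslatorOptions.Builder()
        .setSourceLanguage(TranslateLanguage.FRENCH)  // assumed source language
        .setTargetLanguage(TranslateLanguage.ENGLISH)
        .build()
    val translator = Translation.getClient(options)

    // The language model downloads once; translation then runs entirely on the device.
    translator.downloadModelIfNeeded()
        .addOnSuccessListener {
            translator.translate(line)
                .addOnSuccessListener { translated ->
                    onResult(translated)
                    translator.close()
                }
                .addOnFailureListener { translator.close() }
        }
        .addOnFailureListener { translator.close() }
}
```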

Last year, the keynote speakers stunned attendees with a demonstration of the Duplex app. A computer-generated voice booked an appointment time with a live person at a hair salon. I was skeptical, and I said so in my report for TheServerSide. But this year, they’ve announced the upcoming deployment of Duplex on Pixel phones in 44 states.

Do I trust an app to schedule my appointment for a root canal? The app asks for my approval before it finalizes an appointment, but I’m not sure I want to have to tell it to avoid early morning slots with a friendly “Hey Google, let me sleep as long as I want.”

More from Google I/O 2019

One of the overriding developments from the Google I/O 2019 keynote is the ability to perform speech recognition locally on a user’s phone. Google engineers have reduced the size of the voice model from 100 GB to half a gigabyte. In the near future, you’ll be able to get help from Google Assistant without sending any data to the cloud, and you’ll be able to talk to the Assistant even with your Wi-Fi and cellular data connections turned off.
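App developers don’t call the Assistant’s new on-device model directly, but Android’s existing SpeechRecognizer API already lets an app ask the platform to prefer offline recognition. A minimal Kotlin sketch, assuming an Activity context and the RECORD_AUDIO permission already granted:

```kotlin
// Sketch: asking Android's SpeechRecognizer to prefer an on-device model.
import android.content.Context
import android.content.Intent
import android.os.Bundle
import android.speech.RecognitionListener
import android.speech.RecognizerIntent
import android.speech.SpeechRecognizer

fun startOfflineRecognition(context: Context, onText: (String) -> Unit) {
    val recognizer = SpeechRecognizer.createSpeechRecognizer(context)
    recognizer.setRecognitionListener(object : RecognitionListener {
        override fun onResults(results: Bundle?) {
            val matches = results?.getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION)
            matches?.firstOrNull()?.let(onText)
            recognizer.destroy()
        }
        override fun onError(error: Int) { recognizer.destroy() }
        // The remaining callbacks are not needed for this sketch.
        override fun onReadyForSpeech(params: Bundle?) {}
        override fun onBeginningOfSpeech() {}
        override fun onRmsChanged(rmsdB: Float) {}
        override fun onBufferReceived(buffer: ByteArray?) {}
        override fun onEndOfSpeech() {}
        override fun onPartialResults(partialResults: Bundle?) {}
        override fun onEvent(eventType: Int, params: Bundle?) {}
    })

    val intent = Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH).apply {
        putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                 RecognizerIntent.LANGUAGE_MODEL_FREE_FORM)
        // Ask the platform to use an on-device model when one is available (API 23+).
        putExtra(RecognizerIntent.EXTRA_PREFER_OFFLINE, true)
    }
    recognizer.startListening(intent)
}
```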

Users will see a noticeably faster response time from Google Assistant. In a demo, the presenter used the Assistant to open the Photos app, select a particular photo, and share it in a text message. This happened so quickly that the conversation with Google Assistant seemed to be effortless. Best of all, there was no need to repeat the wake phrase “Hey, Google” for each new command.

Google Maps is becoming smarter too. When Maps is in walking mode, the app will replace its street map image with an augmented reality view of the scene. If you hold your phone in front of you, the app shows you what the rear-facing camera sees, and adds labels to help you decide where to go next. In the near future, maybe you’ll see people on the streets with their phones right in front of their faces. Who knows, maybe it’s time to revive Google Glass.

In the keynote and the breakout sessions, I’m surprised to hear so much about foldable phones. Given Samsung’s Galaxy Fold troubles, I had the impression that folding phones were more than a year away. But the Google I/O speakers talked about Android’s imminent support for folding displays and discussed models that will be available in the next few months.
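For developers, much of that support comes down to resizable activities and configuration changes: when a device folds or unfolds, the window size changes, and an app that opts in can adapt in place instead of being restarted. A minimal Kotlin sketch, assuming the activity declares android:resizeableActivity="true" and the relevant android:configChanges flags in its manifest, with hypothetical layout helpers:

```kotlin
// Sketch: reacting to a fold/unfold without the activity being recreated.
// Assumes the manifest declares android:resizeableActivity="true" and
// android:configChanges="screenSize|smallestScreenSize|screenLayout|orientation";
// without those flags, Android restarts the activity instead.
import android.content.res.Configuration
import android.util.Log
import androidx.appcompat.app.AppCompatActivity

class FoldAwareActivity : AppCompatActivity() {

    override fun onConfigurationChanged(newConfig: Configuration) {
        super.onConfigurationChanged(newConfig)

        // On a foldable, unfolding typically reports a much wider window.
        val widthDp = newConfig.screenWidthDp
        Log.d("FoldAware", "Window width is now $widthDp dp")

        if (widthDp >= 600) {
            showTwoPaneLayout()   // hypothetical helper: wide, tablet-style layout
        } else {
            showSinglePaneLayout() // hypothetical helper: narrow, phone-style layout
        }
    }

    private fun showTwoPaneLayout() { /* swap in the wide layout */ }
    private fun showSinglePaneLayout() { /* swap in the narrow layout */ }
}
```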

Google I/O 2019 is a feast for the consumer side of my identity. I came to the conference as a cool-headed developer, but I’m participating as an excited, wide-eyed user.
