Smart Compose and the Visual Positioning System impress at Google I/O
This year’s Google I/O conference kicked off on May 8 with a nearly two-hour keynote. The keynote was held at the outdoor Shoreline Amphitheater near Google’s headquarters in Mountain View, California. The presentation was packed with announcements about new app features, some of which were quite mundane, while others, such as Smart Compose and the Visual Positioning System for Google Maps, prompted me to ask, “Where has this been all my life?”
Introducing Smart Compose
The first new feature, called Smart Compose for Gmail, auto-completes sentences that the user begins typing. You’re probably used to the simple word completion that keyboard apps provide, but Smart Compose does much more. Smart Compose offers to auto-type the remaining half of a sentence, and it does so by reusing terms from the previous sentences in the email. For example, if you type “Please pay for,” then Smart Compose might suggest “the shirts and socks.” It does this because you typed “shirts” and “socks” in some earlier sentences. Apparently, Smart Compose takes hints from your entire document, not just from the beginnings of words or sentences. Smart Compose will be available later this month.
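To make the idea concrete, here’s a toy sketch of document-aware completion. This is emphatically not Google’s algorithm (Smart Compose is built on large-scale machine learning); it just illustrates the observation above, that earlier sentences in a message can feed a suggested completion. Every name and rule here is invented for illustration.

```python
# Toy sketch: complete a typed prefix by reusing the most frequent
# content words from earlier sentences in the email.
# NOT Google's method; purely illustrative.

import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "and", "for", "to", "of", "in", "on",
             "but", "did", "not", "i", "please"}

def suggest_completion(earlier_text, typed_prefix, max_terms=2):
    """Suggest a completion for typed_prefix using the most common
    content words found in the earlier sentences."""
    words = re.findall(r"[a-z']+", earlier_text.lower())
    content = [w for w in words if w not in STOPWORDS]
    common = [w for w, _ in Counter(content).most_common(max_terms)]
    if not common:
        return typed_prefix
    if len(common) > 1:
        completion = ", ".join(common[:-1]) + " and " + common[-1]
    else:
        completion = common[0]
    return typed_prefix.rstrip() + " the " + completion + "."

email = ("I ordered shirts and socks last week. "
         "The shirts arrived but the socks did not.")
print(suggest_completion(email, "Please pay for"))
# Prints: Please pay for the shirts and socks.
```

The sketch mirrors the article’s example: because “shirts” and “socks” appear repeatedly in the earlier sentences, they become the suggested object of “Please pay for.”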
Advanced photo filters
In the next few months, Google Photos will be able to make automatic changes to your snapshots. Your phone’s software will decide that a photo should be a bit brighter and, with the touch of an icon, the photo’s brightness will be enhanced.
Even more impressive is the treatment of images containing documents. Imagine that you take a picture of a letter-sized page of paper. No matter how careful you are, the image is bound to be a bit skewed. Maybe the top of the paper is farther from the camera lens than the bottom, so the page isn’t exactly rectangular. Google Photos can adjust the image so that the page fills the screen precisely. The Android app can turn the image into a PDF, and you can copy text within the document. You can quickly use the text to trigger searches and other actions.
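The geometry behind this kind of deskewing is worth a quick sketch. Assuming the app has already detected the four corners of the page in the photo (a big assumption, and the corner coordinates below are made up), a 3-by-3 homography that maps those corners onto an upright rectangle is all that’s needed to straighten the page; a real app would then resample the image’s pixels through that transform.

```python
# Sketch of page deskewing via a homography, solved with the direct
# linear transform on 4 corner correspondences. Illustrative only.

import numpy as np

def homography(src, dst):
    """Solve for the 3x3 matrix H that maps each src corner to its
    dst corner (up to scale), using 4 point correspondences."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # H is the null vector of A: the last right singular vector.
    _, _, Vt = np.linalg.svd(np.array(A, dtype=float))
    return Vt[-1].reshape(3, 3)

# Skewed page corners as they might appear in a photo (top edge
# farther from the lens), mapped to an upright 8.5 x 11 rectangle.
photo_corners = [(110, 40), (410, 55), (470, 620), (60, 600)]
page_corners = [(0, 0), (850, 0), (850, 1100), (0, 1100)]

H = homography(photo_corners, page_corners)
# Sanity check: the first photo corner should land near (0, 0).
p = H @ np.array([110.0, 40.0, 1.0])
print(p[:2] / p[2])
```

Four corner pairs give eight linear equations in the nine entries of H, which is why the transform is recoverable from the corners alone.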
Usurping Siri with Google Assistant
In the coming weeks, you’ll be able to conduct a continued conversation with Google Assistant. You won’t have to repeat “OK, Google” at the beginning of each sentence. In a demonstration during the Google I/O keynote, a speaker issued compound sentences with multiple requests. (“Tell me about such-and-such and do this-other-thing.”) Google Assistant understood these complicated commands. Apparently, the Assistant is also capable of determining when a conversation ends without being given an explicit “Stop” command. Unfortunately, this demo went quickly, and it didn’t include much detail about these new Google Assistant features.
There was also a demo, straight out of science fiction magazines, in which Google Assistant made a phone call to a hair salon on behalf of a human user. The demo started with the human user saying something like “OK Google. Make a hair appointment for me.” The Assistant called a hair salon and said something like, “I want to make an appointment for Joan Smith on Thursday around noon.” The person at the hair salon offered alternative times, and Google Assistant followed up with intelligent responses. I can’t say for sure, but I’m willing to bet that this demo was very carefully curated, and that most attempts to have Google Assistant carry on long conversations with unknowing storekeepers would end in less-than-satisfactory results. In any case, no time frame was announced for the full release of these Google Assistant features.
Personalized restaurant ratings
How often do you check the ratings for restaurants, only to find that the first seven restaurants all get the same 4.5 rating? Sometime this summer, Google Maps will be able to refine the ratings based on your tastes. Do you tend to visit medium-priced restaurants or slightly higher-priced establishments? Have you rated similar restaurants with a 5, or have you given the nearby dive a bad rating? Google Maps will tell you which of the highly rated restaurants you’ll probably prefer, and it will tell you why it came to those conclusions. All this mining of information about your habits and tastes might seem a bit creepy, but I’m fascinated by the fact that these guesses can be automated.
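A tie-breaking scheme like the one described can be sketched in a few lines. The scoring rule below (boost price bands the diner frequents and cuisines the diner rated highly) is entirely invented for illustration and says nothing about how Google Maps actually weighs your history.

```python
# Toy re-ranking of identically-rated restaurants using a diner's
# own history. The scoring rule is invented, not Google's method.

def personal_score(restaurant, history):
    """Break a ratings tie using the user's past behavior."""
    score = restaurant["stars"]
    # Prefer price bands the user actually frequents.
    visited_bands = {h["price_band"] for h in history}
    if restaurant["price_band"] in visited_bands:
        score += 0.5
    # Boost cuisines the user has rated 4 stars or higher.
    liked = {h["cuisine"] for h in history if h["my_rating"] >= 4}
    if restaurant["cuisine"] in liked:
        score += 1.0
    return score

history = [
    {"cuisine": "ramen", "price_band": "$$", "my_rating": 5},
    {"cuisine": "diner", "price_band": "$", "my_rating": 2},
]
candidates = [
    {"name": "Noodle Bar", "cuisine": "ramen", "price_band": "$$", "stars": 4.5},
    {"name": "Greasy Spoon", "cuisine": "diner", "price_band": "$", "stars": 4.5},
]
ranked = sorted(candidates, key=lambda r: personal_score(r, history),
                reverse=True)
print([r["name"] for r in ranked])
# Prints: ['Noodle Bar', 'Greasy Spoon']
```

Both candidates start at the same 4.5 stars, but the diner’s love of ramen breaks the tie — and because each boost is a named rule, the system can also explain *why* it preferred one restaurant, as the article describes.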
Google invents the compass
Above all, my favorite new feature was another enhancement to Google Maps called the Visual Positioning System. I often come out of a subway station in New York City at an intersection that’s unfamiliar to me. I know I should walk southward, but I can’t tell which direction is south. At a particular intersection, some street signs might be missing, and if I ask two passers-by which direction is south, I get three different answers. I start walking by trial and error until I find a way of getting my bearings.
A new Visual Positioning System
With a new Google Maps feature called the Visual Positioning System, the user holds the phone up so that the camera faces the street in any direction. The Google Maps app overlays the camera’s street image on top of the ordinary map. With all this information, Google Maps figures out what direction the phone is facing and draws big, fat arrows pointing toward the place where the user should walk. The Visual Positioning System is yet another example of the benefits of augmented reality.
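The hard part of this feature is matching the camera image against street imagery to estimate which way the phone faces; the easy part, once a heading is known, is pointing the arrow. That last step is simple bearing arithmetic, sketched below (this is my own illustration, not Google’s code).

```python
# Given the phone's estimated heading and the compass bearing to the
# destination, compute the signed turn the walker should make.
# Illustrative only; angles are in degrees, measured clockwise from north.

def turn_angle(heading_deg, bearing_to_destination_deg):
    """Signed turn in (-180, 180]: positive means turn right."""
    diff = (bearing_to_destination_deg - heading_deg) % 360
    return diff - 360 if diff > 180 else diff

# Facing due north (0 degrees) but needing to walk south (180 degrees):
print(turn_angle(0, 180))   # 180 -> an about-face
# Facing east (90 degrees) with the destination due north (0 degrees):
print(turn_angle(90, 0))    # -90 -> turn left
```

This is exactly my subway-exit problem: the compass bearing to my destination was never the issue, only my heading, which is the one quantity the Visual Positioning System supplies.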
OK, I admit it. Some of these features won’t work as smoothly in real-life situations as they did during the Google I/O keynote demos, and a few of them are downright scary in their implications for security and privacy. But I’m a techie, and I enjoy living in a world with futuristic conveniences. It makes me feel special.