The focus of Google’s big hardware event this week wasn’t the hardware at all. It was Assistant, the artificially intelligent digital helper that caters to your every whim and powers your every interaction.
Assistant is invisible, in the design-jargon sense. The omnipresent concierge works in the background, predicting your needs, processing your requests, and offering neatly parceled answers to your questions. You never see the cogs behind it; you merely type (or speak) a command and read (or hear) tailored responses served on screen or through a speaker.
This requires more than a smartphone, which explains the gadgets Google announced Tuesday. But as Google likes to say, these are early days for a multi-portal system that includes a phone like Pixel and an Amazon Echo-like device like Home. “Five years ago, if we were talking about this, there was the belief that the phone would be the interface to everything,” says Alan Black, a computer scientist at Carnegie Mellon University’s Language Technologies Institute.
That is no longer the case. Google wants its intangible interface everywhere you are, which requires having hardware everywhere you are (in your pocket, in your car, in your kitchen, and so on) so it can learn everything about you and provide a personalized experience. Until now, Google only dabbled in devices, relying largely on companies like Samsung, HTC, and Motorola to provide the hardware that ran its software.
Now, to make its invisible AI mainstream, Google must make its own products. Two of them are especially important: Pixel, which resembles an iPhone, and Home, which looks a bit like a Glade air freshener. These portals to Google Assistant are attractive, but nothing spectacular. “There’s nothing too Earth-shattering about them. The phone is just a piece of aluminum,” says Mark Hung, a tech analyst at research firm Gartner. “What matters is the fact that you’re able to use them fairly seamlessly, through a conversational interface.”
The devices, in other words, exist merely as vessels. Rick Osterloh, head of Google’s new hardware group, suggested as much when he said Google decided to build hardware so the company can “get things done without worrying about the underlying tech.” In this instance, “get things done” means deliver a rich AI experience, something Google’s spent the better part of its existence preparing for.
Consider Google’s information bank, called Knowledge Graph, which has enhanced your search results since 2012. Today, it contains more than 70 billion facts. Assistant can tap that repository, and its conversational UI will only improve as it sees, and learns from, how and when people access it.
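To give a sense of what a fact repository like this implies, here is a toy Python sketch: facts stored as subject-predicate-object triples that a program can query. Everything in it, from the sample facts to the query helper, is invented for illustration and says nothing about how Knowledge Graph is actually built.

```python
# A toy illustration of the kind of structure a "knowledge graph" implies:
# facts stored as (subject, predicate, object) triples that can be queried.
# This is a simplification for illustration, not Knowledge Graph's design.

facts = {
    ("Google", "announced", "Pixel"),
    ("Google", "announced", "Home"),
    ("Pixel", "is_a", "phone"),
    ("Home", "is_a", "smart speaker"),
}

def query(subject, predicate):
    """Return every object linked to a subject by a given predicate."""
    return [o for s, p, o in facts if s == subject and p == predicate]

print(query("Google", "announced"))
# -> ['Pixel', 'Home'] (order may vary, since facts is a set)
```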
This explains why Google suggests placing a Home in every room. “The way to get the AI in front of you is to embody it in hardware,” says Jon Mann, an interaction designer at Artefact. “You need the access points so that it feels ubiquitous.” Today, your primary access point probably is your phone, and your home is among the few places where it may not be at your side. If Google can convince you to sprinkle access points around, it can train you to summon Assistant wherever you want, for whatever you want, whenever you want.
Shifting users toward intent-driven interactions is key to making AI work. Take this typical Spotify interaction: Open your phone, open Spotify, click to search, type what you want to hear. If you’re just listening on your phone, you’re done. Anything else takes a bit more work. “If I want to stream music to speakers in my living room, that’s multiple steps I have to take, and I have to work through discovering the trigger points on the app,” Mann says. Designers thoughtfully craft those trigger points, making sure that you see controlled amounts of information in a logical order. AI increasingly handles that. Want music? Simply say, “Play SubRosa.” The more portals to Assistant you surround yourself with, the more places you can ask for it.
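To make that shift concrete, here is a minimal Python sketch of how a single spoken utterance can replace the open-the-app, find-the-search-box, pick-a-speaker sequence. Every name in it (parse_intent, handle, the speaker argument) is hypothetical; this is the general shape of intent-driven design, not Google’s API.

```python
# A minimal, hypothetical sketch of intent-driven interaction: one spoken
# utterance replaces the open-app / search / tap sequence. All names here
# are illustrative, not drawn from any real assistant API.

import re

def parse_intent(utterance: str):
    """Map a spoken command to an (action, argument) pair."""
    match = re.match(r"play (.+)", utterance, re.IGNORECASE)
    if match:
        return ("play_music", match.group(1))
    return ("unknown", utterance)

def handle(utterance: str, nearby_speaker: str = "living room"):
    action, arg = parse_intent(utterance)
    if action == "play_music":
        # The assistant, not the user, resolves the output device:
        # no app to open, no trigger points to discover.
        return f"Playing '{arg}' on the {nearby_speaker} speaker."
    return "Sorry, I didn't catch that."

print(handle("Play SubRosa"))
# -> Playing 'SubRosa' on the living room speaker.
```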
This is where Google’s Be Everywhere model starts getting interesting. Because the more portals you surround yourself with, the more Assistant can learn about not just how you ask for help, but where, and in what context. “If they can execute on that, it’s really going to be quite revolutionary,” Hung says. Indeed, Google already is considering how best to interact with you in a multi-portal environment; if you pose a question aloud and multiple Home devices hear your request, the nearest node will provide the answer.
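Google hasn’t detailed how that arbitration works. The Python sketch below is one plausible reading of the idea, using how loudly each device heard the request as a stand-in for proximity; the names and the volume heuristic are invented for illustration.

```python
# An illustrative sketch (not Google's implementation) of nearest-node
# arbitration: when several devices hear the same request, the one that
# heard it loudest responds and the rest stay silent.

from dataclasses import dataclass

@dataclass
class Device:
    name: str
    heard_volume: float  # proxy for proximity: louder = closer

def elect_responder(devices):
    """Pick the single device that should answer a shared request."""
    return max(devices, key=lambda d: d.heard_volume)

rooms = [
    Device("kitchen", heard_volume=0.42),
    Device("living room", heard_volume=0.87),
    Device("bedroom", heard_volume=0.13),
]

print(f"{elect_responder(rooms).name} answers; the others stay quiet.")
# -> living room answers; the others stay quiet.
```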
It’s easy to imagine how this kind of contextual awareness will add an extra dimension to Assistant’s intelligence, making it truly usable. That’s essential to meeting, and exceeding, users’ expectations. “I do think we are now moving to this state where we will expect to be able to say at any time things that will be answered by a speech interface,” Black says. Google’s decision to bake its AI into a web of devices that work together certainly indicates as much.