As we turn the page to 2017, we’ve been thinking a lot about the buzzwords and trends of the past year.
Things like chatbots, voice UI, conversational commerce, machine learning, and moving from screens to systems were discussed and debated anywhere and everywhere, including on our own blog. We at Intercom also had a handful of massive launches in 2016, including Smart Campaigns, a new Messenger, and Educate, our knowledge base product, and that has us reflecting on a year's worth of lessons learned.
To make sense of the past year and dig into where product and design are headed next, I hosted a roundtable discussion with Paul Adams, our VP of Product, and Emmet Connolly, our Director of Product Design.
If you like what you hear, check out more episodes of our podcast. You can subscribe on iTunes or grab the RSS feed.
What follows is a lightly edited transcript of the interview, but if you’re short on time, here are five key takeaways:

1. Given today’s technology, chatbots are best left handling computation. Things that require empathy or emotion, on the other hand, are still better handled by a human.
2. From Airbnb’s launch of Trips to Instagram’s addition of Stories, products built as systems rather than as a set of screens became more prevalent in the past year. As new uses are demanded of your product, your system will have to expand.
3. Breakthrough products target existing behaviors rather than asking users to break from the norm. 2016 featured two prime examples: Snapchat Spectacles and Tesla’s solar tiles.
4. Product teams must make a philosophical shift after they launch a product. As the team enters iteration, every previous decision is back on the table.
5. Defining conversational commerce as sending texts to a bot is simply too narrow. Product builders must take a wider view and look at their product as an ecosystem with many endpoints, of which messaging is just one.
Des Traynor: Today I’m lucky to be joined by Paul Adams, our VP of Product, and Emmet Connolly, our Director of Product Design. 2016 was marked in a lot of ways by bots. We had our own opinions, and we had our own experiments, as did the entire industry. Is the future of product design really gonna sit inside a chat bubble?
Paul Adams: Both Emmet and I wrote a lot of blog posts about bots over the year, and we built a lot of bots too. Some of them saw the light of day, some didn’t. We learned that bots are overhyped.
For very human things like empathy and emotion, bots are terrible.
What we didn’t realize is that bots do work for a very specific set of use cases that are probably narrower than people first imagined. There was a crazy AI vision of the future, where bots are as intelligent as humans, and our biggest realization was that bots are good at some things, and humans are good at other things. Bots are really good at computation. Bots are basically simple computers, so if you need to ask what your next bill is gonna be, a bot can calculate that far faster than a human, who’d have to look up the system, find your account, look at the UI and find the number. But for very human things like empathy, emotion and reading between the lines of what someone’s actually trying to say, bots are terrible, given today’s technology.
Des: Emmet, from a design perspective, it sounds like you’d have to spend half your time dealing with whether or not the bot knows the answer. In the majority of cases the bot’s probably not going to do a good job, right?
Emmet Connolly: We have a system whereby a human or a bot could answer your question, so it becomes more of a routing problem than a problem of, “What do I do in this failure case where the bot doesn’t know the answer?” If the bot doesn’t know an answer, or can’t provide a great one, then the human should provide the answer.
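The routing idea Emmet describes can be sketched as a simple confidence-based fallback. The sketch below is purely illustrative — the class, function names, and the confidence threshold are assumptions for the example, not Intercom’s actual implementation:

```python
class FaqBot:
    """Toy bot that only knows a few canned answers (illustrative stub)."""

    def __init__(self, faq):
        self.faq = faq

    def answer(self, question):
        # Return (answer, confidence): 1.0 on an exact match, 0.0 otherwise.
        if question in self.faq:
            return self.faq[question], 1.0
        return None, 0.0


def route(question, bot, human_handler, threshold=0.8):
    """Send the question to the bot; fall back to a human below threshold."""
    answer, confidence = bot.answer(question)
    if confidence >= threshold:
        return answer  # bot handles computational queries instantly
    return human_handler(question)  # humans handle nuance and empathy


bot = FaqBot({"What will my next bill be?": "Your next bill is $20."})
print(route("What will my next bill be?", bot, lambda q: "queued for a human"))
print(route("I'm really frustrated here", bot, lambda q: "queued for a human"))
```

The key design point is that the failure case disappears as a user-visible state: a low-confidence bot answer simply becomes a human conversation.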
Paul and Emmet’s teams designed Educate, Intercom’s new knowledge base product, so that bots supplement human customer support where appropriate.
A lot of the pitfalls we saw this year were use cases where people building these bots were over-promising what they could deliver. The technology for an English-language level conversation really isn’t there yet, and that has plunged us into this trough of disillusionment. That’s also a good place to be, because it means that we’re getting real about what’s actually possible. If 2016 was the year of hype around this, we could actually see a lot of real life, useful tools and products emerge in the next year.
Des: It’s really convenient, the way these things pick whole years in which they’re going to experience these iterations. We see bots that pretend to be humans, like, “Hi, I’m Barry the airline bot, how can I book you a flight?” And then you see bots that are blatantly bots, like, “I’m the little operator bot, and I’m going to point you in the right direction.” You said the idea of trying to humanize these bots isn’t something that we want to do at Intercom, but what’s the general thinking there?
The degree to which you personify the bot evokes a very different reaction in the end user.
Emmet: Our thinking has actually evolved a lot as we’ve tried out a lot of the experiments that Paul mentioned. Initially the thing that seemed most obvious to me was, “Hey, these are friendly little robots that can interact in your conversation. Let’s make them be tiny Pixar characters.” That’s not what resonated with the users that we put our early bot iterations in front of. The nuance of tweaking a little bit of the language or the degree to which you personify the bot evokes a very different reaction in the end user. Some of our early experiments had bots saying, “Hey, I’m ‘bot name’, I’m not a real person, but I have a character.” People didn’t like that at all, because they felt slightly duped by it. They thought they were there to talk to a person. If you can instead insert a level of automation and say, “Hey, I am an automated bot that’s here to speed up the process,” then people can see the value in that, and it doesn’t feel like a bait and switch.
Des: We had a command line once upon a time: “Write in the exact word and you’ll get the exact answer.” Do you think people’s behavior changes when they know they’re talking to a bot? Do they still continue the formalities and the civility and the, “Hey, I’m curious about…”, or is it just like, “Flights please”?
Paul: For me, this thing is a scale. At one end of the scale is a command line interface, where it’s clear that you’re talking to a computer. In many cases, people don’t actually turn around and ask themselves, “What is a bot?” There’s no common definition. In one of our blog posts, we said that a bot is a simple computer program that executes instructions, and then, to Emmet’s point, you can give it a face, you can give it a name, and you can make it more or less human-like.
If the command line interface is one end of the scale, at the opposite end is what Facebook was trying to do, where you didn’t know if you were talking to a person or a bot, and somewhere in between lies the uncanny valley. We didn’t get close to the uncanny valley; we were far down the path of “this is clearly a computer program.” The minute a bot started pretending it was anything other than a computer, people reacted very negatively to it.
Bots are most effective today when users know exactly what they’re interacting with.