Teaching chatbots regular human language

  
Credit: Gerdien Wolthaus Paauw

Customer service chatbots are ready to help you night and day. But communication with a bot can sometimes be cumbersome. Christine Liebrecht, Associate Professor of Language, Business Communication, and Digital Media, thinks there is room for improvement. How? By teaching bots regular human language. This is how Tilburg University focuses on technology that works for people.

At some delivery restaurants, you can already order your pizza through a chatbot. And IKEA, Google, and Amazon have virtual assistants on their sites that try to answer your questions. IKEA's assistant is an illustrated, redheaded young woman called Anna. You can ask her about products and sizes, about what is in stock, and about opening hours. However, an often-asked first question to chatbots is whether a flesh-and-blood person is available. Clients' experiences with computer systems are not always positive. "And yet, frequently asked questions are often answered correctly and quickly by these chatbots," Christine Liebrecht observes. She is doing research on how to make them come across as more human. Here, she is interviewed by Marga van Zundert.

More human-like chatbots? Why?

Even though you know you are chatting with a machine, it is nevertheless strange to start a conversation by typing 'start' rather than 'hello.' A conversation runs more smoothly, and is more pleasant and more efficient, if a machine presents itself in a human fashion. This is only logical, because talking is something you normally only engage in with humans. At the same time, it is also very natural for people to see something human in lifeless things and to attribute emotions to them. Cars look friendly or cool and, standing in front of them, we see eyes rather than headlights. We do so en masse and unconsciously. If you google 'I see faces,' you will be surprised at how strong this tendency is. The same thing happens when we are chatting with a robot: unconsciously, the idea that you are talking to a machine disappears. We call this anthropomorphism.

How do you make a chatbot talk more human-like?

Together with researcher Charlotte van Hooijdonk of VU Amsterdam, I analyzed chat conversations held by real customer service employees. Clients like a personal touch. It turns out this personal touch lies predominantly in informal use of language, in personalization, inviting words, and empathy. Think of words like 'I' and 'you,' and in Dutch the use of the informal second person singular 'jij' rather than the more formal 'u,' the use of 'hey' or 'hallo,' but also of reactions like 'Jeez,' 'Yes, I agree that's very frustrating,' or a smiley. The right words to use obviously depend on the company: a pizza delivery company will most likely require more informal language than a mortgage provider. Companies can use these language elements in their own online conversations, whether with chatbots or with 'real' employees. At the moment, together with a company called OBI4wan, we are developing an automatic tool that helps employees use the right tone of voice. We managed to get project funding for it from NWO, the Netherlands Organization for Scientific Research. The next step is to also teach chatbots this use of language, to have them use it in their conversations, and then to investigate whether clients like it better that way.
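
To make the idea of such a tone-of-voice tool concrete, here is a minimal illustrative sketch of a rule-based checker that flags human-sounding markers in a Dutch customer service reply. It is purely a sketch under assumed details: the categories, marker lists, and example message are hypothetical and do not describe the OBI4wan tool or the researchers' actual method.

```python
# Illustrative sketch only: a toy rule-based checker for "human" tone-of-voice
# markers in a Dutch customer service reply. Categories and marker lists are
# hypothetical, far simpler than any production tool.

import re

TONE_MARKERS = {
    "personalization": ["ik", "jij", "je", "jouw"],        # 'I', informal 'you'
    "informal_greeting": ["hey", "hallo", "hoi"],
    "empathy": ["vervelend", "begrijp ik", "wat jammer"],   # 'annoying', 'I understand', 'what a pity'
    "emoticon": [":)", ":-)", ";)"],
}

def tone_report(message: str) -> dict:
    """Return which tone-of-voice markers appear in a message, per category."""
    lowered = message.lower()
    words = set(re.findall(r"[a-zà-ÿ']+", lowered))
    report = {}
    for category, markers in TONE_MARKERS.items():
        report[category] = [
            m for m in markers
            # multi-word phrases and emoticons: substring match; single words: whole-word match
            if (m in lowered if " " in m or not m.isalpha() else m in words)
        ]
    return report

if __name__ == "__main__":
    reply = "Hey! Wat vervelend dat je pizza koud aankwam, dat begrijp ik helemaal :)"
    print(tone_report(reply))
```

A real system would need much richer linguistic analysis; the point here is only that markers such as informal pronouns, greetings, and empathic phrases can be detected automatically and reported back to an employee as feedback on tone.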

Are companies interested in chatbot research?

They certainly are. More and more people are using online customer services. KLM receives 400,000 messages each month; it is simply not feasible to answer all of these by hand. This attention from companies is welcome, because we like to use real data for our research. We are currently engaged in exploratory talks for new chatbot research; companies can still let us know if they are interested.

Can clients already mistake a bot for a human when they are chatting?

That happens very rarely. The bots we have now are not advanced enough for that yet. IKEA's Anna can answer questions about furniture; a travel company's chatbot can answer questions about summer resorts. The first question a customer service chatbot asks tends to be about the type of issue you are contacting it about, whether it falls into category x, y, or z, after which you get the next question. That is a smart trick for narrowing down to the right problem and its answer. But as soon as clients stray from the prescribed 'path,' they immediately notice that communication becomes more cumbersome. In cases like that, a real service worker often still needs to be called in to answer the question.
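
As an illustration of that funneling 'trick,' the sketch below shows a toy menu-driven bot turn that routes a client to a canned answer and hands off to a human as soon as the client strays from the path. The categories, answers, and hand-off rule are assumptions for the example, not taken from any of the bots mentioned in this interview.

```python
# Illustrative sketch only: a toy menu-driven chatbot turn of the kind described
# above. Categories, answers, and the hand-off rule are hypothetical.

MENU = {
    "1": ("Delivery status", "You can track your order via the link in your confirmation email."),
    "2": ("Returns", "You can return items within 30 days using the returns form."),
    "3": ("Opening hours", "Our stores are open daily from 10:00 to 20:00."),
}

def bot_turn(user_input: str) -> str:
    choice = user_input.strip()
    if choice in MENU:
        topic, answer = MENU[choice]
        return f"[{topic}] {answer}"
    # The client has strayed from the prescribed path: hand over to a human agent.
    return "I'm not sure I can help with that. Let me connect you to a colleague."

if __name__ == "__main__":
    print("Hi! What can I help you with? 1) Delivery status 2) Returns 3) Opening hours")
    print(bot_turn("2"))
    print(bot_turn("My neighbour took my parcel and won't give it back"))
```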

But aren't bots becoming more intelligent fast?

Yes, they are. Artificial intelligence is making great strides. We already have self-learning bots that perform better and better because they process new information. But things can go horribly wrong there as well. A few years ago, Microsoft developed the self-learning chatbot Tay. The idea was to have 'her' become more personal and more human by letting her learn from conversations on Twitter. She learned a lot all right, but the input from Twitter users turned out to be so racist and offensive that, to everybody's horror, Tay even developed into a Holocaust denier. The input a self-learning bot receives is essential to its output.

Won't a chatbot need a spoken voice to make it truly human-like?

That would certainly contribute to their being experienced that way, and colleagues of mine are actually doing research on that subject. However clichéd it may sound, people tend to experience a female voice as friendly, helpful, and accommodating, and a male voice as reliable and competent. Keeping this in mind, companies can think about how they want to present themselves and the kind of image they want to project, also in terms of voice. But for the moment we are concentrating on written text, and there is still much to be gained there as well.
