Chatbots are on everyone’s lips these days. In addition to live chat, where you talk to human agents, more and more sites now place a chatbot in front of the human channel. Accordingly, more and more companies want to use chatbots, for very different scenarios, and ideally with as much AI as possible. However, many underestimate the effort involved, and it does not always make sense to equip the bot with as much artificial intelligence as possible.
Basically, chatbots can be divided into three categories:
- Rule-based chatbots
- Speech-recognising chatbots
- Self-learning chatbots
The rule-based chatbot has no intelligence in the true sense. Instead, a rule tree is set up at the conception stage: a series of questions, each with a fixed set of answers, where each answer leads to another question with further answers. The user can only choose from the predefined answers and is thus guided through the dialogue tree. Even though this framework is very rigid, a well-designed rule tree can answer 60-80% of queries (depending on the use case) without human intervention. Entering small forms, for example to capture names or customer numbers, is easily possible.
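Such a rule tree is essentially a lookup structure: each node holds a question and a fixed set of answers, and each answer points to the next node. The following minimal sketch illustrates the idea; all node names and texts are hypothetical examples, not taken from any real product.

```python
# Minimal sketch of a rule-based chatbot. Each node holds a question and
# a fixed mapping from predefined answers to the next node.
# All node names and texts are hypothetical examples.

RULE_TREE = {
    "start": {
        "question": "What would you like to do?",
        "answers": {
            "Send a parcel": "parcel",
            "Track a shipment": "tracking",
        },
    },
    "parcel": {
        "question": "Is the parcel heavier than 10 kg?",
        "answers": {"Yes": "heavy", "No": "light"},
    },
    "tracking": {"question": "Please enter your tracking number.", "answers": {}},
    "heavy": {"question": "Please bring it to the nearest branch.", "answers": {}},
    "light": {"question": "You can drop it in any parcel box.", "answers": {}},
}

def step(node_id, chosen_answer):
    """Return the id of the next node for a chosen predefined answer."""
    node = RULE_TREE[node_id]
    return node["answers"][chosen_answer]
```

For example, `step("start", "Send a parcel")` returns `"parcel"`, whose question is then shown to the user. Because the user can only pick from the offered answers, the bot never has to interpret free text.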
However, optimisation after going live is somewhat more complex. To do this, you have to analyse drop-offs in particular or run a satisfaction survey. Another disadvantage is that these bots cannot be used with most messengers, such as WhatsApp, which do not support presenting a fixed set of answer options.
Speech-recognising chatbots are also based on a more or less rigidly defined rule tree. However, instead of predefined answers, the conversation partners can enter free text. From this text, the bot must infer the user’s intention and compare the assumed intent with the intents defined in the rule tree. If a matching intent is found, the next question can be displayed. This is where artificial intelligence through NLP (natural language processing) comes into play. For example, at the post office, customers can answer the question about what they want with “I would like to post a parcel.” or “I would like to send a shipment.” Both express the same intent, although the words differ. Defining the intents can be very complex; however, most frameworks already offer a basic set, so you do not have to start from scratch.
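To make the idea concrete, here is a deliberately simple intent matcher that maps free text to an intent by counting keyword overlaps. Real NLP frameworks use trained language models rather than keyword sets; the intent names and keywords below are hypothetical examples.

```python
# Toy intent detection: map free-text input to a predefined intent by
# counting keyword overlaps. Intents and keywords are hypothetical;
# production frameworks use trained NLP models instead.

INTENTS = {
    "send_parcel": {"post", "send", "parcel", "shipment", "package"},
    "track_parcel": {"track", "where", "status", "tracking"},
}

def detect_intent(text):
    """Return the intent with the most keyword matches, or None."""
    words = set(text.lower().replace(".", "").replace("?", "").split())
    scores = {name: len(words & keywords) for name, keywords in INTENTS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None
```

With this sketch, both “I would like to post a parcel.” and “I would like to send a shipment.” resolve to the same `send_parcel` intent, which is exactly the behaviour the post-office example describes.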
To ensure the chatbot’s success, however, it must be optimised continually. To do this, you have to review the chat logs to see whether intents are recognised correctly; it may be necessary to define additional intents afterwards.
Self-learning chatbots are no longer based on fixed rules. Instead, an artificial intelligence learns from training data, and based on what it has learned, it answers freely formulated questions. In the best case, the training data are chat transcripts of conversations between real people. However, this type of bot requires the most work, both during conception and after going live. This starts with the right choice of training data: it needs to be varied enough to cover as many cases as possible, but it must also be ensured that it does not contain any errors.
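One simple way to picture “learning from transcripts” is a retrieval approach: the bot stores question-answer pairs from past conversations and answers a new question with the reply to the most similar past question. The sketch below uses bag-of-words cosine similarity; the transcript contents are invented for illustration, and a real system would use trained language models instead.

```python
# Sketch of a retrieval-style "self-learning" bot: it stores
# question->answer pairs from (hypothetical) chat transcripts and answers
# a new question with the reply to the most similar stored question.
# Real systems use trained language models, not bag-of-words similarity.

from collections import Counter
import math

TRANSCRIPTS = [
    ("How much does a parcel to France cost?",
     "Parcels to France start at 9.99 EUR."),
    ("When does my shipment arrive?",
     "Standard delivery takes 2-3 working days."),
]

def _vector(text):
    """Bag-of-words term counts for a lowercased, de-punctuated text."""
    return Counter(text.lower().replace("?", "").replace(".", "").split())

def _cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def answer(question):
    """Reply with the answer to the most similar transcript question."""
    qv = _vector(question)
    best = max(TRANSCRIPTS, key=lambda qa: _cosine(qv, _vector(qa[0])))
    return best[1]
```

This also makes the quality problem visible: whatever answers the transcripts contain, right or wrong, are exactly what the bot will reproduce.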
In one case, a chatbot was to be used in customer service. For this purpose, it learned from the chat logs of the previous months. During the first evaluation after going live, it was discovered that the bot very often gave wrong answers. Only after many optimisation attempts did anyone look at the original chat logs and realise that those wrong answers came from the human chat agents themselves. As a result, the project was stopped, and the agents received new training.
But even after going live, the bot’s behaviour must be continuously monitored and optimised; otherwise, it can develop in an undesirable direction. This requires the appropriate know-how and a lot of time. Initial success rates below 40% are not uncommon, but conscientious optimisation can increase this significantly.
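Monitoring of this kind starts with a simple metric: what share of conversations did the bot resolve without escalating to a human? The log schema below (a `resolved` flag per conversation) is a hypothetical example, not a fixed standard.

```python
# Toy success-rate computation over chat logs. The log schema (a
# "resolved" flag per conversation) is a hypothetical example: True
# means the bot answered without escalating to a human agent.

def success_rate(logs):
    """Fraction of conversations resolved by the bot alone."""
    if not logs:
        return 0.0
    return sum(1 for entry in logs if entry["resolved"]) / len(logs)
```

Tracking this number over time (and per intent) shows whether optimisation work is actually paying off.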
The choice of chatbot type depends on the use case on the one hand, but also on the willingness and ability to invest the corresponding time and resources. A rule-based chatbot with manageable effort can already achieve a lot. If you want to use more intelligence, you should be able to fall back on suitable training data. However, every bot will still need attention after it goes live. Only if you are aware of these facts can you operate a chatbot successfully.
However, one misconception should be avoided: that the chatbot will replace its human colleagues in the short or medium term. It is true that a chatbot can handle a large share of the standard enquiries. Experience shows, however, that the chat agents then do not have less work, but are better able to deal with the more complex remaining questions. So they are relieved, but not replaced – at least not yet.