Conversational thought pipeline

How can chatbots steer conversations by understanding the nuances in user utterances?

Srini Janarthanam
8 min read · Mar 3, 2021
Picture of a robot
Photo by Phillip Glickman on Unsplash

One of the trickiest problems in designing and building chatbots is giving them the capability to understand users’ utterances. This is the very first challenge facing the chatbot: understanding what the user wants from what they say. We call this the intent classification problem. What are intents? How can you come up with the set of intents that the chatbot needs to look for in customers’ utterances? Although intent classification is an essential part of most popular chatbot solutions (e.g. IBM Watson Assistant, Google DialogFlow, MS LUIS, etc.), it is not very clear to designers how intents should be designed. In this article, I would like to take a deeper dive into this question and come up with approaches to intent design.

Let us look at a few imaginary troubleshooting conversation openings, ones where I, the user, am having trouble with my laptop.

Chatbot >> Hello! I am Dr. Tech, your troubleshooting assistant. How can I help you?

User >> When I start my laptop.. I can’t see anything on my screen and can’t hear anything either.

Chatbot >> Here is the number to call for all your troubleshooting problems — 0800 XXX YYY

User >> I think my laptop is broken.

Chatbot >> Here is the number to call for all your troubleshooting problems — 0800 XXX YYY

User >> I would like to fix my laptop asap because I need it to finish my school work.

Chatbot >> Here is the number to call for all your troubleshooting problems — 0800 XXX YYY

User >> Could you give me the troubleshooting helpline number?

Chatbot >> Here is the number to call for all your troubleshooting problems — 0800 XXX YYY

You have probably given up reading this article by now. Why am I showing you conversations where the chatbot does nothing but hand over a phone number? Please bear with me.

The above imagined conversations show the thought process of the user. The process starts with perception, where the user notices something out of the ordinary. It can either be something they see or something they don’t see; it just has to be something out of the ordinary. In this case, I saw that my laptop screen was not lit up and I did not hear the noise I usually hear when my laptop boots up. This leads to me forming a belief in my head: my laptop doesn’t seem to be working. Let’s leave emotions out of this for a second. Once I have formed my belief, I logically try to figure out what I need to do next. This will depend on a number of factors: my immediate commitments, principles, other beliefs, etc. Let’s say I came to the conclusion that I need to get it fixed. Then I try to break the goal (i.e. getting my laptop fixed) into smaller tasks. And my first task is to speak to someone who knows how to fix a laptop, because I have no idea how to get started to achieve my goal. So there is a pipeline of thoughts that looks like this:

Perceptions > Beliefs > Goals > Tasks

The thought pipeline starts with a perception, builds a belief, figures out a goal, plans a sequence of steps towards the goal and finally arrives at the first step of the plan. Here is the insight: the user could start a conversation anywhere along this pipeline of thoughts. Now go back to the imaginary conversations and have a look. Each of them shows where in the thought pipeline the user is when contacting the chatbot for help. And we as designers need to be able to see that, so that we can offer the right kind of help and support at each phase.
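Before we look at each phase, it helps to pin the pipeline down in code. The sketch below is a minimal Python model with names of my own choosing (not from any chatbot framework); ordering the phases lets a bot work out which earlier phases it could step back to.

```python
from enum import IntEnum

class Phase(IntEnum):
    """The four phases of the user's thought pipeline, in order."""
    PERCEPTION = 1
    BELIEF = 2
    GOAL = 3
    TASK = 4

def earlier_phases(phase: Phase) -> list:
    """Phases the chatbot could step back to, to probe the user's reasoning."""
    return [p for p in Phase if p < phase]
```

For example, a user who opens with a goal can still be walked back through their beliefs and the original perception.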

Traditionally, we’d simply map all user utterances, whether directly or indirectly related to troubleshooting a laptop, to an intent labelled something like request-fix-laptop. And the chatbot would respond to all of these utterances in the same way, like the ones above. However, by understanding the nuances of the thought pipeline and by identifying from the user’s utterance which phase the user is in, we can give the chatbot opportunities to steer the conversation in an optimal manner.

Picture of a phone with a think emoji
Photo by Markus Winkler on Unsplash

But how? Let us look at each phase of the thought pipeline and how the chatbot could have steered the conversation in each one of them.

Conversation starts at the perception phase

What could have happened if the user had reached out to the chatbot while still in the perception phase?

Chatbot >> Hello! I am your digital laptop assistant. How can I help you?

User >> When I start my laptop.. I can’t see anything on my screen and can’t hear anything either.

Chatbot >> Ah.. I see. It is a common problem. The battery could be completely discharged. Is your laptop connected to a power source?

The chatbot uses the reported symptoms to probe the issue further and get to the actual problem. Only then will it know for sure what kind of help the user actually needs, instead of jumping into the fix-the-laptop routine right away.
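A perception-phase bot is essentially mapping reported symptoms to probing questions. Here is a minimal sketch of that idea; the symptom keys and questions are invented for illustration, not taken from a real troubleshooting knowledge base.

```python
# Hypothetical symptom-to-probe lookup; keys and questions are illustrative.
PROBES = {
    "blank_screen": "Is the power light on when you press the power button?",
    "no_sound": "Do you hear any fan noise at all when the laptop starts?",
}

def probe_for(symptoms: list) -> list:
    """Return a follow-up question for every recognised symptom."""
    return [PROBES[s] for s in symptoms if s in PROBES]
```

Unrecognised symptoms simply produce no probe, at which point a real bot would fall back to a generic clarifying question.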

Conversation starts at the belief-forming phase

What if the user started interacting while still forming beliefs about his laptop based on his perceptions? How can the chatbot help shape the user’s belief in the right way?

User >> I think my laptop is broken.

Chatbot >> Why do you say that?

The chatbot does not take the user’s belief at face value. It could have, but it didn’t. There is a choice here, because it is trying to understand the user in a more nuanced manner. It is looking for further proof or evidence to make sure the user’s beliefs are right. Once validated, it can then proceed to help the user in an appropriate way.

Alternatively, the chatbot could present options for what it can potentially do to help the user in such a situation.

User >> I think my laptop is broken.

Chatbot >> I am sorry to hear that. I can help you with fixing it, recycling it, or finding you new deals to replace your laptop. What would you like to do?

User >> Show me the deals.

By suggesting alternative goals to the user, the chatbot can help the user see the options and choose the best course of action. The chatbot need not map a belief to a goal without discussing potential goals with the user first. So as you can see, when the statement made by the user is a belief, the chatbot can potentially move in two directions: verify the symptoms (perception), or discuss goals.
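That branch point can be sketched as a tiny function returning both candidate moves; the reply strings below are illustrative placeholders, not output from a real system.

```python
def belief_moves(belief: str) -> dict:
    """Given a stated belief, return the two candidate responses:
    step back to verify the perception, or step forward to discuss goals."""
    return {
        "verify_perception": "Why do you say that?",
        "discuss_goals": ("I am sorry to hear that. I can help you with "
                          "fixing it, recycling it, or finding you new deals. "
                          "What would you like to do?"),
    }
```

A dialogue manager would then pick one of the two moves based on context, or on the chatbot's own goals, as discussed later.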

Conversation starts at the goal-setting phase

Let us assume the user wants to recycle his laptop. How can the chatbot proceed now based on the set goal?

User >> I would like to recycle my laptop.

Chatbot >> Sure. I can help you recycle your laptop but do you mind telling me why you want to recycle it?

Although the chatbot understands the user’s explicit goal and acknowledges it, it could choose to take a step back in the user’s thought pipeline to understand the motivation behind the goal. It could help the user explore other goals once it understands the user’s beliefs. On the other hand, it could proceed by accepting the given goal, identifying the start state, and building a plan to reach the goal state.

Conversation starts at the task level

Finally, let’s look at what opportunities are there at the task level. The user has identified his goals and made a plan. He starts to solve the tasks in the plan but reaches out for help. What could the chatbot do? The chatbot could simply answer the question or the request.

User >> Could you give me a number I can call to troubleshoot my laptop?

Chatbot >> Sure. Here is the number to call for all your troubleshooting problems — 0800 XXX YYY

Alternatively, it could go up the thought pipeline and explore the user’s plans, goals, and beliefs.

User >> Could you give me a number I can call to troubleshoot my laptop?

Chatbot >> Sure. I can help you with that. But would you mind telling me what the problem is?

User >> Sure. I think my laptop is not working.

Chatbot >> I can help you troubleshoot that myself. Would you like to give it a go?

At any point, the chatbot can decide whether to go down the pipeline towards tasks and solutions, or up towards a better understanding of the symptoms and beliefs, and this decision really depends on the chatbot’s own goals.

For instance, a climate-change-aware chatbot would prefer to steer the conversation towards repair and recycling, whereas a sales chatbot would steer conversations towards new deals and sales opportunities.
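One minimal way to encode such steering preferences is a lookup from the chatbot's persona to the goals it promotes. The persona names and goal labels here are my own illustration, not part of any platform.

```python
# Hypothetical steering policy: which goals each chatbot persona promotes.
STEERING_POLICY = {
    "climate_aware": ["repair", "recycle"],
    "sales": ["new_deals", "upgrades"],
}

def suggest_goals(persona: str) -> list:
    """Return the goals this persona prefers to steer the user towards."""
    return STEERING_POLICY.get(persona, [])
```

The dialogue manager would consult this policy whenever the conversation reaches a branch point like the belief phase above.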

Having the capability to identify not just the user’s problem but also where in the thought pipeline they are can help the chatbot steer in a more optimal way through the conversation graph.

This approach is not restricted to troubleshooting conversations but can extend to other types of task-based conversations too. So instead of bunching up all direct and indirect requests to solve a problem into a single intent, we need to start building the capability to get a nuanced understanding of the user’s current location in their journey towards solving the problem they have, perhaps by building one model to identify the problem and another to identify the phase. For instance, when the user says, ‘When I start my laptop.. I can’t see anything on my screen and can’t hear anything either’, it could be classified as issue=laptop_malfunction, phase=report_symptom.
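To make the two-model idea concrete, here is a deliberately naive, keyword-based sketch that labels an utterance with both an issue and a phase. A production system would of course use trained classifiers; every cue list below is illustrative only.

```python
# Toy phase cues; a real system would learn these from labelled utterances.
PHASE_CUES = {
    "report_symptom": ["can't see", "can't hear", "doesn't turn on"],
    "state_belief": ["i think", "seems broken", "is broken"],
    "state_goal": ["i would like to", "i want to"],
    "request_task": ["could you give me", "what is the number"],
}

def classify(utterance: str) -> dict:
    """Label an utterance with a (toy) issue and thought-pipeline phase."""
    text = utterance.lower()
    phase = next((p for p, cues in PHASE_CUES.items()
                  if any(cue in text for cue in cues)), "unknown")
    issue = "laptop_malfunction" if "laptop" in text else "unknown"
    return {"issue": issue, "phase": phase}
```

Running the opening utterances from this article through it yields report_symptom, state_belief, state_goal and request_task respectively, which is exactly the signal the dialogue manager needs to steer.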

Picture of a person on a beach in deep thought
Photo by Marcos Paulo Prado on Unsplash

So, in summary, by identifying the nuanced phases in the thought pipeline from the user’s utterance, we could provide rich opportunities for chatbots to steer conversations in interesting directions and create engaging, memorable and desirable conversational experiences.


Srini Janarthanam

Chatbots, Conversational AI, and Natural Language Processing Expert. Book author — Hands On Chatbots and Conversational UI.