5 Empathetic Design Principles for Successful Human-Agent Interaction

At our core, humans are social beings. We’re naturally inclined to seek out and form connections and attachments with those around us, whether with other people, our pets, or otherwise.

And while there are several key personality traits that ultimately contribute to our ability to connect with others, one of the most impactful of these is our ability to exhibit empathy.

What exactly is empathy? According to Wondra and Ellsworth, researchers at the University of Michigan, “The essence of empathy, agreed upon by most empathy researchers, is feeling what another person feels because something happens to them.”

In other words, it’s our ability to vicariously experience the emotions, behavior, and motivation of those around us. We feel sad when our loved ones are sad, we get embarrassed for our parents, we laugh because others are laughing, and so on.

This intrinsically human trait is part of what makes it so easy and natural for us to form connections and bonds with others.

And as today’s social ecosystem continues to evolve with advancing technology, the way we interact is fundamentally transforming. Though we still interact with other people in the conventional sense, a new type of interaction is emerging throughout our lives – our interactions with intelligent AI agents.

Intelligent AI agents are autonomous entities driven by internal goals and motivations, which determine the behaviors and actions they carry out with their human users in order to achieve those goals.

These agents now play an ever-expanding role in our lives, engaging with us on a regular basis to simplify, expedite, and enhance numerous tasks and experiences.

Research indicates that people certainly can – and do – express feelings of empathy towards machines.

Theory of mind refers to our capacity as humans to attribute mental states (emotions, knowledge, etc.) to ourselves and others – a key component of successful human-human interaction.

And by equipping AI agents with this same ability – to analyze and understand their human users’ desires and intentions – we can generate successful, effective human-agent interactions.

Our goal at Intuition Robotics is to enable the creation of digital companion agents that build enduring, personalized relationships with each human user – and we firmly believe that the theory of mind and empathy play a major role in this process.

Obviously an agent can’t truly feel or process emotions, and thus can’t empathize with us in the way that humans can.

But by incorporating certain empathetic principles throughout the human-agent interaction, the agent opens the door to creating more meaningful connections and exchanges that ultimately evoke empathy from its human counterpart.

So how can you create a digital companion agent that embodies and evokes empathetic principles – without crossing the line and seeming too human-like?

It’s certainly no easy task, but it’s a growing focal point for us – and for many other companies designing intelligent agents – especially as the human-agent interaction field evolves and these agents come to play an even more prominent role in our lives.

Let’s take a closer look at the importance of empathy when it comes to designing an AI agent, including why it’s so crucial, how to design agents and experiences with empathy in mind, and some specific principles our team is focused on.

1. Cognition and understanding context

One of the most fundamental, yet critical, ways in which an AI agent can embody and evoke empathy with its users is through the ability to understand context.

Over time, through cognition and its experiences with its human user, the agent is able to perceive and decipher the meaning behind the user’s behavior and surrounding environment, as well as the connotation of their responses, for a smooth, seamless interaction.

As it gets to know its user, the agent learns their behavior patterns, likes, and dislikes, and can leverage that information to anticipate their needs and preferences and create more relevant, satisfying interactions.

In other words, it knows if, when, and how to engage with each user most effectively. For example, if a user doesn’t like to interact in the morning or when they have guests over, the agent should refrain from doing so.

If the user mentions a specific topic during conversation, the agent should continue the dialogue pertaining to that same topic.
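
To make this a bit more concrete, here is a minimal sketch in Python of how an agent might gate proactive engagement on learned context. The names (`UserModel`, `should_engage`, `choose_prompt`) and the specific rules are hypothetical illustrations, not our actual platform’s logic.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class UserModel:
    """Hypothetical store of the preferences an agent learns over time."""
    quiet_hours: set = field(default_factory=lambda: {6, 7, 8})  # e.g. this user dislikes mornings
    avoid_when_guests: bool = True
    current_topic: str = ""

def should_engage(user: UserModel, now: datetime, guests_present: bool) -> bool:
    """Decide *if and when* to proactively engage, based on learned context."""
    if now.hour in user.quiet_hours:
        return False                      # the user prefers not to interact at this time
    if guests_present and user.avoid_when_guests:
        return False                      # respect the social situation
    return True

def choose_prompt(user: UserModel) -> str:
    """Decide *how* to engage: stay on the topic the user last raised."""
    if user.current_topic:
        return f"Earlier you mentioned {user.current_topic} - want to tell me more?"
    return "Is there anything I can help you with right now?"

user = UserModel(current_topic="your garden")
if should_engage(user, datetime.now(), guests_present=False):
    print(choose_prompt(user))
```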

Though applying cognitive principles can be initially challenging, it's an important way to make the human-agent interaction more natural and personal, which, in turn, helps users feel more comfortable opening up and connecting with the agent.

2. Social and nonverbal cues

As humans, we subconsciously recognize certain social and nonverbal cues, which assist us in understanding reality and the meaning behind each social circumstance.

We can tell if someone’s interested in interacting with us based on subtle indications, like their facial expression, movement, and body language.

By using human qualities within a machine-like design language, the agent can incorporate these same aspects, so that it’s straightforward and easy for us to interact with it.

Nonverbal communication is a crucial element of this – in both regular and fallback experiences – as it helps the agent convey information in a way that feels more instinctive and familiar to us.

Dogs do an amazing job of this – they nonverbally communicate with us in a way that’s clear and easy to understand, and it’s important for the agent to do the same.

Nonverbal communication can be a far less intrusive interface than voice alone, and can attract the user’s attention in a subtle yet effective way. For example, when a user calls out ElliQ’s wake word, ElliQ’s face lights up and its head bends forward, leaning in to indicate that it’s listening.

This endearing behavior not only draws the user in, it also explicitly conveys to them that their request or attempted interaction was indeed successful, and that they should proceed accordingly.
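
One way to think about this kind of design is as a simple mapping from interaction events to nonverbal acknowledgements that fire before any speech is produced. The sketch below is a hypothetical illustration in Python – the event names and gestures are assumptions for the example, not ElliQ’s actual control code.

```python
from enum import Enum, auto

class Gesture(Enum):
    LIGHT_UP = auto()       # face brightens: "I heard you"
    LEAN_FORWARD = auto()   # head bends toward the user: "I'm listening"
    IDLE_BREATHE = auto()   # slow ambient motion: present, but not demanding attention

# Hypothetical mapping from interaction events to nonverbal acknowledgements.
NONVERBAL_RESPONSES = {
    "wake_word_detected": [Gesture.LIGHT_UP, Gesture.LEAN_FORWARD],
    "user_speaking": [Gesture.LEAN_FORWARD],
    "no_activity": [Gesture.IDLE_BREATHE],
}

def react(event: str) -> list:
    """Return the gestures to play for an event; default to unobtrusive idling."""
    return NONVERBAL_RESPONSES.get(event, [Gesture.IDLE_BREATHE])

print(react("wake_word_detected"))  # [Gesture.LIGHT_UP, Gesture.LEAN_FORWARD]
```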

3. Establishing mutual interest

By finding common ground that the user can relate to, the agent opens up opportunities for longer, more meaningful two-way exchanges together. It also keeps users on their toes, so that no exchange is ever too boring, repetitive, or predictable.

Through effective conversational design, such as commenting on a user’s behavior or responses, the agent can keep the user’s curiosity alive, and keep the dialogue flowing for longer.

Let’s say, for example, that the in-car agent asks where the user is headed, and the user’s response is “the grocery store.”

The agent can now engage in a variety of follow-up questions or suggestions pertaining to this topic that might spark the user’s interest, such as “What are you going to buy there?” Or, “What do you usually like to cook?”

In the same fashion, when ElliQ asks users about something sentimental, such as “What do you want for Christmas?” ElliQ not only shows interest in the user, but it prompts the user to reciprocally ask ElliQ the same question in return, or to talk about the holiday in general – thus transforming the interaction into a longer, more meaningful dialogue.

As a design principle, it’s always important to consider what follow-up questions may arise when planning out the conversation.

The biggest challenge here is handling unexpected user responses – so framing questions in the right way, with specific desired outcomes in mind, is extremely important, even if the question ends up being more biased.

For example, if the agent asks an open-ended question (such as “What type of food do you like?”), the user’s answers could be extremely vague and all over the map. Instead, the agent should ask something more structured (like “What type of cuisine do you like? Mexican? Italian?”).

This way, the agent is much more likely to get the desired response outcome, and it can successfully continue the conversation as planned. We’ll get into more specific details about conversation design principles in our upcoming blog posts.
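
As a rough illustration of the idea, here is a minimal Python sketch of a structured question: the prompt suggests expected answers, each expected answer has a planned follow-up, and anything unexpected falls back gracefully. The class name and dialogue lines are hypothetical, not taken from our conversation engine.

```python
from dataclasses import dataclass, field

@dataclass
class StructuredQuestion:
    """A question framed around expected answers, each with a planned follow-up."""
    prompt: str
    follow_ups: dict = field(default_factory=dict)
    fallback: str = "Interesting! Tell me more about that."

    def next_turn(self, user_answer: str) -> str:
        answer = user_answer.lower()
        for expected, follow_up in self.follow_ups.items():
            if expected in answer:
                return follow_up          # the conversation continues as planned
        return self.fallback              # graceful handling of an unexpected answer

cuisine = StructuredQuestion(
    prompt="What type of cuisine do you like? Mexican? Italian?",
    follow_ups={
        "mexican": "Nice! Do you prefer tacos or burritos?",
        "italian": "Great choice. Are you more of a pasta or pizza person?",
    },
)

print(cuisine.prompt)
print(cuisine.next_turn("I really like Italian food"))     # planned follow-up
print(cuisine.next_turn("anything with lots of garlic"))   # fallback
```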

4. Voice and personality

Embodying a distinct character and personality is an additional design principle that the agent can employ to evoke feelings of empathy among its users.

A distinct, character-based personality makes the agent more fun, intriguing, and approachable, so the user feels much more comfortable opening up and engaging in a two-way exchange of information.

The agent’s personality also provides the unique opportunity to reflect and personify the brand and product that it’s embedded into – we like to think of it as a character in a movie or play.

The agent is like an actor that was selected to play a specific role in a movie, serving its unique purpose in its environment (or “scene”) accordingly.

Consequently, an agent for the car would have a completely different personality and way of communicating than an agent designed for work or home – but nevertheless, its personality should be as distinct and recognizable as possible.

We like to describe ElliQ’s personality as a combination of a Labrador and Sam from Lord of the Rings – loyal and playful, yet highly knowledgeable.

Discovering the agent’s personality over time helps the user open up and get to know the agent, and the enticement of this gradual reveal keeps the user coming back for more.

5. Transparency and authenticity

When users’ expectations of the agent don’t properly align with reality, the results can be disastrous. If the agent inexplicably fails right away, users will be much less inclined to continue using it thereafter.

And without clear expectations and authenticity, users might misunderstand or assume the agent has more capabilities than it does, causing them to feel frustrated with the agent.

We’ve seen this in action first-hand with ElliQ. ElliQ clearly states from the beginning what she is and isn’t capable of, and when something doesn’t go as planned, ElliQ openly admits to users that it’s all part of the learning process.

ElliQ gradually conveys and demonstrates new capabilities to users as she becomes more sophisticated over time.

Even the least tech-savvy older adults understand that there’s a learning curve with this type of technology, and they eventually figure out what works and what doesn’t – and because of ElliQ’s transparency and authenticity throughout their experience, they’re much more forgiving when mistakes are made.
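
For illustration, a transparent fallback can be as simple as naming the limitation and redirecting to what the agent can do today. The sketch below is a hypothetical Python example of that pattern, not ElliQ’s actual dialogue logic; the skill names are made up.

```python
def fallback_response(request: str, supported_skills: set) -> str:
    """Respond transparently when a request can't be fulfilled.

    Rather than failing silently or pretending to understand, the agent names
    its limitation, frames it as part of the learning process, and redirects
    the user to what it *can* do.
    """
    if request.strip().lower() in supported_skills:
        return f"Sure, let's do {request.strip().lower()}."
    return (
        "I'm sorry, I can't do that yet - I'm still learning. "
        "For now, I can help with: " + ", ".join(sorted(supported_skills)) + "."
    )

print(fallback_response("video call", {"music", "reminders", "weather"}))
```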

Empathy’s pivotal role in successful human-agent interaction

We humans, by design, can vicariously observe and feel the emotions of others. Machines, on the other hand, are the exact opposite. They’re incapable of experiencing any emotions of their own, let alone the emotions that we feel.

Still, by embodying certain behavior and characteristics similar to ours, the agent can interact with us in a way that makes us feel comfortable opening up, sharing information, and engaging in meaningful two-way exchanges, so that we, in return, connect and empathize with the agent.

We’re constantly learning how to improve our design methodologies, overcome obstacles, identify failures, and create better human-agent interactions.

One of the biggest challenges we currently face lies in the limitations of NLP/NLU – capabilities that are vital for the agent to truly understand its user.

Going forward, we’re placing a major emphasis on NLP research, so that interactions with our digital companion agents become as natural, seamless, and effective as possible.

Ultimately, our aim is for the agent to become a social entity in users’ lives – communicating with users and achieving its goals in the most effective manner, while facilitating meaningful, enduring interactions and connections along the way.

And as human-agent interactions continue to progress, we anticipate that empathy will remain a pivotal catalyst for their success.

We look forward to further exploring the evolving human-agent dynamic, and seeing just how impactful digital companion agents can be when they embody and evoke empathetic design principles.
