Superhuman swag: Shaping a future of social interactions

Part II: From engagement to relationship

Maxim Makatchev
5 min read · Dec 29, 2017

This is Part II of an updated version of a talk I gave for @Futures_Design at the Mozilla Foundation in San Francisco on July 20th, 2017. Part I of the talk is here.

In Part I of this talk I argued that artificial agents, such as social robots, can exceed (in some scenarios) an average human in engaging human users. This can be done by a design that embraces their lack of human-likeness while still endowing the robots with expressive abilities that make them believable characters. The graph in Part I showed character-enabled media artifacts rising above humans in their utility, with the crossing point falling somewhere between past and future, depending on the particular interaction scenario.

In this part of the talk, I will argue that believable characters are capable of further increasing engagement and, as a consequence, their utility, through the development of a social relationship.

Engage user → transform user

Once a user is engaged, the interface gains the ability to affect the user, transforming the user’s cognitive, emotional, or physical state. This is a bit of a chicken-and-egg problem, as engagement is itself a kind of user state.

Transforming a user towards a more engaged one

In the case of the roboceptionist Tank, users can be divided roughly into those who use relational conversation strategies, such as starting with a greeting, saying thanks, and ending with a farewell, and those whose conversation is utilitarian, consisting only of an information-seeking question. As I mentioned in Part I, the relation-oriented users engage better: they are more persistent in the face of communication breakdowns. As a consequence, they are more likely to succeed in their task of getting the information they seek from the robot.
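As a toy illustration of this split, the sketch below labels a transcript from its surface markers alone. The marker list and the `classify_user` function are hypothetical and much cruder than the coding actually used in the roboceptionist studies:

```python
import re

# Hypothetical surface markers of relational talk; the studies'
# coding scheme is richer than this illustrative list.
RELATIONAL_MARKERS = [
    r"\b(hi|hello|hey|good morning)\b",  # greetings
    r"\b(thanks|thank you)\b",           # thanks
    r"\b(bye|goodbye|see you)\b",        # farewells
]

def classify_user(user_turns):
    """Label a visitor 'relational' if any turn carries a social
    marker, otherwise 'utilitarian'."""
    text = " ".join(user_turns).lower()
    if any(re.search(pattern, text) for pattern in RELATIONAL_MARKERS):
        return "relational"
    return "utilitarian"

# A purely information-seeking visitor vs. a social one:
print(classify_user(["Where is office 4405?"]))                    # utilitarian
print(classify_user(["Hi!", "Where is office 4405?", "Thanks!"]))  # relational
```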

Wouldn’t it be nice to be able to convert some of the utility-oriented users into the relation-oriented ones!

It turns out this may be possible, by having the robot deploy the following conversational strategies (sketched in code after the list):

  • Proactively greeting the user, once the robot’s sensors detect the user’s intent to communicate.
  • Priming for thanks: saying “thank you for stopping by.”
  • Expressing an effort: saying things like “I am looking it up. Please hold on,” or just pausing for half a second.
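As a concrete illustration, here is how a dialog manager might weave all three strategies into a single exchange. This is a minimal hypothetical sketch: the `say` and `handle_visitor` functions, the sensing flag, and the timings are my assumptions, not Tank’s actual implementation.

```python
import time

def say(utterance):
    # Stand-in for the robot's speech output.
    print(f"ROBOT: {utterance}")

def handle_visitor(intent_detected, lookup, query):
    """Hypothetical dialog turn weaving in the three relational strategies.
    `intent_detected` stands in for the robot's sensing of an approaching
    user; `lookup` is any function that answers an information request."""
    if intent_detected:
        say("Hello! How can I help you?")       # 1. proactive greeting

    say("I am looking it up. Please hold on.")  # 3. expressing an effort...
    time.sleep(0.5)                             # ...reinforced by a brief pause
    say(lookup(query))

    say("Thank you for stopping by!")           # 2. priming for thanks

# Example run with a stubbed directory lookup:
handle_visitor(True,
               lambda q: "Office 4405 is on the 4th floor.",
               "Where is office 4405?")
```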

With strategically placed dialog turns like these, the robot can chip away at the utilitarian group, converting its members into relation-oriented users over the course of a single interaction.

Corollary: counterintuitively, adding a delay to the robot’s response can actually help engagement.

Transforming a user towards a less prejudiced one

Military personnel stationed abroad and locals. Migrant workers and locals. Two city neighborhoods with distinct social class and ethnic majorities. These pairs of communities have one thing in common: members across each pair rarely get a chance to interact with each other in a situation of equal power status.

Equal status within a contact situation is one of the necessary conditions for positive contact, according to Gordon Allport’s work on Intergroup Contact Theory, published in 1954. Positive intergroup contact can reduce stereotyping, prejudice, and discrimination. Conversely, a contact that is not positive is not expected to have such benefits.

As a consequence, there are few chances for positive contact between communities whose divides correlate with ethnicity. Without the benefits of positive contact, even a few negative contact situations lead to the development and reinforcement of racial and ethnic stereotypes.

Fortunately, studies suggest that even a positive contact with a virtual character may help reduce ethnic prejudice. (Fascinatingly, even an imagined positive contact seems to help!)

Would a social robot like Hala, described in Part I, that expresses ethnicity through behaviors while still maintaining its robotic agency, be able to create a positive intergroup contact that reduces ethnic prejudice? This is still an open research question.

Engage user → transform user → mutual shaping

The most rewarding interactions are balanced: no participant dominates beyond the comfort of the others, and all converge to some middle ground. This includes convergence both in cognitive state (knowledge) and in linguistic and physical behaviors.

Just like Julie Delpy’s character said in that movie:

…if there is any kind of god, it wouldn’t be in any of us, […] but just this little space in between.

If there is any kind of magic in this world, it must be in the attempt of understanding someone, sharing something.

In less poetic terms, a successful interaction is a joint activity, where participants work together towards establishing a common ground.

Peers and teachable agents

Given this balanced view of each participant’s contributions to an interaction, it is not difficult to imagine conversational agents that are peers to the users or even dependent on the user’s help.

For example, a peer storytelling robot can work with children to jointly tell a story while introducing new vocabulary.

A simulated student may need to be taught by a user, which in turn leads to the user’s learning by teaching.

Alignment

Interactions where participants align their linguistic choices, such as lexemes and syntactic patterns, are more mutually comprehensible and are also reported to increase feelings of rapport, empathy, and intimacy. Less obviously, the breathing rates and neural patterns of participants in such interactions align too.
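To make the lexical side of alignment concrete, one could track how much of their vocabulary two speakers come to share over a conversation. The sketch below is a toy Jaccard-overlap measure of my own devising, not a metric from the alignment literature:

```python
def lexical_overlap(turns_a, turns_b):
    """Jaccard overlap between two speakers' vocabularies:
    |A & B| / |A | B|, on whitespace-split, lowercased tokens."""
    vocab_a = {w for turn in turns_a for w in turn.lower().split()}
    vocab_b = {w for turn in turns_b for w in turn.lower().split()}
    if not (vocab_a | vocab_b):
        return 0.0
    return len(vocab_a & vocab_b) / len(vocab_a | vocab_b)

# Aligned speakers reuse each other's words, so overlap tends to
# grow from early to late in the conversation:
early = lexical_overlap(["where is the office"],
                        ["room 4405 is upstairs"])
late = lexical_overlap(["so the office is upstairs then"],
                       ["yes the office is upstairs"])
print(f"early: {early:.2f}, late: {late:.2f}")  # prints: early: 0.14, late: 0.57
```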

Check out this recent report for an overview of the research on alignment.

TL;DR

Once a conversational agent has succeeded in engaging a user, it may be able

  • to steer the conversation towards a more social one, increasing the objective metrics of the interaction’s success, and
  • to reduce the user’s cognitive biases, including racial and ethnic prejudice.

Mutually rewarding human interactions are usually balanced: the participants converge to both shared content and shared linguistic and physical behaviors.

Repeating such mutually shaped interactions between a human and an agent over time may result in the participants establishing distinct social roles that would serve as a basis of a human-agent social relationship.


Maxim Makatchev

Founder of susuROBO. Talking machines: contributed to roboceptionists Tank and culture-aware Hala, trash-talking Scrabble gamebot Victor, Jibo, and Volley.