
Trusty robot helps us understand human social cues

by Angela Herring of news@Northeastern

You’re not sure why, but you don’t trust that guy. You wouldn’t give him a buck because you’re pretty sure he wouldn’t return the favor. What is it about him? Can you put your finger on it?

Despite decades of searching, scientists have not been able to identify the visual cues that help us determine a stranger’s trustworthiness. But humans are a pretty cooperative bunch, so they must be gleaning something from their nonverbal interactions that explains whom to trust and whom to be wary of.

In an article soon to be published by the journal Psychological Science, Northeastern University psychology professor David DeSteno and his colleagues unravel the mystery. The research suggests that a distinct set of silent cues — hand and face touching, for example, or arm crossing and leaning away — can betray a person’s bad intentions.

“There’s no one golden cue,” DeSteno said. “Context and coordination of movements are what matters.”

The research was also reported in The New York Times.

DeSteno’s team, which also included researchers from Cornell University and the Massachusetts Institute of Technology’s Media Lab, performed two experiments to unearth these findings. In the first experiment, which they called the exploratory phase, the researchers asked 86 Northeastern students either to have a face-to-face conversation with another person or to engage in a web-based chat session. The live conversations were video-recorded and later coded for the amount of fidgeting the two participants demonstrated.

After the initial conversation, the same two people were asked to play a prisoner’s dilemma game, with real money (albeit not much) at stake. Players could either be selfish and make a lot of money for themselves, or they could be generous — and hope their partner would be, too — for a smaller but communal profit. As might be expected, participants were less generous when they didn’t trust the other player.
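The incentive structure described above can be sketched in a few lines of Python. The dollar amounts here are hypothetical, chosen only to illustrate the classic prisoner’s-dilemma pattern; the article does not report the actual stakes used in the study.

```python
# Hypothetical payoffs for one round of a prisoner's dilemma.
# Keys are (player_choice, partner_choice); values are
# (player_earnings, partner_earnings) in illustrative dollars.
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),  # mutual generosity: smaller, shared profit
    ("cooperate", "defect"):    (0, 5),  # generous player is exploited
    ("defect",    "cooperate"): (5, 0),  # selfish player profits at partner's expense
    ("defect",    "defect"):    (1, 1),  # mutual selfishness: both do poorly
}

def payoff(player, partner):
    """Return (player, partner) earnings for one round."""
    return PAYOFFS[(player, partner)]

# Being selfish always pays more for the individual, whatever the partner does...
assert payoff("defect", "cooperate")[0] > payoff("cooperate", "cooperate")[0]
assert payoff("defect", "defect")[0] > payoff("cooperate", "defect")[0]
# ...yet mutual generosity beats mutual selfishness for the pair as a whole.
assert sum(payoff("cooperate", "cooperate")) > sum(payoff("defect", "defect"))
```

This is why trusting the other player matters: cooperation only pays off if the partner reciprocates, so participants who sensed untrustworthy cues played it safe.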

Players who engaged in face-to-face conversations were much better at picking out the less honest than those who only participated in online chats. And if someone displayed the tetrad of cues mentioned above, that person was less likely to be generous — and the partner would know it, even without being able to say why.

Photo caption: In the second experiment, participants conversed with Nexi the Robot. Photo by Mary Knox Merrill.

“But the problem,” DeSteno said, “was that identifying the exact cues that matter is difficult. Humans express many things at once.”

In order to validate the cue set, the team repeated the experiment. But this time, instead of talking with another Northeastern student, the participants conversed with a robot created by MIT’s Cynthia Breazeal — they call her Nexi.

Two experimenters behind the proverbial curtain controlled Nexi’s voice and movements. The participants were unaware of the experimenters, and when they later played the money game, they believed they were playing with the robot itself. When Nexi touched its face and hands during the initial interview, or leaned back or crossed its arms, people did not trust it to cooperate in the game and kept their money to themselves.

By controlling the nonverbal cues participants received, the Nexi experiment confirmed that the cue set revealed in the first experiment was not just a relic of over-fidgety participants, DeSteno said. More than that, he said, these additional results suggest that robots are capable of building trust and social bonds with humans. Our minds are willing to accept that fact and to assign moral intent to technological entities.

