A photo of a man and woman sitting next to each other. Both people are holding cell phones, but the woman is looking suspiciously at the man to her right while the man is looking at his phone.
“Human behaviors are a rich source of deception and trust cues,” said Xunyu Chen, assistant professor in the VCU Department of Information Systems, who is teaching artificial intelligence to pick up on these cues to determine if a person is lying. (Getty Images)

With a game show as his guide, VCU researcher uses AI to predict deception

Findings could be used to analyze human behaviors in high-stakes scenarios, such as presidential debates, business negotiations and court trials.


 Using data from a 2002 game show, a Virginia Commonwealth University researcher has taught a computer how to tell if you are lying.

“Human behaviors are a rich source of deception and trust cues,” said Xunyu Chen, assistant professor in the Department of Information Systems in VCU’s School of Business. “Utilizing [artificial intelligence methods], such as machine learning and deep learning, can better exploit these sources of information for decision-making.”

In one of the first papers to investigate high-stakes deception and trust quantitatively – “Trust and Deception with High Stakes: Evidence from the ‘Friend or Foe’ Dataset,” which appeared in a recent issue of Decision Support Systems – Chen and his team use a novel dataset derived from the American game show “Friend or Foe?” The show is based on the prisoner’s dilemma, a game theory scenario that explores how two people could benefit from cooperating, which is challenging to coordinate, or suffer from failing to do so.
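To make the incentive structure concrete, here is a minimal sketch of a prisoner's-dilemma-style payoff split loosely modeled on "Friend or Foe?" rules: cooperating players split the pot, a lone defector takes everything, and mutual defection leaves both with nothing. The pot amount and function names are illustrative, not taken from the show or the paper.

```python
POT = 1000  # hypothetical prize pool in dollars

def split_pot(choice_a, choice_b, pot=POT):
    """Return (payoff_a, payoff_b) for one round of a friend-or-foe game."""
    if choice_a == "friend" and choice_b == "friend":
        return pot / 2, pot / 2   # both cooperate: split the pot evenly
    if choice_a == "foe" and choice_b == "friend":
        return pot, 0             # A defects alone: A takes everything
    if choice_a == "friend" and choice_b == "foe":
        return 0, pot             # B defects alone: B takes everything
    return 0, 0                   # both defect: nobody wins anything

print(split_pot("friend", "friend"))  # -> (500.0, 500.0)
print(split_pot("foe", "friend"))     # -> (1000, 0)
print(split_pot("foe", "foe"))        # -> (0, 0)
```

The dilemma is visible in the payoffs: whatever the partner chooses, defecting never pays less than cooperating, yet mutual defection leaves both players worse off than mutual cooperation.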

A photo of a man wearing a black suit and white shirt from the chest up
Xunyu Chen, assistant professor in the VCU Department of Information Systems. (File photo)

Lab experiments, which have commonly been used to study trust and deception, have limitations in realism and generalizability. Compared with low-stakes fictitious cases, the high-stakes deception found in game shows demands greater cognitive resources for behavioral management. The significant gain or punishment attached to a high-stakes decision may also cause stronger emotional and behavioral variation in cues such as facial, verbal and movement fluctuations.

“We found multimodal behavioral indicators of deception and trust in high-stakes decision-making scenarios, which could be used to predict deception with high accuracy,” Chen said. He calls such a predictor an automated deception detector.
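As a rough illustration of the idea (not the paper's actual model), an automated deception detector turns each contestant's behavior into a feature vector of multimodal cues and classifies it. The sketch below uses invented features (gaze aversion, pitch variation, fidget rate), made-up training data, and a simple nearest-centroid rule.

```python
def centroid(vectors):
    """Component-wise mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def dist_sq(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

# Hypothetical training data: [gaze_aversion, pitch_variation, fidget_rate]
# per contestant, scaled to [0, 1]. Entirely invented for illustration.
deceptive = [[0.8, 0.7, 0.9], [0.7, 0.8, 0.8], [0.9, 0.6, 0.7]]
truthful  = [[0.2, 0.3, 0.1], [0.3, 0.2, 0.2], [0.1, 0.4, 0.3]]

c_dec, c_tru = centroid(deceptive), centroid(truthful)

def predict(features):
    """Label a new behavior vector by its nearest class centroid."""
    return ("deceptive" if dist_sq(features, c_dec) < dist_sq(features, c_tru)
            else "truthful")

print(predict([0.85, 0.75, 0.8]))   # -> deceptive
print(predict([0.25, 0.30, 0.2]))   # -> truthful
```

Real systems of this kind would extract such cues automatically from video and audio and use far richer machine learning or deep learning models, but the pipeline shape is the same: behaviors in, features out, a learned decision rule on top.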

This research extends, from a scientific and quantitative perspective, the understanding of deception and trust behaviors that can carry substantial consequences. Researchers and practitioners can use its findings to analyze human behavior in high-stakes scenarios, such as presidential debates, business negotiations and court trials, to predict deception and the protection of self-interest.