Beware the silver-tongued devils. Liars are everywhere, and although the motivations for fibbing are complex and varied, a successful lie is rarely much fun for its target.
Fear not, feeble humans. A team of researchers, led by Stephan Ludwig, a senior lecturer in marketing communications at the University of Westminster, has come to the rescue. According to the Telegraph, they have developed an algorithm that picks out likely lies in computer-mediated communication. This means that spam emails, shifty conversationalists on messaging apps, and even frivolous flirters on dating websites could potentially be caught out by this digitized defense mechanism.
The authors, who developed their algorithm at City University London, say that it can “assess the ability of word use (micro-level), message development (macro-level), and intertextual exchange cues (meta-level) to detect severe disruption by business partners,” whatever that particular business might be. In short, textual lies can be picked up based on how someone says something, why they may be saying it, and how the conversation develops over time.
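The study itself isn’t about code, but the three levels can be pictured as a simple grouping of cue types. In this minimal sketch, the level names follow the quote above, while every individual feature name is a hypothetical stand-in rather than something taken from the paper:

```python
# Purely illustrative grouping of the three cue levels described above.
# The level names follow the paper's terminology; the individual feature
# names are hypothetical stand-ins, not taken from the study.
CUE_LEVELS = {
    "micro": ["first_person_pronoun_rate", "adjective_density"],   # word use
    "macro": ["claim_to_evidence_links", "message_length_trend"],  # message development
    "meta":  ["structural_mirroring", "reply_timing_pattern"],     # intertextual exchange
}

for level, features in CUE_LEVELS.items():
    print(f"{level:5s} -> {', '.join(features)}")
```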
Writing in their study, which is due to appear in the Journal of Management Information Systems, the team say that there are “three established dimensions of deception severity – falsification, concealment, and equivocation.”
The first, falsification, involves communicating information the liar knows to be false, and only the most gullible will fall for the fib at the first hurdle. The second, concealment, normally involves being intentionally evasive, whereas the third, equivocation, involves the use of vague terms or linguistic misdirection.
Their algorithm was fed tens of thousands of emails containing truthful and deceptive content, and at the end of the process, it was able to detect all three forms of deception in subsequent emails far better than a human normally could. A previous study revealed that people manage to spot lies like this just 54 percent of the time – only slightly better than chance – whereas this new algorithm detects an untruth 70 percent of the time.
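The team haven’t published their model, but the general recipe – train a classifier on emails labelled truthful or deceptive, then test it on messages it hasn’t seen – can be sketched in a few lines of Python. Everything below (the toy emails, the made-up labels, and the choice of a bag-of-words logistic regression) is an illustrative assumption, not the researchers’ actual system:

```python
# Minimal sketch of the general approach: learn from labelled emails, then
# measure accuracy on held-out ones. Toy data and model choice are assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

emails = [
    "I reviewed the figures myself and I attach the full report.",
    "The numbers are broadly in line with what one might expect.",
    "We know this happened because the amazing quarterly trend proves it.",
    "I can confirm I sent the invoice on Monday.",
    "Certain unforeseen circumstances affected the relevant deliverables.",
    "I missed the deadline and I take responsibility for it.",
]
labels = [0, 1, 1, 0, 1, 0]  # 0 = truthful, 1 = deceptive (made-up toy labels)

train_x, test_x, train_y, test_y = train_test_split(
    emails, labels, test_size=0.5, random_state=0, stratify=labels
)

# Bag-of-words features + logistic regression: a simple stand-in for the
# micro/macro/meta cue set the paper describes.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_x, train_y)

accuracy = model.score(test_x, test_y)
print(f"held-out accuracy: {accuracy:.0%}")  # the real system reportedly manages ~70%
```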
Their analysis reveals that liars tend not to use many personal pronouns – “I,” “me,” and “mine,” for example – and instead pepper their messages with adjectives. The point of this is probably to obscure what they are actually saying while distancing themselves from the information they are conveying.
Connecting entirely separate pieces of information with phrases like “we know this happened because” was also found to be a common tactic of liars, who seek to fabricate links and patterns where none exist.
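Word-level cues like these are straightforward to count. The snippet below is a toy illustration of that idea, not the paper’s feature set: the pronoun list and link-forging phrases are assumptions, and measuring adjective density would additionally need a part-of-speech tagger, which is left out for brevity:

```python
import re

# Toy counter for the lexical cues described above. The word lists are
# assumptions; a real system would also need a part-of-speech tagger to
# measure adjective density.
FIRST_PERSON = {"i", "me", "my", "mine"}
LINKING_PHRASES = ["we know this happened because", "which proves that"]

def lexical_cues(message: str) -> dict:
    text = message.lower()
    words = re.findall(r"[a-z']+", text)
    pronoun_rate = sum(w in FIRST_PERSON for w in words) / max(len(words), 1)
    linking_hits = sum(text.count(phrase) for phrase in LINKING_PHRASES)
    return {"pronoun_rate": round(pronoun_rate, 2), "linking_hits": linking_hits}

print(lexical_cues("We know this happened because the remarkable pattern is obvious."))
print(lexical_cues("I checked my notes and I think I made the mistake."))
```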
Deceptive mischief-makers also tend to mirror the sentence structure of the person they’re communicating with. This is the written equivalent of mirroring the body language of someone who is physically present – it builds an empathetic connection, which can be exploited for nefarious ends if one of the participants has malevolent intent.
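One crude way to quantify that kind of mirroring – again an illustrative assumption rather than the team’s method – is to check how closely the sentence-length pattern of a reply tracks the message it responds to; a real system would compare richer syntactic features:

```python
import re

# Crude proxy for structural mirroring, for illustration only: compare the
# sentence-length profiles of two messages. Identical profiles score 1.0.
def sentence_lengths(message: str) -> list[int]:
    sentences = [s for s in re.split(r"[.!?]+", message) if s.strip()]
    return [len(s.split()) for s in sentences]

def mirroring_score(a: str, b: str) -> float:
    pairs = list(zip(sentence_lengths(a), sentence_lengths(b)))
    if not pairs:
        return 0.0
    diffs = [abs(x - y) / max(x, y) for x, y in pairs]
    return 1.0 - sum(diffs) / len(diffs)

original = "Thanks for the update. Can you send the invoice today?"
reply = "Thanks for your message. I will send the invoice today!"
print(f"mirroring score: {mirroring_score(original, reply):.2f}")
```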
Although they wish to increase its accuracy before deploying it properly, the team note that their algorithm could be used to produce a “possibly lying” score when assessing academic work, political discourse, or financial statements. The higher the score, the more likely a lie is hiding within the text.
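What such a score might look like is easy to imagine: fold a handful of per-message cue values into a single number between 0 and 1. The cue names, weights, and offset below are all hypothetical – the team’s actual scoring formula isn’t public:

```python
import math

# Hypothetical "possibly lying" score: a weighted sum of cue values squashed
# through a logistic function. Names, weights, and offset are assumptions.
def possibly_lying_score(cues: dict[str, float]) -> float:
    weights = {"low_pronoun_use": 1.2, "adjective_density": 0.8,
               "fabricated_links": 1.5, "structural_mirroring": 0.6}
    raw = sum(weights[name] * value for name, value in cues.items())
    return 1.0 / (1.0 + math.exp(-(raw - 1.5)))  # offset of 1.5 is arbitrary

score = possibly_lying_score({"low_pronoun_use": 0.9, "adjective_density": 0.4,
                              "fabricated_links": 1.0, "structural_mirroring": 0.7})
print(f"possibly lying: {score:.0%}")
```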
So perhaps the future of lie detection rests not with the notoriously unreliable polygraph, but with this digital arbiter. After all, it does seem to be remarkably good at its job, although there is a chance that this is exactly what the researchers want you to think.