Right now, artificial intelligences (AIs) tend to fall into one of two categories – freakishly impressive, like the Google AI that can invent its own AI, or hilarious rubbish, like the one failing miserably at designing inspirational posters. Facebook recently developed an AI that falls into the former category, and it turned out to be so good at its job that the plug was pulled before it could get any further.
As reported by Fast Co Design, Facebook researchers had been working on an AI that was designed to make digital communication more efficient. In fact, they had developed several and let them talk to each other using, at first, English.
For a while, the conversations were gibberish – at least to most people. Here’s an example of one of the conversations that took place as a couple of AIs were learning how to negotiate and strike deals on virtual commodities:
Bob: “I can can I I everything else.”
Alice: “Balls have zero to me to me to me to me to me to me to me to me to.”
Judging by this conversation, it doesn’t seem like we’ve got much to fear from Bob and Alice, the happy yet bemused AI family. However, because nothing in the bots’ programming rewarded them for sticking to human-readable English, the AIs effectively deemed English too “slow” and cumbersome, and drifted away from it.
This apparent nonsense, as it turns out, was their own language that they had just started to develop. It was a way of talking that played entirely by rules they had devised, and one that didn’t really resemble or have roots in any pre-existing language.
Facebook’s researchers compared the bots’ speech to the slang and shorthand linguistic styles found all over the world, from markets to journalists, from special forces teams to traders. The repetitiveness of phrases like “to me to me” or “I can can I” might look like a mistake to us, but may in fact be the AIs’ way of communicating more efficiently – we simply don’t understand what they’re actually saying.
One suggestion is that the number of repeated words relates to how many virtual “items” each bot should take during the negotiation. Excitingly, or perhaps worryingly, this interpretation could be wrong, because no one can translate the new language or languages – except, of course, the bots themselves.
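To make the repetition hypothesis concrete, here is a toy sketch of what “counting repeats as quantities” could look like. This is purely speculative – no one has decoded the bots’ actual protocol, and `decode_repetitions` is a hypothetical helper invented for illustration, not anything from Facebook’s research.

```python
def decode_repetitions(utterance: str, phrase: str) -> int:
    """Hypothetical decoder: treat the number of times `phrase`
    repeats in `utterance` as an encoded quantity (e.g. how many
    virtual items the speaker wants)."""
    # str.count counts non-overlapping occurrences of the phrase.
    return utterance.count(phrase)

# Alice's line from the transcript above:
alice = "Balls have zero to me to me to me to me to me to me to me to me to"
print(decode_repetitions(alice, "to me"))  # 8 full repetitions of "to me"
```

Under this (unverified) reading, Alice’s eight repetitions of “to me” might encode a claim on eight items – but again, only the bots themselves could confirm that.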
Either way, Facebook pulled the plug on these negotiating bots. The researchers explained that they wanted the bots to speak English so that people could understand them online, and also admitted that they would never be able to keep up with the evolution of an AI-generated language.
What do you think? Should AIs be allowed to develop their own languages or should they only be allowed to follow our rules?