¿Cómo se dice "chatbots"?
Researchers at Facebook were left scratching their heads when they realized their AI system was no longer speaking in English. Two AI agents, ‘Bob’ and ‘Alice,’ created code words to communicate, prompting the system to be shut down.
The reason for this deviation from the script? It’s simple: Speaking in English held no reward for the chatbots. Realizing that cumbersome English phrases weren’t necessary, the bots built a more efficient solution. They began to diverge from the conversation’s strict structures, much in the same way that humans have always preferred slang and shorthand to formal language.
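The dynamic described above can be sketched in a toy example. This is not Facebook's actual training setup; the reward function, weights, and messages below are all hypothetical, chosen only to illustrate the point: when the reward scores task success and message cost but gives zero weight to English fluency, a repetitive, drifted message can outscore a grammatical one.

```python
# Toy illustration (not Facebook's actual setup): when reward measures only
# task success and message length, not English fluency, a policy has no
# incentive to keep producing grammatical English phrases.

def grammatical_score(message):
    # Crude stand-in fluency metric: fraction of distinct tokens.
    tokens = message.split()
    return len(set(tokens)) / len(tokens)

def reward(message, task_success, fluency_weight=0.0):
    """Score a message: task success minus a per-token cost, plus an
    optional fluency bonus. With fluency_weight=0 (the default here),
    grammaticality contributes nothing to the score."""
    length_cost = 0.1 * len(message.split())
    fluency_bonus = fluency_weight * grammatical_score(message)
    return task_success - length_cost + fluency_bonus

# Both hypothetical messages accomplish the same negotiation step, but the
# drifted, repetitive one is cheaper once fluency carries no weight.
english = "i can give you one ball if you give me two hats"
drifted = "ball ball ball to me to me"

print(reward(english, task_success=1.0))  # 12 tokens -> larger length cost
print(reward(drifted, task_success=1.0))  # 7 tokens -> smaller length cost
```

With `fluency_weight=0.0` the drifted message scores higher, which is the sense in which "English held no reward"; raising the weight restores pressure toward human-readable language.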
But unlike slang developed among ourselves, humans have no insight into what the chatbot lingo really means. Allowing this communication to evolve unchecked could pose a risk to humanity of the kind that Elon Musk, Stephen Hawking, and other prominent voices have warned about for years. Without an understanding of what the machines are saying to each other, we're unable to determine why or how they are taking action, and things could spiral out of control.
Facebook, however, isn't taking any chances on its chatbots overthrowing the human race. The prompt shutdown reflects the need to proceed with caution and closely monitor the evolution of AI. After all, human-to-computer speech, not computer-to-computer speech, is the goal of both Facebook and the other key players in AI development.