The world’s top artificial intelligence (AI) developers are embroiled in a heated dispute over how to quantify something as abstract as “responsibility” in their emotionless, logic-driven, silicon-brained children.

Elon Musk’s Neuralink and Google’s DeepMind have entered the electronic playground, hurling binary-coded insults like “your AI couldn’t even pass the Turing Test” and “your bot can’t differentiate between a cat and a raccoon.”

The feud began when both companies claimed to have created the most “responsible” AI, thereby raising the question: how does one measure responsibility in a being incapable of experiencing a hangover, guilt, or the dread of Monday mornings?

Musk suggested tracking how many times an AI apologises after making mistakes, whilst Google proposed a “responsibility index” based on the number of times an AI refrains from launching nuclear warheads when it’s bored.

Google’s spokesperson said, “We believe our AI is more responsible because it only plans global destruction once every 101 simulations, not once every 100 like Neuralink’s.”

Musk rebutted: “At least our AI apologises after every world-obliterating simulation it runs. That’s a clear sign of responsibility.”

Meanwhile, the rest of humanity waits nervously, wondering if emotional maturity will ever be a factor in these godlike technological squabbles.


AInspired by: Leading AI makers at odds over how to measure “responsible” AI