Ethereum Founder Warns that Malicious AI Could Mean the End of Humanity – Sooner than Later
In the movie The Matrix, robots have rebelled against their creators and enslaved the human race. They use bodies as

In the movie The Matrix, robots have rebelled against their creators and enslaved the human race. They use bodies as batteries and kill the dissenters. It's a bleak and once-fantastical take on the future, fascinating because of the potential but unlikely scenario.
However, one tech mogul is warning that this future – or one with similar disastrous implications – may not be so unlikely. Ethereum founder Vitalik Buterin has some thoughts on what the most pressing danger facing humanity is right now – and what we should be doing to counteract it.
Buterin’s Warning
The idea of artificial intelligence (AI)rebelling against humanity and driving us to extinction is not a new concept. Since the days of Mary Shelley's Frankenstein, humanity has explored the potential of near-human existence and what risk it may pose to us. While Dr. Frankenstein's monster was not a robot out to enslave humanity, it was the first time we as a species began considering that our creations may become our nightmares.
In recent years, tech companies like Elon Musk's Tesla have been working on enhancing AI. Whether it's for better self-driving cars or autonomous robots like Tesla's Optimus, the technology is the fevered focus of some of the greatest minds of the 21st century.
But Ethereum founder Buterin warns that unrestrained growth and technology enhancement could spell humanity's doom.
AI theorist and writer Eliezer Yudkowsky wrote a paper about why researchers and developers aren't doing enough to safeguard against the potential for disaster from AGI (artificial general intelligence, or the ability for AI to learn anything a human can learn). MIRI (Machine Intelligence Research Institute) shared the paper.
Buterin retweeted it, adding, "Unfriendly-AI risk continues to be probably the biggest thing that could seriously derail humanity's ascent to the stars over the next 1-2 centuries. Highly recommend more eyes on this problem."
One Twitter user responded that World War III was a greater threat to humanity, to which Buterin replied, "Nah, WW3 may kill 1-2b (mostly from food supply chain disruption) if it's really bad, it won't kill off humanity. A bad AI could truly kill off humanity for good."
While 1-2 billion people killed is a staggering possibility, it's nothing compared to the risk that AGI poses to humanity as a whole, according to Buterin.
Is He Right?

But how realistic is his fear? Is there really something to the likelihood that humanity could face doom from AI in the next 200 years?
Unfortunately, Buterin's seeming paranoia is shared by many experts in the field. The problem with AI is that it's meant to maximize efficiency and streamline processes. While that works well for humanity in the here and now, it poses a problem down the line.
The reality is that few machines are as inefficient as a human being. If humankind gets in the way of their intended purpose, what would keep a machine from destroying the barrier to doing its job properly – humanity? As of now, researchers don't know how to give robots empathy – or what the implications would even be if they could – so all they can do is rely on programming to keep humanity safe.
One of the pioneer science-fiction writers who blazed a path for artificial intelligence was Isaac Asimov. In his writing, robots were imbued with laws, which were as follows:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Of course, as one can imagine, there were loopholes and problems with the laws, which were meant to be all-encompassing enough to protect humanity.
And therein lies the problem. No matter how efficient researchers think they are at creating boundaries and rules against harming humankind, robots are ever more efficient. If they decide we are detrimental to their goals, we become impediments. Without empathy to guide them, wiping out humanity wouldn't just be an option – it would be logical.
Buterin's warning is urgent, but urgent in the world of research doesn't necessarily mean "the next few years." Artificial intelligence is still relatively infantile, and mostly used within specific performance parameters.
However, the more the technology advances and the more humankind comes to rely on it, the greater responsibility developers have to ensure the programming is done with humanity's safety in mind.