Playing Nicely with Humans: Amazon’s New Warehouse Robots Don’t Need a Cage – or Do They?

Amazon has unveiled its new fully autonomous warehouse robots, and there's some good news: they can play nicely with humans.

Previous iterations of the robotic workhorses required cages to keep their human counterparts from being bowled over, but the newest versions can navigate the warehouse safely without them.

While a more seamless integration between robots and humans probably lets the factory workers breathe a sigh of relief, it raises other questions. Is it a good idea to get more comfortable with robots rolling around our ankles? Experts are divided.

Amazon’s New Warehouse Robots Play Nicely with Humans

This week, Amazon revealed a look at its new warehouse robots, including Proteus and Cardinal.

Proteus is considered fully autonomous and can move around its human coworkers safely. CNET writes, "In a blog post, the company pointed out its 10-year track record of robot technology and its use in warehouse facilities. Unlike previous automated designs that raised safety concerns, Proteus is programmed to move around safely in the same physical spaces as humans. Amazon states that the robot is independent, and automatically does its work without interfering with employees. It can raise and haul GoCarts — the wheeled carts used to move packages — around the warehouse. Amazon hopes to reduce the amount of heavy lifting that employees do in a responsible, safe way."

Cardinal isn't quite as smart, but it might be saving backs in the near future. The robot is designed to sort and lift boxes weighing up to 50 lbs, sparing humans the heavy lifting.

But Amazon is putting fears of a robot uprising to bed – mostly.

They want worried humans to know that their new smart robots aren't going to take over the world, just make things more "collaborative" and safe than ever before.

AI – Humanity’s Greatest Threat, According to this Billionaire

Of course, one billionaire might say – "what exactly does that mean?"

Ethereum founder Vitalik Buterin has some serious reservations about the safety of artificial intelligence like that used by Proteus and other autonomous robots.

CELEB recently covered a dire warning issued by Buterin about what the future holds if we don't start taking AI seriously – and responsibly putting up boundaries.

"The idea of artificial intelligence (AI) rebelling against humanity and driving us to extinction is not a new concept. Since the days of Mary Shelley's Frankenstein, humanity has explored the potential of near-human existence and what risk it may pose to us. While Dr. Frankenstein's monster was not a robot out to enslave humanity, it was the first time we as a species began considering that our creations may become our nightmares.

In recent years, tech companies like Elon Musk's Tesla have been working on enhancing AI. Whether it's for better self-driving cars or autonomous robots like Tesla's Optimus, the technology is the fevered focus of some of the greatest minds of the 21st century.

But Ethereum founder Buterin warns that unrestrained growth and technology enhancement could spell humanity's doom.

AI theorist and writer Eliezer Yudkowsky wrote a paper about why researchers and developers aren't doing enough to safeguard against the potential for disaster from AGI (artificial general intelligence, or the ability for AI to learn anything a human can learn). MIRI (Machine Intelligence Research Institute) shared the paper.

Buterin retweeted it, adding, 'Unfriendly-AI risk continues to be probably the biggest thing that could seriously derail humanity's ascent to the stars over the next 1-2 centuries. Highly recommend more eyes on this problem.'

One Twitter user responded that World War III was a greater threat to humanity, to which Buterin replied, 'Nah, WW3 may kill 1-2b (mostly from food supply chain disruption) if it's really bad, it won't kill off humanity. A bad AI could truly kill off humanity for good.'

While 1-2 billion people killed is a staggering possibility, it's nothing compared to the risk that AGI poses to humanity as a whole, according to Buterin."

While that could be idle techbro musing, it could also be a timely warning. With robots now given more skills and learning capabilities than ever before, humanity could do worse than sit down and figure out where the safe boundaries lie – and how far we're willing to push the AI envelope.