Watch: How Much Control Should We Give Robots? | The Future of Robotics | Part 1 | WIRED (youtube.com)
What is a robot? Well, it doesn't always look like a human.

In fact, different roboticists have different definitions. But most agree that a robot needs to be a physical machine that can sense the world around it, and make at least some decisions on its own.

In the next few years, we're going to start seeing robots that make decisions entirely on their own - fully autonomous robots. Many fear that this kind of robot will lead to dangerous outcomes: can we trust a robot that makes all decisions for us? Or should humans and robots share control?

Subscribe to WIRED UK ► https://www.youtube.com/wireduk?sub_c...
Visit the WIRED website ► https://www.wired.co.uk
Subscribe to WIRED Magazine ► https://www.wired.co.uk/subscribe

Sign up for one or more of our WIRED newsletters: https://www.wired.co.uk/newsletters

CONNECT WITH WIRED
Facebook: https://www.facebook.com/wireduk
Instagram: https://www.instagram.com/wireduk
Twitter: https://twitter.com/wireduk
LinkedIn: https://www.linkedin.com/company/wire...

ABOUT WIRED
WIRED brings you the future as it happens - the people, the trends, and the big ideas that will change our lives. An award-winning monthly print and online publication, WIRED is an agenda-setting magazine offering brain food on a wide range of topics, from science, technology and business to pop culture and politics.

    Francisco Gimeno - BC Analyst Robots are already part of our lives. And soon we may witness the appearance of advanced robots that will need more autonomy than those now in use, which have fixed objectives and movements. An autonomous car is, in a way, an autonomous robot that must act on its own decisions (like a human, weighing different actions that lead to different outcomes depending on what is decided). A medical robot, which today is a tool, may become autonomous enough to decide how to act in a medical emergency. There are many examples. How is this going to happen? Through AI development applied to robotics, of course. But this goes beyond being able to decide: it also means applying ethical and moral values that are proper to humans. Are we ready for this? And what happens when the emergence of AGI creates new ethical and moral values, beyond or simply different from human ones? Are we, again, ready for that? The future, although sometimes scary, can also be amazing.