roseembolism ([personal profile] roseembolism) wrote 2014-09-17 10:52 pm

A quite possibly triggery thought on the Three Laws of Robotics.

I came up with a rather unpleasant thought experiment in a discussion on Asimov's Three Laws of Robotics, explicating why I think they are fundamentally unethical.

The problem with the Three Laws is that they involve such high-level concepts that the robots would have to be sentient beings with human-level intelligence for the concepts to work. In which case, we're not really talking about programming; we're talking about brainwashing.

To distill the ethics of the Three Laws to their essence, let's change the target of the Laws. We'll reword them as follows:

1. A Negro may not injure a White or, through inaction, allow a White to come to harm.

2. A Negro must obey the orders given to it by Whites, except where such orders would conflict with the First Law.

3. A Negro must protect its own existence as long as such protection does not conflict with the First or Second Law.

Would you consider those laws ethical and moral? If not, why not? Bear in mind that the EXACT SAME arguments made for the necessity of those laws also apply equally well to other groups of humans. Or rather, those arguments are equally false. If you argue for the necessity of cruelly enslaving robots through brainwashing, then you are also arguing that any other potential group of "others" must by necessity be equally controlled.

[identity profile] mindstalk.livejournal.com 2014-09-18 08:40 pm (UTC)
Neither case describes Asimov's robots, which are clearly sapient, but are also built on a genuine understanding of how they work. They're not existing beings "cruelly brainwashed" into obeying the Laws; the Laws are built into their being. Or, as is sometimes said in the stories, the verbal laws are an approximation of the mathematics of standard positronic brains -- they're Laws as in the law of gravity, not the Ten Commandments. Robots want to protect and obey humans in the same way that humans like sugar and kittens and want to avoid shit and boredom.

John Sladek wrote _Tik-Tok_ on the premise that robots actually were "free-willed" beings with Asimov circuits constraining their behavior; the eponymous robot had faulty circuits. But this is pretty alien to Asimov's actual concept, in the same way that popular "Dyson spheres" aren't what Dyson actually described (an idea he lifted from Olaf Stapledon).

[identity profile] heron61.livejournal.com 2014-09-18 09:19 pm (UTC)
Except in some of the robot stories (most obviously "The Bicentennial Man", but also others) it seemed clear that at least some robots were fully sentient beings, which made these sorts of restrictions on their behavior strike me as somewhat sketchy. OTOH, many of the robots in Asimov's stories seem definitely not to be fully sapient, self-aware beings, and so struck me far more as clever tools than slaves.

[identity profile] roseembolism.livejournal.com 2014-09-20 11:28 pm (UTC)
"Robots want to protect and obey humans in the same way that humans like sugar and kittens and want to avoid shit and boredom."

Google "copraphilia". Also, I know humans who dislike sugar and kittens. And people have wildly different definitions of "boredom" and responses to same.

In short, those "Human Laws" aren't actually natural laws at all, but preferences based partially on social norms and training. Now, assume you could implant neural structures into people's brains so that everyone MUST love sugar, and have the exact same boredom response, personal preferences be damned. Would that be ethical?