roseembolism: (fhqwagads)
roseembolism ([personal profile] roseembolism) wrote, 2014-09-17 10:52 pm

A quite possibly triggering thought on the Three Laws of Robotics.

I came up with a rather unpleasant thought experiment in a discussion on Asimov's Three Laws of Robotics, explicating why I think they are fundamentally unethical.

The problem with the Three Laws is that they involve such high-level concepts that the robots have to be sentient beings with human-level intelligence in order for the concepts to work. In which case, we're not really talking about programming; we're talking about brainwashing.

To distill the ethics of the Three Laws to their essence, let's change the target of the Laws. We'll change the wording like so:

1. A Negro may not injure a White or, through inaction, allow a White to come to harm.

2. A Negro must obey the orders given to it by Whites, except where such orders would conflict with the First Law.

3. A Negro must protect its own existence as long as such protection does not conflict with the First or Second Law.

Would you consider those laws ethical and moral? If not, why not? Bear in mind that the EXACT SAME arguments made for the necessity of those laws also apply equally well to other groups of humans. Or rather, those arguments are equally false. If you argue for the necessity of cruelly enslaving robots using brainwashing, then you are also arguing that any other potential group of "others" must by necessity also be equally controlled.

[identity profile] heron61.livejournal.com 2014-09-18 06:12 am (UTC)(link)
Have you seen Charles Stross' Saturn's Children (which is all about this issue)?

In that and the excellent sequel (Neptune's Brood), robot cognition was based upon an analysis of human cognition (but not on uploads), and thus these robots were essentially human minds in artificial bodies.

However, I can also imagine robots that are not based on human cognition, but are merely able to brute-force language understanding and mobility/physical-environment understanding through massive processing. Such a being would not have any self-consciousness or emotions. I would not consider robots like this to be sentient beings, and I could definitely see something like Asimov's Three Laws being used for them.

[identity profile] mindstalk.livejournal.com 2014-09-18 08:40 pm (UTC)(link)
Neither case describes Asimov's robots, which are clearly sapient, but also based on a genuine understanding of how they work. They're not existing beings "cruelly brainwashed" into obeying the Laws; the Laws are built into their being. Or as is sometimes said in the stories, the verbal Laws are an approximation of the mathematics of standard positronic brains -- they're Laws as in the law of gravity, not the Ten Commandments. Robots want to protect and obey humans in the same way that humans like sugar and kittens and want to avoid shit and boredom.

John Sladek wrote _Tik-Tok_, on the premise that robots actually were "free-willed" beings with asimov circuits constraining their actual behavior; the eponymous robot had faulty circuits. But this is pretty alien to Asimov's actual concept, in the same way that popular "Dyson spheres" aren't what Dyson actually described (lifted from Olaf Stapledon).

[identity profile] heron61.livejournal.com 2014-09-18 09:19 pm (UTC)(link)
Except that in some of the robot stories (most obviously "The Bicentennial Man", but also others) it seemed clear that at least some robots were fully sentient beings, which means these sorts of restrictions on their behavior struck me as somewhat sketchy. OTOH, many of the robots in Asimov's stories seem definitely not to be fully sapient, self-aware beings, and so struck me far more as clever tools than as slaves.

[identity profile] roseembolism.livejournal.com 2014-09-20 11:28 pm (UTC)(link)
"Robots want to protect and obey humans in the same way that humans like sugar and kittens and want to avoid shit and boredom."

Google "copraphilia". Also, I know humans who dislike sugar and kittens. And people have wildly different definitions of "boredom" and responses to same.

In short, those "Human Laws" aren't actually natural laws at all, but preferences based partially on social norms and training. Now, assume you could implant neural structures into people's brains so that everyone MUST love sugar, and have the exact same boredom response, personal preferences be damned. Would that be ethical?

[identity profile] roseembolism.livejournal.com 2014-09-20 11:12 pm (UTC)(link)
Hmm. I'm thinking that the brute-force approach would be millions of different approaches to a given "word" and equally millions of different encoded responses? As in, "If the object designated as 'human' approaches the object designated as 'manhole', alter the human's vector so it doesn't intersect [goto the program sections on how to alter human vectors]"? I suppose that could work, though robots in that case probably couldn't make inferences or deal at all with novel situations. And in that case, the Three Laws wouldn't be programs so much as design goals on the part of the programmers. Assuming I've read your meaning properly.
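Purely as an illustration of what I mean by one of those hand-encoded condition-and-response pairs, here's a rough Python sketch. Every name in it (TrackedObject, will_intersect, alter_human_vector) is made up for the example, not taken from any real robot system:

    # Illustrative sketch only: one hand-encoded rule out of the "millions",
    # with no general inference behind it. All names here are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class TrackedObject:
        label: str                      # e.g. "human" or "manhole"
        position: tuple[float, float]   # (x, y) in metres
        velocity: tuple[float, float]   # (vx, vy) in metres per second

    def will_intersect(mover: TrackedObject, hazard: TrackedObject,
                       radius: float = 0.5, horizon: float = 3.0,
                       step: float = 0.5) -> bool:
        """Crude linear extrapolation: does the mover's path pass near the hazard?"""
        t = 0.0
        while t <= horizon:
            x = mover.position[0] + mover.velocity[0] * t
            y = mover.position[1] + mover.velocity[1] * t
            if (x - hazard.position[0]) ** 2 + (y - hazard.position[1]) ** 2 < radius ** 2:
                return True
            t += step
        return False

    def alter_human_vector(human: TrackedObject, away_from: TrackedObject) -> None:
        # Placeholder for the canned response subroutine (block the path, warn, etc.)
        print(f"Redirect human at {human.position} away from hazard at {away_from.position}")

    def rule_human_approaches_manhole(objects: list[TrackedObject]) -> None:
        """One encoded condition->response pair: a human's vector intersects a manhole."""
        humans = [o for o in objects if o.label == "human"]
        manholes = [o for o in objects if o.label == "manhole"]
        for h in humans:
            for m in manholes:
                if will_intersect(h, m):
                    alter_human_vector(h, away_from=m)

    # Example: a human walking straight toward a manhole two metres away.
    rule_human_approaches_manhole([
        TrackedObject("human", (0.0, 0.0), (1.0, 0.0)),
        TrackedObject("manhole", (2.0, 0.0), (0.0, 0.0)),
    ])

Of course, you'd need a separate hand-written rule like this for every hazard and every phrasing, which is exactly why such a robot couldn't handle anything it wasn't explicitly programmed for.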

[identity profile] haamel.livejournal.com 2014-09-21 05:07 pm (UTC)(link)
I've always been slightly mystified as to the stature the Three Laws have, not just in the fannish sphere, but also with the lay public. Asimov himself was at great pains to show how the Laws were wholly insufficient to prevent, among other things, robots enslaving humanity in accordance with the "Zeroth" Law. The huge deal Asimov makes about the Laws being indelibly built into the positronic brain structure is a great example of what I might call "Clarke's One-Third Law" in action: sufficiently advanced magic that is indistinguishable from technology. And as such, it's rather difficult to draw realistically meaningful conclusions from.

Parenthetically, there is no real question about Asimov's robot architects: they specifically desired a work force they could keep under control. The "positronic" brain is an inherently fragile structure that one could demolish with a double-A battery or a little foot-scuffing on the carpet (both sources of electrons). The Laws do cognitively what the robots' makeup does physically.

As someone else pointed out, the Laws are not properly about brainwashing, but about the in-born nature of their bearers (which, like human nature, is not invalidated simply by the existence of "malfunctioning" individuals). IMO the ethical question is whether it is right to create sentient yet deliberately inferior beings, with or without the proviso that said beings might lack the capacity to grasp and/or resent that inferiority. This seems to me to devolve rapidly into a semantic quagmire over what it means for A to be "more" sentient than B. Does breeding dogs and cats for domestic purposes, for instance, qualify?

[identity profile] roseembolism.livejournal.com 2014-09-23 02:17 pm (UTC)(link)
I agree that the point was to have a sentient race that logic games could be played with. But I don't think that lets Asimov off the hook for the brainwashing question. Leaving aside that I consider "human nature" to be a very diffuse category (pretty much any attempt to define it narrowly runs into the "all those people over there are malfunctioning" problem), freedom of choice allows people to choose whether or not to obey imperatives. In that case, if people can choose not to eat meat, or not to have sex, then can we really call human nature innate? If we could force humans to obey rigid sets of directives, would calling them "natural" and "innate" be any less unethical?

And as far as "limited beings" being created with innate constraints goes: well, what if, through neural engineering, we could create humans who have "Three Laws" equivalents from birth? Would that be any different from an ethics perspective?

[identity profile] haamel.livejournal.com 2014-10-02 08:46 pm (UTC)(link)
On one level, given a choice between compelling obedience, and compelling obedience AND inflicting suffering, one would have to say that compelling obedience without suffering would be less unethical. This, I suppose, is the basic difference between "Brave New World" and "1984". Stated differently: if there's an unpleasant task that *must* be done, one could justify utilizing an agent which will experience less discomfort during the task -- the question becomes how then to ascertain "necessity".

A being that had a human(-like?) body but a circumscribed consciousness is, in my view, not properly "human". I would draw a distinction between reducing a pre-existing human to such a state (such as via brainwashing) and growing such a being from scratch -- the former being more troubling than the latter... For me, this has something to do with the "potential" of a given being versus how that potential is allowed to express itself. The domestic dog, for instance, comes both in breeds that nobly adapt its wolf origins (such as herding dogs) and in breeds that subvert that origin for what I call frivolous aesthetic reasons. All that being said, it could also be less immoral to keep a species around in acceptable form than to cause its extinction with its characteristics intact.