A quite possibly triggery thought on the Three Laws of Robotics.
I came up with a rather unpleasant thought experiment in a discussion on Asimov's Three Laws of Robotics, explicating why I think they are fundamentally unethical.
The problem with the Three Laws is that they involve such high-level concepts that the robots would have to be sentient beings with human-level intelligence in order for the concepts to work. In which case, we're not really talking about programming, we're talking about brainwashing.
To distill the ethics of the Three Laws to their essence, let's change the target of the Laws. We'll change the wording like so:
1. A Negro may not injure a White or, through inaction, allow a White to come to harm.
2. A Negro must obey the orders given to it by Whites, except where such orders would conflict with the First Law.
3. A Negro must protect its own existence as long as such protection does not conflict with the First or Second Law.
no subject
In that and the excellent sequel (Neptune's Brood), robot cognition was based upon an analysis of human cognition (but not on uploads), and thus these robots were essentially human minds in artificial bodies.
However, I can also imagine robots that are not based on human cognition, but are merely able to brute-force language understanding and mobility/physical-environment understanding through massive processing. Such a being would not have any self-consciousness or emotions. I would not consider robots like this to be sentient beings, and I could definitely see something like Asimov's Three Laws being used for them.
no subject
Parenthetically, there is no real question about Asimov's robot architects: they specifically desired a work force they could keep under control. The "positronic" brain is an inherently fragile structure that one could demolish with a double-A battery or a little scuffing of feet on the carpet (sources of electrons). The Laws do cognitively what the robots' makeup does physically.
As someone else pointed out, the Laws are not properly about brainwashing, but about the in-born nature of their bearers (which, like human nature, is not invalidated simply by the existence of "malfunctioning" individuals). IMO the ethical question is whether it is right to create sentient yet deliberately inferior beings, with or without the proviso that said beings might lack the capacity to grasp and/or resent that inferiority. This seems to me to devolve rapidly into a semantic quagmire over what it means for A to be "more" sentient than B. Does breeding dogs and cats for domestic purposes, for instance, qualify?