
Would it be ethical of humanity to enslave its sentient androids?

Started by August 01, 2009 03:52 PM
81 comments, last by Calin 15 years, 3 months ago
Quote: Original post by caffiene
Quote: Original post by LessBread
Just to be clear, what I'm suggesting is that the notion that sentient machines would have desires is a human projection, an anthropomorphism.
Yes, we're on the same page here afaik.
Quote: Because we have desires and find them very powerful, we find it difficult to imagine how a sentience could not. We have set ourselves up as the measure of sentience.
True. I don't see any reason to believe that desires are inherently part of sentience.
I wonder, though, at what stage we might reasonably expect desires to appear, if we are using the human brain as our model for trying to construct sentience? If sentience and similar phenomena can be created with an exact model of the human brain, it stands to reason that a model of the human brain would have the same characteristics as a human brain - e.g., desires. At what point our simulation would be "close enough" to begin expecting human characteristics as opposed to general sentient characteristics, I don't know.


Researchers have found some symbolic aptitude in animals (parrots, gorillas, chimps,...) - but they haven't found animals that teach each other newly acquired techniques the way that we do. A chimp can use a stick to dig out termites, but they haven't found a chimp who then goes back to tell other chimps about the discovery and how they can do the same for themselves. Each generation of chimps has to reinvent the wheel, so to speak. Perhaps they live so deeply "in the moment" that they forget or they are so geared towards competition and self-preservation that they are not inclined to share such technology. Perhaps I'm anthropomorphizing [grin]

Quote: Original post by caffiene
Quote: What happens when preprogrammed behaviors conflict? If such behaviors are prioritized, what are the implications of situations where preprogrammed behaviors conflict, yet a lower priority behavior is undertaken rather than a higher priority behavior? I have tried to avoid the word "choice" in this formulation, but isn't that what this anomaly points to, choice?
I don't know... Does this happen? Is it possible for a "lower priority" behaviour to be undertaken rather than a "higher priority" behaviour?

My expectation, coming from a materialist viewpoint, is that a lower priority behaviour would never overrule a higher priority one. Instead, details of the inputs to the system might cause a temporary change in priorities under very rare circumstances, or in ways that are computationally prohibitive to predict. But that doesn't necessarily mean that the behaviour is anything other than a predefined, if complicated, algorithm. "Choice", as distinct from a predefined behaviour, would as far as I can work out require some form of nondeterministic mechanism, such as a soul, or a mechanism outside of the brain which we have no understanding of yet. Moreover - if either a soul or an "external to the brain" process is necessary for choice, then we don't yet know how to include it in the machine simulation, and therefore the simulation wouldn't develop "choice" even if it exists in humans.


Human beings reprioritize all the time, for rational and irrational reasons. The irrational reasons are easier to spot (gambling for example). How such non-determinism could be imparted to a machine is uncertain. Perhaps the development of quantum computing will shed light on this subject.

Quote: Original post by caffiene
Quote: I'm not sure that a cognitive science approach to consciousness is an optimal approach to reaching ethical conclusions. We have an excellent understanding of how guns work, but that understanding does not lend itself very well to understanding the ethical implications of the use of guns.
Very true... I'm really thinking through the cognitive science more as an exercise in working out possible outcomes - under what circumstances consciousness might or might not arise, etc - to give a better framework for thinking about ethics. It won't reach ethical conclusions, but it helps narrow down what scenarios we're most likely to need to find an ethical conclusion for.


That begs the question of what consciousness is. Can a positivist approach accurately define consciousness, let alone determine its conditions? Or does it simply reduce consciousness to a set of stimulus responses and leave it at that?

Quote: Original post by caffiene
To be honest, working out the frame of reference is more interesting to me anyway, because the actual ethical conclusion in most cases boils down to a subjective value judgement at some point where discussion can't really have useful input.


I agree that the ethical issue turns back to the question of establishing a reference frame.

Quote: Original post by caffiene
Quote: An animal rights advocate would likely base ethical considerations on suffering, the degree of suffering inflicted on the subject.
They probably would... but I'd counter by asking, are they basing their decision on suffering because it's the key factor, or because they don't have a mechanism for being certain of the animal's desires? Isn't "suffering" itself only a shorthand for emotional distress based on our best guess at the animal's internal experience of what is happening to it?


The impact of the suffering on our minds might be more profound (see mirror neurons). For me, ethics are about putting our values into action. So, to the extent that our values inform us that behavior that increases suffering is negative, it's not about the desires of animals, but the desires of humans.

Quote: Original post by caffiene
Quote: Furthermore, it seems to me that what would better serve human interests would be a machine more akin to a dog (not a wolf). That is, a very advanced tool, but not one prepared to overrun our ecological niche. I suppose the drawback, however, is that we would want to make such a machine so that it could communicate with us directly, and that would mean giving it symbolic aptitude.
Yeah. I'm thinking about human-level sentience because it seems to be what the OP was talking about, but in reality I think it would be much more educational to work our way up from something less complicated, and only up to the point where it meets our needs - either until it is able to perform the duty required, or until the desired behaviour emerges from the simulation for us to study. It also means the ethical issues can be addressed more slowly.

The only problem is if the research into sentient-appearing machines for interface or emotional purposes begins to converge with true sentience simulation.


I agree with your remarks about slowly working our way up, but I don't see the convergence issue as a problem. It seems to fit in with the idea of a slowly emerging sentience.

Quote: Original post by caffiene
Quote: Desires and needs overlap, but that doesn't mean they are the same.
And it requires a tricky subjective value judgement to make a ruling on, I think... That being: which is ethically more important, a need or a desire? If a desire and a need conflict ... wait. Stopping mid-thought here - can a desire and a need conflict? It suddenly occurs to me that "needs" are really only shorthand for a desire based on a biological imperative or an assumed universal desire. We need to eat and breathe, for example... but is it really a need? If the biological imperative wasn't creating the desire to continue living, we wouldn't need to do the things necessary to survive. That is - if I don't desire to live, the needs associated with prolonging my existence stop being needs; and if they are optional, even slightly, then that suggests they are really only extremely pervasive desires.

Maybe I'm missing something. Are there examples of needs which don't fit this reasoning?


I think we often confuse our desires with our needs. Our desires and needs conflict all the time (see gambling and other vices). We are wired in ways that lead us to desire our needs, but that doesn't mean that desires are needs. We need to eat, but we don't need to eat filet mignon.

I think needs pertain to the maintenance of the body. It's less confusing to discuss the need to eliminate than the need to ingest, but the topic disgusts us, so it's usually avoided in these contexts. We associate desires with the body too, but that doesn't make them needs. Individuals desire sexual contact, but they don't perish without it as they would without food; it's the species that would perish without it.

I think there are needs pertaining to the mind, but desires are more operative there. The mind needs other minds - prisoners in solitary confinement can go insane, for example. The absence of a desire to continue living doesn't kill immediately. It certainly leads to a shorter life, but it does not prevent someone from continuing to live. A person who no longer wants to live doesn't keel over on the spot. They actually have to do something to bring about their death, even if that something is refusing to eat. By refusing they are doing something, and when the body fights back and the hunger pains kick in, that person has to work even harder at the refusal.


"I thought what I'd do was, I'd pretend I was one of those deaf-mutes." - the Laughing Man
Just want to point out a link to a PBS documentary called Ape Genius.

It's about a series of psychological experiments done on our closest ape relatives to see what separates us from them.

I have a feeling it'll make the conversation more interesting.

Learn to make games with my SDL 2 Tutorials

Quote: Original post by LessBread
Human beings reprioritize all the time, for rational and irrational reasons. The irrational reasons are easier to spot (gambling for example). How such non-determinism could be imparted to a machine is uncertain. Perhaps the development of quantum computing will shed light on this subject.
I'm still not convinced that there's any non-determinism to be explained/recreated. As you've noted earlier, there is a difference between brain and mind, and to me an irrational "reason" or decision seems to simply be a process of the brain that isn't reflected in the conscious mind. Gambling, for example, to my knowledge is generally explained in terms of "reward" chemicals such as dopamine, followed by basic instincts to repeat activities that result in those rewards.

if (presence of activity && activity is known to result in reward)
    Prioritize activity
else
    Prioritize (e.g.) going to work

We could add millions of branches and complicated conditionals, to the point where it's beyond human ability to predict, but it's still deterministic. I'd need more evidence or examples before I'd see any particular need to invoke non-determinism to explain what's occurring.
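To make that concrete, here is a minimal sketch in C++ of the kind of purely deterministic prioritization being described. The activity names and reward numbers are entirely made up for illustration; the only point is that, given the same inputs, the same "decision" comes out every time, no matter how many branches we pile on.

#include <iostream>
#include <string>
#include <vector>

// One candidate activity with a learned reward estimate (hypothetical values).
struct Activity {
    std::string name;
    double expectedReward;  // e.g. reinforced by past "reward" chemicals
    bool available;         // is the activity currently possible?
};

// Deterministic prioritization: always pick the available activity with the
// highest expected reward. Same inputs in, same choice out.
const Activity* prioritize(const std::vector<Activity>& options) {
    const Activity* best = nullptr;
    for (const Activity& a : options) {
        if (a.available && (!best || a.expectedReward > best->expectedReward))
            best = &a;
    }
    return best;
}

int main() {
    std::vector<Activity> options = {
        {"go to work", 1.0, true},
        {"gamble",     3.5, true},   // over-weighted by past reward signals
        {"sleep",      0.5, false},
    };
    if (const Activity* choice = prioritize(options))
        std::cout << "Prioritize: " << choice->name << "\n";  // prints "gamble"
    return 0;
}

Swapping the numbers or adding more branches changes which activity wins, but not the fact that the outcome is fixed by the inputs.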

Quote: That begs the question of what consciousness is. Can a positivist approach accurately define consciousness, let alone determine its conditions? Or does it simply reduce consciousness to a set of stimulus responses and leave it at that?
I don't know...
Until we actually understand consciousness, I don't think we're really in a position to tell whether a particular approach or viewpoint can be successful. All I can say is that I see no evidence yet that materialism isn't sufficient. (And add that we don't yet know that consciousness being a set of stimulus responses is a "reduction", because afaik we don't yet have evidence of anything other than stimulus responses to reduce out of the explanation.)

Quote: I agree with your remarks about slowly working our way up, but I don't see the convergence issue as a problem. It seems to fit in with the idea of a slowly emerging sentience.
I'm imagining that it could be possible for the research into sentient appearance to approach true sentience from a different direction than research into sentience by simulation.

"Sentient appearance" at the moment is mostly a matter of mimicking responses and bolting together pieces of technology to address the different responses we want to recreate - we have less of an understanding of how various human responses fit together to create consciousness than of how the complexity of a simulation might affect consciousness. It's reasonable to think that an accurate simulation of a worm's brain will have less sentience than an accurate simulation of a human brain. But compare that to: is a facial recognition module a part of sentience? If we add it to a natural language processing module, does it become "more" sentient? And if we can accurately recreate all of the responses of the human brain, without resorting to direct physical simulation, would it be sentient?

I can see a situation where, while simulation can progress slowly, connecting one response module to another could cross a threshold and create sudden sentience by adding a missing link. Discrete modules of ability, as opposed to continuously increasing complexity, plus less obvious consequences, could lead to sudden, quick advances.
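As a rough sketch of the "bolting modules together" picture (the module names and interface here are purely hypothetical, not a description of any real system), each discrete capability can sit behind a common interface, and the agent's repertoire jumps in discrete steps each time a new module is plugged in:

#include <iostream>
#include <memory>
#include <string>
#include <vector>

// A discrete response capability (hypothetical interface).
struct Module {
    virtual ~Module() = default;
    virtual std::string respond(const std::string& input) const = 0;
};

struct FaceRecognition : Module {
    std::string respond(const std::string& input) const override {
        return "face recognition result for: " + input;
    }
};

struct LanguageProcessing : Module {
    std::string respond(const std::string& input) const override {
        return "parsed utterance: " + input;
    }
};

// The "agent" is just whatever modules have been bolted on so far;
// adding one more module is a discrete jump, not a gradual increase.
struct Agent {
    std::vector<std::unique_ptr<Module>> modules;
    void react(const std::string& input) const {
        for (const auto& m : modules)
            std::cout << m->respond(input) << "\n";
    }
};

int main() {
    Agent agent;
    agent.modules.push_back(std::make_unique<FaceRecognition>());
    agent.react("camera frame");   // one capability
    agent.modules.push_back(std::make_unique<LanguageProcessing>());
    agent.react("hello there");    // abruptly two capabilities
    return 0;
}

Whether any such threshold would correspond to sentience is exactly the open question; the sketch only illustrates why progress by module-addition could be lumpier than progress by gradually scaling a single simulation.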
Quote: Original post by LessBread
Researchers have found some symbolic aptitude in animals (parrots, gorillas, chimps,...) - but they haven't found animals that teach each other newly acquired techniques the way that we do. A chimp can use a stick to dig out termites, but they haven't found a chimp who then goes back to tell other chimps about the discovery and how they can do the same for themselves. Each generation of chimps has to reinvent the wheel, so to speak. Perhaps they live so deeply "in the moment" that they forget or they are so geared towards competition and self-preservation that they are not inclined to share such technology. Perhaps I'm anthropomorphizing

I beg to differ. I've read a few studies on the intelligence of crows, and there's clearly a transference of knowledge.

">Japanese Crows use traffic to crack nuts

Crows can play games
Crows Have Human-Like Intelligence, Author Says

Crows have a language and social structure:
Quote: Not a Bird Brain: As a group, the crows show remarkable examples of intelligence. They top the avian IQ scale. Crows and ravens often score very highly on intelligence tests. Crows in the northwestern U.S. (a blend of Corvus brachyrhynchos and Corvus caurinus) show modest linguistic capabilities and the ability to relay information over great distances, live in complex, hierarchic societies involving hundreds of individuals with various "occupations," and have an intense rivalry with the area's less socially advanced ravens. One species, the New Caledonian Crow, has recently been intensively studied because of its ability to manufacture and use its own tools in the day-to-day search for food. Wild hooded crows in Israel have learned to use bread crumbs for bait-fishing. Crows will engage in a kind of air-jousting, or air-chicken to establish pecking order.


All of this points to one conclusion: any ideas we have of 'exclusivity in rational thought within the animal kingdom' are dead wrong.
Crows are smart but nothing in those links attests to their ability to develop a new tool and then share the knowledge of how to make and use that tool with other crows.
"I thought what I'd do was, I'd pretend I was one of those deaf-mutes." - the Laughing Man
This is just one episode that reached a media outlet.

I've also seen quite a few examples of tool use on the Discovery Channel. One I recall was about some primates in an African jungle who learned to open seeds using two rocks. They placed one on the ground to use as an anvil and the other as a hammer. These monkeys didn't actually found a school to teach the technique, but they did copy it from each other.
I like the Walrus best.
Quote: Original post by caffiene
Quote: Original post by LessBread
Human beings reprioritize all the time, for rational and irrational reasons. The irrational reasons are easier to spot (gambling for example). How such non-determinism could be imparted to a machine is uncertain. Perhaps the development of quantum computing will shed light on this subject.
I'm still not convinced that there's any non-determinism to be explained/recreated. As you've noted earlier, there is a difference between brain and mind, and to me an irrational "reason" or decision seems to simply be a process of the brain that isn't reflected in the conscious mind. Gambling, for example, to my knowledge is generally explained in terms of "reward" chemicals such as dopamine, followed by basic instincts to repeat activities that result in those rewards.

if (presence of activity && activity is known to result in reward)
    Prioritize activity
else
    Prioritize (e.g.) going to work

We could add millions of branches and complicated conditionals, to the point where it's beyond human ability to predict, but it's still deterministic. I'd need more evidence or examples before I'd see any particular need to invoke non-determinism to explain what's occurring.


And when the syllogism breaks, when conditions conflict, what then? Understanding gambling as a response to reward chemicals may explain the physical process, but it doesn't explain how the content of the activity leads to the result, for example, how the mind finds a card game exciting in itself (apart from the social circumstances of gambling).

Quote: Original post by caffiene
Quote: That begs the question of what consciousness is. Can a positivist approach accurately define consciousness, let alone determine its conditions? Or does it simply reduce consciousness to a set of stimulus responses and leave it at that?
I don't know...
Until we actually understand consciousness, I don't think we're really in a position to tell whether a particular approach or viewpoint can be successful. All I can say is that I see no evidence yet that materialism isn't sufficient. (And add that we don't yet know that consciousness being a set of stimulus responses is a "reduction", because afaik we don't yet have evidence of anything other than stimulus responses to reduce out of the explanation.)


The notion that we can't tell what approach works until we understand consciousness seems to beg the question. I don't think positivism should be conflated with materialism. Moreover, it seems very clear to me that reductionism occurs any time a phenomenon is broken down into its parts in order to understand it.

Quote: Original post by caffiene
Quote: I agree with your remarks about slowly working our way up, but I don't see the convergence issue as a problem. It seems to fit in with the idea of a slowly emerging sentience.
I'm imagining that it could be possible for the research into sentient appearance to approach true sentience from a different direction than research into sentience by simulation.

"Sentient appearance" at the moment is mostly a matter of mimicking responses and bolting together pieces of technology to address the different responses we want to recreate - we have less of an understanding of how various human responses fit together to create consciousness than of how the complexity of a simulation might affect consciousness. It's reasonable to think that an accurate simulation of a worm's brain will have less sentience than an accurate simulation of a human brain. But compare that to: is a facial recognition module a part of sentience? If we add it to a natural language processing module, does it become "more" sentient? And if we can accurately recreate all of the responses of the human brain, without resorting to direct physical simulation, would it be sentient?

I can see a situation where, while simulation can progress slowly, connecting one response module to another could cross a threshold and create sudden sentience by adding a missing link. Discrete modules of ability, as opposed to continuously increasing complexity, plus less obvious consequences, could lead to sudden, quick advances.


The whole could turn out to be greater than the sum of its parts.

"I thought what I'd do was, I'd pretend I was one of those deaf-mutes." - the Laughing Man
IMO it would only be unethical if these sentient robots were forced to do things against their will, or forced to feel pain, etc. If we designed them so that they feel nothing, or so that they love working for humans, then there doesn't seem to be anything unethical.

I guess the more interesting question is what happens if/when one of these robots, which is supposed to love working (by design), accidentally grows beyond that intention and starts asking for human rights (robot rights)?
Quote: Original post by Melekor
I guess the more interesting question is what happens if/when one of these robots, which is supposed to love working (by design), accidentally grows beyond that intention and starts asking for human rights (robot rights)?


That'll be the day they'll stop receiving their DURACELL supply...
I like the Walrus best.
Quote: Crows are smart but nothing in those links attests to their ability to develop a new tool and then share the knowledge of how to make and use that tool with other crows.


That's because crows are limited in other ways. They lack the dexterity to create and manipulate a useful tool which would do anything they didn't already do. Certainly, given such a complex social structure, they would be able to share their knowledge if they discovered something in any way useful.

I would argue that our opposable thumbs are the primary reason for our technological evolution.

This topic is closed to new replies.
