Quote: Original post by caffiene
Quote: Original post by LessBread
Yes, we're on the same page here afaik. Just to be clear, what I'm suggesting is that the notion that sentient machines would have desires is a human projection, an anthropomorphism.
Quote: Because we have desires and find them very powerful, we find it difficult to imagine how a sentience could not. We have set ourselves up as the measure of sentience.
True. I don't see any reason to believe that desires are inherently part of sentience.
I wonder, though, at what stage we might reasonably expect desires to appear if we are using the human brain as our model for trying to construct sentience. If sentience and similar phenomena can be created with an exact model of the human brain, it stands to reason that such a model would have the same characteristics as a human brain - e.g., desires. At what point our simulation would be "close enough" to begin expecting human characteristics, as opposed to general sentient characteristics, I don't know.
Researchers have found some symbolic aptitude in animals (parrots, gorillas, chimps,...) - but they haven't found animals that teach each other newly acquired techniques the way that we do. A chimp can use a stick to dig out termites, but they haven't found a chimp who then goes back to tell other chimps about the discovery and how they can do the same for themselves. Each generation of chimps has to reinvent the wheel, so to speak. Perhaps they live so deeply "in the moment" that they forget or they are so geared towards competition and self-preservation that they are not inclined to share such technology. Perhaps I'm anthropomorphizing [grin]
Quote: Original post by caffiene
Quote: What happens when preprogrammed behaviors conflict? If such behaviors are prioritized, what are the implications of situations where preprogrammed behaviors conflict, yet a lower priority behavior is undertaken rather than a higher priority behavior? I have tried to avoid the word "choice" in this formulation, but isn't that what this anomaly points to, choice?
I don't know... Does this happen? Is it possible for a "lower priority" behaviour to be undertaken rather than a "higher priority" behaviour?
My expectation, coming from a materialist viewpoint, is that a lower priority behaviour would never overrule a higher priority one. Instead, details of the inputs to the system might cause a temporary change in priorities under very rare circumstances, or in ways that are computationally prohibitive to predict. But that doesn't necessarily mean that the behaviour is anything other than a predefined, if complicated, algorithm. "Choice", as distinct from a predefined behaviour, would as far as I can work out require some form of nondeterministic mechanism, such as a soul, or a mechanism outside of the brain which we have no understanding of yet. Moreover - if either a soul or an "external to the brain" process is necessary for choice, then we wouldn't know to include it in the machine simulation, and therefore the simulation wouldn't develop "choice" even if it exists in humans.
Human beings reprioritize all the time, for rational and irrational reasons. The irrational reasons are easier to spot (gambling for example). How such non-determinism could be imparted to a machine is uncertain. Perhaps the development of quantum computing will shed light on this subject.
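To make the kind of deterministic reprioritization caffiene describes a bit more concrete, here's a rough sketch (the behaviour names, numbers, and helper functions are all invented for illustration, not anyone's actual design): behaviours keep fixed base priorities, but details of the current inputs can temporarily reweight them, so an observer could see a "lower priority" behaviour win without any nondeterministic "choice" being involved.

# Hypothetical sketch: deterministic priority arbitration where inputs
# can temporarily reweight fixed base priorities.

def select_behaviour(behaviours, inputs):
    # behaviours: list of (name, base_priority, modifier_fn) tuples.
    # modifier_fn(inputs) returns a context-dependent adjustment;
    # the result is still fully determined by the inputs.
    best_name, best_score = None, float("-inf")
    for name, base_priority, modifier_fn in behaviours:
        score = base_priority + modifier_fn(inputs)
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# Example: "flee" normally outranks "eat", but a rare combination of inputs
# (low energy plus a distant threat) lets "eat" win on this update.
behaviours = [
    ("flee", 10.0, lambda s: -8.0 if s["threat_distance"] > 50 and s["energy"] < 0.2 else 0.0),
    ("eat",   5.0, lambda s:  4.0 if s["energy"] < 0.2 else 0.0),
]
print(select_behaviour(behaviours, {"threat_distance": 60, "energy": 0.1}))  # prints "eat"

From the outside this looks like the lower priority behaviour overruling the higher one, but nothing nondeterministic happened - the "anomaly" is just a rarely triggered branch of a predefined algorithm.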
Quote: Original post by caffiene
Quote: I'm not sure that a cognitive science approach to consciousness is an optimal approach to reaching ethical conclusions. We have an excellent understanding of how guns work, but that understanding does not lend itself very well to understanding the ethical implications of the use of guns.
Very true... I'm really thinking through the cognitive science more as an exercise in working out possible outcomes - under what circumstances consciousness might or might not arise, etc. - to give a better framework for thinking about ethics. It won't reach ethical conclusions, but it helps narrow down what scenarios we're most likely to need to find an ethical conclusion for.
That raises the question of what consciousness is. Can a positivist approach accurately define consciousness, let alone determine its conditions? Or does it simply reduce consciousness to a set of stimulus responses and leave it at that?
Quote: Original post by caffiene
To be honest, working out the frame of reference is more interesting to me anyway, because the actual ethical conclusion in most cases boils down to a subjective value judgement at some point where discussion can't really have a useful input.
I agree that the ethical issue turns back to the question of establishing a reference frame.
Quote: Original post by caffiene
Quote: An animal rights advocate would likely base ethical considerations on suffering, the degree of suffering inflicted on the subject.
They probably would... but I'd counter by asking, are they basing their decision on suffering because it's the key factor, or because they don't have a mechanism for being certain of the animal's desires? Isn't "suffering" itself only a shorthand for emotional distress based on our best guess at the animal's internal experience of what is happening to it?
The impact of the suffering on our minds might be more profound (see mirror neurons). For me, ethics are about putting our values into action. So, to the extent that our values inform us that behavior that increases suffering is negative, it's not about the desires of animals, but the desires of humans.
Quote: Original post by caffiene
Quote: Furthermore, it seems to me that what would better serve human interests would be a machine more akin to a dog (not a wolf). That is, a very advanced tool, but not one prepared to overrun our ecological niche. I suppose the drawback, however, is that we would want to make such a machine so that it could communicate with us directly, and that would mean giving it symbolic aptitude.
Yeah. I'm thinking about human-level sentience because it seems to be what the OP was talking about, but in reality I think it would be much more educational to work our way up from something less complicated, and only up to the point where it meets our needs - either able to perform the duty required, or when the desired behaviour emerges from the simulation for us to study. It also means the ethical issues can be addressed more slowly.
The only problem is if the research into sentient-appearing machines for interface or emotional purposes begins to converge with true sentience simulation.
I agree with your remarks about slowly working our way up, but I don't see the convergence issue as a problem. It seems to fit in with the idea of a slowly emerging sentience.
Quote: Original post by caffiene
Quote: Desires and needs overlap, but that doesn't mean they are the same.
And that requires a tricky subjective value judgement to make a ruling on, I think... That being: which is ethically more important, a need or a desire? If a desire and a need conflict ... wait. Stopping mid-thought here - can a desire and a need conflict? It suddenly occurs to me that "needs" are really only shorthand for a desire based on a biological imperative or an assumed universal desire. We need to eat and breathe, for example... but is it really a need? If the biological imperative wasn't creating the desire to continue living, we wouldn't need to do the things necessary to survive. That is - if I don't desire to live, the needs associated with prolonging my existence stop being needs; and if they are optional, even slightly, then that suggests they are really only extremely pervasive desires.
Maybe I'm missing something. Are there examples of needs which don't fit this reasoning?
I think we often conflate our desires with our needs, and that leads to confusion. I think our desires and needs conflict all the time (see gambling and other vices). I think we are wired in ways that lead us to desire our needs, but that doesn't mean that desires are needs. We need to eat, but we don't need to eat filet mignon. I think needs pertain to the maintenance of the body. It's less confusing to discuss the need to eliminate than the need to ingest, but the topic disgusts us, so it's usually avoided in these contexts. We associate desires with the body too, but that doesn't make them needs. Individuals desire sexual contact, but they don't perish without it as they would without food; it is the species that would perish without it. I think there are needs pertaining to the mind, but I think desires are more operative there. I think the mind needs other minds. Prisoners in solitary confinement can go insane, for example.

The absence of a desire to continue living doesn't kill immediately. I think it certainly leads to a shorter life, but it does not prevent someone from continuing to live. A person who no longer wants to live doesn't keel over on the spot. They actually have to do something to bring about their death, even if they refuse to eat. By refusing they are doing something. And when their body fights back and those hunger pains kick in, that person has to work even harder at the refusal.