This sort of reflects my stance about souls:
There is another kind of zombie, however: the philosophical zombie. A philosophical zombie (p-zombie, for short) would be a human body without consciousness which would nevertheless behave like a human body with consciousness. To some philosophers (e.g., Daniel Dennett) this is a contradictory notion and thus an impossible conception. If it behaves like a person and is indistinguishable from a person, then it is a person. Other philosophers (e.g., Todd Moody and David Chalmers) argue that a p-zombie would be conceptually distinguishable from a person even though behaviorally indistinguishable from a conscious person. It is distinguishable, say these philosophers, because it is stipulated that it is not conscious even though it is indistinguishable from a conscious being. In case you are wondering why philosophers would debate whether it is possible to conceive of a p-zombie, it is because some philosophers do not believe, or do not want to believe, that consciousness can be reduced to a set of materialistic functions. Important metaphysical and ethical issues seem to hinge on whether there can be p-zombies. Can machines be conscious? If we created a machine which was indistinguishable from a human person, would our artificial creation be a "person" with all the rights and duties of natural persons? To the p-zombie advocates, consciousness is more than brain processes and neurological functions. On their view, no adequate account of consciousness will ever be produced that is "reductionist," i.e., completely materialistic.
I think it is possible to conceive of a machine which "perceives" without being aware of perceiving. In fact, such machines already exist: motion detectors, touch screens, tape recorders, smoke alarms, certain robots. An android which could process visual, auditory, tactile, olfactory, and gustatory input but which would lack self-consciousness, i.e., would not be aware of perceiving anything, is conceivable. We can even conceive of such machines resembling humans in the flesh. How would we distinguish such automata from persons? The same way we do now: by the imperfect and fallible methods of conversation and observation. But that is not what would make the two distinct; self-consciousness, or the lack of it, would distinguish the automata from persons. "Visual perception" by a motion detector is unlike visual perception by a person precisely because of the difference in awareness of perception, i.e., self-consciousness. A smoke detector might "smell" certain chemicals, but it does not process odors the way a person does. In my view, the only conceivable p-zombie would be a machine which perceives but has no awareness of perceiving, i.e., no self-consciousness. Such machines are essentially distinct from conscious persons.
For what it's worth, I side with Dennett and those who think that the concept of the p-zombie is a logical absurdity. If the "zombie" exhibits all the symptoms of consciousness, then the "zombie" is not a zombie; for to exhibit all the symptoms of consciousness is to have consciousness, which the zombie is denied by definition.
Anyway, this reminds me of a story by Raymond Smullyan, the great logician and paradoxer. A man wants to commit suicide but does not want to cause his family any grief. He finds out about an elixir he can take which will kill him, i.e., separate his soul from his body, but leave his body intact to wake up, go to work, play with the kids, keep the wife satisfied, and bring home the bacon. But before he takes the elixir, a well-intentioned friend sneaks in during the night and injects him with the stuff, thereby killing him, i.e., releasing his soul. The man wakes up and doesn't know he's dead (i.e., that he has no soul), so he takes the elixir. He can't kill himself, since he's already dead. But he thinks he can kill himself and become a p-zombie. However, he is already a p-zombie. Question: if the p-zombie can't tell the difference between a real person and a p-zombie, why would we think that we real persons could tell the difference? In fact, since the conception of the "soul" makes absolutely no difference to the nature of either a person or a p-zombie, the concept of the "soul" is superfluous. If persons are indistinguishable from p-zombies, then "person" and "p-zombie" are not two distinct concepts, but one concept manipulated by language to mislead us into thinking there are two distinct concepts here.
As to the ethical questions regarding how we should treat androids which are behaviorally indistinguishable from natural persons, I think that if we stipulate that such creatures are persons with rights, then they will be persons; otherwise, they will not be persons. The concept of a person is not a matter of discovery, but of stipulation. I would argue, also, that the same is true of the concept of "soul." But it is not true of the concept of "consciousness": anyone who is conscious should be able to tell the difference between a dead body and a living person. Dead bodies which act like persons, and bodiless souls which perceive like conscious persons, exist only in the movies or in the minds of certain philosophers and other fantasy writers.
Personally, I don't believe in souls. Or should I say: I think the concept of the "soul" obscures the mystery by giving it a name, as though we knew what we were talking about.