What is a person?

If you send an email to an address that doesn't correspond to anyone, then you may well get a message in reply that says: 'I don't recognize the address bill@wibble.com. Here is a list of six people that you might have been trying to write to: . . . Please do not send me a thank you message; I am a robot and would not know what to do with it.' The 'robot' in this case is a fairly simple program that looks up local email addresses that are similar to the erroneous one you specified, but the message is written as if it came from a person.
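
To underline how simple such a 'robot' really is, here is a minimal sketch in Python of the kind of lookup it might perform. The list of local addresses is invented, and the use of a standard textual-similarity routine is our own assumption, not a description of any particular mail system.

    import difflib

    # Invented list of local mailbox addresses known to the mail system.
    local_addresses = [
        "bill.smith@wibble.com",
        "will@wibble.com",
        "jill@wibble.com",
        "bella@wibble.com",
    ]

    def suggest(unknown_address, candidates, how_many=6):
        # Return up to `how_many` known addresses that look textually
        # similar to the unknown one.
        return difflib.get_close_matches(unknown_address, candidates,
                                         n=how_many, cutoff=0.3)

    print("I don't recognize the address bill@wibble.com.")
    print("Here is a list of people you might have been trying to write to:")
    for address in suggest("bill@wibble.com", local_addresses):
        print("  " + address)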

This trend is certain to continue. As you navigate round cyberspace you will meet a variety of 'agents': software programs that interact with you. The 'agent' metaphor is a powerful one. There are already whole conferences about software agents (just programs, remember) that are supposed to wander around the Internet on your behalf looking for (say) a good airfare, or a paper you want.

These agents are going to become increasingly lifelike. In virtual reality systems it will appear as though we are able to 'see' them. It is quite possible to imagine that it will become very difficult (or even impossible) to distinguish an agent from a person in cyberspace. That prospect seems to raise a series of questions about what it means to be 'a person' at all. If a sufficiently complex and sophisticated computer program really was indistinguishable from a person in cyberspace, should we treat it as a person? Should it have 'human rights'? Should it be considered an individual before the law? These questions concern our perceptions of these cyber-agents. They might provoke us to ask a different question, though: in what senses (if any) are humans different from computers? Such a question may seem fanciful, but we ask it utterly seriously. In 1999 there are no computer programs that can make even a tenuous claim to having the character of a human person - there is little danger of confusing a police constable with a personal computer - but in 2050 there almost certainly will be. Just as genetic research has increasingly raised serious questions about what it means to be human, so cyberspace is now beginning to do so too. Our brief discussion here will do no more than skim lightly over some very deep water.


Impersonation

The famous British computer scientist Alan Turing suggested the following thought experiment (although we have updated it a little). Suppose you met someone in a virtual reality world. You had extended conversations with this person on each of your visits, and you shared your hopes and fears with him or her. Over a period of months you felt that the person became a friend. Now suppose you discovered that this 'person' was actually a computer. No human being was the source of this 'person's' conversation. What would you say? That the computer had tricked you? Or that the computer had become a person, had really become your friend?

Stripped to its essentials, the question is this: if a computer were indistinguishable from a person to us in cyberspace, should we not consider it to be a person? If it walks like a duck, and quacks like a duck, isn't it a duck?

Well, not necessarily. One possible counter-argument is that Turing's thought experiment is a 'set-up'. It is limited by being in cyberspace. We meet almost all our human friends in the flesh, and squeeze their hands. Even those we don't meet, we believe that we could, in principle, meet 'in person' (a telling phrase that means to be physically present). In cyberspace a computer lacks a body for us to relate to, but of course in our cyber-relationships we don't relate to humans through actual bodily contact either. However, if we 'met' the computer that was running the cyber-agent program, we wouldn't think of it as a fellow human at all! But perhaps that is because we expect it to look like a computer and not a human. What if computers could impersonate humans very well?

A second response is to take this thought seriously, but then say that the ability to impersonate someone perfectly does not make you into that person. Suppose that you meet two identical twins. One of them committed a murder, and one did not. Neither has an alibi, there were no witnesses, and forensic tests cannot tell which of the two was the culprit. The fact that we cannot tell which one is to blame does not change the fact that one is a murderer and one is not. By analogy, the ability to impersonate a kind of thing (such as a human being) does not necessarily make the impersonator a member of that kind: a program's ability to appear to be a person does not, by itself, imply that it is a person.


What it means to be human

So it is not enough for a computer to appear to be human, no matter how perfectly. What is enough, then? We may imagine the poor computer asking: I (appear to) think like you, I speak like you, I (appear to) love and hate like you; what else do you want before you call me human? We do not ask these questions facetiously: they force us to ask what it means to be human. We might answer by suggesting some characteristics that humans have - the ability to think, the use of language, the capacity to love and to hate, the exercise of free will, the forming of relationships - and asking if computers can have them.

So could any of these things be true of a computer, in principle? People react to that question in very different ways. One of the difficulties is that any detailed answer has to delve into what exactly is meant by (for example) 'think' and 'computer'.

Take the question of 'thinking'. A few hundred years ago, most people thought that arithmetic was a skill that required thinking, but today doing arithmetic merely requires a cheap calculator, which no one would say thinks. Computers can play chess very well indeed by the 'brute force' approach of mechanically exploring vast numbers of possible continuations of each candidate move and choosing the one that leads to the best outcome. The result certainly looks like thinking, but is it? If we are not careful we can fall into the trap of using 'thinking' to describe whatever humans do and computers don't, which rather prevents us from ever classifying computers as being able to think! On the other hand, as we have already argued, merely looking as if you think doesn't necessarily mean you are thinking, as any child knows.
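
To see how mechanical the 'brute force' approach is, here is a short sketch in Python. Purely for brevity we use the simple game of Nim (players take one, two or three sticks in turn, and whoever takes the last stick wins) rather than chess; the program is our own illustration, not a description of any real chess engine, but it exhaustively explores every continuation in just the way described above.

    # Brute-force search for Nim: players take 1-3 sticks in turn,
    # and the player who takes the last stick wins.

    def best_move(sticks):
        # Return (move, wins): the number of sticks to take, and whether
        # that move gives the player to move a forced win.
        for take in (1, 2, 3):
            if take == sticks:
                return take, True          # taking the last stick wins outright
            if take < sticks:
                _, opponent_wins = best_move(sticks - take)
                if not opponent_wins:      # leave the opponent a losing position
                    return take, True
        return 1, False                    # every continuation loses; take 1 anyway

    move, winning = best_move(10)
    print("From 10 sticks, take", move, "-", "a forced win" if winning else "no forced win")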

Or take 'computer'. All digital computers are the same in principle, in the following sense. At any moment a computer is in a particular state. In each time step it moves to a new state, based on (a) its current state (including its program) and (b) its external inputs. From any given state and input there is only one possible new state, so its operation is entirely deterministic. Many people (although not all) would conclude that such a computer could not possibly have free will. (Although that judgement depends on what is meant by 'free will', which is a philosophically disputed matter.)

Many people are working hard on so-called artificial life and neural networks, both of which are innovative ways of programming traditional computers in such a way that the program itself can 'breed' or 'learn'. Computers programmed in this way may well behave in interesting, surprising and unpredictable ways, but they still obey the same model as before: moving from one state to another in a deterministic way, based on the external inputs. How can a computer be both deterministic and unpredictable at the same time? A deterministic computer's behaviour is always predictable in principle, but in practice its program may be so complicated that it appears surprising or unpredictable to a human observer. The whole point is that computers running such programs may appear to learn or even to exercise free will, but in fact they are simply following a deterministic program. That is not to belittle them: such programs are marvellous; it is simply to say that they are not doing anything fundamentally different, in principle, from a word processor.
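
The combination of determinism and apparent unpredictability is easy to demonstrate. The short Python sketch below is our own illustration (using the elementary cellular automaton known as 'Rule 30', not any particular artificial-life system): each new state is computed purely mechanically from the previous one, with no randomness anywhere, yet the printed pattern looks thoroughly disordered to the eye.

    # A deterministic state machine: each cell's new value depends only on
    # its own value and those of its two neighbours ('Rule 30').

    def step(cells):
        # Compute the next state from the current one - no randomness anywhere.
        n = len(cells)
        return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n]) for i in range(n)]

    state = [0] * 31 + [1] + [0] * 31      # start with a single 'on' cell
    for _ in range(20):
        print("".join("#" if c else "." for c in state))
        state = step(state)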

Other people are working on computers based on very different principles, notably quantum computers and biological computers. Such computers might differ from present-day computers, in the sense that they cannot be described by the state-to-state transition mechanism we sketched above. At the moment such computers are still in the very early stages of development and are far from being in a usable form. The point is that we don't know what sort of computers we will have in the future, so it is hard to say for sure that some future computer would not (say) have free will, even if we believe that current computers do not.

The criteria we have been considering form a varied set of features which we might point to if we were asked what the characteristics of human beings are. We have acknowledged that the precise meanings of some of them - thinking, for example, or free will - are the subject of philosophical controversy, as are the arguments to the effect that nothing which could properly be called a computer could possess free will. Even if these controversies were to be resolved, though, a further question remains. One criterion that a Christian might want to add to this list of what it means to be human would be of quite a different order. This is the affirmation that humans are created, chosen and redeemed by God. This characteristic is not based upon any of the characteristics listed above (thinking, free will, and so on). Unless God had created human beings in the way he has, they would not have possessed these characteristics anyway! So we need to ask: what would computers' possession of these characteristics mean or imply for their relationship with God? And that is not a question that we have any ready means of answering.


Why does this matter?

Intellectual arguments have an important place in Christianity, although not of course the central place. Similarly, intellectual arguments about whether or not computers and humans are fundamentally different are important, because if people believe that it is a scientific fact that humans are simply 'computers made of meat', then all sorts of false conclusions may appear to follow. It can seem an irresistible intellectual temptation to think that being human amounts to nothing more than that. Such a belief might lead to a range of conclusions about the relative value and significance of human life (such as the essential substitutability of computers for humans in any context whatever - say, as friends or as lovers) which ought to be rejected.

But, as we have tried to show above, such beliefs are based on a view of what it is to be human which concentrates on human characteristics, and it remains quite unproved whether this is the really significant part of the story. Christians believe that 'what it is to be human' has to take other things into account - relationships, and especially our relationship with God through Jesus Christ.

To summarize, the status of computers, in terms of their capacity to share human characteristics, is the subject of intense debate, and there are powerful arguments on both sides. Nothing is settled, nor is it likely to be. It is absolutely not a settled scientific conclusion that computers and humans are, or will ever be, fundamentally the same, even in terms of their characteristics.

That debate will continue. The point of this discussion has been to place that debate in a proper context, and to show the (limited) significance that it has for an understanding of what it is to be human in Christian terms.