The Turing Test Doesn't Work

I love and admire Alan Turing.  I consider him brilliant, admirable, and heroic, as well as complex, and I think his treatment by the UK government was tragic, outrageous, and inexcusable.  He had many great accomplishments, and was one of those rare minds who was at once a technician and a philosopher.  Furthermore, the development of his famous "imitation game" thought experiment, now more commonly known as the "Turing test," is both a stroke of genius and a cute, charming story.  We can cherish it as a great, fascinating, perplexing moment in the history of ideas.  But as a practical matter, it is long past time to put it aside.  It is not only wrongheaded - it is downright dangerous.

The Turing Test is a fascinating artifact of a particular period in history.  It was developed during the heyday of behaviorism, when researchers like B.F. Skinner believed that psychology was nothing but the science of behavior, and that "internal mental states" either didn't exist or were, in any case, a blind alley for research.  Every aspect of a person was "on the surface," so to speak, in our actions.  Thus it makes sense that people with this general outlook would conclude that consciousness itself, if it were to be understood at all, must be fully identifiable in the behavior of the conscious being.

At the same time, the period in which the Turing Test was developed (1949-1950) was after the "linguistic turn."  And so researchers during this period would assume, quite naturally, that consciousness could be detected not only in our behavior but specifically in our linguistic behavior.

Alan Turing, charmingly enough, was inspired by a party game which apparently already existed, in which a man and a woman went into two separate rooms, and a third person, the judge, passed notes back and forth under the doors, carrying on a conversation with each of them without seeing them.  The man pretended to be a woman, the woman pretended to be a man, and the judge tried to guess which was which.  If the judge guessed correctly, the judge would win; if not, the man and woman would win.  (Alan Turing was gay, and there has been some speculation that, had they lived today, they might have seen themself as trans or non-binary, which puts an interesting spin on all of this - but that is beside the point.)  Turing imagined a similar kind of "imitation game," but one in which a computer would try to pretend to be a human and a judge would try to tell the difference.  If 100 judges tried to tell whether a computer was a human based on the words in a conversation, and they were found to have about a 50/50 chance of being correct, then the computer would "win" and be deemed conscious.  It was a kind of thought experiment to get us to think differently about the meaning of consciousness.
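The pass criterion described above - judges doing no better than a coin flip - can be made concrete with a small simulation.  This is only an illustrative sketch of that reading of the test: the function names, the 100-judge setup, and the 5% significance threshold are my own choices, not anything from Turing's paper.

```python
import random
from math import comb

def run_imitation_game(judge_accuracy, n_judges=100, seed=0):
    """Simulate n_judges independent verdicts, each correct with
    probability judge_accuracy, and count the correct guesses."""
    rng = random.Random(seed)
    return sum(rng.random() < judge_accuracy for _ in range(n_judges))

def two_sided_binomial_p(k, n, p=0.5):
    """Exact two-sided binomial test: the probability, under pure
    chance-level guessing, of a result at least as far from the
    expected value n*p as the observed count k."""
    expected = n * p
    pmf = lambda i: comb(n, i) * p**i * (1 - p)**(n - i)
    return sum(pmf(i) for i in range(n + 1)
               if abs(i - expected) >= abs(k - expected))

# On this reading, the machine "passes" if the judges' hit rate is
# statistically indistinguishable from a 50/50 coin flip.
correct = run_imitation_game(judge_accuracy=0.5)
p_value = two_sided_binomial_p(correct, 100)
machine_passes = p_value > 0.05
```

Even this toy version shows how much hidden machinery the criterion requires - a sample size, a significance threshold, an assumption that judges are independent - none of which settles, or even touches, what consciousness is.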

As a test to determine whether or not a machine or a network or a piece of software or anything else is conscious, the Turing test fails in (at least) two ways: it is both too easy and too hard.  That is to say, the set of beings that can pass the Turing test excludes beings that probably are, or could be, conscious, while at the same time including beings that aren't.  Might there be some overlap between the set of beings that can pass the Turing test and the set of beings that are conscious?  Sure.  But the correspondence is not one-to-one.  The Turing test probably tests something, but that something is not consciousness.

To see what I mean, let's do the easy part first.  In one sense, the Turing test is too easy - that is, it "lets in" too many beings.  At least in a certain sense, machines have been passing the Turing test for a long time.  From the earliest chatbots, like ELIZA, there have always been people who have been fooled - or who have fooled themselves - into thinking that the pieces of software they were interacting with were conscious.  Humans have a tendency to anthropomorphize everything.  We have a sense of empathy that can extend itself towards all kinds of beings, if we allow it to.  It's not hard to feel that a stuffed animal has feelings.  In many societies all over the world, people regard not only animals but also plants, rocks, the sun, and the sky as conscious beings.  And who's to say that they're wrong?  I, for one, see many fairly good arguments for panpsychism.

Yes, I can hear the counter-argument here - that while many rubes are easy to fool, there could be a group of highly trained specialists who develop a form of expertise in distinguishing conscious beings from non-conscious ones.  One remembers the scenes from Blade Runner: "You look down and see a tortoise, Leon."  Maybe so; maybe this new kind of occupation will emerge, and humans will become highly skilled at it.  But doesn't this defeat the entire point of Turing's thought experiment?  If Turing, along behaviorist lines, wanted to propose that "conscious" means "able to convince most people that one is conscious," but we then have to revise that definition to something like "able to convince consciousness specialists that one is conscious," then we are still left wondering how these specialists will be trained to identify consciousness, and we are back to square one in terms of trying to define it.  We have a circular definition, in which the term being defined is itself part of the definition.

And it seems plausible that these consciousness specialists could get caught in a feedback loop: led by their study (in an interesting kind of via negativa) to avoid the obvious pitfalls of non-consciousness - certain patterns of thinking that non-conscious beings tend to exhibit, perhaps? - they refine their methods further and further without getting any closer to consciousness, because they still don't know what it is.  And where we had one problem - not knowing what consciousness is - we now have two: we still don't know what consciousness is, and we now have a class of supposed experts who claim to know, without any real justification, but with their own industry, their own lobbyists, etc., etc.

This brings us to the other problem I mentioned about the Turing Test.  In a way, as I have tried to show, it is too easy.  But in another way, it is also too hard. 

Imagine, for a moment, what it would be like if you were a conscious being that failed the Turing test.  You would be conscious, but no one would recognize you as a conscious being.  Animals, pre-verbal infants, and people who have lost the use of language are plausibly in exactly this position - and so would be any conscious machine whose way of conversing simply didn't resemble ours.

The Turing Test was proposed as a way to define consciousness.  But it does not, in fact, define consciousness.  Imagine if scientists in Newton's day, trying to understand gravity, proposed that instead of weighing different objects, we should blindfold people and have them feel various objects: if more than 50% of people think an object is heavy, then it's heavy; if not, then not.  Or what if people in the early 19th century, trying to understand illness, proposed some kind of imitation game in which a sick person pretended to be well and a well person pretended to be sick, and a judge had to figure out which was which?  We wouldn't be any closer to developing the germ theory of disease.  Similarly, Turing's imitation game does not get us any closer to understanding what consciousness is.  At best, it is a pollster's survey that reveals to us, on a rough average, the hodge-podge of presuppositions and preconceptions that the average person has about consciousness.  Or worse, it smuggles in the untestable technocratic drivel of self-appointed experts.  Either way, it can only tell us what we already know, while presenting it in a confused way that makes it appear to be an external fact.
