Testing Turing’s test

Chatbots have been around forever, or at least since the birth of ELIZA back in the 1960s, and we all know how that worked out:

ELIZA’s key method of operation (copied by chatbot designers ever since) involves the recognition of cue words or phrases in the input, and the output of corresponding pre-prepared or pre-programmed responses that can move the conversation forward in an apparently meaningful way (e.g. by responding to any input that contains the word “MOTHER” with “TELL ME MORE ABOUT YOUR FAMILY”). Thus an illusion of understanding is generated, even though the processing involved has been merely superficial. ELIZA showed that such an illusion is surprisingly easy to generate, because human judges are so ready to give the benefit of the doubt when conversational responses are capable of being interpreted as “intelligent”. Thus the key technique here — which characterises a program as a chatbot rather than as a serious natural language processing system — is the production of responses that are sufficiently vague and non-specific that they can be understood as “intelligent” in a wide range of conversational contexts. The emphasis is typically on vagueness and unclarity, rather than any conveying of genuine information.
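The cue-word mechanism described above is simple enough to sketch in a few lines of Python. The cue words and canned responses below are illustrative assumptions, not ELIZA's actual script, but the shape is the same: scan the input for a keyword, emit the matching pre-programmed reply, and fall back on something vague enough to pass for attentiveness.

```python
# Sketch of an ELIZA-style responder: keyword spotting plus canned replies.
# Cues and responses here are made up for illustration.

CUES = {
    "mother": "TELL ME MORE ABOUT YOUR FAMILY",
    "dream": "WHAT DOES THAT DREAM SUGGEST TO YOU",
    "always": "CAN YOU THINK OF A SPECIFIC EXAMPLE",
}

# Deliberately non-specific, so it reads as "intelligent" in almost any context.
FALLBACK = "PLEASE GO ON"

def respond(user_input: str) -> str:
    """Return the canned reply for the first cue word found in the input."""
    lowered = user_input.lower()
    for cue, reply in CUES.items():
        if cue in lowered:
            return reply
    return FALLBACK
```

Note that the program never represents what the user said; the fallback alone carries most conversations, which is exactly the "vagueness and unclarity" the passage describes.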

There are, of course, examples that don’t actually involve software. For instance:

Think of the way the average politician responds to the average reporter’s question about a scandal in which he or she is involved. The responses are in the form of regular human speech, but they are pre-scripted and designed to carry the form of human speech without fulfilling its function, i.e., explaining why campaign contributions got spent at a strip joint. They are instead designed to divert attention from the scandal, in the same way that a chatbot is designed to fool people into thinking it is a real, live, incredibly attractive member of the opposite sex who wants to interact with you and lives just a few miles away.

Some people disparage lower-level members of the current administration as “Obamabots.” That is, however, exactly those members’ designated function; operatives have filled this role for nearly as long as there have been administrations.


  1. McGehee »

    12 June 2014 · 2:40 pm

    I find “Obamabot” to be an insult to ‘bots. Of course, my preferred epithet is insulting to ‘rrhoids, but they don’t care so why should I?

  2. fillyjonk »

    12 June 2014 · 5:33 pm

    I wonder if somewhere in the MLP fandom, someone’s come up with a SweetieBot chatbot yet. It would be pretty easy to program; most of the responses would be about cutie marks.

  3. CGHill »

    12 June 2014 · 6:06 pm

    There are at least three Sweetie Bot Tumblrs out there, but I suspect all of them of being written by some sort of humanoid.

    I wish I could remember the name of that unfinished fanfic in which the bot becomes sentient, to the amazement of her builder.
