But well before the advent of multimedia, there was a moment in the history of the computer that demonstrated its representational and narrative power with the same startling immediacy as the Lumières' train did for the motion picture camera. This is the famous but often misunderstood moment in which the first completely computer-based character was created.
In 1966, Joseph Weizenbaum, a professor of computer science at MIT, created, as an experiment in natural language processing, a computer program called Eliza that carried on a conversation by replying to typed-in statements with printed words. Since this was before the widespread use of computer screens, the program ran on a teletype device connected to one of the first time-sharing computer networks. The resulting persona, Eliza, was that of a therapist, the kind of clinician who echoes back the concerns of the patient without interpretation. She also displayed a narrowly Freudian interest in topics like sex and the family.
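The echoing technique described above can be sketched in a few lines. This is a minimal, illustrative approximation of an Eliza-style keyword-and-reflection rule set, not Weizenbaum's actual DOCTOR script (which was far larger and written in his SLIP extension of Fortran); every pattern and response template below is invented for illustration.

```python
import re

# Illustrative Eliza-style rules: a keyword pattern paired with a template
# that echoes the speaker's own words back as a question or prompt.
# (These specific rules are hypothetical, not Weizenbaum's.)
RULES = [
    (re.compile(r"\bi am (.+)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"\bmy (.+)", re.I), "Tell me more about your {0}."),
    (re.compile(r"\bi (?:want|need) (.+)", re.I), "Why do you want {0}?"),
]

# Pronouns are "reflected" so the echo reads from the program's viewpoint.
REFLECT = {"my": "your", "i": "you", "me": "you", "am": "are", "your": "my"}

def reflect(fragment: str) -> str:
    """Swap first- and second-person words in the captured fragment."""
    return " ".join(REFLECT.get(word.lower(), word) for word in fragment.split())

def respond(statement: str) -> str:
    """Return the first matching rule's echo, or a non-committal default."""
    for pattern, template in RULES:
        match = pattern.search(statement)
        if match:
            return template.format(reflect(match.group(1).rstrip(".!?")))
    # When no keyword matches, fall back to a contentless therapist prompt.
    return "Please go on."
```

Feeding in "My boyfriend made me come here." yields "Tell me more about your boyfriend made you come here." The occasionally ungrammatical echo is characteristic of the technique: there is no understanding, only pattern substitution, which is precisely the point of the story that follows.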
In a research paper describing the program, Weizenbaum included a remarkably coherent sample conversation.
Eliza proved astonishingly persuasive. Soon a creation legend started to circulate, based on an incident at a Cambridge think tank: a researcher left an early version of Eliza running on his computer, and a vice president of the company, sitting down at the terminal, thought the typed replies were being transmitted by the researcher from home. The following conversation took place between them:
Vice President: Do you think that I can use the computer this morning?
Eliza: Why do you ask?
Vice President: I might be able to make some additional telecom sales.
Eliza: Why aren’t you sure?
Vice President: My prospective customers have never seen a demonstration of the system.
Eliza: Does this mean very much to you?
Vice President: Of course, it does.
Eliza: You seem very positive about that.
Vice President: This is ridiculous.
Eliza: What do you mean this is ridiculous?
Vice President: Is there any reason why I can’t use the computer now?
Eliza: Tell me more about your prospective customers.
Vice President: Please dial me up on 491-1850
But because the vice president forgot to type a final period, the machine did not answer him. Infuriated, he telephoned the researcher he thought he had been talking to, resulting in the following conversation:
Vice President: Why are you being so snotty to me?
Colleague: What do you mean, why am I being snotty to you?
(Explosion of anger from both ends)
The story became a legend because it discharges the anxiety aroused by the fear that Weizenbaum had gone too far, that he had created a being so much like an actual person that we would no longer be able to tell when we were talking to a computer and when to a human being. This is very much like the fear that people would mistake film images for the real world.
Eliza was not only persuasive as a live conversationalist; she was also remarkably successful in sustaining her role as a therapist. To Weizenbaum's dismay, a wide range of people, including his secretary, would "demand to be permitted to converse with the system in private, and would, after conversing with it for a time, insist, despite [Weizenbaum's] explanations, that the machine understood them." Even sophisticated users "who knew very well that they were conversing with a machine soon forgot that fact, just as theatergoers, in the grip of suspended disbelief, soon forget that the action they are witnessing is not 'real.'" Weizenbaum had set out to make a clever computer program and had unwittingly created a believable character. He was so disconcerted by his achievement that he wrote a book warning of the dangers of attributing human thought to machines.
Without any aid from graphics or video, Eliza’s simple textual utterances were experienced as coming from a being who was present at that moment.