Commander Data: Talking Toaster?

an essay written by
Darrel W. Beach
May 26, 1998

In the Star Trek: The Next Generation episode "The Measure of a Man", the android Lt. Commander Data's rights to self-determination and freedom of choice are challenged by Commander Bruce Maddox, a cybernetics researcher working for Starfleet.  The central question of the hearing is whether Data is truly a living being or just a complex machine.  Captain Picard manages to convince the judge advocate general that Data is in fact an intelligent lifeform by appealing to the stated criteria for sentient life: intelligence, self-awareness and consciousness.  What it boils down to is that Picard uses a variation of the Turing Test (which holds that, given a sufficient number of questions, if a judge cannot distinguish the responses of a machine from those of a human, the machine is considered intelligent) to defend his second officer.  However, some critics in cognitive psychology have argued that the Turing Test is not enough to vindicate Data; using more contemporary theories of analysis, Commander Riker would have been more successful in his prosecution.  This essay will not provide a definitive answer to this debate, but the evidence provided should show that Data might well be closer in comparison to a talking toaster than first thought.

This entire fictional scenario has its roots in a debate that began over sixty years ago, when Alan Turing created the first model of the modern computer in 1936, the Turing Machine.  Originally devised to prove that mathematics is not decidable (i.e. that there is no general procedure for determining whether an arbitrary mathematical statement is provable), the Turing Machine also led Turing to a universal version of his device, one that could simulate any other Turing Machine.  This generated a lot of interest among people in the field of cognitive psychology.  Turing had designed an artificial model of information processing that could behave in ways similar to the human brain: it could solve a limitless variety of complex problems using a limited set of resources.  But an interesting question arose from the use of the Universal Turing Machine (UTM) in cognitive psychology studies: since the UTM had the power to perform highly complex tasks just as efficiently as an intelligent person, could it be considered an intelligent machine?  How would a person know when they had created an intelligent machine?
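To make the idea concrete, here is a minimal sketch of a Turing Machine simulator in Python.  The encoding of states, symbols and rules is my own illustrative assumption, not anything drawn from Turing's paper:

    def run_turing_machine(rules, tape, state="start", head=0, max_steps=1000):
        # rules maps (state, symbol) -> (new_state, symbol_to_write, move),
        # where move is -1 (left) or +1 (right); "_" is the blank symbol.
        cells = dict(enumerate(tape))
        for _ in range(max_steps):
            symbol = cells.get(head, "_")
            if (state, symbol) not in rules:   # no applicable rule: halt
                break
            state, cells[head], move = rules[(state, symbol)]
            head += move
        return "".join(cells[i] for i in sorted(cells))

    # Example machine: flip every bit on the tape, then halt.
    flip_bits = {
        ("start", "0"): ("start", "1", +1),
        ("start", "1"): ("start", "0", +1),
    }
    print(run_turing_machine(flip_bits, "10110"))   # prints "01001"

Notice that the simulator itself demonstrates universality in miniature: one fixed program can run any machine you hand it as a transition table.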

In 1950 Alan Turing provided a behavioristic answer to this question by devising the Turing Test.  A judge is placed in a room with a teletype machine, which is connected to both a computer and a person at another teletype machine in another room.  For 60 minutes the judge is allowed to communicate anything to both participants.  The judge receives responses from both participants and is then asked to decide which participant is the computer.  If the judge cannot tell the responses apart, the computer is deemed to be intelligent.  It seemed like an adequate measure.  However, in the decades since the advent of the Turing Test, several examples of information processing have been created that expose the weakness of this test.  For example, the ELIZA program (and many other psychotherapist simulators since) has the potential to pass the Turing Test.  However, ELIZA cannot be considered intelligent; the program merely mimics intelligent language use by constructing new statements or questions from the user's own input.  In other words, it has no understanding of the meaning of the utterances it produces.  Remarkably, many people first exposed to ELIZA hailed it as an ingenious product of intelligent computing!  This raises several problems associated with the Turing Test.
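The trick is easy to see in code.  Below is a minimal ELIZA-style sketch in Python; the rules and reflections are a tiny invented sample (the real ELIZA script was far richer), but the principle -- pattern matching and pronoun reflection, with no model of meaning -- is the same:

    import random
    import re

    # Swap first-person words for second-person ones ("my" -> "your").
    REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

    # (pattern, response templates) pairs, tried in order.
    RULES = [
        (r"i feel (.*)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
        (r"i am (.*)",   ["Why do you say you are {0}?"]),
        (r"(.*)",        ["Please tell me more.", "How does that make you feel?"]),
    ]

    def reflect(fragment):
        return " ".join(REFLECTIONS.get(word, word) for word in fragment.split())

    def respond(utterance):
        for pattern, templates in RULES:
            match = re.match(pattern, utterance.lower())
            if match:
                return random.choice(templates).format(*map(reflect, match.groups()))

    print(respond("I feel trapped by my job"))
    # e.g. "Why do you feel trapped by your job?"

The program never represents what "trapped" or "job" mean; it only rearranges the judge's own words back at him.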

The first major problem that arises is the difference between weak and strong equivalence.  The Turing Test demonstrates only a weak equivalence between human information processing and computers: both can produce the same output behaviors, regardless of the methods used to derive that behavior.  Strong equivalence, on the other hand, stipulates not only that the output is the same in both cases, but that the output is produced for the right reasons as well.  The machine will perform similar steps in solving a problem, exhibiting the same types of errors and regularities as its human counterpart.  In other words, human information processing and machine information processing will have similar cognitive architectures.  A second (and perhaps less obvious) problem is the judge's fallibility.  As a human being, the judge is not perfect; as intelligent as we are, we are still prone to judgement errors.  Perhaps the biggest problem here is that we have no point of reference from which to make this kind of judgement other than our own intelligence and instincts, which is why we can be so easily fooled.
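A small, admittedly contrived Python sketch shows how little weak equivalence guarantees.  The two procedures below (both of my own invention for illustration) agree on every output, yet share no internal steps at all:

    def sum_by_counting(n):
        # Step-by-step accumulation, the way a person might count.
        total = 0
        for i in range(1, n + 1):
            total += i
        return total

    def sum_by_formula(n):
        # Gauss's closed form: one arithmetic step, no iteration at all.
        return n * (n + 1) // 2

    # Identical input-output behavior, entirely different algorithms.
    assert all(sum_by_counting(n) == sum_by_formula(n) for n in range(100))

A test that inspects outputs alone cannot tell these apart; strong equivalence would demand the same intermediate steps, the same characteristic errors, even the same relative timing.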

At the base of all these complications is the question of how an information processor can be described.  The issue can be traced back to Turing's simple computer itself.  One can take three different approaches when trying to describe how a Turing Machine works.  One way is to describe it physically, taking note of all its components and how they function.  A second way is to specify the machine's program, providing a step-by-step account of how the machine solves a particular problem.  The third, most abstract approach is simply to state what type of problem is being solved, without saying how the machine works.  These three modes of description form the basis of the Tri-Level Hypothesis, a contemporary theory in cognitive science.  It is now widely accepted that all three accounts of information processing must be satisfied when trying to determine the intelligence of a machine.  In this regard, the Tri-Level Hypothesis can provide a much stronger basis for determining Commander Data's sentience.  This approach looks not only at the problem being solved, but also at how the problem is solved and at the design and implementation of the problem-solving device.  These are referred to as the computational, algorithmic and implementational levels of information processing.
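To pin the three levels to something concrete, here is one small illustrative example in Python; the task and the commentary are my own, not part of the hypothesis itself:

    # Computational level: WHAT is computed - a permutation of the input
    # arranged in ascending order.
    # Algorithmic level: HOW it is computed - insertion sort here; selection
    # sort would compute the same function by entirely different steps.
    # Implementational level: WHAT CARRIES IT OUT - a CPU running Python
    # bytecode, rather than neurons (or a positronic brain).

    def insertion_sort(items):
        items = list(items)
        for i in range(1, len(items)):
            j = i
            while j > 0 and items[j - 1] > items[j]:
                items[j - 1], items[j] = items[j], items[j - 1]
                j -= 1
        return items

    print(insertion_sort([7, 2, 9, 4]))   # prints [2, 4, 7, 9]
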

There is no doubt that at the computational level, Data can pass for human.  He can solve problems and perform physical tasks with as much efficiency as a man, and often more.  Even his components resemble human anatomy in appearance.  Yet this is still only weak equivalence.  He may produce the same outputs, but we need to know whether Data's positronic brain works in the same way as our own grey matter.  This comes under evaluation at the algorithmic level, and it is here that Captain Picard's defense gets shaky.  Data may look and act human, but that does not necessarily mean he is human.

There are many behavioral traits exhibited by Data that can be called into question at the algorithmic level.  For example, Data does not experience emotions without the facilitation of a specially designed "emotion chip".  We have no idea whether the principles behind this chip operate in the same fashion as our own emotional responses.  Even today there is little agreement on whether or not human emotion is a modular component of cognition.  Another example is Data's supposed artistic creativity.  He is a prolific painter and a master violinist, but his "creativity" is little more than the synthesis and manipulation of other people's techniques, which he essentially programs into his neural net as new subroutines.  I think this lies at the heart of the matter: is Data's ability to learn anything beyond the mere programming of subroutines and storage of data?  His inability to use contractions and his subtle flaws in social interaction are further ways in which equivalence fails at the algorithmic level.  Once that happens, Data's fate is ultimately sealed -- or is it?

There are still a few pieces of evidence produced by Captain Picard that make a definite conclusion more elusive.  For example, how are we to explain Data's seemingly sentimental attachment to some of his personal belongings, such as the hologram of Tasha Yar, a woman with whom he shared an intimate experience?  What about his unwavering drive to become more human, his efforts ranging from understanding humor to creating offspring?  In this light the judgement becomes far more difficult, unless Commander Riker can prove that such actions can be explained as having special parameters or flags attached to them.

Data may very well be an impressive model of human behavior, but how far is he beyond the functionality of a thinking machine?  His actions may be nothing more than elaborate and highly sophisticated simulations, yet there are intangibles to his behavior that as yet cannot be easily explained.  I suppose such a decision ultimately rests on the judge's instincts.
