Sunday, October 10, 2010

The first horn of the dilemma in contemporary philosophy of mind

We are put onto the horns of our current dilemma by good arguments, not bad ones. The first line of argument at the heart of contemporary philosophy of mind is exemplified by Alan Turing’s work and his “Turing test,” although perhaps the most important elaboration of the line is that found in the writings of Ludwig Wittgenstein, and the whole approach has its roots in the empiricism of David Hume. Hume argued that we were on firm ground when we could specify experiences that grounded our descriptions of and theories about the world. Hume identified “metaphysics” with the traditional, pre-empiricist philosophy of the “Schoolmen,” as he called them, and he is a typically modern philosopher in that he imagined that he had done away with a great deal of traditional philosophy altogether; at least, that was his aim. He understood that this radical empiricism had radical implications for psychology: he denied that there was anything that could be called the “mind” other than the bundle of perceptions and thoughts introspection revealed, and questioned whether anything that could be called the “self” (other than the perceiving and acting body) could be said to exist, for the same reasons. The “mind” and the “self” were for Hume too close in nature to the “soul,” a putative non-physical entity of the sort that the Enlightenment empiricist wanted to eliminate along with angels and ghosts.

The early 20th century heirs to Hume were the behaviorists. Too often today behaviorism is regarded solely as a failed movement in the history of 20th century psychology, but it is important to appreciate that behaviorism was an attempt, and a very powerful, respectable and still-interesting attempt, to naturalize psychology. It is also important to see that the motivation that led the empiricist-minded philosophers and psychologists of the time to develop behaviorism was essentially metaphysical. The ghostly mental entities, figuratively located “in the head,” that were the nominal referents of psychological descriptions and explanations (“beliefs,” “desires,” “attitudes,” etc.) had to be washed out of the ultimate, natural semantics. Behaviorism proposed to naturalize psychology in a simple way: stick to a strict empiricist methodology. If the methodology of science was adhered to, then ipso facto psychology would be a science. For present purposes “behaviorism” can be defined as the view that psychological predicates (“He believes that Boston is north of here,” “She is hungry”) refer in fact to observable dispositions to behave: behaviorism is a good example of “theory of mind” as semantics of psychological language.

Behaviorism is a full-blown theory of mind (a general semantics for the psychological vocabulary) that eliminates any reference to anything “in” the mind. On one interpretation this is simply a methodological prohibition on psychologists who aspire to being “scientific” from referring to these “inner” (that is, unobservable) mental states and processes. This version is variously called “soft,” “methodological,” “psychological” or (my coinage) “agnostic” behaviorism. A more radical interpretation is that the inner is an illusion, a historic misconception. This more radical version, the leading avatar of which is Wittgenstein, is variously called “hard,” “metaphysical,” “philosophical” or “atheistic” behaviorism. I don’t want to get sidetracked here by the complicated story about behaviorism’s varieties and the varieties of problems and objections behaviorism encountered. Just now what we need is to grasp and appreciate what was powerfully persuasive (and enduring) in the empiricist line of theory of which behaviorism is an example.

Alan Turing, thinking about computation and computing machines, took a behaviorist approach to the word “intelligence.” He famously proposed the “Turing test”: when an intelligent, sane and sober (that is, a somewhat idealized) person, interacting with a machine, can no longer see any difference between the outputs of said machine and the outputs of an intelligent (etc.) person, at that point we will have to concede that the machine is (actually, literally) intelligent as well. Machine intelligence will have been achieved. “Outputs”: the Turing test is usually conceived as a situation where there are a number of terminals, some connected to people, some to machines. Human interlocutors don’t know which are which. Questions are asked, comments are made, and the terminals respond; that is, there is linguistic communication (there is actually an annual event, the Loebner Prize competition, where this situation is set up and programmers enter their machines). Turing himself never saw a personal computer, but he was conceiving of the test in roughly this way.

However, “outputs” could be linguistic, or behavioral (imagine a robot accomplishing physical tasks that were put to it), or perhaps something else (imagine an animated or robotic face that made appropriate expressions in response to people’s actions and statements). Nor does the candidate intelligent thing need to be an artifact, let alone a computer. I am following Turing in sticking to the deliberately vaguer word “machine” (although it’s true that Turing theorized that intelligence, wherever it was found, was some species of computation). Imagine extraterrestrials that have come out of their spaceship (maybe we don’t know if they’re organisms or artifacts), or some previously unknown primate encountered in the Himalayas, say. The point is that in the case of anything at all, the only possible criteria for predicating “intelligence” of the thing are necessarily observation-based. But after all, any kind of predication, psychological or otherwise, is going to depend for its validity on some kind of observation or another (“The aliens are blue,” “The yeti is tall”), and psychological predicates are no different.
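Since the test is purely operational, its whole structure fits in a few lines of code. The following Python sketch is only my illustration (the respondent and judge functions are hypothetical stand-ins, not anything Turing specified); what it makes vivid is that the verdict depends on outputs alone, because the judge sees responses and never the responders.

    import random

    def run_turing_test(human, machine, judge, prompts):
        """Blind indistinguishability test: the judge scores unlabeled
        transcripts; identities are hidden behind shuffled channels."""
        channels = [("A", human), ("B", machine)]
        random.shuffle(channels)  # the judge cannot know which is which
        transcripts = {label: [(p, respond(p)) for p in prompts]
                       for label, respond in channels}
        guess = judge(transcripts)  # the channel the judge thinks is human
        actual = next(label for label, r in channels if r is human)
        return guess == actual

    # Hypothetical respondents and judge: only outputs enter the test.
    human = lambda prompt: "Let me think about " + prompt
    machine = lambda prompt: "Let me think about " + prompt
    judge = lambda transcripts: random.choice(sorted(transcripts))

    print(run_turing_test(human, machine, judge, ["What is justice?"]))

If the judge’s guesses succeed only at chance over many trials, the outputs are operationally indistinguishable, and on Turing’s criterion that is all that an attribution of intelligence could ever rest on.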

Wittgenstein gives perhaps the most persuasive version of this argument in what is usually called his “Private Language Argument.” Wittgenstein holds that language is necessarily intersubjective. (In fact he thinks that it is not possible for a person to impose rules on themselves, so ultimately he thinks that a private language is impossible, but we don’t need to excavate all of the subtleties of the Private Language Argument to see the present point about the criterion of meaningfulness, which is fairly standard empiricist stuff). If I say to you, “Go out to the car and get the blue bag,” this imperative works because you and I have a shared sense of the word “blue.” Without this shared, public sense communication is impossible, as when people are speaking a language that one can’t understand. Psychological words, just like any other kind of words, will have to function in this intersubjective way: there will have to be some intersubjective standards or other for determining if the words are being used correctly (the two of us have to be following some shared set of rules of use). Wittgenstein emphasizes the point that, to the extent that psychological predicates are meaningful at all, they cannot be referring to anything “inner,” known only to the subject of predication. And for all of the problems and failures of the original behaviorist movement, it is hard to see anything wrong with this central point.

The term of art for any theory of mind that says that psychological words necessarily have to conform to publicly, intersubjectively established standards and procedures of use in order to make sense is operationalist. Behaviorism is a kind of operationalist theory, and so is functionalism, to which I now turn, so I will use the word “operationalist” when I want to refer to these kinds of theories of mind in general. Operationalist theories appear to handle some critical problems in the philosophy of mind, and constitute the first horn of our dilemma.

Functionalism can be defined as the view that psychological predicates refer to anything that plays the appropriate causal role. That’s a bit gnomic so I will unpack it with some history. Remember that according to Turing there is no difference between a human and a machine qua intelligent being once the machine’s intelligent performance is indistinguishable from the human’s. Acting intelligent, on an operationalist view, is just being intelligent, just as sounding like music is just being music. “Being intelligent” breaks down into many (perhaps indefinitely many) constituent abilities. For an easy example take learning and memory. Part of being intelligent is being able to learn that there are people in the house, say, and to remember that there are people in the house. Both an intelligent human and an intelligent machine will be able to do this. But the human will do it using human sensory organs and a human nervous system, while the machine will have physically different, but functionally equivalent, hardware.

This is the problem of the multiple realizability of the mental. It is one of the deepest metaphysical issues in the philosophy of mind. Around the middle of the 20th century philosophers of mind concluded that a literal reductive materialism, for example the identification of a specific memory with some specific physical state in a human brain, or of remembering itself with some specific physical process in human brains, committed a fallacy often referred to in the literature as “chauvinism.” These philosophers weren’t the first to see this: Plato and Aristotle, for example, not only saw this problem but developed some of the best philosophical analyses of the issue that we have to this day. I want to stress that once we accept any kind of operationalist theory, the problem of multiple realizability is undeniable. Humans, dolphins (among other animals), hypothetical intelligent artifacts and probably-existing intelligent extraterrestrials will all take common psychological predicates (“X believes that there are fish in the barrel,” say, or “X can add and subtract”). In fact the extension of the set of beings who will take psychological predicates is indefinitely large and does not appear to be fixed by any physical laws.

Functionalism, like behaviorism, is motivated by essentially metaphysical concerns, in the case of functionalism by the problem of the multiple realizability of intelligence. Functionalism abstracts away from hardware and develops a purer, more formal psychology: any intelligent being, whatever they may be made of, whatever makes them tick, will have (by definition) the ability to learn, remember, recognize patterns, deduce, induce, add, subtract and so forth. Although the more enthusiastic advertisements for functionalism like to point out (rightly enough, I suppose) that functionalism, in its crystalline abstraction, is even compatible with metaphysical dualism, functionalism is best understood as a kind of non-reductive materialism. That is, while the general type “intelligent beings” cannot be identified with any general type of physical things, each token intelligent being will be some physical thing or another.

This extends to specific mental states and processes as well, of course: the human, the dolphin, the Martian and the android all believe that the fish are in the barrel, they all desire to get to the fish, and they all understand that it follows that they need to get to the barrel. Each one accomplishes this cognition with its physical body somehow, but they all have different physical bodies. There is token-to-token identity (that’s the “materialist” part), but there is no type-to-type identity (that’s the “non-reductive” part). It is not coincidental that functionalism has been the most influential theory of mind in the late 20th century, the age of computer science. The designer (the psychologist) sends the specifications down to the engineers (the computer scientist and the roboticist): we need an artifact with the capacity for learning, memory, pattern recognition and so on. The engineers are free to use any materials, devices and technology at their disposal to devise such an artifact.
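The designer/engineer picture translates directly into programming terms, which is no accident in the age of computer science. Here is a toy sketch of my own (the class names and methods are hypothetical illustrations, not anything from the functionalist literature): the psychological specification is an abstract interface, and each realizer satisfies it with whatever “hardware” it happens to have.

    from abc import ABC, abstractmethod

    class Cognizer(ABC):
        """The psychologist's specification: pure functional roles,
        silent about implementation."""
        @abstractmethod
        def learn(self, fact: str) -> None: ...
        @abstractmethod
        def recall(self, fact: str) -> bool: ...

    class Human(Cognizer):
        def __init__(self):
            self._traces = set()          # one realization...
        def learn(self, fact): self._traces.add(fact)
        def recall(self, fact): return fact in self._traces

    class Android(Cognizer):
        def __init__(self):
            self._registers = []          # ...physically quite different
        def learn(self, fact): self._registers.append(fact)
        def recall(self, fact): return fact in self._registers

    def remembers(x: Cognizer, fact: str) -> bool:
        # The predicate applies to any token that plays the role;
        # tying it to one kind of hardware would be "chauvinism."
        return x.recall(fact)

    for token in (Human(), Android()):
        token.learn("there are people in the house")
        assert remembers(token, "there are people in the house")

Each token in the loop is some concrete object or other (the materialist part), but nothing in the Cognizer type fixes what a realizer must be made of (the non-reductive part).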

This realization that functional descriptions do not analyze down to physical descriptions (a realization at the center of Aristotle’s writings) is a great advance in philosophy of mind. It changes the whole discussion of the metaphysics of intelligence and rationality in a decisive way. In Chapter Two I will argue that operationalist theories in general can indeed provide an intuitively satisfying naturalistic semantics for predications of cognition, intelligence and thinking. To close this introductory discussion of the first horn of the dilemma I will quickly sketch the way operationalist theories can also be deployed to address another core metaphysical problem, the problem of mental representation and mental content. Then I will be able to define one of the most important terms in this book and one of the most difficult terms in philosophy of mind: “intentionality,” the subject of Chapter Two.

“Representational” theories of mind hold that it is literally true that cognitive states and processes include representations. To some this may seem self-evident: isn’t remembering one’s mother, for example, a matter of inspecting an image of her “in the mind’s eye”? Isn’t imagining a tiger similarly a matter of composing one’s own, private image of a tiger? There are reasons for thinking that mental representations must be formal, like linguistic representations, rather than isomorphic, like pictorial representations: How many stripes does your imaginary tiger have? Formal representations, like novels, have the flexibility to include only relevant information (“The Russian guerrillas rode down on the French encampment at dawn”), while isomorphic representations, like movies, must include a great deal of information that is irrelevant (How many Russians, through what kind of trees, on horses of what color?). While there are those who argue for isomorphic representation, most representational theorists believe that mental representations must be formal rule-governed sets of symbols, like sentences of language. The appeal of such a model for those who want to approach cognition as a kind of computation is obvious.
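The stripe question can be made concrete with a small sketch (my own illustration, assuming nothing about how any actual theorist models representation): a formal representation is a set of discrete assertions that can simply stay silent about stripe counts, while an isomorphic representation, like a bitmap, is forced to commit.

    import random

    # Formal: only the asserted, relevant information is present.
    tiger_formal = {"kind": "tiger", "striped": True}
    print(tiger_formal.get("stripe_count"))  # None: the question has no answer

    # Isomorphic: a picture must take a stand on every depicted detail.
    WIDTH, HEIGHT = 64, 64
    tiger_bitmap = [["black" if random.random() < 0.3 else "orange"
                     for _ in range(WIDTH)]
                    for _ in range(HEIGHT)]
    black_pixels = sum(row.count("black") for row in tiger_bitmap)
    print(black_pixels)  # some determinate number, like it or not

The imagined tiger behaves like the first structure, not the second: asked how many stripes it has, there is simply no answer to retrieve, which is just what the argument for formal representation predicts.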

Some of these issues between species of representational theory will be developed in Chapter Two, but for introductory purposes four more quick points will suffice: First, why mental representation/content poses a metaphysical problem; second, how we can define the often ill-defined word “intentionality”; third, which psychological words are taken by representational theorists to advert to mental content; and finally, how operationalist theories might be successful in addressing the metaphysical problem of representation.

The metaphysical problem is that symbols per se seem to have a “property,” the property of meaning, which does not appear to be analyzable as a physical property. This issue is addressed in philosophy of language, but language and other symbol-systems are conventional (albeit the products of long evolutionary processes); the location of the ur-problem is in philosophy of mind. Consider the chair in which you sit: it does not mean anything. Of course you can assign some arbitrary significance to it if you wish, or infer things from its nature, disposition and so forth (“All of the world is text”), but that doesn’t affect the point: physical objects in and of themselves don’t mean anything or refer to other things the way symbols do. Now consider your own, physical body: it doesn’t mean anything any more than any other physical object does. Nor do its parts: your hand or, more to the point, your brain, or any parts of or processes occurring in your brain. Your brain is just neural tissue humming and buzzing and doing its electrochemical thing, and the only properties included in our descriptions and explanations of its workings are physical properties. But when we predicate of a person mental states such as “He believes that Paris is the capital of France,” or “She hopes that Margaret is at the party tonight,” these mental states appear to have the property of referring to, of being about, something else: France, or Margaret or what have you. It looks, that is, like the mental state has a property that the physical state utterly lacks.

I can now offer a definition of “intentionality.” In this book, intentionality refers to two deeply intertwined but, I will argue, separable metaphysical problems: 1) the problem of the non-physical property of meaning that is implicit in any representational theory of mind (I will call this “the intentional property” or sometimes “the semantic property”), and 2) the problem of rationality, that is, the apparent lack of any physical parameters that could fix the extension of the set of beings that take predicates of rationality (or intelligence). The intentional vocabulary consists of words like “belief,” “desire,” “hope,” “fear,” “expectation,” “suspicion,” the word “intention” in its ordinary use, etc. Psychological predication using these words is often called “intentional psychology” or “belief/desire psychology” or sometimes (usually pejoratively) “folk psychology.” More precisely, the intentional vocabulary comprises all and only those words that appear to entail mental representation, taking the complements often referred to in the literature as “that-clauses,” as in {A belief that “Paris is the capital of France”}, or {A hope that “Margaret will be at the party tonight”}.

On a widespread representationalist view these are propositional attitudes, in the respective examples the belief that the proposition “Paris is the capital of France” is true and the hope that the proposition “Margaret will be at the party tonight” is true. It is commonly suggested that, since these intentional states are individuated by the content of the propositions towards which they are attitudes, propositions must be represented somehow in the mind. Such a view commits one to the existence of the non-physical “property” of meaning. This is not (or at least not entirely!) an abstruse argument amongst philosophers: any model of the nervous system as an information-processing device makes this commitment, and the most cursory perusal of standard neuroanatomy textbooks is enough to see that they are saturated with this kind of language.
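The propositional-attitude analysis has an obvious data-structure rendering; the following toy is mine, for illustration only, but it shows what it means for states to be individuated by content.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Attitude:
        kind: str          # "belief", "hope", "fear", ...
        proposition: str   # the content that does the individuating

    belief = Attitude("belief", "Paris is the capital of France")

    # Same proposition, different attitude: a different mental state.
    assert belief != Attitude("hope", "Paris is the capital of France")
    # Same attitude, different proposition: a different mental state again.
    assert belief != Attitude("belief", "Paris is the capital of Italy")

Notice what the structure quietly presupposes: the proposition field carries meaning, and meaning is precisely the “property” whose physical credentials are in question.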

On my view naturalizing psychology requires that putatively non-physical “properties” be washed out of the final analysis in favor of solely physical properties (the only kind there are). That is, I think that representational theories of mind are false. To use the term of art in theory of mind, I am an eliminativist about mental representation and content. Mental representation will be the main topic of the first part of Chapter Two, which in many ways is the heart of the book. To conclude this introductory section I will briefly sketch how operationalist theories of mind might open the way toward an acceptably naturalistic semantics of the intentional vocabulary.

Behaviorism is also a kind of eliminativist theory: behaviorism eliminates (from the semantic analysis of the psychological vocabulary) anything unobservable at all, including private “inner” mental states and processes. Functionalism, behaviorism’s more sophisticated progeny, acknowledges that states and processes “in the head” (that phrase may be taken either literally or figuratively here) play causal roles in the production of behavior (“The thought of X reminded him of Y and he started to worry that Z…”), but still manages to rid the analysis of psychological predication of reference to mental states (to intentional states, in the present case). It does so by describing cognition functionally rather than physically. Take any sentence that includes an intentional phrase, say: “At the sight of his mother’s photo he remembered the crullers she used to bake, and this memory motivated him to go to the grocery and buy sugar, butter and unbleached flour.” The representationalist is, it would seem, committed to the view that a representation of the crullers is playing a causal role here. But a functional description of the cognitive process can substitute a generic functional-role marker thus: “At the sight of his mother’s photo he X’d, and this X motivated him…etc.” Now “X” can stand for anything that plays the appropriate functional role, and obviously this no longer commits us to the existence of representations or of anything else with non-physical properties.
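The functionalist’s substitution can be caricatured in code (again a toy of my own, assuming nothing about real cognitive architecture): the intermediate state is specified entirely by what causes it and what it causes, and nothing in the program requires that it depict crullers or anything else.

    def seeing_the_photo(stimulus):
        # Whatever state the stimulus happens to produce; its intrinsic
        # nature is left open -- it need not represent anything at all.
        return object()   # an arbitrary, contentless token

    def shopping_behavior(x):
        # The role condition: any x arriving from seeing_the_photo that
        # triggers this behavior thereby occupies the role "X".
        return ["sugar", "butter", "unbleached flour"]

    x = seeing_the_photo("mother's photo")   # "At the sight of the photo he X'd"
    print(shopping_behavior(x))              # "...and this X motivated him..."

The variable x does real causal work in the little program while being, in itself, nothing but a placeholder, which is all the functional description requires.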

As I said, the two problems of intentionality (the problem of rationality and the problem of mental content) are indeed separable. In Chapter Two I will first develop a naturalistic semantics for intentional predication, one that is eliminativist about mental content. Then I will offer a second argument about the problem of rationality that relocates the metaphysical problem outside of philosophy of mind. Both of these arguments acknowledge the validity of the operationalist maxim exemplified by the Turing test: outside of some formal, intersubjective standards for identifying intelligence through public observation there can be no justifiable reasons for predicating intelligence of a being or for refusing to do so.
