Saturday, April 14, 2007

Two and a Half Versions of Eliminativism

This post is mostly some taxonomy with an argument (about the definition of eliminativism) built in. First, at a "macro" level, there are three flavors of "materialism" as that metaphysical proposition applies to philosophy of mind. Reductive materialism is the view that if materialism is the correct metaphysical position, then it follows that all descriptions and explanations are ultimately physical. This is a version of the "unity of science" hypothesis: there is some base-level type of explanation (presumably the most fundamental physics), and everything else that we talk about can be translated into this type of explanation. With this intuitive interpretation of the implications of materialism for epistemology at the center, there are two opposite wings (it's probably misleading to try to use the "left wing/right wing" nomenclature here; I can't make any sense out of that way of talking about this, anyway).
On one side, we have nonreductive materialism. This is the view that, although materialism is sound as a metaphysical proposition, it does not follow that all explanation can be smoothly, "intertheoretically" translated into the base physical type of explanation. Thus Fodor coined the phrase "special sciences" in rejecting the unity of science hypothesis. This position is equivalent to property dualism. The property dualist concedes that only physical substance (whatever that is) exists, but argues that psychological explanation appeals to properties that are not analyzable as (base-level) physical properties. Functionalism is taken as an example of nonreductive materialism because descriptions in terms of functional roles ("a doorstop") are not translatable into physical descriptions ("a small brown rubber wedge"); many things can function as doorstops. My own view at this point is that if materialism were true metaphysically speaking, then reductive materialism would follow, but if we concede that there are blocks to reduction (and it looks to me that there are) then we have to face the fact that materialism as we understand it is false. Thus I suspect that "nonreductive materialism" a) may be the right position and b) isn't really a type of materialism at all. (This is a large topic that I will put to the side just now.)
On the opposite side of reductive materialism from nonreductive materialism we find eliminative materialism. Reductive materialism holds that the traditional psychological predicates (thinking now of the intentional predicates: belief, desire, hope, fear, etc.) will be "reduced" to underlying physical states and processes. On this view, as science advances we will come to see what intentional states really are, and that will be some species of physical states. There will come to be a new semantics of (the same old) psychological words. Nonreductive materialism holds that no matter what advances physical science makes, psychology is autonomous (not analyzable in terms of the base-level physical science). Eliminative materialism is the opposite position: on this view, physical science will not reduce, or translate, the traditional psychological vocabulary; rather, it will eliminate it, replacing traditional psychology altogether with some sort of physical explanation. Thus eliminativists refer to intentional psychology as "folk" psychology, and advance the "theory-theory": that intentional psychology is a theory, and an eliminable one.
With that taxonomy of materialism as introduction, I now want to discuss some versions of eliminativism. I say there are "two and a half" versions: I'm never sure whether eliminativism is really a full-blown theory of mind, or more a sort of methodological caution. Paul Churchland popularized the current discussion of eliminativism with a warning from the history of science: some older entities do indeed get reduced (Zeus's thunderbolts to electrical discharge), but some (the heavenly spheres) are not reduced to anything; they are eliminated. So it is at least possible, on this view, that some or all of the traditional psychological vocabulary will undergo elimination, not reduction. This is not a full-blown theory of mind. No systematic "new way of talking" is offered here. Later the Churchlands argued that connectionism represents an eliminativist alternative, but since they (both Churchlands) continue to talk about "mental representation," I don't think they succeed in developing anything systematic through the appeal to connectionism. Connectionism may be significant, but only if it is part of an eclipse of representational theory. The Churchlands give us half a theory, at best.
Another putatively eliminativist argument is that due to Stephen Stich, based on the approach of Jerry Fodor and others. This camp used to be called "computationalist," but lately it has appropriated the name "representational." (As a point of exposition, let me point out that this is very unfortunate, since representational theories encompass far more positions than this.) In any event, the general view is that it is the syntactical structure of mental representations, rather than their semantic content, that plays the causal role in cognition. Perhaps the formal organization of the representations maps onto the formal organization of the brain. In this way the "machine language" of the nervous system might be divined. Stich argues that if this is so (if, that is, we come to see the syntactical structure as the causal property of mental states), then ipso facto we have eliminated the causal power of the propositional attitudes as traditionally understood.
This is a deeply confused position (for a long time I thought that I must not be understanding the view, because it seemed too obviously wrong; but no, it's obviously wrong). On the one hand, there are the predicates we ascribe to brains and other body parts: physical predicates and no doubt formal predicates as well, in the sense of formally organized physical states and processes. On the other hand, there are the predicates we ascribe to persons. Intentional psychology is the activity of explaining the behavior of persons, not of brains. One can stipulate (as Fodor tried to do long ago with "methodological solipsism") that one is interested only in what goes on inside the head, and such an investigation might be fruitful so far as it goes, but this is simply to change the subject from philosophy of mind to something else. Searle (Chinese Room) is right: this whole line gains no purchase at all on the relationship between consciousness and the physical properties of the body.
Fortunately, this is not the only game in town. The last version of eliminativism I want to discuss, what I take to be the promising one, is the eliminativism of Wittgenstein, Ryle, and the behaviorists. Here the suggestion is that the representational theory of mind is misconceived root and branch. Talk of "inner" mental states is figurative, a distortion of human grammar that evolved to deal with the three-dimensional, "external" world (the only world there is). Meanwhile, intentional descriptions refer to publicly observable behavioral tropes. As Wittgenstein says, we can simply look and see whether a dog and a man, say, are afraid of the same thing. There is no "inner" mind, just as language has no "meaning" beyond the intersubjective use of the symbol (this is the only coherent version of "functional-role semantics").
Now we have a spectrum of eliminativisms: at one extreme (occupied, perhaps, by the Churchlands) is the version that suggests that we might jettison the "attitudes" in favor of something else while retaining a representational theory of mind. The other extreme (Wittgenstein) holds that the attitudes are ineliminable, as they are primitive descriptions of persons, but rejects the concept of "mental representation." This approach, unlike the misnamed "representational" theory of Fodor et al., would, if successful, actually resolve the problem of intentionality through elimination, rather than just change the subject. It is ironic to find Patricia Churchland casting aspersions on Wittgenstein's "pronouncements from the deep" when he is the most powerful exponent of the eliminativist approach to philosophy of mind to which the Churchlands aspire.

Monday, April 2, 2007

Animal Mind and the NeoCartesians

I have two projects that I am developing in this blog: one is a position on the mind/body problem, offering a theory of intentionality and a theory of phenomenology. The other is a defense of a realist psychology applied to non-human animals; that is, I think that the semantics of psychological descriptions is the same for intentional descriptions of humans and of, say, dogs. The two projects don't overlap in every respect, but there are important convergences. I have been developing a metaphysical theory of intentional "properties," and it turns out that this argument can be used against the "neoCartesians." Who are they? In the last part of the previous post I mention that it looks like both behaviorists and evolutionary psychologists don't actually have any arguments to the effect that psychological descriptions mean one thing when made of humans but another, perhaps metaphorical, thing when made of non-humans such as dogs. By their own lights, it looks like they can't sustain a difference. But there is a substantial argument, I then went on to say, amongst the cognitivists, referring specifically to Chomsky and his "generative grammar" argument and to Davidson's defense of the autonomy of psychology, drawn from the semantics of the propositional attitudes. These are the "neoCartesians" because, like many 17th-century rationalists, they think that humans are essentially distinct from ordinary (physical, natural) beasts on account of "the faculty of reason." The way the modern cognitivist argument goes is basically that language is possible because of the formal rules of grammar that allow for the production of infinitely many sentences (that's some of the Chomskian part), and that language is the formal architecture that makes thought itself possible: the intentional states (beliefs, desires, etc.) are individuated by their semantic content; a non-linguistic being literally can't be said to be in such states (that's some of the Davidsonian part). My literary audience at the College English Association Chapter meeting the other day was rather alarmingly congenial to this way of seeing things, by the way. (And thanks to Nick Haydock of the University of Puerto Rico for asking for some more discussion of this.)
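(A brief aside for readers who like the point made concrete: the Chomskian claim that finitely many grammatical rules can yield unboundedly many sentences is easy to illustrate. The following toy sketch in Python is purely my own illustration, not anything drawn from Chomsky; the grammar and its rules are invented for the example. Because one rule rewrites S in terms of S itself, the finite rule set generates an unbounded number of distinct sentences.)

```python
import random

# A toy phrase-structure grammar. The rule S -> S "and" S is recursive,
# so these finitely many rules generate an unbounded number of sentences.
GRAMMAR = {
    "S":  [["NP", "VP"], ["S", "and", "S"]],
    "NP": [["the", "dog"], ["the", "man"]],
    "VP": [["barks"], ["fears", "NP"]],
}

def generate(symbol="S", depth=0, max_depth=4):
    """Expand a symbol into a list of words, capping recursion so the demo halts."""
    if symbol not in GRAMMAR:
        return [symbol]  # a terminal word
    # Past the depth cap, use only the first (non-recursive) rule for the symbol.
    rules = GRAMMAR[symbol] if depth < max_depth else GRAMMAR[symbol][:1]
    return [word for part in random.choice(rules)
            for word in generate(part, depth + 1, max_depth)]

for _ in range(3):
    print(" ".join(generate()))
```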
I don't think that the argument is a good one, and I mentioned last post some quasi-empirical reasons for thinking, for example, that thought and consciousness must in fact predate language (if by that we mean spoken human language) by a considerable time. But there is a deeper argument that is metaphysical, and I think that this argument helps to reveal the foundations of the problem a little bit. It looks like the claim for human uniqueness, on this Cartesian-cognitivist version, is based on the allegedly formal organization of language. Mathematical and logical relations are transcendental, universal relations, demonstrably sound, unlike the accidental caprices of nature. The rational mind breaks free of physical determinism, loses its "thinghood," and becomes an intentional agent. Thus Chomsky held that generative grammar enabled humans to form forward-looking plans, thereby gaining rights (becoming Kantian ends), unlike the non-verbal animals, who were merely instinct mechanisms and conditioning machines (in recent years Chomsky has conceded much on this to the consciousness studies people). The contingency of some of the grammatical structure was made much of on the grounds that it showed that language emerged somehow randomly: the early Chomsky didn't want human behavior to come under behavioristic or genetic analysis, and he explicitly constructed arguments to block it.
I am going to give a very compressed version of my metaphysics of intentional mental states, explaining its relevance for the question of animal mind. I think that psychological descriptions ("She's thinking of chocolate fondue," "He's struck by the immensity of the ocean," "They have sore feet") pick out relations between persons and their environments. It is a kind of "wide-content" view, I guess, although I'm not yet sure how comfortable I am with that. One thing that is persuading me is the surely right claim that intentional states are characteristics of whole persons, not of body parts (such as brains). At a minimum I think it's right to say that intentional attributions aren't referring to any part of a person, but to the person as a whole, that is, as a person. But if that's right, then intentional descriptions aren't physical descriptions of persons at all. They're formal descriptions, descriptions that various different physical systems could come under: descriptions of relations. Then here's the argument as it applies to the metaphysics of the mind-body problem: so far as intentional mental states are concerned, two points. First, if it's true that intentional states are properly understood as relational states, then maybe we don't have to include mental representation in our model of mind. And second (this is what got me onto this), if it's true that the block to psychophysical laws in the case of intentionality is due to the formality of intentional states, then the metaphysical problem is not a problem specific to the philosophy of mind. Rather it is a general problem for metaphysics. Geometrical properties (roundness, triangularity) are ubiquitous, and they cannot somehow be "analyzed physically." Roundness isn't a property specific to metal, or stone, or plastic, or what have you. Transcendental formal properties are everywhere. So we must ask ourselves, are we really materialists if we concede that the universe is formally organized? Or does true materialism require a doctrine that things evolve randomly? Either way, this may settle intentionality as a metaphysical problem.
And here's the argument as it applies to the minds of animals: the formal structure of language is an extension of the deeper overall formal structure of the environment. Animal minds were already endowed with some formal structure prior to the (consequent) evolution of human language. It's not that rationality is not a real feature of humans, but rather that nature was already exploiting the potential of formal organization long, long before. Even the apparently arbitrary grammatical structures of language are evidence of the inevitability of formal organization, not its improbability.