Wednesday, November 10, 2010

Within the Prospect of Belief

Over at his blog, Russell Blackford has a new post up on the use of the word "scientism," partly spurred by my own usage of this word there and elsewhere. Russell considers the word a slur that has no place in respectful dialogue.

Now, I don't personally find it all that offensive--ideological biases need names, after all, so that we can call attention to them. But perhaps this is one of those words, like Yuppie, or Mugwump, that carries too much emotional baggage to be merely descriptive, so I'm happy to replace it in polite company. "Positivist" is a pretty close fit, but it's so moldy that it carries the connotation of old-fashionedness, which may also be unfair. Perhaps Russell will help me find other alternatives. (I also hope he'll consider that accommodationists don't like being called "faitheists" any more than he likes being called scientistic.)

Impoliteness aside, Russell argues that "scientism" depicts a straw man. Most people so tagged do not actually believe in the omnicompetence of science to answer questions of ontology, ethics, aesthetics, and other branches of traditional philosophy*. In comments, Russell goes on to elaborate that "it's difficult to find working scientists who actually do hold those positions."

Well, not that difficult. Here's Jerry Coyne, agreeing with Russell that the term "scientism" is pejorative, and then planting his feet squarely on turf Russell claims to be unpopulated:
[Russell] uses the example of “how sympathetic one should be to Macbeth?”, but can literature really answer that question for us? Or is it an empirical question based on psychology and sociology, sussing out what effects one’s actions have on others? ... I still maintain that every question about how things really are in the universe is a question that demands a science-based answer.
This is precisely the attitude I intended to depict when, until now, I employed the S-word. I want to observe (a) that Coyne rejects the primacy of metaphysics over empiricism, and (b) that he is wrong. With what term shall I do so?




* Russell seems inclined to define "scientism" more broadly, as the belief that the humanities have nothing to offer, rather than the specific belief that science can settle metaphysical questions. I don't see it presented that way by philosophers of science (I posted a number of examples pulled from the OED), and Russell doesn't cite anything to support this broader definition, so this would seem like a case of a double straw man. Still, I want to extend my request to him to substantiate this definition, and reiterate that I'm open to finding more emotionally neutral language to typify his position.

Tuesday, November 02, 2010

Silent, Hidden, Lawless

Sean Carroll has posted a thoughtful (and goddamned civil) response to my recent post on science and supernaturalism.

I agree with Sean that the essential quality at issue is lawlessness (though I'm not fond of this term, since it implies chaos rather than a "higher" or otherwise unevaluable type of order--imagine a capricious demon changing the length of your yardstick every night when you go to bed).

I disagree, however, with his implication that a scientist who settles for a non-naturalistic explanation is still doing science:
There is a perfectly good question of whether science could ever conclude that the best explanation was one that involved fundamentally lawless behavior. The data in favor of such a conclusion would have to be extremely compelling, for the reasons previously stated, but I don’t see why it couldn’t happen. Science is very pragmatic, as the origin of quantum mechanics vividly demonstrates. Over the course of a couple decades, physicists (as a community) were willing to give up on extremely cherished ideas of the clockwork predictability inherent in the Newtonian universe, and agree on the probabilistic nature of quantum mechanics. That’s what fit the data. Similarly, if the best explanation scientists could come up with for some set of observations necessarily involved a lawless supernatural component, that’s what they would do.
I don't find this very convincing, for several reasons. First, probabilistic science is just as lawful as mechanical science; it just uses different bookkeeping techniques. All Sean's example really points to is a Kuhnian "paradigm shift."
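A concrete illustration of the different bookkeeping (this is standard quantum mechanics, not anything in Sean's post): the Schrödinger equation,

iħ ∂ψ/∂t = Ĥψ,

evolves the wavefunction ψ completely deterministically. Probability enters only when the Born rule converts ψ into odds for measurement outcomes (probability density |ψ|²); the dynamical law itself is every bit as exact as F = ma.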

Second, the "best explanations scientists can come up with" is a moving target. Einstein, who had every right to rest on his laurels after having introduced relativity theory, famously spent the rest of his career unsuccessfully trying to unify the fundamental forces of nature--to find the single and consistent explanation for why different rules govern the interaction of matter and energy at different levels of scale. He could have given up, or answered the question of why there are four forces, "just because." Or, "maybe we'll all wake up tomorrow and there will be three forces, and no one will remember that we ever thought there were four." Or perhaps even "God really likes the number 4." Not being satisfied with these answers is what made Einstein a good scientist.

But most crucially, as I mentioned above with my yardstick reference, studying lawless phenomena (whether chaotic or capricious) with science is logically insensible. It is like trying to translate an infant's murmuring and babbling from Finnish to Greek. I'm happy to agree with Sean that the Supernatural is an empty set, but as a logical category (here defined), it cannot be coextensive with science any more than hot can be with cold.

Who is the Master Who Makes Colorless Green Ideas Sleep Furiously?

The "language of thought" hypothesis (LOT), originally developed by Jerry Fodor in the 1970s, and now championed in the popular press by Steven Pinker in The Language Instinct and The Stuff of Thought, presumes a pre-literate conceptual language, sometimes called "mentalese", upon which our conscious, tangible symbolic language is based. This language of thought is imagined to be innate and universal, and thus a substrate for all human language from Algonquin to Finno-Ugric to Brooklynese.

The LOT hypothesis is an outgrowth of Chomsky's nativist theory of a "universal grammar," which in its turn was a response to the reigning behaviorist paradigm of the day. Behaviorism never rebounded from Chomsky's critique (though it’s found new expression in the speculative protoscience of memetics), and we're all better off for this. But beyond this, nativism has not proved to be very fruitful in our understanding of cognition, serving mostly to fortify the sociobiological argument that our cultural norms reflect hard-wired biological determinants that originally emerged to help us manage the challenges of our paleolithic beginnings.

There are a number of logical problems with the LOT hypothesis, perhaps the most obvious being that words, unlike numbers, are not static and precise through time, as they would need to be if they were subject to translation into and out of mentalese. The number represented by the numeral 2, for example, can be counted on to always be the same. But what is indicated by the modern English words love, doctor, faith, fish, holiday, circus, atom, fairy, wealth, and savage, to name just a few, has varied wildly in the few hundred years we've been using this form of the language. If there were some kind of inborn uber-language determining the meanings expressed in our spoken languages, it's difficult to see how it could permit this kind of semantic drift.

The LOT model is built on the metaphor of computer processing, so it is instructive to ask how well a computer would function if different things were meant by the same terms in successive installs of a piece of software. It seems plausible to many of us living now to imagine that human language rests on a logical foundation just like a computer program: after all, we can perform logical calculations, just as a computer can, and most of our expressions appear to be logically grounded. But the question to ask is not what we can do now, but what humans or humanoids could and did do at the dawn of language, at least 50,000 years ago and perhaps much earlier. The rudiments of formal logic didn't appear on the scene until less than 3,000 years ago, with the Greeks, and weren't developed into a complex system until the 20th century. This would be a strange course of events if formal logic were built into the structure of our cognition from the start, which is what LOT proposes.
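To make the software metaphor concrete, here is a toy sketch in Python (my own illustration--the lexicon "versions" and the sample entry are invented, not anything from Fodor or Pinker). The same term points at different concepts in successive installs, and any caller written against the old sense silently misfires under the new one:

def is_doctor_1600(person):
    # Early modern sense: a "doctor" is chiefly a learned authority.
    return person["training"] == "scholarship"

def is_doctor_2010(person):
    # Modern sense: a "doctor" is chiefly a medical practitioner.
    return person["training"] == "medicine"

erasmus = {"training": "scholarship"}
print(is_doctor_1600(erasmus))  # True under the old sense
print(is_doctor_2010(erasmus))  # False under the new sense

# The numeral 2, by contrast, survives every "install" unchanged:
assert 2 + 2 == 4

A compiler would flag that as a breaking change; natural languages absorb such changes constantly, with no one issuing a patch.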

As best we can tell, for the first tens of thousands of years of our existence, human cognition took the form of what we now derisively call "magical thinking," or myth. This is the environment into which language was born and given to develop. There is little in our history to suggest a strictly rational thought process underlying pre-modern language, and a great deal to suggest something very different.

Ernst Cassirer noted that the primacy of mythological thinking presents a significant problem for the realist view, which treats mythic narratives and meanings as erroneous explanations of objects and phenomena, offered by people who lacked adequate tools and resources to understand those objects and phenomena for what they really were. But this description rests on a misunderstanding of mythical thinking: it presumes that from the very first, humans were concerned with explanations. The trouble is that to formulate the questions these explanations are supposed to answer, one must already have a language, and a fairly well developed one. Cassirer locates the error in common sense, writing in Language and Myth (1946):
It seems only natural to us that the world presents itself to our inspection and observation as a pattern of definite forms, each with its own perfectly determinate spatial limits that give it its specific individuality.
But on reflection it becomes difficult to see how these forms might have been experienced before there was a language to conceive them in. At the very least it is an open question whether or not we can truly be aware of spatial limits that we cannot name, and thus it would seem that ideation and language require each other. But then we are faced with the problem of winnowing down the full stream of experience into discrete, graspable elements. Cassirer continues:
What is it that leads or constrains language to collect [classes of objects] into a single whole and denote them by a word? ... As soon as we cast the problem in this mold, traditional logic offers no support ... for its explanation of the origin of generic concepts presupposes the very thing we are seeking to understand and derive, the formulation of linguistic notions.
Cassirer was writing 40 years before Pinker's first books on language, but he provides an apt preemptive critique of the LOT thesis. We are tempted, in developing a philosophy of language, to work backwards from the world we know; but since philosophy must proceed in language (and a highly discursive one at that), it cannot step outside of language to witness the founding moment it is trying to reconstruct--the analysis presupposes the very thing it seeks to explain.

That's the rational critique. The empirical critique asks: how would this putatively inborn, genetically determined linguistic structure have supported a conceptual schema so radically different from our own, and so different from what its own nature would predict, for so many thousands of years?

Cassirer provides numerous examples of the slow progression of mythological ideation from the earliest and simplest myths to the appearance of logical reasoning, and we could turn to any prominent cultural anthropologist for additional demonstrations. But there is interesting evidence of a more recent provenance as well, in the autobiography of Helen Keller, who very explicitly asserts that she had close to no inner life at all before she was taught sign language:
Before my teacher came to me, I did not know that I am. I lived in a world that was a no-world. I cannot hope to describe adequately that unconscious, yet conscious time of nothingness. I did not know that I knew aught, or that I lived or acted or desired. I had neither will nor intellect. I was carried along to objects and acts by a certain blind impetus... [N]ever in a start of the body or a heart-beat did I feel that I loved or cared for anything. My inner life, then, was a blank without past, present, or future, without hope or anticipation.
We shouldn't read too much into one self-reported anecdote, of course. Keller was a special case: born with sight and hearing, she lost both at nineteen months, so she was exposed to spoken language for a not insignificant period of time. But it is intriguing to note how little cognition she seemed to be capable of before she learned to use language.

In the March 10 New Yorker, John Lanchester writes of a similar, though more everyday, predicament in the most precisely descriptive regions of experience, such as the appreciation of wine or perfume. He begins with a story about his "discovery," after long resistance, of what oenophiles call "graininess" in red wine. Before the experience, he had rejected the term as rhetorical overkill--something that many people with less refined palates (myself included) are quick to presume when encountering such seemingly fantastical language.

But when he finally noticed graininess (after many failed attempts), he conceded it was the perfect word, and not nearly as figurative as he had imagined. Here's the interesting part, which I was not expecting to find in a New Yorker article on olfactory perception:
What's more, in tasting it I realized I'd encountered versions of it--milder, more restrained--before. Now I knew what grainy tannins were. Most taste experiences work like that. A taste or smell can pass you by, unremarked or nearly so, in large part because you don't have a word for it; then you see the thing and grasp the meaning of a word at the same time, and both your palate and your vocabulary have expanded.
This is exactly the opposite of the common sense view, that objects and phenomena precede their names (though to be fair, someone had to be the first person to call a wine "grainy.")

Is it possible that our understanding of the world expands and develops not before we describe it, and not because we describe it, but as we describe it? This seems much more plausible than the Darwinian explanation, in which we are in constant stenographic response to a world of given stimuli. And because the latter has us spinning our wheels, culturally, over alleged biological imperatives from a world long past, the possibility that we participate in our description of the world also seems much more likely to allow some actual evolution of thought--philosophical, scientific, and moral.