Friday, March 30, 2012

Refuting You In Effigy

Brandon Watson has an interesting discussion of the "Straw Man" fallacy, one of the most widely claimed argumentative fouls in modern day Blogistan. He notes that a lot of critiques that get characterized as straw manning really aren't: It's not fallacious to merely simplify someone else's argument, for example. A straw man response must illegitimately oversimplify someone's argument, and it's not at all obvious where the line is between necessary and negligent reduction in a critique.

Without being too dense about it, I'd like to emphasize that it's impossible not to simplify someone else's argument when responding to it. This is in part because no argument (whether an essay, editorial, dissertation, lecture, or blog post) is truly finite or complete, but we must respond as though it is. (This truth is abundantly clear to anyone who has gotten into an argument on the internet.) In constructing any proposition, no matter how formal, we constantly make decisions (consciously or otherwise) about what to make explicit and what to take for granted. The space created by these decisions is practically, if not actually, infinite, leaving abundant room for a thoughtful critic to ask questions, challenge assumptions, and point out ambiguities.

Put another way, all arguments, even the utterly casual, are in some sense formal (as are, to be frank, all uses of language and symbolic communication generally.) Reality is infinitely manifold, and our statements about it must, by nature, take enormous liberties. If every exercise in simplification is a "straw man," then language itself must be found guilty of the charge. Such was my reaction to Brandon's original post about "straw man straw men"--that there is a deeper fallacy in imagining that any critique can be truly faithful.

This offends our vanity. We don't like to be shown that our attempt to create living truth has resulted in, essentially, a scarecrow. And I think that a large percentage of "straw man" cries reduce to exactly this type of umbrage and apprehension at the ease with which our creations can be prodded, dismantled, modeled, and mocked.

But there is also a more subtle variety of straw man that comes from a critic's true failure to understand the content and context of what has been proposed, and I believe this accounts for a majority of "true" cases of the straw man fallacy: simplification born of hauteur. I am put in mind of this variety of straw man by a recent article by Roger Scruton in The New Spectator, which invokes Max Bennett and Peter Hacker's insightful diagnosis of the recent mania for neuroscience as a species of the "homunculus fallacy." (Scruton can be a bit of a reactionary hack, but he hits most of the right notes here.) Denied a "soul" as a plausible seat of human agency, but loath to resort to passive descriptions of human behavior, "neurophilosophers" (actually just new-fangled Behaviorists) like Patricia Churchland and Sam Harris cast the brain in the starring role, endowing it with the capacity for all sorts of intentional activity like processing, reasoning, calculating, and a great deal more. What is swept under the rug in this account is just how a lump of flesh, however complex, subject to the deterministic laws of nature, can be said to be the subject of any transitive verb whatsoever.

Scruton's critique has been ably made by many others, among them Neil Postman, Stephen Jay Gould, Stephen Toulmin, Mary Midgley, and Marilynne Robinson, all of whom make a case for what we might, for expedience, oversimplify as "humanism": the need to talk about humans as in some sense irreducible units, as whole persons (while always of course recognizing that medically, physically, chemically, and pathologically we are quite dissectable).

There is, from the ranks of the sciences, a strain of very intelligent dissenters from the humanist view, whom I characterize by their failure to see the essential argument being made. In his discussion of real and imagined straw men that I link to above, Brandon alludes to the strange inability of some disputants to see that their critiques are already present in the material they are criticizing. We can see this in the comments on the Scruton article: commenter blindboy merely recapitulates the description of the brain as "mapping" the body without engaging the Bennett/Hacker "homunculus fallacy." Anthony Steyning and James Smith, revealingly, mock Scruton for attributing human agency to a "soul," which he does not do. Tom Hartley reiterates the Behaviorist project favored by the neurophilosophy set without responding to the logical problem, raised by so many critics, including Scruton, of how we justify continuing to use the active language of agency and intention while admitting of no entity endowed with those properties.

Five negative comments, five straw men, responding to an entirely different argument from the one Scruton has put forth, not out of necessity, but out of a clear myopia. I could point to similar patterns on threads scattered throughout the internet. It is, perhaps, human nature to see only the possibilities before you that you have been trained to see, which in our culture tend to fall into two entrenched camps: Abrahamic theism and scientific materialism. Though both camps claim, with some justification, the mantle of enlightenment and rational inquiry, far too often a critique of one is taken as an endorsement of the other, a conclusion neither inquisitive nor enlightened. Perhaps the real problem with straw manning is not that we want to make (and cannot avoid making) effigies of ourselves and others, but that presented with those effigies we so desperately want to hang, burn, and lynch them, rather than bearing the much more arduous but ultimately more rewarding burden of enhancing our vision and embracing the impossible task of perfecting our own idolatry.

Monday, March 26, 2012

On Having a Face to Save

[Update: March 27, 2012.  By happy coincidence, Andrew Brown has a new piece up today at The Guardian titled "Iris Murdoch Against the Robots," discussing her excellent 1970 book of philosophical essays The Sovereignty of Good, and making some points very consonant with my own about purportedly "scientific" myth-making that leads us to imagine that human agency is illusory.]

Sometimes people have arguments they don't want to have in order to shore up some principle they wish they didn't have to defend. Actually, most debate can probably be characterized this way, though it doesn't always nestle up against outright absurdity the way the argument I will discuss here is prone to do: namely, the argument that something called "Determinism" means that something else called "Free Will" cannot exist.

This is a rather hot debate right now, largely because advocates of the "incompatibilist" or "hard determinist" view I have just described believe they smell blood in the water and have moved in for the kill. Sam Harris, the smartest man who was ever wrong about everything, has a recent book out on the topic ("Free Will"), and Jerry Coyne, the smartest horse ever to be led to water while steadfastly refusing to drink, has made this topic a regular staple on his blog.

The conversation has lately found its way into the pages of the Chronicle of Higher Education, which recently featured five essays on the debate by Coyne and some of the top names in neuroscience and philosophy of mind.

The philosopher Russell Blackford has, in response, started blogging about these essays at The Philosophers' Magazine. I think Russell is essentially on the right track, but his argument is so strangely timid, given the grave and gross contradiction in the incompatibilist view, that I'm obliged to pick up where he leaves off.

Russell asks us to imagine that it is in our power to save a child who is drowning in a nearby pond. We, for some reason, do not attempt to save the child, who drowns. In such a case, is there any value in relating to the grieving parents the incompatibilist view, that we could not have acted other than we did, because all of the causal factors that preceded this moment were not changeable by us? It is an absurdity. The only answer that any interested party wants to hear from us in such a circumstance is what our reasons were for not saving the child. This is an inescapable aspect of the human condition; in fact we would not recognize as "human" any member of any species with the power of rational reflection who did not offer a reason for its actions. By "reason," here, I don't mean an objective, external description of the physical conditions that led to the present moment, such as we might observe in an MRI, but the internal moral reasoning with which our behavior is consistent.

Russell concludes that, having been given the incompatibilist story that Determinism precludes our having acted other than we did, "the parents are likely to reply that it’s not that I couldn’t have chosen to act otherwise, but that I merely didn’t want to act otherwise."

This is an inarguable conclusion. There may be outliers, for whom we might allow that no other action was conceivable: young children who haven't learned empathy or impulse control, people with unprocessed trauma, people with lesions or tumors that impair rational reflection--but these are the exceptions that prove the rule. They are notable as deviations from the norm, where the presumption of moral reasoning ability does not apply. In normative cases we are all accountable to each other for our actions, which is to say we are expected to give an account of our behavior, and be subject to the judgment of others for it. To refer to the alleged inevitability of our behavior is an irrational category error, failing to address the question that has been asked of us, which is social, not physical. It is as though someone had asked us whether we will drive them to the hospital tomorrow, and we responded with the probability of supervening physical forces. Perhaps there will be an ice storm. Perhaps the hospital will be destroyed by invaders from Mars. But these are not relevant to the question, which is a social and moral one.

Now, here the incompatibilist may double down, seeing a crucial weakness in our argument. What we are trying to establish, insists the incompatibilist, is the scientific truth or falsity of free will, not the social truth of moral accountability. Our thesis is that our choice-making faculty is a mere appearance, a self-flattering illusion, perhaps a spandrel, perhaps an important adaptation, but nonetheless a figment with no actual empirical or ontological foundation.

The correct response, which I would have preferred to see Russell issue, is this: If that's the intellectual barricade you want to die on, Incompatibilists, that can be arranged. Just be warned that what you stand to lose is far greater than archaisms like "the soul" or the Ghost in the Machine. What also must perish, when choice and volition are cleansed away, is reason itself, and all that flows from it, including the "scientific" method, "free" thinking, and philosophical naturalism generally.

John Pieret has been making this point consistently over the last few years at his blog Thoughts In a Haystack, though to my knowledge he has never spelled out the critique in full. If he did, it would go something like this. If we grant that determinism applies to mental processes like thought and reflection--that these, like all events, are the effects of prior causes--and that "hard determinism" precludes any mechanism whereby we might influence the causal chain that gives rise to our thoughts, then thinking is not something we do but something that happens to us. We can witness our thought processes in the theater of our minds, but cannot actively participate in them. In Coyne's own words: "To assert that we can freely choose among alternatives is to claim, then, that we can somehow step outside the physical structure of our brain and change its workings. That is impossible."

Thought, then, is an epiphenomenon, a kind of apparition that exists without any influence on the physical world. It is something that happens, as an after-effect of prior causes, but not something we can actively "do," since this would imply a violation of the laws of causation. The clear implication of this, of course, is that reason, reflection, evaluation and judgment are likewise things that happen without our ability to influence them. "Automatons" (Coyne's word) cannot, on their own, "apply" logic to any given proposition, lacking the ability to choose among competing assertions. This has rather terrible consequences for our ability to do philosophy (with epistemology being a particular problem) or science. It is difficult to see how any conclusions we might call "scientific" or "rational" could be given any higher status than those we might call superstitious or fallacious, since in no case do we have any power to evaluate them as good, bad, sound, preposterous, or even as green ideas sleeping furiously. To do so would require an ability to "choose among alternatives," which Coyne has already declared "impossible."

This of course makes "hard determinism" self-refuting, since it is derived by means of faculties (empiricism, rationality) it later declares to be physical impossibilities. Like most doctrines preceded by the word "hard," we can now dispense with it as misguidedly over-simplistic. What, then, is the way out of the forest? There are still, after all, serious questions to be asked about "free will." Just how "free" is it? What is the source of this freedom? I think a couple of short answers must suffice as a sort of bookmark. The first is that trying to make human agency "compatible" with determinist physics is an over-zealous impulse toward reductionism. Each level of description has its place, and needs its own language to be properly understood. We don't need to call natural selection an illusion because it is not couched in the language of Newtonian physics. In the case of human agency, the price of trying to describe all of reality under the determinist rubric is, as I've shown above, far too high; we are asked to give up our humanity in the process.

The second bookmark comes from Iris Murdoch (one of the 20th century's most under-rated philosophers, IMO.) She rightly asked where the myth of "free will" comes from, and theorized it as a narcissistic and infantile desire, promoted most strongly by Existentialism, to exercise what Sartre called "total freedom." It is a myth that needs revision, not by turning the dial all the way in the other direction, such that humans are akin to thermostats, but by contrasting the existential focus on action and decision with the important role of observation, reflection and evaluation, without which action tends toward empty posturing. This outlook may even help explicate some of the puzzling aspects of neuroscience that fuel arguments like Harris's and Coyne's, like the now famous Libet studies that suggest some of our decisions are already made before we are aware of having made them. Here's Murdoch, two decades before Libet's famous consciousness studies, and two years before Deecke and Kornhuber's studies on "readiness potential":
If we ignore the prior work of attention and notice only the emptiness of the moment of choice we are likely to identify with the outward movement since there is nothing else to identify it with. But if we consider what the work of attention is like, how continuously it goes on, and how imperceptibly it builds up structures of value about us, we shall not be surprised that at crucial moments of choice most of the business of choosing is already over. This does not imply that we are not free, certainly not. But it implies that the exercise of our freedom is a small piecemeal business which goes on all the time and is not a grandiose leaping about unimpeded at important moments. The moral life, on this view, is something that goes on continually, not something that is switched off in between the moral choices. What happens in between such choices is indeed what is crucial. (my emphasis)
This outlook seems to me to do double duty, not only helping us understand the nature of will, volition, or agency, but also how we can go about continuing to enhance it, as we have on the long journey from pre-verbal primates to the present day. Which is to say that the topic of agency is synonymous with the topic of humanism, of humanity itself, and needs just as vigorous a defense.