Tuesday, March 30, 2010

The End of Pluralism

Sam Harris has doubled down on his claim that science can tell us what to value, largely in reaction to critiques by Sean Carroll (who has responded) and Russell Blackford (whose response for now is nominal). He does seem to have picked up one convert, a certain Richard Dawkins, who is apparently some kind of biologist.

Since Harris, in common with his detractors, values science, this puts him in the unenviable position of having to defend the proposition that science tells us to value science. If there is no other supervening influence, his moral schema must then bear a strong resemblance to such forms of divination as consulting oracles.

The bulk of Harris's elaboration doesn't bother to address this logical prickle. Rather, it goes like this: You had better hope I'm right that there are universal moral values discernible scientifically, because otherwise we will have no way to objectively separate good people from psychopaths.
What if a man like Jeffrey Dahmer says, “The only peaks on the moral landscape that interest me are ones where I get to murder young men and have sex with their corpses.” This possibility—the prospect of radically different moral preferences—seems to be at the heart of many people’s concerns.
[...]
The fact that it might be difficult to decide exactly how to balance individual rights against collective good, or that there might be a thousand equivalent ways of doing this, does not mean that we must hesitate to condemn the morality of the Taliban, or the Nazis, or the Ku Klux Klan—not just personally, but from the point of view of science. As I said at TED, the moment we admit that there is anything to know about human wellbeing, we must admit that certain individuals or cultures might not know it.
What Harris really means is that he wants to give up on the project of having to defend moral stances by delegating the questions to a neutral referee, namely, objective science. This is an appeal to ultimate fairness of a kind we haven't seen in the West since the death of God. Harris' exasperation that we should have to waste any time articulating why Nazis are wrong is understandable. But it is rather chilling to imagine Harris' alternative in practice. What happens to pluralism in a society where certain belief systems can be declared objectively, scientifically wrong?

A glimpse at this prospect is offered by this controversial statement from The End of Faith:
Some propositions are so dangerous that it may even be ethical to kill people for believing them. This may seem an extraordinary claim, but it merely enunciates an ordinary fact about the world in which we live. Certain beliefs place their adherents beyond the reach of every peaceful means of persuasion, while inspiring them to commit acts of extraordinary violence against others. There is, in fact, no talking to some people.
According to this view, empirical fact (an "ordinary fact about the world") mandates killing people because of their beliefs. This is apparently one of the values that science bequeaths. The rule of law and principle of freedom of conscience would instantly wither under such a regime. Will science show us that we were wrong, after all, to value liberty and equal rights?

If there were really "no talking to some people," Harris would not have bothered to write "Letter to a Christian Nation." We don't actually know how susceptible to persuasion many fervent radicals are, or what forms of persuasion might be most effective. Persuasion is one of those things whose success can't be known ahead of time. It's hard work, thankless work. But we should be wary of any philosophical position that argues for abandoning it as moribund, in favor of a certainty about its futility.

Seeking an objective foundation for morality is an ancient pursuit, and one of the things we notice by studying the history of it is how often it provides a cover for one's own ideology. There is actually very little empirical science underlying Harris' presentation of the virulence of faith, and yet this appears to be one of his most deeply held views. In his debates on this question he has demonstrated very little openness to alternative explanations of the etiology of jihadism, even when put forth by those, like anthropologist Scott Atran, whose primary research is on terrorists and suicide bombers. Atran has catalogued numerous cases where Harris' views lean away from scientific data and toward folk wisdom (which is to say, stereotype) about jihadism, to the point of toppling over completely. And Harris is often quite open about the self-evidence of his convictions. After stating his belief, in an Edge "Reality Club" segment, that the response to the Jyllands-Posten cartoons was not contingent on anything but a Koranic injunction against depicting Muhammad1--neither racism, economic disparity, nor blowback--he writes:
If the Koran contained a verse which read, "By all means, depict the Prophet in caricature to the best of your abilities, for this pleaseth Allah", there wouldn't have been a cartoon controversy. Can I prove this counterfactual? Not quite. Do I really need to?
When it's obvious, when we all know what "those people" are like, data and context be damned, then we are adopting the "moral peaks and valleys" of the mob. If that's Harris's idea of "well-being," then perhaps he is right after all to say that certain people might not know what that word means.





1Actually, no such injunction appears in the Koran. The Hadith (sacred to Sunnis, much less so to Shiites) discourages pictures of people or animals generally, but says nothing of depictions of Muhammad in particular. The response of the Imams who toured Denmark was that the cartoons were offensive to Muslims, not that they broke a sacred injunction against depicting the prophet. So much for facts.

Sunday, March 28, 2010

Summum bonum medicinae sanitas

[3/30/10--Welcome Cosmic Variance readers. I'll remain silent on the question of whether I know what I'm talking about any better than Sean, except to nod reverently to Brandon @ Siris, to whom I owe my improved understanding of Hume and the is/ought fallacy.]

The end of physic is our body's health.
Why, Faustus, hast thou not attain'd that end?
Are not thy bills hung up as monuments,
Whereby whole cities have escap'd the plague,
And thousand desperate maladies been cur'd?
Yet art thou still but Faustus, and a man.
Couldst thou make men to live eternally,
Or, being dead, raise them to life again,
Then this profession were to be esteem'd.
Physic, farewell! Where is Justinian?

--Marlowe, The Tragical History of Dr. Faustus

Sean Carroll has come out hard against Sam Harris's recent, mulish TED talk arguing that science can (and should?) tell us what to value. Carroll's response draws on the clear and simple principle that arguing that values are "objectively true" is like trying to square the circle. You can only succeed by annihilating your subject.
In the real world, when we disagree with someone else’s moral judgments, we try to persuade them to see things our way; if that fails, we may (as a society) resort to more dramatic measures like throwing them in jail. But our ability to persuade others that they are being immoral is completely unaffected — and indeed, may even be hindered — by pretending that our version of morality is objectively true. In the end, we will always be appealing to their own moral senses, which may or may not coincide with ours.
Along the way, Carroll invokes Hume, to whom, it is commonly supposed, we owe the notion that "you can't derive an ought from an is." That's only partly true. Hume was getting at something more specific than a distinction between facts and values. He was trying to identify an error in rationalist thinking, in order to discredit that way of thinking, and the fact-value gap was his reductio. In the end, Hume didn't take much interest in deriving oughts from anything, and makes a poor champion for moral reasoning.

What most casual talk of "oughts" leaves out is a conditional premise. We ought to do X, if Y. We ought to eat better, if we want to live longer. We ought not to murder, if we believe that humans are ends in themselves -- or that we want to avoid a blood feud -- or that what goes around comes around -- or that God will be displeased with us for reasons he hasn't specified. Or some combination of these premises, which generally speaking flow from the way we imagine the nature of the world, which is to say, from our metaphysics.

The main danger of defining values as a sub-category of "empirical fact," as Harris does, is that it buries our metaphysical depictions in cement, where we can no longer call them up for reconsideration. It is an ideological move to say that one's values are universal, but it is nearly always a lie.

Harris asks:
Why is it that we don't have ethical obligations toward rocks? Why don't we feel compassion for rocks? It's because we don't think rocks can suffer. And if we're more concerned about our fellow primates than we are about insects, as indeed we are, it's because we think they're exposed to a greater range of potential happiness and suffering.

[...]

There is no version of human morality and human values that I've come across that is not at some point reducible to a concern about conscious experience and its possible changes.
Overt concern with the fate of conscious (and pain-sensitive) agents is the sort of "universal moral truth" that seems obvious to those who have not surveyed the facts, which don't permit any such universality. The connection between "biological complexity" and sentience is a relatively new moral development. To an animist, mountains, trees, and bodies of water are all conscious. To a follower of Vedanta, everything in the world is a performance by the Godhead. Buddhist and Jain monks take care not to tread on the smallest beetle if they can help it. To define sentience in our modern way, as a function of neural complexity, and then argue that such sentience has always been everyone's concern, is incoherent.

Discussions of meta-ethics must take into consideration that, like all societies, we have our own metaphysical presuppositions from which our ethical questions flow. Today we are almost all humanists of a type, and this includes the Abrahamic religions. Secularists and monotheists alike take for granted that nature--matter--is passive and inert, and must be shaped by an intentional or "informational" force, whether God, man, or laws of physics and natural selection. This is not a matter of science; it is a matter of the metaphysical view that permits science (though there may be others).

Another important metaphysical precept that goes unexamined in Harris's analysis is Hobbesian social atomism, which underlies most moral thinking in the Enlightenment tradition. But ecologists have been questioning for at least a century whether it makes any sense to value apes and dolphins, but not ants and plankton, when they each inhabit (with all of creation) the same web of causation. Perhaps it is not so crazy to have an ethical obligation towards rocks (in the form of coral reefs, for example), and perhaps this is an area in which our primitive forebears have bested us in moral reasoning, by recognizing that there is no extraneous, separable part of nature to which we are not obligated.

Harris does have some thoughtful things to say in this lecture. He makes a strong case for moral reasoning (though I would say it conflicts with his main thesis that morality is empirical). And he makes the important point that we do have the right to judge other people's moral practices, such as shame killings. But ultimately his animus for religion drives him to illogical conclusions, such as the notion that what the Taliban lacks is sufficient science about "human flourishing" to make good moral choices1. By this standard, how much less moral must the ancient Greeks have been, who knew so much less science than the medieval Arabs. And how much more immoral the nomadic tribes that preceded them throughout Africa and Asia Minor. How immoral that first human couple must have been, in their African Eden!

What this hyper-utilitarianism (which marries Mill with the logical positivists) primarily accomplishes is this: it obviates the need to look reflectively at evil. When evil can be equated with a simple paucity of learning, like a vitamin deficiency, there is no need to look within our own hearts for its seeds. All the world's darkness can be projected outwards onto people we couldn't have less in common with. We don't want to subjugate women; we don't want to tyrannize innocents; we don't want to convert the whole globe to our ethos, and exterminate those who resist (wait, scratch that last one). Unfortunately it is far more likely that the opposite is true. Not even all the science of John Faustus can make us good, if we won't season it with introspection.





1It is noteworthy that Harris sets up the Dalai Lama as a paragon of virtue, based on his emphasis on compassion. But Tibetan Buddhism is just as old as Sunni Islam, which Harris places on the other pole, and while it may not be as reactionary in its anti-modernism as the Taliban, it does not embrace the metaphysical naturalism that Harris conflates with science.

Sunday, March 21, 2010

Belief in "Belief in Belief"

In Elizabethan England, the word "atheist" was used to indicate any deviation from religious orthodoxy, more a charge of nonconformity than actual metaphysical rejection of God. Most people of this time just did not want to know the details of someone's apostasy, whether it be Deist, Gnostic-Hermetic, or some other freaky-deaky thing. It was the deviancy that mattered, not the content of that deviancy. Reject just one of the terms of orthodox Anglicanism (or Catholicism; it was the Jesuits who first called Ralegh an atheist) and you might as well reject the whole package, a time-honored mode of alarmism we see today when Republicans brand any attempt to regulate markets as "Communist."

Something of that spirit pervades the writings of the neo-atheists, whose attempts to enforce a Christian doctrinal purity I have been loosely chronicling here over the last few months. Both Christopher Hitchens and Richard Dawkins have had strong things to say about who may really call themselves a true Christian, and now Daniel Dennett, as part of his effort to demonstrate the evils of "belief in belief," has chimed in with the results of a pilot study profiling five "non-believing" clerics currently preaching in Protestant churches.

In his report on this study, posted from his pulpit at the Washington Post's "On Faith" panel, Dennett allows from the outset that such a project is hampered by the difficulty of saying what, exactly, a "believer" is supposed to believe. He describes a continuum, one far pole of which would feature a god few people have believed in (in the Judeo-Christian tradition, at least) since the days of Jacob: a fully anthropomorphic God "existing in time and space with eyes and hands..." At the other pole, pure atheism, lacking even an abstract "Ground of Being." Moving along this continuum, from left to right, where can we properly say we have left "belief in god" behind? Two of Dennett's study subjects, both evangelical, are able to draw such lines in the sand (helped no doubt by the implicit hermeneutic of their own fundamentalism), and self-identify (privately, confidentially) as atheists. They each consider themselves hypocrites, and are looking for a way to leave the church. Assuming they are able to do so, these two would not seem very tightly bound by the "trap" Dennett goes on to describe. Their story would seem to be the more straightforward one of people who change their minds about the world, and make changes accordingly.

The bulk of the moral quandary, then, is assumed by the other three subjects (each Mainline Protestant), who consider themselves believers by their own lights, but are concerned that these lights illuminate a much more symbolic realm than that embraced by many of their parishioners. They would like to "come out" and explicate their more subtle understanding of God and scripture, but fear they cannot, because of the effect it would have on their ability to preach. Some are concerned they would lose their jobs outright, others that focusing on what is signified by doctrinal terms would just get in the way. Dennett quotes one pastor named "Wes":
Well, because on the pulpit, on a Sunday morning, you get people in all different stages. And if I laid that out there, then again, people would not hear the point of the sermon.
Dennett means for all of this to support his idea that there is a conspiracy of lies enshrined by what he calls "belief in belief," our culture's tacit respect for faith. Because we are not allowed to examine certain cultural values marked off as "sacred," we end up absurdly perpetuating them with institutions which long ago abandoned any claim to real conviction.

It's a compellingly chilling story, one that might lend itself to dystopic science fiction. But is it true? Does Dennett's preliminary data support it? And to the extent it is true, is it uniquely true of religious values, or is this a dilemma that involves anything a society might value, including freedom, justice, progress, self-sacrifice, or even rationality itself?

In his On Faith column, Dennett writes that his working hypothesis, which emerged out of writing Breaking The Spell, was that ordained clergy often "didn't believe a word of the doctrines of the faith to which they were devoting their lives1." Even within his own small and self-selected sample of "non-believers," this is too strong a statement. At least three-fifths of his subjects believe in Christian doctrine in a meaningful, allegorical way. They don't think they are telling lies in their pulpits; they think they are offering vehicles for truth expressed non-literally (which would seem to be legitimate according to the continuum described earlier). Of the two atheists he describes, one has taken the role of "worship leader," avoiding sermons. The other admits to play-acting, and that he doesn't believe "in what I'm singing in some of these songs." But while Dennett has shown what anyone might have guessed (especially anyone who has read Victorian literature), that some percentage of clerics have lost their faith, or part of it, he falls short of convincingly arguing that this rises to the proportion he originally speculated on.

To the extent that closet atheist clerics do exist, though, as they clearly do, can we attribute the etiology of this phenomenon, as Dennett would, to the success of religion in maintaining a firewall against rational inquiry? I think we have to ask if there are any parallels pointing to a broader phenomenon.

Let's imagine a secondary school teacher in mid to late career, once full of zeal for opening the doors of possibility for classroom after classroom of eager students, now jaded into thinking that it's of no consequence whether or not children grow up learning history, math, economics, geography, and biology. The world keeps getting worse; today's children invariably end up as tomorrow's suck-ups and sell-outs. Our cultural experiment in betterment through public education has been a failure. But perhaps it's not all bad, in that it keeps the kids off the streets and out of the sweatshop. Best to go along with the charade. And besides, after 20 or 30 years, what else is a trained teacher, with no savings and only a meagre pension, to do?

Now imagine another pedagogue, this time a college professor of literature. Throughout grad school this professor sustained herself by fantasizing about future seminars and workshops where she would introduce students to the practice of literary analysis, and, indirectly, to moral philosophy and the examination of their own souls. Instead she has found, even after securing tenure at one of the nation's most renowned colleges of arts and science, that the vast majority of her students want to be lectured to. They want her to tell them what the important themes of the great books are, and are inordinately concerned with what's "on the test." Attempts to exercise greater control over curriculum and grading policy have only led to increased friction not just with her students, but also with her fellow faculty members, and with administrators, forcing her to modulate her teaching style. She still tries to engage her seminars in discussion, but retreats to the lecture format much more than she would ideally prefer, resigning herself to the thin hope that special students will seek out more engagement during office hours.

Are these two instructors also not held in a type of "trap"? We could construct similar analogies featuring scientists, politicians, industrialists, soldiers, environmentalists, filmmakers, poets, building inspectors--indeed it is hard to imagine a vocation that is not prone to disillusionment, in which one could not one day stop believing, for better or worse, in its foundational values. Religion would not seem to suggest any sort of special case on this account. In a separate On Faith column, Richard Dawkins begs to differ, calling the religious case a "singular predicament," and revealing the ideological impediment that keeps him from treating this matter thoughtfully:
Does a doctor lose faith in medicine and have to resign his practice? Does a farmer lose faith in agriculture and have to give up, not just his farm but his wife and the goodwill of his entire community? In all areas except religion, we believe what we believe as a result of evidence. If new evidence comes in, we may change our beliefs.
Farmers losing faith in farming as anything but an instrument of destitution, and yet having to carry on, is such a commonplace in literature and folklore that I can't imagine how Dawkins can bring it up with a straight face. For modern, factual examples we can look to Michael Pollan, who in The Botany of Desire chronicles one potato farmer who will not eat the produce from his own field because of its saturation with pesticides. Another potato grower he profiles uses organic methods, and as a result cannot farm the potato (the Russet Burbank) that the biggest customers--makers of fast food french fries--want to buy. As for doctors, the dwindling number of medical students willing to become general practitioners tells a story we could flesh out in a series of posts each as long-winded as this one.

As someone who cannot imagine losing faith in his principles, Dawkins is perhaps one of the lucky ones. But such certitude impedes clear insight into this aspect of human nature, in which conviction can fail, and it is not easily known whether this failure is an indication that it's time to move on, or to re-up. Is our marriage doomed, or are we just not trying hard enough? Has our child-rearing philosophy gone awry, or do we need to double down? Is our chosen career wrong for us, or are we always too quick to give up when the going gets rough? Is our argumentative style in business meetings an efficient way of getting results or just a needless source of hostility? Of course empirical facts must play a part in answering these kinds of questions. But facts alone cannot make our important moral decisions for us. The defining feature of human agency is the leap of faith. (Always, notably, subject to later revision. If even the conservative Christians of Dennett's study find themselves capable of rejecting their once-sacred beliefs, perhaps the iron grip of the faith-meme has been oversold.) Surveying the taxonomy of known traps, surely the greater risk is that when we make faith in every circumstance an emblem of evil, we make war with ourselves.


1(That this was more than just a hypothesis, but more like an article of faith, is suggested by a comment from December, 2008, on a Guardian post by Andrew Brown stating affirmatively that "The seminaries and churches are full of atheist clergy who live their own version of this paternalism" [where religion is defended as being good for people, even if false].)

Saturday, March 06, 2010

On Who Started It

I haven't attended to the whole "accommodationist" business in a while, but the aspidistra is still flying.

Josh Rosenau [via John Wilkins] makes a point I have long argued here, that a great deal of neo-atheist rhetoric appears absorbed not with putting forth a rectified ideology, but with score-settling.

Rosenau cites the example of PZ Myers, one of the neo-atheists' most gifted shibbolecists, who defends a gratuitous shot at noted "faitheist" Chris Mooney by writing:
Did you read Mooney's last book, Josh? It's a bit late to tell ME that I'm taking cheap shots.
This is an unusual response from someone hoping to be thought of as representing the evolved, wise and mature point of view. But let's allow that it is an understandable one. We've all felt that it would be a lot easier to remain reasonable if only we were not besieged by hostility. Last summer I quoted Russell Blackford making just such an explicit argument:
It would be nice to live in a world where no one is ever called names, no one is ever sarcastic or nasty, nobody ever gets exasperated, all argument is totally civil and reasoned, all points are engaged rather than evaded, etc. But we don't live in that world, and I'm sick of the idea that it's okay for our opponents, such as these faithiests (not to mention the religious themselves) to be very nasty indeed, while a double standard applies requiring us so-called "New Atheists" to bend over backwards to be nice and not make the slightest fun of anyone else.
We all want to be taken seriously, granted respect, and given the benefit of the doubt. But whatever our disappointments on this score, the idea that Russell is "sick of" here is the main principle underlying the impulse to civilization itself. All adult behavior is subject to a double standard. That is the definition of adulthood in the post-vendetta phase of human civilization. It is one of the main tasks of child-rearing--and one of the reasons why it takes so long in humans--to build a faculty for sublimating the urge to retaliate long enough to see one's other discursive options. Such a faculty is a baseline expectation for ordinary adults in our culture, not to mention philosophers and intellectual standard-bearers.

Being a grown-up means not relying on others to set the tone. It means being able to see, even in the heat of anger, that though your opponents may resemble, at the moment, a band of rock-throwing ogres, they are actually human, like you--and that they may well feel, just as keenly as you, that someone else "started it."

Neo-atheists are fond of analogizing themselves to the civil rights movements of the late 20th century, and in the sense that many of them are struggling for the legitimacy and visibility of a non-theist view, they are right to do so. But no civil rights movement has made even a millimeter's progress by appealing to the spirit of retaliation. Dr. King didn't complain that the nastiness of his opponents made it too difficult to do his job, and neither did the leaders of the more successful arms of the women's rights and gay rights movements. They focused on the nobility of their vision, and let the ugliness of others make whatever impression it would. That is how progress happens.

Meanwhile, in the eternal contest over who is the oppressor and who the victim, what gets lost is the actual substance that supposedly underlies these great gifts--the case that would be made if it were to be decided on its merits, rather than on the triage of schoolyard transgressions. Wilkins gets into the question of just how scientific the supposed "incompatibility" of science and religion actually is, a subject I wrote several posts on1 last summer, beginning with one titled "Is Neo-Atheism a Pseudo-Science?" That was in June, and since then I have seen nothing that would add any intellectual credibility to the "incompatibilist" position, which may in part explain why the low road in this debate is so often taken.



1As did John Pieret at Thoughts in a Haystack.