Saturday, April 28, 2012

Anti-intellectualism is Easy

Cosmologist Lawrence Krauss has "apologized" for some "off the cuff" statements he made to an interviewer for The Atlantic, disparaging the role of philosophy as it relates to physics and other sciences. Statements such as the following:
Philosophy used to be a field that had content, but then "natural philosophy" became physics, and physics has only continued to make inroads. Every time there's a leap in physics, it encroaches on these areas that philosophers have carefully sequestered away to themselves, and so then you have this natural resentment on the part of philosophers. 
And:
Philosophy is a field that, unfortunately, reminds me of that old Woody Allen joke, "those that can't do, teach, and those that can't teach, teach gym."  
Pressed by the interviewer to defend such sweeping statements, which would seem to indict some pretty big names in analytic philosophy, Krauss dodges, now suggesting that many of those philosophers who did make important contributions weren't really doing philosophy. Wittgenstein, for example:
Formal logic is mathematics, and there are philosophers like Wittgenstein that are very mathematical, but what they're really doing is mathematics. (my emphasis)
(This is false, by the way: math is a subset of formal logic, not the other way around. And while Wittgenstein's analyses are often methodical in a way that might be called mathematical, his primary occupation was with the way we use language. This is in no way "doing mathematics.")

And Russell:
Bertrand Russell was a mathematician. I mean, he was a philosopher too and he was interested in the philosophical foundations of mathematics... (my emphasis)
Some of Krauss's pals, whose careers fall directly in the lineage of analytic philosophy established by Russell and Wittgenstein, including a certain Daniel Dennett, apparently took exception to the implication that they were really just glorified mathematicians, inducing a bit of a walkback by Krauss on the Scientific American website yesterday, where he took pains to stress that the statements I have cited above (not to mention his tendency to repeatedly append the modifier "moronic" to mentions of philosophers) were not intended as a "blanket condemnation of philosophy as a discipline."

His defense of philosophy's value, however, is quickly dispensed with in a couple of short paragraphs, and Krauss devotes the rest of his considerable verbiage to defending the idea that philosophy of physics (his field) is best left to physicists, a defense which consists entirely of the argument that Krauss is only interested in the ideas he is interested in.

That's his right, of course, but the reason this particular cloud of cyber dust has been stirred up is that Krauss has also claimed to have answered, in his recent book A Universe From Nothing, the old and intractable philosophical problem "why is there something rather than nothing?" -- a question which dates back to Leibniz, but is also implied by the classic metaphysical distinctions between "being" and "becoming" dating back to Plato.

The question itself serves as a kind of shibboleth between curious and incurious minds. We cannot, on the one hand, inquire into the reasons for things (as the physical sciences do) and simultaneously cordon off some of those possible reasons as boring or moribund because physics cannot explicate them. Once we have decided that it is interesting to ask why certain things are the way they are, and to use causal reasoning to provide answers, we are stuck with problems of infinite regress, which come with the territory. There is no logically intrinsic reason why some questions admit of answers (Why is the universe expanding? Why can't matter exceed the speed of light? What happened in the first three seconds of cosmic time?) and some do not (Why does the universe have the properties it has, and not other properties? Why does it have any properties at all?)

Sean Carroll, for example, writes in a 2007 blog post that the correct answer to the question "Why is there something rather than nothing?" is "Why not?" Can we possibly imagine him so blithely presenting this answer to the question of why there are galaxies, or why the nuclei of atoms are bound together?

Krauss's incuriosity is worse, though, because he explicitly claims (using Richard Dawkins as a proxy, in an afterword to A Universe From Nothing) that "Even the last remaining trump card of the theologian, 'Why is there something rather than nothing?' shrivels up before your eyes as you read these pages." That is, he claims (or at least endorses Dawkins' claim) to be answering not merely a scientific question of how the first instances of matter could have arisen from quantum fields, but the ontological question of why the universe is inherently structured to allow these fields. Why these fields, and not others? --or no fields at all?

When it is brought to Krauss's attention that he has addressed the former but not the latter, he firmly denies that he ever set out to do anything more than the former. From the Atlantic interview:
I don't really give a damn about what "nothing" means to philosophers; I care about the "nothing" of reality. And if the "nothing" of reality is full of stuff, then I'll go with that.
So Leibniz' question stands, then? Wherefore then Dawkins' talk of trump cards? This is the point physicist and philosopher of science David Albert made in the New York Times (earning him the characterization of "moronic" from Krauss):
Relativistic-quantum-field-theoretical vacuum states — no less than giraffes or refrigerators or solar systems — are particular arrangements of elementary physical stuff. The true relativistic-quantum-field-theoretical equivalent to there not being any physical stuff at all isn’t this or that particular arrangement of the fields — what it is (obviously, and ineluctably, and on the contrary) is the simple absence of the fields! The fact that some arrangements of fields happen to correspond to the existence of particles and some don’t is not a whit more mysterious than the fact that some of the possible arrangements of my fingers happen to correspond to the existence of a fist and some don’t. And the fact that particles can pop in and out of existence, over time, as those fields rearrange themselves, is not a whit more mysterious than the fact that fists can pop in and out of existence, over time, as my fingers rearrange themselves.
Interestingly, after dismissing any explanations that don't invoke empirical fact, Krauss concludes his apologia by offering precisely such an explanation, an angels-on-the-head-of-a-pin rationalization made of pure speculation:
If all possibilities—all universes with all laws—can arise dynamically, and if anything that is not forbidden must arise, then this implies that both nothing and something must both exist, and we will of necessity find ourselves amidst something.  A universe like ours is, in this context, guaranteed to arise dynamically, and we are here because we could not ask the question if our universe weren’t here.   
This is a muddle, logically, but the least we can say of it is that it begs the question, posed by Albert, of why "all universes with all laws can arise dynamically." It just gets worse from here:
If “something” is a physical quantity, to be determined by experiment, then so is ‘nothing’. 
I look forward to these experiments with great interest.

Again, there's no reason at all for Krauss to be interested in anything other than what he is interested in. His field of cosmology is rich with opportunities for him to remain very engaged for several lifetimes without ever venturing into other areas. It is his going out of his way to paint other people's concerns as "moronic," "sterile," "impotent," "useless," and "just noise," while demonstrating serious difficulty in even summarizing those concerns (Wittgenstein is "doing mathematics") and while himself engaging in questionable ontology ("nothing" is a "physical quantity to be determined by experiment"), that makes Krauss, in this context at least, something of a reactionary boor. This is the same kind of defensive, know-nothing, tough-guy posturing we see all too often in the "hard" sciences, and it is completely unnecessary.

Sunday, April 22, 2012

Law of the Jungle (re-post)

[Here's a re-post from 2009. I'm posting it again because it gets at one of the big problems raised by the "hard-determinist" or "incompatibilist" view that free will is an illusion: If our thoughts are not something we actively and consciously engage in, but rather the pre-determined "effects" of prior genetic and environmental causes, then how are reason and morality even possible? I've made a few slight revisions to the original post, and elaborated on a couple of points that were earlier unclear.]

In working toward a definition of human nature and intelligence, Kant drew a distinction between the actual and the potential -- that is, the world as it is, and the world as it might be. Even allowing for some porosity between humanity and other species, it should not be controversial to suggest that such a distinction does not exist in any developed form in non-human intelligences. (I exclude so-called "artificial intelligences," which are really just extensions of human intelligence by other means.) So far as we know, only humans have "oughts." To whatever extent non-human organisms choose their behavior, they do not do so by reasoning among choices, for this would require a symbolic thought process they do not possess. (An exception may be the cetaceans, but we'll leave that aside for now.)

The appearance of humanity's faculty to envision potential alternatives to "what is" marks the origin of (among other things) morality. Without a system to order our possible choices as preferences, we would either be reduced to paralysis, or forced to return to the realm of pre-conscious behavior. This is to say that the world of actuality (as described by biology, for example) is, for humans, transected by a realm of thought not confined to its borders, and often in opposition to it. Indeed, one of the main functions of language is to discourse on things that do not, but might, exist. Highly ordered metaphysical schemes like those of Plato or of Christian theology are specific manifestations of this kind of transcendence, but no system of thought is completely free of it. Even supposedly amoral "anything goes" philosophies, like Nietzsche's or Sartre's, stand in opposition to an "actual" state of affairs they wish to disparage, such as traditional Christianity, or "bourgeois" values.

The question that emerges for an ethical system that purports to be "naturalistic" (that is, explained entirely in biological terms) is this (very old) one: Given our ability to imagine multiple possible worlds (if not, in fact, our inability to refrain from imagining them), what is to be our rationale for choosing among them? Any answer that appeals to biology alone will fail to account for moral reasoning (if not culture altogether), since the distinction between "is" and "might" cannot be found in genes or neurons. It is a property only of minds, which is to say of intelligence experienced subjectively. (We can dispense with the silly objections about dualism here, I hope.)

Before it is proposed that no moral philosophy would ever try to explicate an ethos in strictly biological terms, let's look at a famous article by the Australian philosopher J.L. Mackie (1917-1981), titled "The Law of the Jungle," and published in Philosophy in 1978. This paper was one of the first philosophical responses to Richard Dawkins' The Selfish Gene. Dawkins himself was careful in that book not to imply that biological "selfishness" (that is, the persistence of successful traits throughout time) justified psychological egoism. Mackie's take was far less cautious.

The main body of "The Law of the Jungle" is a fairly innocuous exploration of a type of group selection that Dawkins overlooked in The Selfish Gene. But Mackie closes with a palpably ethical conclusion:
What implications for human morality have such biological facts about selfishness and altruism? One is that the possibility that morality is itself a product of natural selection is not ruled out, but care would be needed in formulating a plausible speculative account of how it might have been favoured. Another is that the notion of an ESS may be a useful one for discussing questions of practical morality. (my emphasis)
ESS, as readers of The Selfish Gene know, stands for "Evolutionarily Stable Strategy," which is a type of biological homeostasis worked out by game theorists. ESS theory is called upon to demonstrate why "reciprocal" altruism exists in populations where we might expect a brute selfishness to prevail: since a pugilistic stance is thought to require a huge outlay of energy (having constantly to defend oneself in fights), the smart strategy would be to lie low and live in harmony until that harmony is disrupted by another member of the population.

Following Dawkins, Mackie cites the example of bird grooming behavior, which ESS theory divides into three types: Sucker, Cheat, and Grudger. The Sucker embodies the extreme of complete altruism, removing ticks from other birds without reservation. The Cheat embodies pure selfishness, allowing other birds to remove its ticks but never going out of its way to return the favor. The Grudger bridges the difference, grooming all other birds with the exception of those who don't reciprocate.

Game theory predicts that the Grudger "strategy" of reciprocal altruism will spread through a population, displacing the less sophisticated strategies of pure selfishness or pure altruism. And so it may. And we might pause to notice, as Mackie does, that there is an echo in this strategy of our own concept of fairness. ("Do unto others...")
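(For the curious, here is a minimal sketch, in Python, of the kind of model that generates this prediction. The payoff numbers -- a grooming benefit of 5, a grooming cost of 1, ten encounters per pairing -- and the starting population mix are my own illustrative assumptions, not values from Mackie or Dawkins; the update rule is a standard discrete replicator dynamic.)

    # Sucker/Cheat/Grudger grooming game. Payoff values below are
    # illustrative assumptions, not numbers from Mackie or Dawkins.
    BENEFIT = 5   # assumed fitness gain from having one's ticks removed
    COST = 1      # assumed fitness cost of grooming another bird
    ROUNDS = 10   # encounters per pairing

    def grooms(strategy, wronged):
        """Does a bird of this strategy groom a partner, given whether
        that partner has refused to groom it before?"""
        if strategy == "sucker":
            return True
        if strategy == "cheat":
            return False
        return not wronged  # grudger: groom anyone who hasn't cheated us

    def pair_payoff(s1, s2):
        """Average per-round payoff to a bird of strategy s1 paired with s2."""
        payoff = 0
        wronged1 = wronged2 = False  # has each bird been refused grooming?
        for _ in range(ROUNDS):
            g1, g2 = grooms(s1, wronged1), grooms(s2, wronged2)
            if g1:
                payoff -= COST     # s1 pays the cost of grooming
            if g2:
                payoff += BENEFIT  # s1 benefits from being groomed
            wronged1, wronged2 = not g2, not g1
        return payoff / ROUNDS

    def evolve(shares, generations=200, baseline=2.0):
        """Discrete replicator dynamics: strategies with above-average
        fitness grow. (baseline is an assumed constant that keeps
        fitness positive; it does not change which strategy wins.)"""
        for _ in range(generations):
            fit = {s: baseline + sum(p * pair_payoff(s, t)
                                     for t, p in shares.items())
                   for s in shares}
            mean = sum(shares[s] * fit[s] for s in shares)
            shares = {s: shares[s] * fit[s] / mean for s in shares}
        return {s: round(p, 3) for s, p in shares.items()}

    print(evolve({"sucker": 0.45, "cheat": 0.45, "grudger": 0.10}))
    # Cheats flourish briefly at the Suckers' expense, then the Grudgers
    # shut them out and spread to (near) fixation.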

This is not a problem as far as it goes. Birds have been employed as symbols of justice and wisdom at least as far back as Athena's owl. We find the flock-as-jury in Farid Ud-Din Attar's allegorical poem The Conference of the Birds from the 12th century, and in Chaucer's Parlement of Foules 200 years later. As valuable as modern ethology is, it is nothing new to demonstrate that we share with birds certain social norms.

But this is a far different thing from asserting, as Mackie does, that because some birds have evolved behaviors which are "healthy in the long run" and which resemble our own notion of fairness, our notions of fairness are thereby justified. Other, far less savory bird behaviors, such as eating the young in a neighboring nest, would appear to be just as "stable" as grooming behavior. Are they, too, to be adopted as preferred human behavior?

After the standard disclaimer that "there is no simple transition from ‘is’ to ‘ought,' no direct argument from what goes on in the natural world and among non-human animals to what human beings ought to do," Mackie goes on to promote exactly that argument. After linking reciprocal altruism to our modern common-sense notions of fairness (although it bears a much closer resemblance to older modes of justice like the vendetta or blood feud--"An eye for an eye"), he associates the "Sucker" strategy with the philosophies of Jesus and Socrates, who advocated, he says, "repayment of evil with good." Then, switching back to ESS theory, he writes:
[A]s Dawkins points out, the presence of suckers endangers the healthy Grudger strategy. It allows cheats to prosper, and could make them multiply to the point where they would wipe out the grudgers, and ultimately bring about the extinction of the whole population. This seems to provide fresh support for Nietzsche’s view of the deplorable influence of moralities of the Christian type.
This alternation between discussions of biological stability and moral programs happens so quickly that it's easy to miss Mackie's move, in this paragraph, of using the "is" of biology to justify ("provide fresh support for") the "ought" of the Nietzschean moral structure. But it's there, in very clear terms: Always retaliate. It works for birds! (Note also that there is no historical evidence that Nietzschean morality is more "evolutionarily stable" than Christian morality.)

(It's important to mention in passing that Mackie is wrong on the science too. According to ESS theory, if the cheats prosper and wipe out all the suckers and grudgers, we are left with a stable population of cheats (until the rise, through variation and selection, of new grudgers, which would re-dominate the population). It would be no less "healthy," in biological terms, than an all-grudger population. We can hypothesize that there might be more sickness through tick infestation, but whether this is sufficient to threaten extinction is not captured in this particular model, which only measures the relative effectiveness of the three strategies within a closed system.

What this suggests is that Mackie has unwittingly added a moral dimension to the grudger strategy among birds where none belongs. At the same time he is employing the example of bird populations to demonstrate why Nietzschean morality is better than Christian morality, he is simultaneously using our own human concepts of fair play to valorize the behavior among birds. Everyone knows an all-cheat population would be "bad," after all.)
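Seeding the sketch above with an all-but-pure population of Cheats bears this out, again under the same assumed payoffs:

    print(evolve({"sucker": 0.0, "cheat": 0.999, "grudger": 0.001}))
    # A rare Grudger averages -0.1 per round against Cheats, while the
    # Cheats average 0 against one another, so the mutant cannot invade:
    # an all-cheat population is every bit as "stable" as an all-grudger one.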

Mary Midgley, in her famous response to Mackie, which kicked off her ongoing feud with Richard Dawkins (a "Grudger," in temperament, if there ever was one), points out the fairly obvious shortcomings of such a linkage between evolutionary stability and ethics. Like the birds in the game theorists' model, we appear already to be congenitally prepared by our genes to retaliate against transgressions against us. We need no special help from the world of ideas --the realm of the possible--to remember to do harm to our enemies when transgressed upon. As Midgley puts it, "The option of jumping on one’s enemies’ faces whenever possible has always been popular." She does not, however, follow Mackie's lead in suggesting that the only other option is a wholesale replacement of the strategy of retaliation with a strategy of saintly restraint. She suggests that the ethos of repaying evil with good arose as an intelligent, reasoned--not dogmatic--response to the limitations of our emotional makeup:
This disregard of the essential emotional context reappears in Mackie’s idea that the undiscriminating ‘sucker’ behaviour is one recommended by Socrates and Christ. Neither sage is recorded to have said ‘be ye equally helpful to everybody’. Both, in the passages he means, were talking about behaviour to one narrow class of people, with whom we are already linked, namely our enemies, and were talking about it because it really does present appalling problems. (my emphasis)
She goes on:
Of course charity and forgiveness have their drawbacks too, especially if they are unintelligently practised. As Mackie rightly says, there are problems about reconciling them with justice, and justice too has its roots in our emotional nature. There are real conflicts here as both Socrates and Christ realized. (my emphasis)
In other words, in the moral realm we are dealing with considerations far beyond the ability of game theory to effectively model. The issue now becomes one of flexibility and versatility, which are dramatically multiplied in the human capacity to represent things symbolically, and of intelligence--the ability to hold multiple variables in one's consciousness while working out a problem. There is simply no way for specific usages of complex reason to be genetically encoded or learned by rote: there are far too many unknown contingencies to account for. Game theory is unlikely to predict the Pythagorean theorem, the Critique of Judgement, the Theory of Evolution by Natural Selection, the Parable of the Talents, or even "The Law of the Jungle." (Mackie's article, not Kipling's maxim, though probably that too.) Fortunately, we need no more than what we already have: a formal, methodical study of reason and symbolic representation. As soon as we stop letting fears of "dualism" drive us into the arms of an untenable "naturalistic" understanding of human thought and behavior (which is really just a zombie version of that old hard-to-kill doctrine of Behaviorism), we can return to an actual, meaningful study of the human condition.


Thursday, April 19, 2012

Not Even A Little Bit!

Caltech physicist Sean Carroll has a post on his blog Cosmic Variance today, called "Jon Stewart Doesn’t Understand How Science Works Even a Little Bit," in which he takes Mr. Stewart to task for "misrepresenting science" by saying (in a 2010 interview with Marilynne Robinson) that a lot of it seems to rely on faith. Here is the offending passage:
I’ve always been fascinated that, the more you delve into science, the more it appears to rely on faith. You know, when they start to speak about the universe they say, well, actually, most of the universe is antimatter. Oh, really, where’s that? Well, you can’t see it. [Robinson: "Yes, exactly."] Well, where is it? It’s there. Can you measure it? We’re working on it. And it’s a very similar argument to someone who would say God created everything. Well where is he? He’s there. And I’m always struck by the similarity of the arguments at their core.
Carroll takes Stewart to have meant "dark matter," not antimatter, and he's upset that Jon doesn't appreciate the great deal of evidence that supports the existence of dark matter, which makes it a direct contrast to a hypothesis like Deism, for example. And indeed the evidence for dark matter is pretty good. There are gravitational fields in the Bullet Cluster, for example, where regular baryonic matter used to reside until it was pushed away by a collision with another cluster. Dark matter, which interacts with normal matter only through gravity, would not be pushed away by the collision, which would explain the gravitational fields that remain at the cluster's old location. Not a slam dunk, but compelling evidence all the same.

But what if Stewart wasn't referring to dark matter--what if he meant dark energy instead? The names are similar, and both dark matter and dark energy have an influence on cosmic gravitation, so it's easy for a layperson to confuse them. Almost 3/4 of the universe is postulated to be "made of" dark energy, and we don't know anything about it, except that it "must" be there to explain the accelerating expansion of the universe. We don't even know what properties dark energy might have. It might be something called "quintessence," about which all we can say is that it acts as a kind of anti-gravity, pushing the universe away from itself.

If we take Jon Stewart's comparison at its most basic, replacing "God" with some kind of generic First Cause, it's not really so far-fetched. The first cause is a logical construct that would explain the existence of the universe (though it raises new logical problems, like what caused the first cause). And at present dark energy is not much more than a logical construct either, the explanandum in this case being the slightly less sexy expansion, rather than creation, of the universe. Unlike someone like Bill O'Reilly, who ignorantly claimed that science can't explain the tides, or magnets, Stewart is essentially correct here. We don't know why the universe is speeding up, and the words "dark energy" are simply the name we give that ignorance.

Faith is often taken to mean something like "dogmatism." In this sense it has little value for either science or religion, but it is present in each. Even in the best science, humans being what they are, it is never possible to apply the null hypothesis to all our tacit assumptions and motivations. We don't dwell on the excesses this has produced, but historically they are many: Behaviorism, for one, which promoted a near-complete lack of affection in child-rearing; the prevalence of pre-frontal lobotomies, for another, in the mid-20th century.

A common response to this line is that what is incidental to science is essential to religion, which is centered upon acts of faith. This is where I think we need to introduce a complementary meaning of the word, connoting something more like dedication in the face of the unknown. Anyone who has read Marilynne Robinson knows she was not on the Daily Show to encourage religious dogmatism.

This kind of faith is essential to science. (It is also not unrelated to curiosity and play, though the stakes can be much higher in scientific contexts.) This is the kind of faith a scientist might have in thinking she might redeem the work of prior scientists, or in believing there is a rational explanation for an unexpected observation. Such efforts proceed without any certainty whatsoever regarding the results. The greatest discoveries--Galileo's discovery of the moons of Jupiter, Darwin's discovery of common descent--must have required immense faith (in this second sense), so upsetting were they to our common understanding of things. Both men needed incredible commitment and dedication to see their work through, and both suffered terribly for it.

It is attractive to believe, perhaps, that religion has room only for the first, dogmatic type of faith, and I cannot and will not argue with its menacing prevalence. But to pretend that all of religion is just unreasoned adherence and appeals to authority, while all of science is merely evidentiary, betrays no understanding of how much we know about the sociology of science, or of the varieties of religious experience. The "conflict thesis" was once compelling, but given all the data we've been able to accumulate in the century and more since White and Draper, there's no reason for any educated person to believe it.

Tuesday, April 17, 2012

Peter Singer, Concern Troll

I've often written here that utilitarianism, the view that "the morally right action is the action that produces the most good" (Stanford Encyclopedia of Philosophy), has a built-in bias toward status quo values and power relations. The fallacy that makes this possible was pointed out by G.E. Moore 100 years ago: when we start from the position that it is obvious what "the good" is, we elide the need for moral reasoning altogether. Utilitarianism cannot distinguish between what is desired and what ought to be desired. In Moore's words:
The fact is that “desirable” does not mean “able to be desired” as “visible” means “able to be seen.” The desirable means simply what ought to be desired or deserves to be desired; just as the detestable means not what can be but what ought to be detested.
Our ability to evaluate the differences between what is and what might be is what enables morality in the first place. Once we have worked out, through contemplation and dialogue, what we desire for ourselves and others, then all that remains are the policy questions. What distinguishes utilitarianism from other forms of moral philosophy is that it starts there, eschewing the difficult conversations about meaning and value altogether. Critics of utilitarianism don't object to "doing the math" once these conversations are over. Of course we all want the consequences of our actions to align with our moral preferences. What we object to is the refusal to allow debate on what our aims -- our "oughts" -- should be, as Moore described.

I could not have expected to come across so stark an example of the problem as in the recent half-hearted defense of colonialism in Africa by the renowned utilitarian philosopher Peter Singer, who remarked as follows in a BloggingHeads conversation (transcript here) with libertarian economist Tyler Cowen, who had asked, "Do you think the end of colonialism was a good thing or a bad thing for Africa?"
That's a really difficult question. I think, clearly, there were lots of bad things about colonialism, but you would have to say that some countries were definitely better administered and that some people's lives, although they may have had some sort of humiliation, perhaps through not being independent, being ruled by people of a different race, in some ways they were better. It's hard, really, to draw that balance sheet. Independence has certainly not been the unmitigated blessing that people thought it would be at the time.
The dialogue continues, with Cowen honing down the question a little:
Cowen: Let's say we have the premise, that with colonialism there would not have been wars between African nations. It's not the case that a British ruled colony would have attacked a French colony, for instance. It's highly unlikely. So given just that millions have perished from wars alone, wouldn't the Utilitarian view, if you're going to take one, suggest that colonialism was essentially a good idea for Africa, it was a shame that we got rid of it, and that the continent would have been better off under foreign rule, European foreign rule.
Singer: I don't think we can be so sure that it would have continued to be peaceful. After all we did have militant resistance movements, we had the Mau Mau in Kenya, for example. We had other militant resistance movements. It may simply have been that the fact of white rule would have provoked not one colony going to war against another but civil war within some of those countries. If what you're asking is would colonialism, had it been accepted by the people there, without military conflict, would that have been better than some of the consequences we've had in some of these countries, you would have to say undoubtedly yes. But we can't go back and wind back the clock and say "how would it have been if" because we don't really know whether that relative stability and peace would have lasted.

Cowen: If we compare the Mau Mau, say, to the wars in Kenya and Rwanda, it seems unlikely that rebellions against colonial governments would have reached that scope, especially if England, France, other countries, would have been willing to spend more money to create some tolerable form of order. My guess is you would have had a fair number of rebellions but it's highly highly unlikely it would compare to the kind of virtual holocausts we've had in Africa as it stands.

Singer: I certainly agree that if you look at what's been happening in the Congo, just as one example, or countries like Sierra Leone or Liberia, yes, you could certainly think that it might have been better for those countries. (my emphasis)
This is the type of argument known now, in our internet age, as "concern trolling," where the essayist or commenter expresses a token empathy for the principle at issue (in this case political liberty and self-determination) but sadly concludes that the time is just not right, or that other needs must take precedence. In most cases it is accompanied by a patronizing stance, tacitly casting all dissent as understandable but naive exuberance. (In political writing, the avatar of concern trolling is David Brooks, whose frequent appearance on left-leaning outlets like NPR and PBS, not to mention his regular gig at the Times, suggests that liberals can't get enough of his tough love--that concern trolling works.)

The first thing we must note is the utter historical illiteracy of the presumption of a simple dichotomy between the stability of white rule and the relative chaos and violence of self-determination. A great number of governments in post-colonial Africa were, for example, propped up by the US as anti-communist bulwarks (Mobutu, Idi Amin)--a continuation of colonialism by other, more indirect means. Very few African governments actually followed the Pan-Africanist template of Ghana, for example. We therefore cannot point to the impact of these regimes on ordinary citizens and use it to indict the notion of self-rule as untenable.

Similarly, a number of political events in post-colonial Africa trace back to causes deep within the colonial era. The Rwandan Civil War was not, as Cowen and Singer both imply, the result of ethnic tensions that had earlier been eased by Belgian colonial rule; rather, it was in large part fomented by the Belgian encouragement of strict racial identity in Rwanda. This is not to say that there was no tension between Hutu and Tutsi before the Belgians arrived, but it was largely a social and economic rift, with little basis in tribal lineage. (In the pre-colonial Kingdom of Rwanda, "Hutu" was essentially a marker for low socioeconomic status.)

All of which makes the question of whether the end of colonialism was a good thing or a bad thing one of false premises. The violence and disorder that we have witnessed in post-colonial Africa was in large part created by the very conditions Cowen and Singer would like to contrast it with. Peaceable colonialism in our time is a fiction that can only be conjured if one neglects to consult history. (To be fair to Singer, he does not rule out that colonialism may actually provoke war by proxy, as in fact it has--though it's interesting that someone so interested in the social conditions in the poorest parts of Africa would not know this for sure.)

All of this would be true whether or not Singer was offering a utilitarian solution. But here we go from bad to worse. In Singer's analysis it apparently goes without saying that social order and a modestly decent standard of living are in themselves more important "goods" than political self-determination. This is, of course, exactly the same argument mounted against abolitionism in the 19th century, and exactly the same argument being mounted today against feminism by the Republican party. In fact it's the standard reactionary response to almost any assertion of rights in our modern era: we simply can't afford the price of granting them.

Singer professes to adhere to "Preference Utilitarianism," which defines the greatest good as that which actual rational beings would choose for themselves, so it is interesting that he makes no allowance here for what actual Africans would prefer; it is simply assumed that independence has not been worth--could not be worth--the cost in lives and misery. This leads me to surmise that "Preference Utilitarianism" is the public relations version of the doctrine, one that tends to fall away when one encounters any "preference" that might conflict with one's own ethical or metaphysical biases.

I don't mean to defame Singer by suggesting he actually supports a return to white rule in Africa; I'm sure he doesn't. The point is that his utilitarianism affords him no tools to defend against even so simple a transgression as human exploitation, provided a certain basic standard of living is maintained. This, of course, is a fact known well by the exploiters themselves, all but the most sociopathic of whom will offer some semblance of compensation for their dominion. But this is just sleight of hand, to distract from the real damage. Where, for example, is any mention in Singer and Cowen's discussion of the underlying reason for colonialism? It is not to dominate other social groups, but to enrich oneself off those groups' natural resources. The control (if not outright destruction) of social relations is the means, not the end. Surely this must play some role in a utilitarian consideration of whether Africans would benefit from continued colonialism. But it rates not a mention.

***

Nor can utilitarianism, it would seem, provide a ready defense against the logic of capital here at home. Look at the way that Cowen gets Singer to sign on for tax breaks for the rich in the US:
Cowen: If we give a greater tax break to charitable donations, and here I mean only true charity, not say a fancy art museum, disproportionately this will benefit wealthy people. Wealthy people have a lot of money. In essence you're cutting their taxes. They're giving more, they may not have a higher level of consumption, but would you be willing to raise your hand and say "I, Peter Singer, think that cutting taxes on the US wealthy is in fact one of the very best things we could do for the world's poor, if we do it the right way"? Yes or no?

Singer: Yes, if the tax break only goes to those of the wealthy who are giving to organizations that are effectively helping the poor, I'll raise my hand to that.
Of course, the truth is that very rich people have enough money to pay a higher rate in taxes and be charitable to the world's poor, both of which are direly needed (though not sufficient without major political and social changes on the part of the entire global community). By assenting to Cowen's either/or proposal, Singer is validating the libertarian idea that rich people already pay too much in taxes, and that the very rich cannot be incentivized to give up more of their fortunes. I doubt Singer actually believes this; it's a shame he couldn't spot the fallacy sooner.

We also might raise the objection that a much bigger problem resides in the way wealth is created in the first place. Cowen does not suggest, nor does Singer reply, that anything is wrong with rich investors' expectation of a certain rate of return. This expectation is one of the reasons why debt forgiveness, which would accomplish far, far more than individual charitable giving could ever hope to, is now taboo. Why make the libertarian assumption that economic decisions should be driven by investors rather than governments, especially when there is so much good evidence to the contrary? Why focus on the relative role of individual charitable giving when public policy would be much more effective? I suggest the main reason is that utilitarianism cannot actually propose even mildly radical social change, being far too reliant on normative ideas of "the good." This has the effect of making it, in essence, the ultimate vassal philosophy.

Tuesday, April 03, 2012

On Learning To Love Crimestop

This post by Santi Tafarella at Prometheus Unbound gets to the heart of the big problem posed by the doctrine of "Hard Determinism," which, as we've been discussing, tries to establish that the fact that we live in a deterministic world (where every effect has a cause) forces us to consider human beings as puppets or automatons, both physically (they can't choose other than they choose) and morally (they aren't accountable for the choices they make).

I've written on this before and don't want to unduly repeat myself, so here's Santi making the essential point. First he notes that Jerry Coyne puts half his money where his mouth is, quoting him thus:
I, for one, have tried to stop fretting about bad “choices” in the past, since I had no alternative [...]
(Of course, if our behaviors are predetermined by prior causes, then no amount of "trying" is likely to prevent Coyne from fretting. Indeed, the whole concept of personal effort cannot really survive under the new paradigm of the automatonic human.)

What we immediately notice is that Coyne conveniently applies the principle here to expiate his own guilt and shame over past "bad" events. Can he, however, take the next logical step and refrain from blaming others? Can he in fact refrain from judgment entirely? This is what the doctrine would suggest as a logical approach (if it in fact permitted us to be logical, which it doesn't). As Santi writes:
On deterministic terms, for example, it makes no more sense to call a president or senator “just” or “unjust” than it does to call a volcano’s eruption “just” or “unjust.” There is simply no place for praise or shame in a deterministic universe.
If every object in the universe is merely obeying the laws of physics and cannot conduct itself other than it does, then the entire concept of judgment is rendered meaningless. (Here we may recall Dawkins' example of Basil Fawlty beating his car for breaking down, as though the car were to blame.)

But as Santi observes, however successful Coyne may have been at releasing himself from blame for past transgressions, he has so far not chosen to extend this courtesy to the other poor automatons who share the universe with him. For example, President of the National Academy of Sciences Ralph Cicerone, about whom Coyne says:
It is shameful that the president of the premier science organisation in America has endorsed a prize for conflating science with religion. (my emphasis)
Or Ken Miller, of whom he writes:
I give Miller plaudits for his continuing fight against creationism, not only in his Dover trial, but in his book and many public presentations of why evolution is a scientific fact.  But he gets no plaudits for deliberately refusing to identify why Americans dislike evolution. (my emphasis)
Speaking of plaudits, now that Coyne has released himself from the burden of blame, we might inquire whether he is likewise eager to relieve himself of praise or credit for his accomplishments. This apologia from 2009 may be instructive here, consisting as it does of a lengthy defense of his argumentative style, including the claim that he is "infinitely more polite" than his disputants, and that he has "never criticized an evolutionist, writer, or scholar in an ad hominem manner." (By "never" he must mean, with Captain Corcoran of the HMS Pinafore, Well, hardly ever!, overlooking such lapses as calling Mary Midgley "dumb" and "superannuated," Elaine Ecklund a "disingenuous" "Templeton-funded automaton," Rob Knop "mushbrained," Michael Zimmerman lacking in "intellectual courage," Stanley Fish "dumb" and "preening," Robert Wright "annoying," and Alain de Botton "cringe-making".... You get the picture.)

Hypocrisy aside, Santi is going for a larger point here, and it's a good one: if you truly believe that human beings are automatons, then there is just one way to alter their undesirable behavior, and that's coercion and violence. An automaton cannot self-regulate, after all. Since, in our culture, the state has a monopoly on violence by rule of law, what of the rest of us when we come into conflict with each other?
There’s a problem here. What if you don’t have access to the violence of the state to settle disputes and force people’s thoughts and behavior in the direction that you want? What do you do then?

Enter Aristotle. He provided the famous (and best) answer: if you’re not going to use violence to settle disputes, your recourse is to rhetoric, the art of persuasion. But this is precisely what an intellectually consistent determinist cannot resort to, for persuasion is premised on the idea that people are self-determined. (my emphasis)

[...]

[P]ersuasion calls attention to actors—those who initiate actions—and it does so for purposes of getting audiences to evaluate, appreciate, or mock them. Persuasion seeks from audiences moral judgments and asks them to bestow or withhold such things as justice, forgiveness, and mercy.

All these imply free will: that things can be other than what they are; that one really can choose the better over the worse and exercise will over a matter.
The vision of the sort of world that might be built by a people who don't believe in free will* is, we begin to see, somewhat orthogonal to the one we say we want, where concepts like justice, reason, democracy, dialogue--all the humanistic virtues--assume pride of place. Indeed we might begin to fear a not-so-subtle creeping totalitarianism. As Santi writes in a related post, to declare that the language of accountability has lost its meaning, and yet to continue to use it, is nothing short of Orwellian. And indeed, as in Nineteen Eighty-Four (or in P.K. Dick's "Minority Report"), if our thoughts are "determined" by prior causes, the way is now open for criminalizing them.

*Let's be clear: it is the disposition toward or against agency that will influence our culture, not the objective "fact" of whether volition is real or illusory.