The Lebedev Syndrome

November 18, 2015
“Leucocephalus” Phil Hansten

In Dostoyevsky’s The Idiot, one of the least appealing characters is a fellow named Lebedev, a man who could pontificate on any topic. I was reminded of Lebedev following a recent discussion I had with a friend on the topic of climate change. My friend pointed out that, although he didn’t claim to understand the science of climatology, he did think Michael Crichton’s novels expressing skepticism about human-caused global warming made sense.

Now, to say Michael Crichton had remarkable talent and intelligence is a bit of an understatement. He graduated from Harvard summa cum laude, sold 200 million books, had colossal successes in television (ER) and film (Jurassic Park), and he won an Emmy, a Peabody, and an Academy Award. And this is only a partial list of his awards and accomplishments.

Unfortunately, however, therein lies the problem. People with that kind of success tend to have great difficulty in recognizing their limitations, and often have Lebedevian proclivities. Nobel Prize winners are notorious for this human foible. William Shockley (transistor) and James Watson (DNA), for example, both claimed there was strong evidence that some races were inherently less intelligent, despite overwhelming evidence to the contrary. Linus Pauling thought vitamin C could prevent many human ills. Other Nobel laureates have similarly ventured far afield to weigh in on topics about which they know very little.

Crichton apparently suffered from this same delusion… assuming that his astonishing success somehow allowed him to understand an exceedingly complex topic that was outside of his expertise. He was trained as a physician, not a climate scientist, and his skeptical views on climate change were thoroughly debunked by qualified climatologists. Crichton weighing in on the science of climate change makes about as much sense as me weighing in on quantum mechanics; I simply do not have the requisite training or understanding.

It would appear that people who pontificate on issues outside their field suffer from a humility deficiency. They would no doubt have profited from reading the sixteenth-century French thinker Michel de Montaigne. Notoriously humble despite his obvious wisdom, Montaigne talked about how he eschewed topics about which he knew little: “…sounding the ford from a good distance; and then, finding it too deep for my height, I stick to the bank.” Would that Shockley, Watson, Pauling, and Crichton had the wisdom to stick to the bank as well.

Crichton was also vindictive; when a columnist criticized him on the climate change issue, Crichton put the columnist in a new novel as a pervert with a small penis who raped a 2-year-old boy. Not cool. It takes a pretty small person to do something like that rather than to debate the person who disagreed with him in an open forum. (I assume my criticism will not result in a similar fate for me since Crichton is no longer with us.)

Crichton had every right to question the findings of climatologists, of course. The problem is that—although he did not have a deep and nuanced understanding of climatology—his fame gave him a voice in the world that was orders of magnitude louder than that of those who are truly qualified: climatologists. And that is problematic for one simple reason… probability. It is overwhelmingly more probable that the climatologists are correct than that Crichton was correct.

Look at any scientific discipline during the past century; it is not unusual for people outside the discipline to snipe at the scientists within the discipline. On exceedingly rare occasions, the outsiders turn out to be more right than the scientists. But the question for the climate change issue is whether we are willing to bet the farm on the very slim chance that the climate change deniers are right. Basing public policy on the pronouncements of climate change deniers is like society putting all its money on a single number in roulette; we might win, but it is much more likely that we will lose big.

Mark Twain once said, “Supposing is good, but finding out is better.” Michael Crichton “supposed” that climate change does not represent an existential threat to humanity. But he did not “find out” if his position was scientifically valid. It wasn’t.

Blaise Pascal’s observation is again apropos: “So let us work on thinking well; that is the principle of morality.” The list of “not thinking well” on climate change is long: Michael Crichton, Fox News, most of the Republican candidates for president, fossil fuel corporations, a billionaire brother team (who shall remain nameless, lest I have to buy one of those mirrors to check under my car every morning), and countless “regular folks” who believe the propaganda spewed by the aforementioned voices. According to Pascal’s calculus these people are not thinking well and are taking an immoral position on a vital public policy question. In my opinion, Pascal is absolutely right.

Lethal Absurdity

June 26, 2015
“Leucocephalus” Phil Hansten

“The most costly of all follies is to believe passionately in the palpably not true. It is the chief occupation of mankind.”

H. L. Mencken

We seem to have an epidemic of absurd thinking. Discussions based on empirical evidence and rational arguments still occur, but they are drowned out by the disputes in which one side has adopted an absurd position—that is, an intransigent stand on an issue in the face of overwhelming evidence to the contrary.

It is absurd, for example, to avoid giving life-saving vaccines to your children. It is also absurd to defend a health care system with per capita costs that are roughly twice that of any other country, yet give results that are inferior to most other developed countries.

It is absurd to claim that unlimited amounts of political donations will not debauch our elections. It is absurd to claim that giving the super-wealthy tax breaks will result in trickle-down to the middle class.

It is absurd to promote gun policies that allow purchase of assault rifles, guns in bars (guns and alcohol… what could possibly go wrong?), and high-capacity magazines. It is absurd to promote a death penalty that does not act as a deterrent, regularly kills innocent people, and costs substantially more than life in prison without parole.

And probably the most chilling absurdity of all is denying the compelling evidence that climate change is largely caused by human activity, and that it represents an existential threat to every person on the planet… including, ironically, the billionaires who are desperately trying to obfuscate the scientific evidence.

We thus have a cadre of state and national politicians who have allowed their self-interest and willful ignorance to distort or deny the empirical evidence on a wide range of issues. They constitute a confederacy of dunces and knaves in a theater of the absurd who are fighting against rational and evidence-based solutions to serious problems.

In the case of climate change they are sabotaging energy policies that are needed to reduce the risk of an unfathomable catastrophe to the human race, one in which the worst-case (but plausible) scenarios suggest that billions of people may perish. Blaise Pascal aptly called humankind the “mindless worm of the earth.” Ironically, by the time we are done destroying the earth, worms may be one of the few life forms left.

What all of these absurdities have in common is that they are on the wrong side of empirical evidence and rational thought. Unfortunately, absurd positions often have the backing of powerful interests or—as with the vaccine avoiders and supporters of capital punishment—they emanate from the pervasive intellectual indolence of the American public.

Mere opinions are not inherently misguided, of course. It may be my opinion that chocolate ice cream tastes better than strawberry, and even some moral opinions do not necessarily have an objective and rational basis. I can be for or against gay marriage, for example, without being asked to present any facts about the matter.

But the central question is seldom considered: is absurd thinking immoral? Sometimes not. I think we can give a pass to the person who put rectangular (not square) pants on SpongeBob SquarePants or who painted the trucks of the Yellow Truck Company orange (not yellow). I would argue, however, that absurd thinking can indeed be immoral for those in a position to influence public policy. Most of the absurdities discussed above result in a net increase in the deaths of innocent human beings. People who promote public policy based on these absurd positions are no doubt sincere, and consider themselves moral creatures. But I think Pascal was right when he said in his Pensées, “So let us work on thinking well; that is the principle of morality.” Irrational and counterfactual thinking that leads to the deaths of our fellow humans is not “thinking well,” and it is not moral, no matter how much spin its promoters apply.

One could, therefore, divide public policy debates into three categories: 1) moral questions that do not require much consideration of evidence (e.g., gay marriage, abortion), 2) policy questions that have at least some legitimate arguments and evidence on opposing sides (e.g., education, economic policy), and 3) issues where the empirical evidence has clearly reached the threshold for action, but absurd positions prevail due to predatory self-interest (e.g., climate change) or ignorance (e.g., death penalty). There is hope for correcting absurd positions if they derive from ignorance, such as the death penalty issue, because there is little money supporting the absurd side. For many absurd positions such as those on health care, gun control, and climate change, however, lasting solutions depend on minimizing the overpowering effect of money in politics. It will not be easy, but our very survival may depend on it.

Antonin Scalia and the Sleep of Reason

September 17, 2014
“Leucocephalus” Phil Hansten

“…the gallows is not merely a machine of death, but the oldest and most obscene symbol of that tendency in mankind which drives it towards moral self-destruction.”

Arthur Koestler, Reflections on Hanging

By now most of you have heard that two mentally disabled half-brothers from North Carolina, convicted of the brutal murder of an 11-year-old girl, have been exonerated and released. Both Henry Lee McCollum and Leon Brown spent 30 years in prison for a crime they did not commit; McCollum spent it on death row. This is nothing new, you say. After all, 144 defendants on death row had already been exonerated and released for reasons of innocence. But this case is different in one significant respect.

In 1994, using the McCollum case as an example, Justice Antonin Scalia dismissively (does he have another way of communicating?) rejected then-justice Harry Blackmun’s claim that the death penalty is unconstitutional. But instead of proving Scalia’s point, it turns out that the McCollum case is the perfect example of why the death penalty should be abolished. If Scalia were a reflective thinker rather than a reactive ideologue, he would acknowledge that he was dead wrong on this case, and perhaps consider the possibility that he is wrong about capital punishment in general. Don’t hold your breath.

Justice Scalia refuses to acknowledge that our criminal justice system is fatally flawed in its application of capital punishment. His mind-numbing obstinacy appears to arise primarily from his rejection of incontrovertible facts, but also through his distortion of data. It is truly unfortunate that the longest serving justice on the current Supreme Court is no more capable of rational thought than your raving crazy uncle at Thanksgiving. (I realize that I am venturing into ad hominem territory here, but there are rare times when it seems almost required.)

So how does Justice Scalia demonstrate his inability to apply reason to capital punishment? Let us count the ways.

First, Scalia denies that we have a serious problem of putting innocent people on death row. A rational person would look at the now 145 people exonerated and released from death row, and admit that our criminal justice system—one that metes out an irreversible punishment—is badly broken. But Scalia dismisses or ignores the simple fact that we regularly convict the innocent in capital cases.

Second, Scalia rants about the failure of death penalty opponents to show him a single case of an innocent person who has been executed. Scalia’s demand is irrational and disingenuous, however, because once a person is executed, efforts to prove his innocence basically stop. There are so many potentially innocent people still alive on death row that the efforts are focused on them. Moreover, there are many cases where the evidence does in fact suggest that the executed person was probably innocent, such as Cameron Todd Willingham, who almost certainly was innocent of setting the fire that killed his three children. Rick Perry, the oligosynaptic “hang-’em-high” governor of Texas, ensured that Willingham would be executed by replacing key members of the Texas Forensic Science Commission just days before they were to hear from a forensic expert who would testify that the evidence in the Willingham case was invalid. Rick Perry… tough on crime… soft on the truth.

Third, Scalia has resorted to an unconscionable distortion of data. In 2007, Scalia wrote in a concurring Supreme Court opinion that the error rate in the American criminal justice system is a paltry 0.027 percent. Nonetheless, as Samuel Gross and colleagues pointed out in their scholarly 2014 paper in the normally staid Proceedings of the National Academy of Sciences, Scalia’s claim is “silly.” Actually, “silly” is a charitable characterization given Scalia’s egregious distortion of the facts. As Gross et al. point out, Scalia committed the flagrant error of taking the exonerations known at the time for murder and rape (almost certainly a very small subset of the actual false convictions) and dividing that number by all felony convictions for any crime, including things like income tax evasion. It is a sad day when a Justice of the highest court in the land makes a claim that would earn an “F” on a paper if he submitted it as a freshman in a criminal justice class.
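Scalia’s mistake is, at bottom, a base-rate error: the numerator counts only known exonerations for two crimes, while the denominator counts every felony conviction of every kind. A minimal sketch with hypothetical, purely illustrative counts (not the actual figures) shows how the choice of denominator, rather than the underlying facts, drives the result:

```python
# Hypothetical, purely illustrative figures -- not the actual counts.
known_exonerations = 200             # known murder/rape exonerations, a small
                                     # subset of all false convictions
capital_sentences = 8_000            # death sentences in the same period
all_felony_convictions = 15_000_000  # every felony, including tax evasion, etc.

# Scalia-style calculation: a tiny numerator over an enormous,
# mostly irrelevant denominator.
scalia_rate = known_exonerations / all_felony_convictions

# A more meaningful comparison: known errors among the relevant population.
relevant_rate = known_exonerations / capital_sentences

print(f"exonerations / all felonies:      {scalia_rate:.5%}")
print(f"exonerations / capital sentences: {relevant_rate:.2%}")
```

With these toy numbers the first ratio looks reassuringly tiny while the second is orders of magnitude larger, even before accounting for false convictions that were never discovered.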

Finally, Antonin Scalia has now become famous for claiming that executing the innocent is not unconstitutional. Is Scalia actually saying that we can commit any repugnant and outrageous act as long as it is not against the Constitution? Moreover, we do have the Eighth Amendment to the Constitution, which prohibits “cruel and unusual” punishments. If executing the innocent is not cruel, I’m not sure what could possibly be classified as cruel. Vincent Rossmeier stated in Salon, in an understatement of Olympic proportions, that Scalia’s views on the constitutionality of executing the innocent “…suggested a certain callousness…” I would take it a step further: If you are actually arguing that it is okay to execute innocent people, you need some serious therapy, not a lifetime appointment on the Supreme Court.

Plato knew about people like Justice Scalia, and would have called him a “philodoxer”—defined as a person who is especially fond of his or her own opinions, without having objectively investigated the facts of the situation. We all encounter people like this in our daily lives, and we are all guilty of this failing to one degree or another. We do not deserve, however, to have a flagrant and unrepentant philodoxer on the Supreme Court.

Nonetheless, one does not have to focus on the irrationality of a Supreme Court Justice to decide whether or not we should have the death penalty. If one considers the empirical evidence objectively, an astonishing conclusion appears: A rational person looking at the data on capital punishment in the US would necessarily come to the conclusion that the death penalty almost certainly results in a net increase in the deaths of innocent human beings. You heard that right. Capital punishment most likely increases the deaths of innocents and the reasoning is very simple.

First, there is no credible evidence that capital punishment reduces future murders. Deterrence was debunked in the 1950s by Arthur Koestler (whom I quoted above) in perhaps the most eloquent and penetrating essay ever written on the death penalty, entitled “Reflections on Hanging.” Moreover, a panel of the Committee on Law and Justice of the National Research Council concluded in a 2012 report that the evidence for the death penalty acting as a deterrent has no basis in fact.

Indeed, the available evidence actually points to a “brutalizing effect” which is the term criminologists use to describe an increase in the murder rate in the presence of the death penalty. A poll of leading criminologists found that while only 2.6 percent of criminologists think that capital punishment is a deterrent to future murders, 18.8 percent feel that the death penalty increases the murder rate. These are only educated opinions, of course, but they certainly show that the experts in the field do not support the deterrence effect.

Other evidence adds more embarrassment to the death penalty proponent. It is uncontested that the murder rate is substantially higher in states that have the death penalty than in those without it. And it is also true (as pointed out by Koestler) that the murder rate usually decreases when countries abolish the death penalty. If the data were reversed, we can be sure that death penalty advocates would be citing the figures incessantly. Death penalty opponents, however, are generally willing to admit that these data do not prove that the death penalty increases murders, even though the data suggest it.

Second, the evidence that innocent people are regularly condemned to death row is overwhelming. The fact that 145 people on death row have been exonerated cannot be ignored. Moreover, Samuel Gross and colleagues calculated in their recent publication (mentioned above) that at least one in 25 people sentenced to death in the US is innocent of the crime for which they were condemned. The conclusion is clear: If the death penalty does not prevent murders but does result in the execution of innocent defendants, capital punishment results in a net increase in the deaths of innocent human beings. There is no other rational way to interpret the data.

I am optimistic that when these uncomfortable facts become common knowledge, the obscene and dehumanizing spectacle that is the death penalty will be no more.

The Devil’s Desiderata

November 10, 2013
“Leucocephalus” Phil Hansten

Astronomers assumed for years that there were planets similar to earth throughout the universe… planets that could possibly harbor life and even intelligent beings. Now astronomers are regularly finding evidence of planets circling other stars (“exoplanets”). This virtually guarantees what many of us thought; we are not alone. Or does it?

There is a chilling alternative theory, namely that the very process of organisms evolving intelligence and “advanced” societies also limits the life-span of such high-tech civilizations to perhaps a few hundred years on average. As the argument goes, the competitive pressures of evolution eventually create creatures that end up destroying each other and the planet on which they live. If one looks at the behavior of Homo sapiens over the past century, the theory certainly seems credible.

The obsessive craving for more power impels the already powerful to do whatever it takes to win—thereby risking the destruction of civilization. All of this reminds me of Nietzsche, who said, “What is strong wins; that is the universal law. If only it were not so often precisely what is stupid and evil.” And although he probably wasn’t thinking of exoplanets when he said “universal law,” he would probably have agreed that his law would apply to little green men and women as well as human beings. It’s just the way things usually work out.

Look at the present-day United States and consider “what is strong” and therefore winning… a veritable devil’s desiderata: billionaires launching organized campaigns to deny climate change; gun manufacturers preventing common sense gun laws; private prison owners ensuring that we have the highest incarceration rates in the world; military contractors collecting billions in profits through no-bid contracts; for-profit hospitals shunting poor emergency patients to non-profit hospitals; Wall Street criminals using their ill-gotten gains to buy a fifth or sixth luxury home; billionaires paying lower tax rates than their secretaries; a completely dysfunctional government where most legislators are bought and sold by wealthy special interests; a US Supreme Court laden with ideologues whose every vote serves the powerful and betrays the vulnerable (note Scalia’s astonishing statement that there is nothing unconstitutional about executing the innocent). The list goes on, but you get the point.

If Beelzebub existed, reading the previous paragraph would no doubt make him feel all warm and fuzzy inside, especially because all of this happened without him lifting a finger; we have done this to ourselves.

Of course, “man’s inhumanity to man” has existed from the beginning, but for the first time in human history, human evil has existential implications. We are in the process of destroying the planet on which our lives and the lives of our descendants depend. Will the “strong” (but stupid and evil) individuals Nietzsche talked about be able to keep us on this course until it is too late? Perhaps. Has this scenario already played out on countless other planets in the cosmos? We can only guess.

Ernest Becker warned us that there are no guarantees of survival for the human race, but he remained optimistic. We may know within a few decades whether or not his optimism was warranted.

Let us give Nietzsche the last word on how we ended up in this mess: “Not necessity, not desire—no, the love of power is the demon of men. Let them have everything—health, food, a place to live, entertainment—they are and remain unhappy and low-spirited: for the demon waits and waits and will be satisfied.”

Clifford’s Law

June 27, 2013
“Leucocephalus” Phil Hansten

Some people find Clifford’s Law disturbing and counterintuitive. Its inventor, William Clifford, doesn’t care what you think. Clifford’s Law, for example, leads to the startling conclusion that the 2003 invasion of Iraq would have been a terrible decision even if we had found huge stockpiles of weapons of mass destruction, fully functional and ready for use. This sounds like nonsense, but is it?

Of course, William K. Clifford (1845-1879) weighs in on the issue from the safety of the nineteenth century, but his argument is, in my opinion, impeccable (I’ll explain in a minute). William Clifford was a gifted British mathematician who is perhaps better known today for a philosophical essay he published in 1877 entitled “The Ethics of Belief” (available on the Internet) in which he explored the conditions under which beliefs are justified, and when it is necessary to act on those beliefs.

In his essay, Clifford proposes a thought experiment in which a ship owner has a vessel that is about to embark across the ocean with a group of emigrants. The ship owner knows that—given the questionable condition of the ship—it should probably be inspected and repaired before sailing. But this would be expensive and time-consuming, so the ship owner gradually stifles his doubts, and convinces himself that the ship is seaworthy. So he orders the ship to sail, and as Clifford remarks, “…he got his insurance money when she went down in mid-ocean and told no tales.”

At this point Clifford asks the easy question of whether the ship owner acted ethically, to which only sociopaths and hedge-fund managers would answer in the affirmative. But then Clifford asks us a much thornier question: What if the ship had safely crossed the ocean, not only this time but many more times as well. Would that let the ship owner off the hook? “Not one jot” says Clifford. “When an action is once done, it is right or wrong for ever; no accidental failure of its good or evil fruits can possibly alter that.” This is “Clifford’s Law” (a term I made up, by the way).

Clifford recognized that we humans are results-oriented, and we are more interested in how something turns out than in how the decision was made. But bad decisions can turn out well, and good decisions can turn out poorly. For Clifford, the way to assess a decision is to consider the care with which the decision was made. Namely, did the decider use the best available evidence, and did he or she consider that evidence rationally and objectively? These factors are what make the decision “right or wrong forever.” What happens after that is irrelevant in determining whether or not it was an ethical decision.

Just like the ship owner, the people in power made the decision to invade Iraq based on what they wanted to be the case rather than what the evidence actually showed. Even if by some strange combination of unlikely and unforeseen events the invasion of Iraq had turned out well—hard to imagine but not impossible—the invasion still would have been wrong. Clifford is saying (or would say if he were still alive) that even if they had found weapons of mass destruction, the invasion would still have been a bad decision, because the best evidence clearly suggested otherwise. The decision was “wrong forever” on the day it was made, no matter what the outcome.

Occasionally, one of my students complains about Clifford’s Law. Since they are studying drug interactions, it means that even though a particular drug interaction may cause serious harm in only roughly 5% of the patients who receive the combination, if they ignore that interaction in dozens of people they are just as wrong for the many patients who escape harm as for the one who suffers a serious reaction. “No harm, no foul” is legally exculpatory, but it does not let you off the ethical hook.
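The arithmetic behind the students’ complaint is worth making explicit. A short sketch, using the essay’s 5% risk figure and a hypothetical panel of patients, shows that even though most individual patients escape harm, ignoring the interaction predictably injures some of them:

```python
# Assumed figures for illustration: a 5% chance of serious harm per exposed
# patient, applied to a hypothetical panel of 60 patients.
risk_per_patient = 0.05
patients_exposed = 60

# Expected number of serious reactions if the interaction is ignored every time.
expected_harms = risk_per_patient * patients_exposed

# Probability that *nobody* is harmed -- the "no harm, no foul" outcome.
p_no_harm = (1 - risk_per_patient) ** patients_exposed

print(f"Expected serious reactions: {expected_harms:.1f}")
print(f"Chance of zero harm across all {patients_exposed} patients: {p_no_harm:.1%}")
```

On these assumptions the clinician should expect a few serious reactions, and the comfortable “nobody got hurt” outcome is itself quite unlikely; the decision to ignore the interaction was wrong before any particular patient’s outcome was known.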

Clifford’s Law applies to almost any decision, large or small, provided that the decision affects other people. With climate change, for example, there is overwhelming evidence that we need to act decisively and promptly. If we do nothing about climate change, but through an “accidental failure of evil fruits” there are no serious consequences, we are no less wrong than we would be if our inaction resulted in a worldwide catastrophe. The outcome is irrelevant. We long ago reached the threshold for decisive action, and our failure to act is “wrong forever” no matter how it turns out in the long run. So we don’t have to wait decades to find out who is right or wrong… we know already, and it is the climate change deniers.

Some powerful people have exercised their predatory self-interest to prevent substantive action on climate change. If they continue to succeed and no catastrophe occurs—not likely but possible—the victorious bleating of the deniers will, of course, be unbearable. But given the stakes for humanity, the cacophony would be music to my ears because it would mean that we avoided disaster. A more likely outcome, unfortunately, is continued lack of action followed by worldwide tragedy. Unlike the tragedies of old, however, there will be no deus ex machina to save us… we will be on our own.

Who Needs Humanities?

February 3, 2013
“Leucocephalus” Phil Hansten

It is the debate that never dies. With finite (and often dwindling) resources for university and K-12 education, legislators and educators make cuts. Sadly, we invariably round up the usual suspects and haul them off to the gallows: music, art, theater, philosophy, literature, and languages, both ancient and modern. The executioners mean well, but they ignore a basic truth: science and technology without the humanities (i.e., knowledge without wisdom) is a recipe for societal disaster.

The controversy is not new. Over a century ago Ambrose Bierce (1842-1914?) wrote a marvelous essay entitled “An Historical Monograph Written in 4930” (free on iBooks, etc.). Bierce is most famous for his acerbic but hilarious book “The Devil’s Dictionary,” but in this essay he pretends to be writing in the year 4930 about the demise of “ancient America,” which he says fell apart in the 1990s. He may have been a few decades early on the timing, but his analysis of the etiology of America’s final decline is stunningly prescient. Bierce cites two primary causes:

The Politics of Greed. Bierce identifies selfishness as a fundamental motive of human behavior, and describes how it destroyed American politics: “Politics, which may have had something of the contest of principles, becomes a struggle of interests, and its methods are frankly serviceable to personal and class advantage.” This is precisely what is happening in America today, in which the plutocracy is in control, and our political system has degenerated into a dysfunctional charade. There is little nuance… little insightful analysis… little genuine concern for humanity… just predatory self-interest punctuated by paroxysms of self-righteous demagoguery and rhinocerine obstinacy.

Prosperity at All Costs. Bierce then turns to the related issue of our unflagging focus on wealth and prosperity as our raison d’être. In a magnificent passage, he describes what “ancient America” ultimately sacrificed to achieve its mercenary goals:

“It is not to be denied that this unfortunate people was at one time singularly prosperous, in so far as national wealth is a measure and proof of prosperity. Among nations it was the richest nation. But at how great a sacrifice of better things was its wealth obtained! By the neglect of all education except that crude, elementary sort which fits men for the coarse delights of business and affairs but confers no capacity of rational enjoyment; by exalting the worth of wealth and making it the test and touchstone of merit; by ignoring art, scorning literature and despising science, except as these might contribute to the glutting of the purse … by pitilessly crushing out of their natures every sentiment and aspiration unconnected with accumulation of property, these civilized savages and commercial barbarians attained their sordid end.”

This remarkable prophecy so accurately describes our current sorry state that it is almost as though Bierce were a time traveler who observed the America of today, and then traveled back a century to write about it. Bierce was basically arguing that, in order to remain civilized, societies need the humanities as a counterweight to the hegemony of technology. I think he was correct.

This balance is important for individuals as well as societies. One need go no further than Ernest Becker to see what happens when a person with a first rate mind has a deep, organic understanding of both the sciences and the humanities. If Becker had been “just” a talented anthropologist, or if he had been “just” extremely well read in philosophy, literature, religion, and sociology… he would now be well on his way to intellectual oblivion. Instead, his remarkable synthesis still resonates for us in the 21st century.

Now, one must admit that science and technology have provided humanity with substantial benefits in medicine, agriculture, engineering, and many other fields. So it is not an “either or” proposition; it is not that we need humanities instead of technology. The problem is that we have become seriously out of balance; our power over nature is out of all proportion to our humane tendencies and moral sensibilities. Neil Postman of New York University clearly perceived the Janus-faced nature of technology when he said, “Reason, when unaided and untempered by poetic insight and humane feeling, turns ugly and dangerous.”

So, will preserving the music program at your local high school save humanity? Not likely. Nonetheless, if a critical mass of people around the world recognized that our single-minded worship of technology is on course to destroy civilization, we might be able to clear a bit of room for the humanizing and tempering influences of the humanities. I admit that the truth of our situation is unsettling, but as Emerson said, “God offers to every mind a choice between truth and repose. Take which you please—you can never have both.”

Rooster Tales

November 13, 2012

“Leucocephalus” Phil Hansten

What we can’t think about: We humans are lousy at dealing with issues of cause and effect.

It’s 5:00 AM and he flutters up onto the roof of his pen. Then he lets out a magnificent “cock-a-doodle-do”… and then another, and another. Minutes later the first rays of the sun stream over the top of a nearby hill. He has done it again! Every single morning as far back as he can remember his mighty crowing has caused the sun to rise!

This is “the rooster taking credit for the dawn”—the classic example of the “post hoc fallacy.” As many of you know, the phrase post hoc, ergo propter hoc means “after this, therefore because of this.” It is often abbreviated to post hoc and in my view it is one of the most pervasive and problematic thinking errors of humankind. This fallacy contaminates discourse in science, economics, education, business, sports, politics, as well as our casual conversations; nobody is immune.

Given our evolutionary history, perhaps it was inevitable that we should suffer from this error. After all, when our cave-dwelling ancestors saw a guy eat some nice red berries and moments later clutch his throat and collapse, well… it made sense to avoid eating those berries. This is post hoc reasoning of a rational kind.

In science, of course, we try to avoid the post hoc fallacy. We study drug efficacy using double-blind placebo-controlled studies, and in epidemiological studies we try to control for every possible factor that may interfere with detection of a true causal relationship. We are not always successful, however. Case reports published in the medical literature notoriously suffer from post hoc fallacies. The “blood thinner” warfarin, for example, is affected by all sorts of dietary, pharmaceutical and other influences, so over time the blood thinning effect may fluctuate. But since people on warfarin start new drugs periodically, just by chance the new drug is sometimes started right before one of these fluctuations. Naturally, the new drug gets blamed, even though it may have had nothing to do with it. This is the post hoc fallacy, and there is much nonsense in the drug interaction literature as a result.

Social scientists need to be especially vigilant to minimize the post hoc fallacy. Several years ago a study found that couples who lived together before marriage had a greater likelihood of divorcing than couples who did not. They concluded that couples should not cohabitate before marriage. I wondered, however, if they considered the very real possibility that people who lived together before marriage may have substantial differences (other than their living arrangements) from those who chose not to live together. It seems almost impossible to avoid post hoc in this case; couples might object to being randomly forced to live together or not for the sake of research!
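The confounding worry in the cohabitation study can be made concrete with a toy simulation. In the sketch below (entirely hypothetical numbers and a made-up trait), a shared background factor influences both the decision to cohabitate and the chance of divorce; cohabitation itself has zero causal effect, yet the raw comparison makes it look harmful:

```python
import random

random.seed(42)

# Hypothetical confounder: some background trait (call it "low commitment")
# that makes people both more likely to cohabitate before marriage and more
# likely to divorce. All probabilities are invented for illustration.
def simulate_couple():
    low_commitment = random.random() < 0.5
    # Cohabitation depends on the trait...
    cohabitated = random.random() < (0.8 if low_commitment else 0.3)
    # ...and divorce depends ONLY on the trait; cohabitation has no effect.
    divorced = random.random() < (0.5 if low_commitment else 0.2)
    return cohabitated, divorced

couples = [simulate_couple() for _ in range(100_000)]

def divorce_rate(group):
    return sum(divorced for _, divorced in group) / len(group)

cohab = [c for c in couples if c[0]]
no_cohab = [c for c in couples if not c[0]]

print(f"Divorce rate, cohabitated:     {divorce_rate(cohab):.1%}")
print(f"Divorce rate, did not cohabit: {divorce_rate(no_cohab):.1%}")
# The sizable gap is entirely an artifact of the shared cause.
```

The simulated cohabitators divorce far more often even though cohabitation does nothing, which is exactly the trap the study’s conclusion falls into.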

It is easy to fall prey to the post hoc fallacy. Most of us are no better than the rooster, especially if we are not overly given to reflective thought. Human nature may foil our efforts to untether completely from post hoc thinking, but we must try nonetheless!