Archive for the ‘Psychology’ Category

Clifford’s Law

June 27, 2013
"Leucocephalus" Phil Hansten

“Leucocephalus” Phil Hansten

Some people find Clifford’s Law disturbing and counterintuitive. Its inventor, William Clifford, doesn’t care what you think. Clifford’s Law, for example, leads to the startling conclusion that the 2003 invasion of Iraq would have been a terrible decision even if we had found huge stockpiles of weapons of mass destruction, fully functional and ready for use. This sounds like nonsense, but is it?

Of course, William K. Clifford (1845-1879) weighs in on the issue from the safety of the nineteenth century, but his argument is, in my opinion, impeccable (I’ll explain in a minute). William Clifford was a gifted British mathematician who is perhaps better known today for a philosophical essay he published in 1877 entitled “The Ethics of Belief” (available on the Internet) in which he explored the conditions under which beliefs are justified, and when it is necessary to act on those beliefs.

In his essay, Clifford proposes a thought experiment in which a ship owner has a vessel that is about to embark across the ocean with a group of emigrants. The ship owner knows that—given the questionable condition of the ship—it should probably be inspected and repaired before sailing. But this would be expensive and time-consuming, so the ship owner gradually stifles his doubts, and convinces himself that the ship is seaworthy. So he orders the ship to sail, and as Clifford remarks, “…he got his insurance money when she went down in mid-ocean and told no tales.”

At this point Clifford asks the easy question of whether the ship owner acted ethically, to which only sociopaths and hedge-fund managers would answer in the affirmative. But then Clifford asks us a much thornier question: What if the ship had safely crossed the ocean, not only this time but many more times as well? Would that let the ship owner off the hook? “Not one jot,” says Clifford. “When an action is once done, it is right or wrong for ever; no accidental failure of its good or evil fruits can possibly alter that.” This is “Clifford’s Law” (a term I made up, by the way).

Clifford recognized that we humans are results-oriented: we are more interested in how something turns out than in how the decision was made. But bad decisions can turn out well, and good decisions can turn out poorly. For Clifford, the way to assess a decision is to consider the care with which the decision was made. Namely, did the decider use the best available evidence, and did he or she consider that evidence rationally and objectively? These factors are what make the decision “right or wrong forever.” What happens after that is irrelevant in determining whether or not it was an ethical decision.

Just like the ship owner, the people in power made the decision to invade Iraq based on what they wanted to be the case rather than what the evidence actually showed. So even if, by some strange combination of unlikely and unforeseen events, the invasion of Iraq had turned out well—hard to imagine but not impossible—the invasion still would have been wrong. Clifford is saying (or would say, if he were still alive) that even if they had found weapons of mass destruction, the invasion would still have been a bad decision, because the best evidence clearly suggested otherwise. The decision was “wrong forever” on the day it was made, no matter what the outcome.

Occasionally, one of my students complains about Clifford’s Law. Since they are studying drug interactions, it means that even though a particular drug interaction may cause serious harm in only roughly 5% of the patients who receive the combination, if they ignore that interaction in dozens of people they are just as wrong with respect to the many patients who escape unharmed as they are for the one who has a serious reaction. “No harm, no foul” is legally exculpatory, but it does not let you off the ethical hook.
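
To make the arithmetic behind that complaint concrete, here is a minimal sketch in Python; the 5% serious-harm rate is the figure cited above, while the caseload of 40 patients is a hypothetical number chosen purely for illustration.

    # Minimal sketch: what ignoring a drug interaction means in expectation.
    # The 5% serious-harm rate comes from the text above; the caseload of 40
    # patients is hypothetical, chosen purely for illustration.

    p_harm = 0.05      # probability of serious harm per exposed patient
    n_patients = 40    # hypothetical number of patients given the combination

    expected_harmed = p_harm * n_patients
    p_at_least_one = 1 - (1 - p_harm) ** n_patients

    print(f"Expected patients seriously harmed: {expected_harmed:.1f}")  # 2.0
    print(f"Chance at least one is harmed: {p_at_least_one:.0%}")        # 87%

On Clifford’s view, of course, the decision is equally wrong in all 40 cases; the arithmetic simply shows that ignoring the interaction across a caseload makes serious harm overwhelmingly likely, not a remote risk.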

Clifford’s Law applies to almost any decision, large or small, provided that the decision affects other people. With climate change, for example, there is overwhelming evidence that we need to act decisively and promptly. If we do nothing about climate change, but through an “accidental failure of evil fruits” there are no serious consequences, we are no less wrong than we would be if our inaction resulted in a worldwide catastrophe. The outcome is irrelevant. We long ago reached the threshold for decisive action, and our failure to act is “wrong forever” no matter how it turns out in the long run. So we don’t have to wait decades to find out who is right or wrong… we know already: the climate change deniers are wrong.

Some powerful people have exercised their predatory self-interest to prevent substantive action on climate change. If they continue to succeed and no catastrophe occurs—not likely but possible—the victorious bleating of the deniers will, of course, be unbearable. But given the stakes for humanity, the cacophony would be music to my ears, because it would mean that we avoided disaster. A more likely outcome, unfortunately, is continued inaction followed by worldwide tragedy. Unlike the tragedies of old, however, there will be no deus ex machina to save us… we will be on our own.

Guns and “Mental Illness”

March 27, 2013
"Normal Dan" Dan Liechty

“Normal Dan” Dan Liechty

I think it may have started with Wayne LaPierre’s infamous press conference a week after Sandy Hook. In any case, we now hear it as a common refrain of the no-holds-barred gun crowd – that the big problem is inadequate enforcement in keeping high-powered weapons out of the hands of the “mentally ill,” since obviously it is the “mentally ill” who perpetrate mass killings of the type seen in Sandy Hook, Aurora, and dozens of other places around the country. In short, we ought all to unite to make sure no one with “mental illness” has easy access to these high-powered weapons, but let them remain freely available to everyone else.

In my view, even apart from the further stigmatization of mental illness this would entail, the policy itself is mind-bogglingly naive.

In the first place, “mental illness” is not a clearly definable condition. Other than a few very rare organic brain disorders, “mental illness” must be diagnosed on the basis of behavior. Therefore, in effect, supporters of this idea are advocating policies that would keep high-powered weapons out of the hands of people who have NOT YET behaved in such a way that we would diagnose them as killers.

Oh, but isn’t it true that in just about every case of mass killing, people report that the suspect exhibited all sorts of strange and antisocial behavior long before the fact? Of course. But notice that we can only know that all of those behaviors we later recognize as strange and antisocial were actually “leading up to something” AFTER they have led up to something, that is, in retrospect. Thousands upon thousands of people display the same or similar behaviors and remain perfectly harmless.

Are advocates of such policies really saying they want us all to unite prophylactically to keep high powered weapons out of the hands of those many thousands who have been reported to exhibit strange and antisocial behaviors? Given that this would doubtlessly include many hundreds, if not thousands, of NRA members themselves, I rather doubt it.

But, taking them at their word, the complications have only just begun. Let us imagine we have the resources to seriously investigate each case of reportedly strange and antisocial behavior. Whom would we then trust to assess the investigations and decide whether a person should be barred from gun ownership and have the weapons in his or her possession confiscated? Would we trust that kind of power and wisdom to government officials? To mental health experts? To teachers? To police? To judges and lawyers?

Advocates of this approach should ask themselves whom they would trust enough for this assignment – for holding that degree of power over others, potentially including themselves.

I can only conclude that Wayne LaPierre and his followers have not even taken the first step in truly thinking through the implications of what they are advocating. Having it their way, we would very quickly find ourselves defending completely irrational interpretations of “2d Amendment Rights” by totally trampling on 1st Amendment, 4th Amendment and 5th Amendment Rights, creating veritable police state conditions, at best, as our “weapon against weapons.”

A much more reasonable, sensible, and workable solution is to cool off a bit and then, with full acknowledgment of the 2d Amendment and the history of its interpretation in our laws and in our courts, begin the process of examining what weapons it makes sense for civilians to have in private hands and what weapons it makes absolutely no sense for civilians to have in private hands (though these might still be “owned” by private citizens and accessible in controlled circumstances, such as on regulated gun club target ranges). In the meantime, as I have said in a previous posting, we could impose significant ammunition surcharges and heavy taxes on weapons manufacturers, designated to meaningfully compensate for the undeniable damage that all-but-unregulated weapons impose on the rest of us on a daily basis – similar to tobacco and alcohol taxes designated for cancer care and for the treatment of victims of drunk drivers.

Flu Shot Resistance – A Symptom of Death Denial?

February 14, 2013
TDF Guest Scott Murray

Cornell psychologist Thomas Gilovich has made a career out of studying the cognitive processes that sustain dubious beliefs. There is something about flu season that makes me wish I could combine his work with Becker’s and enlist their aid in a cultural intervention. It seems there is a robust resistance to influenza vaccination despite the potential risks of the flu – a subject rife with extensions into Gilovich’s work. If the flu is so dangerous, what dubious beliefs inspire resistance to vaccination? A glance at Beckerian thought might raise the question of why the fear of death wouldn’t drive more people to get vaccinated. But a deeper understanding of what death denial actually entails helps explain why many resist the shot.

We’ve had rather mild flu seasons the last year or so (http://news.yahoo.com/why-years-flu-season-bad-173457272.html), but as statistics readily predict, occasionally influenza strains are more potent and many cases of serious flu infection can occur. The cost in dollars – and lives – can be quite serious. So why do some resist influenza vaccination?

Certainly the media, with its drive to fill air time and sell stories that attract attention, is part of the problem. Flu hype sends millions scrambling for a vaccination each year – but the media is known for its hyperbole, slanted use of facts, and even fear-mongering. It’s no wonder, then, that many cautious, critical individuals would see reports of this year’s raging epidemic and dismiss the need for a flu shot. But as Gilovich insists on pointing out, we tend to be most critical of the information that does not agree with the beliefs we already possess. As he puts it: “when the initial evidence supports our preferences, we are generally satisfied and terminate our search; when the initial evidence is hostile … we often dig deeper.” (82) In other words, those who suspect media hype and doubt the need for a flu shot may already believe that a flu shot is unnecessary, and may be looking for reasons to support this belief.

Is a distrust of the media, with their tendency to overstate or distort facts, sufficient to explain dismissal of flu season severity? Becker might disagree. Cognitive biases are in one respect no different from elbows and thumbs: they’ve evolved to serve a purpose, and where Gilovich leaves off, Becker nicely steps in. Gilovich is a laboratory psychologist, a statistician; he resists the temptation to speculate too grandly on why the human brain comes hard-wired to accept information that supports existing beliefs while remaining doubtful of information that does not. Becker’s focus on death anxiety provides a compelling furtherance of this line of questioning.

Disease and death are deeply connected concepts. Disease is a limiting of life and brings the carefully repressed reality of our mortality painfully to the foreground of our thoughts. It’s no surprise, then, to discover a deep desire to believe that the flu shot is essentially needless. The CDC, in addressing the most prevalent myths that prevent influenza vaccination, has to deal with the ‘argument’ that influenza is only serious for those who are very young, elderly, pregnant, or otherwise already ill – that your average healthy person can deal with influenza using natural means, namely, the immune system (http://www.cnn.com/2013/01/11/health/flu-shot-questions/index.html).

The challenge with this argument is that it isn’t untrue; in fact, the majority of people who come down with influenza nowadays – certainly in the more affluent Western world – survive relatively unscathed. But the problem with the argument is that it isn’t actually an argument against vaccination at all. Just because you can survive the flu does not mean you shouldn’t avoid it – if not for yourself, then for the basic public health service of not acting as a transmitter of the virus to someone who is at greater risk of hospitalization or even death due to influenza. So the question persists: if vaccination serves yourself and the community, why resist it?

There are other studies being conducted on what is termed ‘naturalness bias’ (like this one at Rutgers: http://mdm.sagepub.com/content/28/4/532.short). The upshot of that study is essentially that some people make bad decisions because of a cognitive bias towards means that feel more ‘natural’ – in other words, if you tell them a kind of tea is a great counteractive agent to influenza, they will drink it, but if you suggest that they get vaccinated, they will find all kinds of reasons to dispute you. Tea is from nature; vaccines are a mysterious, dubious concoction brought to you by the same science that supported cocaine as medicine and DDT as pesticide.

So all kinds of reasons exist to doubt the flu vaccine. What’s in the flu shot, anyway? Many believe that the flu shot can actually make you sick – a medical misunderstanding involving a combination of known statistical nemeses. For starters, a large number of influenza strains and influenza-like viruses exist, and the vaccine can only protect you against so many. Some argue that this makes the vaccine worthless, when in fact it means that the vaccine is only as good as its coverage of the most severe strains actually circulating. Those concocting the vaccine work hard to ensure it protects against the most severe strains predicted to be active in any given season. Nevertheless, given a large enough sample, it is statistically guaranteed that some who are vaccinated will become sick soon after, either because the vaccination didn’t come in time or because those unlucky few caught something the vaccination could not defend against. To these few (a minority, particularly if you establish a clear window of time after vaccination within which sickness could plausibly count as a response to it), it is not surprising that they feel cheated and suspect that they’ve been had – not only by flu hype, but by flu shot hype.
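
To see how large this unlucky minority can be in absolute terms, consider a back-of-the-envelope sketch in Python; both inputs are numbers I have invented for illustration, not CDC figures.

    # Minimal sketch: coincidental "sick right after the shot" cases.
    # Both inputs are invented for illustration; neither is a CDC figure.

    n_vaccinated = 100_000_000  # hypothetical number vaccinated in a season
    p_ill_2wk = 0.01            # assumed chance of catching some respiratory
                                # illness in any given two-week window

    coincidental = n_vaccinated * p_ill_2wk
    print(f"Fall ill within two weeks of the shot by coincidence alone: "
          f"{coincidental:,.0f}")  # 1,000,000

Even with these made-up numbers, a million people would fall ill shortly after vaccination for reasons that may have nothing to do with the shot – ample raw material for the anecdotes described next.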

The other factor in play has to do with anecdotal evidence and word-of-mouth narrative; it’s a simple tale of how everyone knows someone (who knows someone?) who has ‘gotten sick from a flu shot.’ The CDC is very candid about why this can seem to happen, for anyone who cares to read up on it (http://www.cdc.gov/flu/about/qa/misconceptions.htm).

In light of the cognitive factors that influence these behaviors (influenza vaccination denial and the anti-hype hype), is there a motivational determinant that can be identified? Becker’s theories outline what Gilovich’s work suggests: something at work deeper than cognitive tics, deeper than the prevalence of poor science in pop culture. Isn’t it safer for the psyche to believe that disease is less of a threat? That the body has what it needs to defend itself against death? Don’t flu severity denial – and the cognitive factors that help sustain it – work in the service of protecting the embodied self against the reality of its own fragility?

Sources:

Gilovich, Thomas. How We Know What Isn’t So: The Fallibility of Human Reason in Everyday Life. New York: Free Press, 1991.

AT&T and the Yang Complex

January 10, 2013
"Svaardvaard" Bill Bornschein

“Svaardvaard” Bill Bornschein

Have you noticed the recent AT&T commercials that feature an adult asking leading questions to children, questions that have “obvious” correct answers? Which is better, big or small? Which is better, fast or slow? These questions serve the purpose of the ad but also reveal a disturbing aspect of our culture. The immediacy of the children’s answers and the “no kidding, duh” tenor of the commercial reveal a pretty unreflective public, at least in the view of the advertisement’s creators. If we pause for a second to apply the same questions to other subjects such as cancer or melting glaciers, we get some different answers. Which cancer is better, big or small? What rate of glacial melt is preferable, fast or slow?

Besides revealing a dim view of the public, the ad’s popularity confirms the “truthiness” of that perception. In other words, the knee-jerk response critical for its effectiveness bespeaks our real world perception, our socially constructed reality. This is what I refer to as our “yang complex,” our Western preference for the yang elements in the yin/yang dichotomy of Taoism. Yang roughly translates as “the sunny side,” and I am reminded of the American standard “Keep on the Sunny Side.” Yang qualities include fast, hard, solid, focused, hot, dry, and aggressive. It is associated with the male gender, the sun, sky, fire, and daytime. Yin roughly translates as “the shadow side.” Yin qualities include slow, soft, yielding, diffuse, cold, wet, and passive. It is associated with the female gender, the moon, earth, water, and nighttime.

The yang complex AT&T plays off of reminds me of the critique that Ernest Becker offers of Norman O. Brown’s unrepressed man, the archetype for many similar New Age visions. AT&T’s approach puts us on the threshold of a very Beckerian question: “Which is better, repression or unrepression?” Rejecting this false dichotomy, Becker maintains that repression is necessary and that it is the way in which the repression is managed that is crucial. As it stands, repression is a dirty word in our popular culture. Indeed, unrestraint is the premise of modern consumer culture. The yin qualities we need for balance are present but muted. I maintain that our cultural yang complex puts us in a dangerous position. Bigger, faster, and more more more have become the watchwords for a growth curve that is clearly unsustainable.

Yet, the lack of a yin perspective in our political discourse reveals the depth of the culture of yang. We appear unable to envision anything new, still opting for the obvious answers, just like the kids in the AT&T commercial. We need a new mythology, a new story that admits the insights of Becker, Rank, and Kierkegaard where the answers are not so obvious.

Denying Denial

August 24, 2012

“k1f” Kirby Farrell

Dumb jokes about the longest river in Egypt assume that denial is as familiar to everyone as the Nile is. But explorers were still searching for the source of the Nile just a few years ago (National Geographic News, April 19, 2006), locating it now in “a muddy hole” in the Nyungwe forest – amid crocodiles, mosquitoes, and miles from the nearest Coke machine. And denial can be just as elusive.

Suppose we launch a denial expedition by following the winding course of the dumb joke. The wordplay on “denial” pokes fun at someone’s inability or unwillingness to recognize that they’re unable—or unwilling—to face a painful reality. The joke calls attention to an attempt to fool yourself or others by screening things out. If you catch someone “in denial” and refer to it as the longest river in Egypt, your joke is using a euphemism that acknowledges the sensitivity of the subject.  But at the same time, the euphemism is slyly mocking not only the avoidance of a painful reality, but also the butt of the joke’s denial of being in denial.

Humor gives us a safe way to think about dangerous subjects.  After all, who isn’t in denial about something? But beyond that, denial usually means someone is being willfully or helplessly blind and therefore out of control—and maybe at risk.  Humor allows you to acknowledge the behavior, criticize it, yet also tolerate it.  And maybe even forgive it.

But wait, you say.  Forgive what?

Conventional wisdom regards denial as a choice or a weakness you can overcome.  There’s some truth to that. It matters, of course, because denial can kill you as well as save your life. If you’re Franklin Roosevelt, it can make sense for you to be in denial about your crippling polio, since that gives you more spare time to lead a panicky nation through an economic catastrophe and a world war.  Whereas if you’re a Nazi bent on fighting to the last breath in the final months of WW2, when the war was plainly lost and yet more people died than in all the earlier years of fighting, then denial has some drawbacks.

So we’re of two minds about denial. It can be healthy or toxic. The jokes are ambivalent too.  We can laugh at our limitations and foolishness, but also we can feel the pressure of criticism.  A quip about the Egyptian river is usually a wry dig—perhaps affectionate and concerned, but perhaps censorious.  If the dig is meant to shame someone, it’s usually because in this culture you’re supposed to be brave and smart enough to face life and death resolutely. Therapists and twelve-step programs such as Alcoholics Anonymous coax you to be realistic and adapt to the truth.

One complication is that, like eating, denial is compulsory. Without editing, the world would be terrifying. One of life’s big disappointments is that nobody gets out alive. And once dead, you have a long time to get used to it – so long that the idea of it is unthinkable. No wonder it hurts your feelings. No wonder we’ve evolved reflexes to avoid threats. No wonder we often recoil from suggestions of death and misfortune without thinking about it.

And no wonder we compensate by overdoing our appetite for life—for sweets and “sweeties,” food and sex, youth and prestige.  It’s not wholly a choice.  We eat and mate because otherwise we die and go extinct. But if stuffing your gullet counters fears of deprivation and death, even if it makes you sick, then bingeing too can be a form of denial.

Worse, you can’t enjoy your succulent “drumstick” or “coq au vin” unless you kill Chicken Little first.  We thrive on an orgy of slaughter, thrice daily. As a species, we spend much of our working lives raising other creatures to kill them. Or consider that we’re also animals, and as history tells us, tasty meat too.[1]

Denial helps to tame such mind-blowing conflicts. We invent cuisine and table manners and the rules and rites of religion to manage conflicts that are otherwise stupefying.  The “romantic candlelight dinner” – culture – maximizes appetite for food and fertility while screening out the meat cleaver in the kitchen. We rely on culture to harmonize the world, but culture too is colored by denial.

As Ernest Becker insists, we’re impossibly conflicted creatures.[2]  We’re biological animals with teeth on one end and an anus on the other, with limbs to catch prey and start the microwave. But we’re also symbolic creatures who can conceive of the Higgs boson, the Mona Lisa smile, the yo-yo, and infinity. To keep your mental balance you have to manage these creaturely contradictions. You don’t have a choice.

One route up the Nile to explore this peculiar territory would start with the admission that denial is built into us.  It’s natural, not just a rare emergency behavior, or a side effect of a hang-up such as addiction. As explorers, we’d be taking for granted that we’re always paddling up denial against da current.  To put it more exactly, we could start by stopping the denial that we routinely deny denial.

Okay, that last sentence is a tease: a deliberate mind-twister.  It’s a joke and not a joke. A riddle and not a riddle.  You know what the sentence means, but its paradoxes may also leave you with a sense that there are puzzles that you could go back and examine again. Denial is a process, not a destination.  It opens into the largest questions about who we are and the strangeness of being alive.

Hand me that there paddle, mate.

___________________________

1. For a look at the conflicts presented by food, have a look on YouTube at Mark Lewis’s shrewd documentary “The Natural History of the Chicken”:

http://www.youtube.com/watch?v=NkxO91TLKVg

and my Becker Foundation talk about how culture manages such conflicts:

http://www.youtube.com/watch?v=gvXXRf-9Zdw

2. American anthropological psychologist Ernest Becker is known for The Denial of Death and Escape from Evil.  Check out the Becker Foundation’s website:

http://www.ernestbecker.org/

Cross-published from Psychology Today: http://www.psychologytoday.com/blog/swim-in-denial


Roberts and Rationality

August 8, 2012

“Normal Dan” Dan Liechty

Circumstances conspired to put me on a “political news diet” for the last few weeks – no TV, no newspapers or magazines, no podcasts, no internet, very sparse radio contact. I have to say that once the withdrawal shock was weathered, it was pretty nice. What’s more, the world just kept right on going, even without me obsessively milking each medium to follow its every move. Furthermore, I felt much more relaxed and optimistic about our species the longer it lasted! But, of course, all good things must end, and like it or not I am now back home trying to work my way through the amassed pile of papers, mags, podcasts, and Bill Moyers programs that were sitting here waiting for me. Did I learn anything from the experience? Probably nothing profound or lasting. In any case, I’m back at it now, and noticing something others have missed.

There is an emerging consensus among academic social and political scientists that people do not make political commitments based on rational considerations of specific policies. Their research points toward the conclusion that across the political spectrum, conservative to liberal, people are heavily and even determinatively influenced by nonrational factors. Rational faculties are more likely to be employed in the secondary step of rationalizing decisions and commitments already made.

One version of this thesis getting a lot of attention right now is that of Jonathan Haidt, author of The Righteous Mind: Why Good People Are Divided By Politics and Religion (Pantheon, 2012). Haidt interprets research findings to conclude that people generally “pick a team” and then, on specifics, will line up behind whatever the team position is. This goes a long way in explaining why people will so consistently line up behind inconsistent positions on specifics (e.g., “Keep your government hands off my Medicare!”) and even support with a full head of steam policies that are directly contrary to their own personal best interests (e.g., the long-term unemployed person voting for those who will curtail unemployment benefits). This often leaves liberals (who hold rationality in high esteem) scratching their heads confusedly. But liberals are prone to the same type of thing – misdeeds perpetrated by those on the liberal side are much more easily overlooked and forgiven than the same misdeeds on the conservative side.

There is a lot that could be done with this analysis, but it appears to break down when we consider the judgment rendered by Chief Justice John Roberts in the recent case concerning the constitutionality of Obama’s healthcare mandate. According to this view, we might have expected that Roberts would line up dutifully with his “team” and find the mandate to be unconstitutional.

Instead, even though he dodged the direct question of the mandate, we find him weaving in and out to find some rationale for ruling favorably on the policy itself (as I only learned days later). This was a real shocker to everyone! The conservatives clearly feel betrayed, that Roberts has double-crossed them and is not a reliable “team” player. Liberals discuss the possibility of welcoming Roberts into “their” camp. So while the team analysis does seem to hold in terms of the follow-up reaction, it doesn’t seem to explain Roberts’s own motives.

Keep in mind that social science research is never intended to explain the motives of specific individuals in specific circumstances, but only those of large numbers of people in general circumstances. With that said, it seems clear to me that Roberts decided that by hook or by crook the Obamacare policy had to be found constitutional. His resort to Congress’s taxing power allowed him to do this without total, in-your-face disagreement with “his” team on the question of the mandate. In other words, his opinion was clearly a rationalization of a prior decision and not the basis for that decision.

Why this prior decision, when it would have been so easy to simply rule with the other four justices against the constitutionality of Obamacare? I suggest it has to do with the fact that Roberts is increasingly beginning to look at his court from the perspective of History. From this perspective, he sees that future historians will be writing about the Roberts Court as extremely activist and powerfully partisan. Although something like 40% of its actual decisions have been 9-0 or 8-1 (in other words, representing a clear court consensus – cause for at least some optimism about our divided political present), the remaining 60%, which are often the much more visible cases, have been down-the-line 5/4 decisions in which the majority unapologetically align themselves with the conservative point of view. Roberts understood that if, in the heat of this election year, there were to be yet another in a long line of 5/4 decisions on key watershed issues, this time against the healthcare mandate, effectively gutting the main positive policy achievement of Obama’s first term of office, the judgment of History would doubtless be that under John Roberts, the Supreme Court degenerated into little more than a submissive handmaiden of the Republican Party. Obviously, Roberts wanted desperately to avoid this judgment of History, and he found a way to do it (or at least to mitigate it somewhat).

I am surprised that no one else appears to have noticed this (at least I haven’t seen it mentioned in the post-decision analyses I have read so far), but it is really the only way I can make sense of the motives for his decision, and of the Byzantine opinion he wrote to rationalize it. We’ll have to see if it results in more unexpected twists and turns in future cases.

Hollywood does Becker

July 10, 2012

TDF Guest Don Emmerich

What we can’t think about: In the following essay, I recommend the recently-released film Seeking a Friend for the End of the World and discuss how it illustrates many of the insights found in the works of Ernest Becker.

“The final mission to save mankind has failed,” a radio announcer declares.  “The 70-mile wide asteroid known as ‘Matilda’ is set to collide with Earth in exactly three weeks’ time.  And,” the announcer continues, his voice gradually taking on a more chipper tone, a tone reminiscent of Casey Kasem announcing the week’s Top 40, “we’ll be bringing you our countdown to the end of days, along with all your classic rock favorites.”

So begins Seeking a Friend for the End of the World, a film that comically—and beautifully—illustrates the different ways people deal with the awareness that they’re going to die.  Several characters respond by putting their trust in different transference objects.  Some, for example, go about buying more insurance, seeming to believe that their wealth and preparedness will save them from the giant asteroid set to obliterate the planet.  “I’m afraid the Armageddon package is extra,” Steve Carell’s character tells one of his clients.  “That protects you and your family against any sort of apocalyptic disaster—asteroids obviously, famine, locusts…”

Other characters immerse themselves in their jobs, carrying on as though everything is normal.  Carell’s maid, for instance, keeps showing up to clean his apartment.  When Carell kindly suggests that she instead spend her last days with her family, she assumes she’s being fired and begins to cry.  Only after he agrees that she can continue cleaning the apartment does she regain her composure.  Her denial, it seems, is so great that the planet’s impending destruction doesn’t even register.

This latter scene reminds me of one of my favorite Becker passages.  “Gods,” he writes, “can take in the whole of creation because they alone can make sense of it, know what it is all about and for.  But as soon as man lifts his nose from the ground and starts sniffing at eternal problems like life and death, the meaning of a rose or a star cluster—then he is in trouble.  Most men spare themselves that trouble by keeping their minds on the small problems of their lives just as society maps these problems out for them” (The Denial of Death, 178).

So, in other words, many of us are like the maid, so consumed with the minutiae of our lives that we don’t have time to confront life’s bigger issues.  More disturbingly, many of us are like a group of rioters we encounter later in the film.  Unlike the maid, these rioters have responded to their imminent deaths by looting their neighborhoods and brutalizing anyone they can get their hands on.  Their actions illustrate Becker’s argument that the fear of death often leads to scapegoating and violence against others (Escape from Evil, Chapter 8).

Needless to say, Seeking a Friend for the End of the World is not a feel-good movie.  Early into it we learn that, just as in real life, we’re not going to be given a clichéd Hollywood ending; we learn that this fictitious world really is going to end and that everyone really is going to die.  And yet the film gives us hope, much in the same way that Becker’s writings give us hope.  The hope comes not from a mystical revelation that life has a transcendent meaning and that we have souls which will survive death.  The hope is that each of us can live happier, more fulfilling lives and that the key to such lives is self-awareness.

In the film’s final scene—which is so beautiful and powerful that, for fear of ruining the movie, I won’t describe it here—we see that self-awareness is terrifying.  And yet we see that it is only through such awareness that we’re able to extricate ourselves from the idols which rule most of our lives.  For instance, it is only the film’s self-aware characters, those who have accepted that the asteroid really is coming, who are able to get past their own neuroses and form genuine and loving connections with others.  Carell’s character in particular has what Irvin Yalom calls an “awakening experience,” a confrontation with death that shakes him from his self-delusion and apathy and causes him to live a life of intention and value.  (See Staring at the Sun: Overcoming the Terror of Death.)  Again, for fear of ruining the movie, I won’t say any more about it here.

Seeking a Friend for the End of the World certainly doesn’t provide the perfect Beckerian solution to the problem of existence.  For Becker prescribed that we embrace both self-awareness and a Kierkegaardian-like religious faith.  Both, he believed, are equally necessary.  Yet I can’t help but consider this a very Beckerian film.  It sends the message that people in our death-denying culture most desperately need to hear, that message being—to quote Yalom—that “[a]lthough the physicality of death destroys us, the idea of death [that is, the awareness of death] saves us” (33).

Of Pets and Humans

June 26, 2012

“Normal Dan” Dan Liechty

After a wonderful week at the Ozark Sufi Dance Camp, it’s back to “real life.” Gratefully, I take with me the many conversations I had there with fascinating people who have pondered deeply the meaning of human existence and are engaged in significant projects of spiritual renewal and social revitalization in our culture. Among these, one that stands out is a gentle and loving man named Bodhi Be, who led a daily workshop on issues of death and dying. He is the originator of a very interesting project in his home area in Hawaii. The Death Store describes itself as an end-of-life community resource center. I encourage you to spend a bit of time at their website (thedeathstore.com) and perhaps get on their email newsletter list.

The conversations we had in that workshop reminded me of a message I received a couple months ago, and I thought perhaps excerpts from that correspondence might be of interest to Denial File readers as well. The message read, in part: “Why is death so insulting in our culture? Why isn’t it acceptable to part with a friend like it is to part with a pet? I just wanted to ask for your opinion.” Following, in part, is what I wrote in reply. Feel free to criticize and add to the discussion!

Dear friend, you pose a very important question, for which there is no easy answer. You ask why our culture is insulted by death, but I suggest it is not just our culture. Any viable culture, to one extent or another, makes implicit claims to having been founded on a supernatural basis. Therefore, to those who participate in its cultural pageant, following all the rules and being good citizens, it offers the opportunity to transcend and elevate mere earthly existence into an imitation or reflection of divine existence. With few exceptions, cultures that initially appear to be the most “death accepting” are exactly those cultures with the most elaborate transcendence ideology, and in which that ideology is strong, intact, and plausible (because everyone a person rubs shoulders with believes it as well). The very function of culture is to buffer us against the deep anxiety about death we all have, an anxiety that results from simply being human, driven by the conflict between a strongly organismic survival disposition and the cognitive power to understand that death is inevitable.

What appears to make Western culture “stand out” among others is that, at least since the European Reformation and Enlightenment, we have honored as culturally heroic the pioneering spirit of inquiry, embodied especially in science and in iconoclastic, anticlerical “dissenting” religious views, which include at the edges even agnosticism and atheism. Eventually, as we encourage the heroic spirit of “thinking for yourself” and “questioning authority,” the eagle eye of iconoclastic inquiry focuses on the transcending mythology of the culture itself, resulting in scholarship and education that tend to debunk the foundational stories and beliefs, making them seem childish and implausible.

Thus it is that a significant sector of our culture, the highly educated sector, gains its own sense of heroic transcendence (meaning and purpose) exactly by questioning, undermining, and debunking the very foundational, mythological stories of our culture which neatly combine doctrines of supernatural religion with sentimental patriotism. But this cultural mythology is the very substance from which a much larger sector of the society continues to gain its own sense of transcending meaning and purpose. The conflict that arises between these sectors is what we have called the “culture wars,” with one side assuming what is needed is “more education,” while the other side is just as sure the problem lies with smugly subversive and vaguely un-American “elites” whose covert agenda is to dominate others and undermine what is most sacred to the majority (Sarah Palin’s “Real Americans”). We end up with defensive and exaggerated affirmations of the cultural mythology on the one hand (“In God We Trust!” “One Nation, UNDER GOD!”) and the corollary elevation and adoration of “substitute gods” on the other (the pantheon of rock, sports, and movie stars, the superrich, even people famous just for being famous) who function to fill our need for identification with something, anything, beyond the “merely human.”

So now (three times around the barn to get to the house – sorry about that!) we come back to your question: Why isn’t it acceptable to part with a friend like it is to part with a pet? Of course, from a purely logical point of view, it would be, and there is a lot we could do to bring a more logical perspective into our end-of-life customs and activities. (Nota bene: visit the website of The Death Store mentioned above.) But at the same time, we clearly see that cultural norms are neither formed in nor driven by logic, but rather by the deeply emotional and psychological need for assurance of transcendence. The reason we cannot simply bury our friends and loved ones with the same equanimity we do our pets is exactly because, in direct confrontation with death, our knee-jerk human response (even of the most secular among us) boils down to a five-word cry: We are not just animals!

Addendum: Our particular cultural traditions have made a categorical distinction between humans and other species. Much of the culture-warrior resistance, and even visceral disgust, directed against “evolution” or PETA philosophy is rooted in the need to defend and maintain this distinction. We might speculate, however, that in a culture whose norms distinguish most sharply between, say, plant species and animate species, it would be more acceptable to part with a friend as with a pet. We might further speculate that in our own culture, as the sacrosanct distinction between humans and other species breaks down, at least one result is the elevation of close pets to something parallel to family members in relation to their parting – a fact suggested by the fast-emerging commerce in pet mortuary services.

Not to laugh, not to lament, not to curse, but to understand

May 17, 2012

“The Single Hound” Bruce Floyd

I read today some comments by a doctor who has spent a lot of time with dying cancer patients. He says he has noticed two methods the dying use to allay their fears of death, two delusions, if you will. One is the belief in one’s specialness, the notion that one is somehow invulnerable, beyond the stain that soils others, beyond the cold hand of death. At some point in life, though, most of us will face a crisis, a major crisis, one which leads one to say, “I never thought it would happen to me.” Well, why not?

The second method the dying use to deny death is to put their faith in a rescuer. No matter how bad things are, we suspect some thing or some person is watching over us, that we will, ah, always find reprieve, always pull the game out in the bottom of the ninth. I think right until the end a friend of mine, who had it all at one time, thought that once again he, because of who he was, would beat the odds. Until death glared in his eye, bearded him, the sick man could not believe a virulent and unappeasable cancer had chosen him upon whom to batten. He was a man who always won: the game, the prettiest girl, the most money. When he finally knew the truth, though, knew beyond all doubt, knew in his gut, that he had only a few weeks to live, when at long last, after a valiant battle, he understood he was going to die, he took to his bed, turned his face to the wall, and, retreating into silence, there he died.

Events in my life have taught me that the worst can, and often does, happen. And even though we joke about gaining another reprieve, we are not foolish enough to think cruel words and ugly prognoses are somehow forbidden ever to fall upon our ears. When Oedipus cries, “It has all come true,” I want to say, “Yes, it always does.” Queens have died young and fair, and dust hath closed Helen’s eyes.

I am not sure–perhaps some existential sage could tell me–but I’d hazard that knowing a few truths about life contributes to the living of it, and these truths are dark ones; for example, all those we love and we ourselves are going to die. Another truth, I’d think, is that each of us is on his own. Each of us is what each of us is, which means we have such a thing as will. A concatenation of decisions brought us to where we are today. We make choices in life. Sartre says that each of us is condemned to be free. I’d say, too–might as well go whole hog–that we have to understand that not only are we not special but that the universe has no obvious meaning and that life has, perhaps, no purpose beyond the living of it–or no purpose beyond what we attribute to it. I think Hardy meant what he said when he wrote: “If a way to the Better there be, it exacts a full look at the Worst.” A “look,” not a prolonged studying and brooding, a quick look, an understanding, and then the moving on with life. Perhaps when one determines that the purpose of life is incomprehensible, the whole damn thing inscrutable, that we come and we go, going probably into oblivion, yes, perhaps knowing these things is liberating. It could be, too, that not accepting the dark truth about human existence can lead to a stunting of life, an embrace of illusions. I have no idea which path a person should take. I wouldn’t presume to advise anyone.

I read what I have written and think that I don’t know jack squat about anything, assuring myself, of course, that nobody else does either. We all “see through a glass darkly.” I spend half my time lying to myself and the other half trying to unravel the lies. It is hard to turn one’s back on the heroics culture provides. I’d never mock the need we all have to belong, to fit into a group where we feel warm and cozy. It’s the goddamndest thing when a fellow figures out he’s been booted out of the club, not so much by the other club members but by himself, when he determines that he can tolerate his loneliness more easily than he can the sensibilities of the club members. It’s all a mystery to me where these different sensibilities come from. It’s a hard lesson when a man finds he no longer fits in. He’s never sure how the rupture happened. All he knows is that it happened, and it’s irremediable. He’d be a fool to brag about his situation.

What Social Research Can Tell Us About the OWS Enthusiasm Gap

February 27, 2012

"Normal Dan" Dan Liechty

Bob Burnett, blogging for the Huffington Post (http://www.huffingtonpost.com/bob-burnett/occupy-wall-street-the-en_b_1125312.html), recently articulated a fact of American political life that has puzzled many observers. While upwards of 80% of average Americans express approval for statements like “Wall Street has too much power” and “The rich need to pay more taxes,” nearly the same number say they “do not support” the Occupy Wall Street movement. In other words, there is a large, even majority, group of Americans who express agreement with the basic OWS message but disapproval of the OWS movement itself. Very strange, no?

Lawrence Kohlberg’s stage theory of adult moral development might give a clue to this surprising problem. Based on his research, Kohlberg outlined three levels of moral development: the pre-conventional, conventional, and post-conventional levels. Each of these levels consists of two stages, for six stages in total. In terms of numbers, if we were to graph these levels onto a bell curve, roughly 80% of people would fall in the conventional category and about 10% each in the pre- and post-conventional categories. The conventional level is just that: the level of moral development attained by the average citizen, reflecting exactly the kinds of moral values on which a solid society is based. These are, in particular, (1) high regard for how you are viewed by others (stage 3), and (2) high regard for the maintenance of law and order (stage 4).
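
For the numerically inclined, here is a minimal sketch in Python of where that rough 10/80/10 split would fall if the levels were literally mapped onto a standard bell curve; the normal-distribution mapping is my own illustration, not part of Kohlberg’s research.

    # Minimal sketch: boundaries of a 10/80/10 split on a standard normal
    # ("bell") curve. Mapping Kohlberg's levels onto a normal distribution
    # is illustrative only, not part of his research.

    from statistics import NormalDist

    z_lower = NormalDist().inv_cdf(0.10)  # boundary below which 10% fall
    z_upper = NormalDist().inv_cdf(0.90)  # boundary above which 10% fall

    print(f"Pre-conventional:  below z = {z_lower:+.2f}")              # -1.28
    print(f"Conventional:      {z_lower:+.2f} to {z_upper:+.2f}, ~80%")
    print(f"Post-conventional: above z = {z_upper:+.2f}")              # +1.28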

It is not difficult to see, therefore, why significant numbers of people might agree to certain policy-oriented statements (on civil rights, ending a war, curtailing Wall Street power, taxing the rich) and yet feel an almost knee-jerk revulsion against those creating “disorder” in pursuit of those ideas and policies. Richard Nixon, of course, was the absolute master of manipulating for his own political ends this gut-level revulsion of the majority against those creating “disorder.” But the American right on the whole seems to better understand this dynamic than does the left. Notice today how even with Tea Party folks showing up fully armed, the rallies still take on the undercurrent of support for “law and order”; even the Militia Movement folks loudly assume this mantle of standing for order against chaos!

Recognition that morality stands above “law and order” is, in Kohlberg’s research, reflective of higher level moral thought, and only reached as a solidly habitual way of thinking by a relatively small minority of people. Many more people, however, can be spurred to consider it in specific cases, such as when police (symbolic enforcers of order) turn dogs and fire hoses loose on children, or casually coat unarmed and peaceful people with pepper spray at close range, in full view of the cameras.

The upshot of what needs to be learned from Kohlberg’s research, however, is that street protest movements need, very early on, to demonstrate themselves as supportive of law and order and as standing against disruption of social order, if they want to gain widespread public acceptance. This is not, of course, an easy thing to accomplish when the actual goal of a movement is truly dramatic upheaval in the current system. But history demonstrates that it is not impossible either.