
Honesty, Flattery, or Fealty?

Peter A. Taylor
September, 2008


Most people, no matter how intelligent and well-educated, and no matter how high-functioning they are in most areas of their lives, seem to have certain domains or mental "compartments" in which they are persistently unreasonable. Yet much of this unreasonableness can be explained in rational terms, including the pattern of compartmentalization. I like to think of this in terms of an analogy to a retail business with multiple departments.

In Darwinian terms, the human brain is essentially a control system that has evolved to enable the bearer of a particular set of genes to survive and reproduce in an environment consisting in large part of other human beings. Human social groups are naturally hierarchical, and competition within these groups is often a deadly serious matter, especially in the context of mate selection. In many cases, there is a powerful incentive to "cheat" by engaging in some sort of deception. Cheating thus provides part of the context within which human beings think. Different questions involve different incentives with respect to deception. It's as if a question, entering the mind, is like a client entering a business and being met by a receptionist who directs him to the appropriate department. The receptionist's job is partly to evaluate the need for deception. This sensitivity to deception is my concern here.
 


 

Three Classes of Questions

One class of questions gets sent to the Engineering Department (room 1). These are questions for which objectively right and wrong answers exist, and a seriously wrong answer will result in a high likelihood of something bad happening to the person making the decision. An example would be that if I misjudge the distance between my car and a tree, I could end up wrapping my car around the tree.

Some questions do not have right and wrong answers. (Is the glass half-full or half-empty?) Other questions, such as those involving "credence goods," have answers that are not really knowable (Is there a god?). Other questions have answers that don't matter (Could US slavery have been abolished without the Civil War?). Others matter, but not to the person making the decision (Should I give my charitable contribution to poor people in Africa or South America?). Other decisions, made collectively, matter, but the odds against the decision coming down to my one vote are astronomical (Which presidential candidate should I vote for?).

Questions that do not need to be sent to the Engineering Department are opportunities for furthering one's interests at the possible expense of being entirely honest and candid, either with one's self or with others. This could be as simple as not wasting time thinking deeply about a question that isn't important.
 


 

A second class of questions consists of those which do not require strictly correct answers, but which will cause me to gain or lose social status (face) if I say something embarrassing. What I need here is a plausible answer, one that flatters my wisdom and intelligence (bonus points if it flatters any other virtues I might have, or those of people I wish to court). These questions are sent to the Legal Department (room 2). I may not convince the redhead at the church social that my views on drug policy reform are correct, and indeed, they may not be, but if I want her to go out with me, I need to avoid coming across as a dumbass. I need an answer that is at least somewhat persuasive. An answer's attractiveness will often depend on a delicate balance between its plausibility and its ability to flatter.

There may be many other reasons why I find a belief attractive besides the fact that it flatters my ego. Perhaps I simply find it elegant. But flattery is interesting because its importance is likely to increase over time. I invest "face" in a doctrine when I endorse it and champion it. This is important to what follows, so for present purposes, I emphasize flattery to the exclusion of many other motives. (See, for some examples, Bryan Caplan's The Myth of the Rational Voter.)

Particularly delicate problems balancing plausibility and flattery often arise in the case of repeat business. The Legal Department may be asked to revisit a question for which it has already committed itself to an answer, perhaps in the face of new and embarrassing evidence. If the cost of abandoning the earlier commitment is small, the Legal Department will cut its losses and start over as if this were a new question. On the other hand, if the loss of face is large and the new evidence is subject to interpretation, the Legal Department may decide to "spin" the evidence as best it can, and accept a modest loss of plausibility rather than reverse itself.

Similar dilemmas arise if I am faced with embarrassing evidence regarding my own behavior. I may be tempted to sacrifice some of my credibility rather than admit that I was probably at fault. Why did my car break down? Is it because I didn't change the oil regularly, or are the Bavarian Illuminati out to get me?

Whether or not I can get away with blaming the Bavarian Illuminati for my car breaking down may depend on the culture in which I live. In some cultures, I may count you as a friend because you let me get away with face-saving nonsense. In other cultures, I may count you as a friend because you don't.
 


 

A third class of questions arises in situations where I am likely to be punished not for being objectively wrong or even for making a fool of myself, but for being perceived as disloyal. What I want here is to make sure that the insignia that are painted on my airplane are seen as "friendly" to the anti-aircraft gunners at the airfield where I hope to land. These questions go to the Paint Department (room 3).

In some cases, I may be asked to demonstrate loyalty by making a sacrifice, and this sacrifice may be in the form of stigma or loss of face. Economist Laurence Iannaccone has argued in "Sacrifice and Stigma: Reducing Free-Riding in Cults, Communes, and Other Collectives" (moved behind paywall) that these demands often strengthen strict churches (or "sects" in sociological jargon). Thus, from an institutional standpoint, the absurdity of some religious doctrines may actually be a feature rather than a bug. Absurdities serve as shibboleths.

Similarly, Theodore Dalrymple has argued that Communist propaganda is not intended to persuade, but to humiliate. Forcing people to accept a measure of humiliation demonstrates the strength of the institution and inhibits rebellion.
 


 

Whitewash

The paint shop is actually a fairly busy place, because they not only paint insignia on things, they also paint over flaws. Let me now distinguish between room 3, where the Paint Department puts insignia on things, and room 4, where they do whitewash and camouflage. Room 4 is where questions go when the Legal Department finds itself committed to an answer that it can neither abandon nor defend.

There are also questions that arrive at room 4 after already having had insignia painted on them in room 3. I suggested that loss of face is sometimes desirable as a way of demonstrating loyalty, but the Paint Department is playing a double game. On the one hand, I embrace an implausible doctrine in order to secure my membership in my tribe. On the other hand, I have to defend this doctrine as actually being correct. One reason to defend the indefensible is that, while my tribe is making me pay a price in terms of credibility in order to be a member, I don't want to pay any higher a price than I have to. A second reason is that my loyalty may be in question if I don't make a show of at least trying to defend my tribe's doctrine. A third reason is that some of the benefits of being a member of a tribe often depend on a collective suspension of disbelief.
 


 

The Receptionist's Criteria

Let's summarize the receptionist's job so far. If Mother Nature is grading my paper, I need a right answer. If a disinterested classmate is grading my paper, and I have a choice of possible answers, I need a plausible answer. If a group of partisan classmates is grading my paper, I need a popular answer. If I have a pre-commitment to an implausible answer, I need damage control.

One way of looking at this is that the receptionist's job is to evaluate the need for deception, or "cheating." In some cases, I must not cheat because I can't get away with it. In other cases, I may cheat if it looks beneficial. In still other cases, I must either cheat or, as the Yugoslav proverb puts it, "Tell the truth and run."

Alternately, one can look at the receptionist's job in terms of minimizing costs. Sometimes we are literally being economical with the truth. Our receptionist may be seen as deciding which questions need costly analysis, and which can be addressed by simply regurgitating marketing nonsense. Is a politically correct answer good enough? A plausible answer? Or does it actually have to be right?

A third way of looking at the receptionist is that her job is to classify questions according to how much tolerance we have for uncertainty in their answers. To a large extent, she is a gambler, evaluating risks. Some questions need to be answered correctly beyond a reasonable doubt (reasonable here being something of a sliding scale). For other questions, a preponderance of evidence is good enough. Some questions are more trouble than they're worth, and a coin toss provides a good enough answer ("rational ignorance" in economic jargon). Sometimes we want a flattering but probably wrong answer and all we need is plausible deniability. Sometimes flattery or popularity is important enough that we are willing to settle for suspension of disbelief. And sometimes we are in the realm of sacrifice and stigma--we shrug our shoulders and paraphrase Nathan Hale: "I regret that I have but one reputation to sacrifice for my tribe."
 


 

Lying vs. Uncertainty

But the first rule of lying is that we have to lie about the fact that we are doing it. So we always tell our customers that we are sending them to the department that really cares about right answers. Thus the receptionist may send the customer to Engineering, room 1; "Engineering," room 2; "Engineering," room 3; or "Engineering," room 4.

This is complicated by the fact that humans evolved to be good at recognizing conscious lies. Furthermore, conscious lying requires us to keep a double set of books, which takes roughly twice as much effort as telling the truth. So not only do we lie to one another about giving one another honest, reliable answers, we also lie to ourselves. The receptionist must direct the customer to the right department usually without herself realizing what she is doing. When we do tell conscious lies, the struggle to maintain an internally consistent story requires careful attention to detail. Ironically, figuring out the correct way to construct a deliberate, convincing lie is a job for the Engineering Department.

It may be useful to distinguish between direct, "bald-faced" lies and the lesser lies known as "bullshit." Bullshit doesn't necessarily consist of known falsehoods, but consists of reckless disregard of truth, reasonableness, or evenhandedness.

I suspect that direct lies are relatively rare. Mostly, we have answers that come out of the Legal Department or the Paint Department, and we are trying to pass them off as having come out of Engineering. That is, we usually lie when we don't know and perhaps don't care whether our answer is correct, yet we pass our answer off as being reliable. Thus I suspect that most lies are indirect; they are really lies about our uncertainty about factual matters rather than about the factual matters themselves. Dr. Stephen Schneider gave an excellent lecture at Rice University in 1994 about uncertainty in the global warming debate. His point was that the controversy isn't over exactly how much the mean temperature will rise over the next century, but over how big the error bars on the estimates should be.

There is always some uncertainty. Engineers use factors of safety, probability theory, and various philosophical approaches such as maximin, regret, and Dempster-Shafer theory to try to cope with uncertainty and risk. (Industrial engineering textbooks say "risk" when they know the odds and "uncertainty" when they don't.) Our "Engineering Department" is neither omniscient nor infallible. What is characteristic of the questions we send there is not that the answers we get back are necessarily right, but that we have strong incentives to be realistic regarding both the answers we get and the respective levels of confidence we place in them.
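As a toy illustration of two of these criteria (the payoff numbers below are invented, and the code is only a sketch), maximin picks the action whose worst case is least bad, while minimax regret picks the action that minimizes how badly you could kick yourself in hindsight:

    # Hypothetical payoff table: rows are actions, columns are states of
    # nature whose probabilities we do not know.
    payoff = {
        "design A": {"good": 10, "bad": 2},
        "design B": {"good": 14, "bad": -3},
    }
    states = ["good", "bad"]

    # Maximin: choose the action whose worst-case payoff is largest.
    maximin = max(payoff, key=lambda a: min(payoff[a].values()))

    # Minimax regret: regret = best payoff achievable in a state minus what
    # this action earns there; choose the action whose worst regret is smallest.
    best_in_state = {s: max(payoff[a][s] for a in payoff) for s in states}
    worst_regret = {a: max(best_in_state[s] - payoff[a][s] for s in states)
                    for a in payoff}
    minimax_regret = min(worst_regret, key=worst_regret.get)

    print(maximin)         # "design A": a worst case of 2 beats a worst case of -3
    print(minimax_regret)  # "design A" again here; with other numbers the criteria can disagree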
 


 

Confidence vs. Commitment

But confidence is not the same as commitment. We can be committed to a belief without necessarily being confident in it, and both of these things can mean something different in different contexts. Our minds contain observations that come directly from our senses, innumerable interpretations of these observations, memories, hearsay, hypotheses, decisions, estimates of likelihood, and of course, lies. I sometimes joke that all human knowledge consists of either working hypotheses or lies. We have beliefs about our beliefs, beliefs about other people's beliefs and likely behaviors, innate predispositions to see certain kinds of patterns, and beliefs about the boundaries where one set of beliefs becomes more or less reliable than another. We have beliefs that fall into several categories simultaneously, such as beliefs about honor that are partly about how we perceive ourselves and partly about how others perceive us. We also have beliefs about our sincerity. Here we get into fuzzy gradations of believing vs. pretending to believe, of having degrees of commitment to our beliefs.

A famous illustration of the distinction between confidence and commitment is Pascal's Wager. Is there a God? I have very little confidence that there is, but if I can commit myself to believing in Christianity, I have very little to lose relative to atheism and possibly an infinity to gain. (That is, assuming that God agrees that my commitment is meaningful in the absence of confidence.) Another example would be if I am given a lottery ticket as a birthday present. I may think lottery tickets are a bad idea, that the odds of winning are too low to justify the money spent on them, but given that the price of that particular ticket is now a sunk cost, the odds are high enough to justify checking the number rather than just throwing it away.
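As a back-of-the-envelope sketch (every number below is invented for illustration), the same expected-value arithmetic covers both the Wager and the gift lottery ticket: low confidence plus lopsided stakes can still add up to commitment:

    # Pascal's Wager: tiny confidence, enormous (here merely "very large") payoff.
    p_god = 0.001                 # my low confidence that God exists
    cost_of_piety = 1.0           # what commitment costs me if I'm wrong
    ev_commit = p_god * 1e12 - cost_of_piety
    ev_atheism = 0.0
    print(ev_commit > ev_atheism)          # True: commit despite low confidence

    # The gift lottery ticket: buying one is a bad bet, but once the price is
    # a sunk cost, checking the number is worth the trivial effort.
    p_win, prize, ticket_price = 1e-8, 1e7, 2.0
    print(p_win * prize > ticket_price)    # False: I wouldn't buy the ticket
    print(p_win * prize > 0.0)             # True: but I'll check a ticket I was given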

A lawyer may be committed to defending his client's interests regardless of whether he believes his client is in the right or is even telling the truth, but he acts in front of the jury as if he believes his client. A scientist may design an experiment specifically to disprove a theory that he believes to be false, but in designing the experiment, he is taking the theory's logical conclusions seriously and putting resources into measuring their effects.

On the other extreme, if I commit to the use of a parachute, and it doesn't open, I die. That's a far greater commitment than a risk of losing face or losing membership in a club. But the difference in risk is merely a matter of degree. The essence of commitment is that I am gambling, taking a calculated risk.

A church marquee once said, "Faith is not belief without evidence, but commitment without reservation." I'm feeling a bit more generous, so I would say that faith is action in the face of uncertainty, even with reservations.

Would God really be impressed by Pascal's Wager? If it is simply a matter of believing vs. pretending to believe, surely God would not be fooled. But what if Pascal were to put his money and time where his mouth is, even while entertaining doubts? What kind of sacrifice would Pascal have to make for his supposed belief in order to convince God that his commitment is genuine and meaningful?

Creeds in general leave me scratching my head about the ambiguity between believing and pretending to believe. Do people really believe what they say when they recite the Nicene Creed? Some people surely do not, and regard this recitation as an oath of fealty rather than a factual statement. For others, there is presumably a degree of belief, a calculated risk that entails some genuine commitment, despite possibly serious doubts. Others, perhaps more prone to self-deception, may not be consciously aware of doubts at all. Others may regard the Nicene creed as an approximation, not entirely or literally true, but true in some approximate or poorly understood metaphorical sense. I would like to say that people assign informal measures of probability to their beliefs, and can honestly say that they believe something is true if their estimate of its probability is high enough, but even this is an oversimplification. In many cases, "faith" is basically a decision to affiliate with a specific group of people.
 


 

How Beliefs Die
 
"Funeral by funeral, theory progresses."
— Paul Samuelson

As I argued earlier, all beliefs have some degree of uncertainty about them. Beliefs thus have an element of mortality to them. They are always in danger of being disproven, at least in principle. But as the Samuelson quotation indicates, scientific progress in overturning a failed theory often requires a generational change. Young scientists may evaluate a new theory on its technical merits, but old scientists tend to be emotionally invested in the theories on which they built their careers. Even in a field where wrong answers are relatively easy to identify, human beings tend to evaluate their doctrines with one eye on objective evidence and the other on potential loss of face.

What kinds of things could happen that might cause me to reconsider my commitment to a belief? One possibility is that the risk of a spanking from Mother Nature may have increased significantly, as with the proverbial Christian Scientist with appendicitis. Another possibility is that I may have been presented with mounting evidence that makes my position increasingly implausible. My loss of face due to embarrassment, or my discomfort due to cognitive dissonance, begins to outweigh my gains in terms of flattery or club membership. Neo-neocon has a nice essay on this in terms of Milan Kundera's metaphor of a ring dance. A third possibility is that it is not increasing contrary evidence that alters this tradeoff, but a deterioration in the benefits of club membership for reasons unrelated to belief systems. Perhaps I discover that I can better further my career or meet more suitable women at a different church. A fourth possibility is that I have already been kicked out of the club, perhaps for not being a convincing enough apologist (a failure of my suspension of disbelief?), or perhaps for some social gaffe or internal politics unrelated to my beliefs. I may be more than willing to sacrifice my reputation for my tribe, but I no longer have a tribe. Finally, the club may stop being fun, or perhaps even fall apart because of a collective inability to maintain suspension of disbelief.

In many cases, the disadvantages of defending one's previous position must be balanced against the costs of seriously reopening the question. First, even if I end up maintaining my current position, I will have spent time and effort with nothing new to show for it. If I do change my answer, the second thing that happens to me will be the embarrassment of admitting that I have been proven wrong. Third, if I am part of a group of believers, I am likely to get kicked out of the club if I have not been already. In some cases, the belief I abandon may not be a core belief, and I may be able to retain my membership with a minor loss of status within the club. In exceptional circumstances, I may even be able to convince the club to change its position. If "club" membership is informal, several of these things could happen simultaneously. James Lovelock may get kicked out of the environmentalist movement to some degree for his support for nuclear power, he may be accepted to some degree as a minor heretic, and he may eventually convince much of the environmentalist movement to accept nuclear power. John McCarthy writes of institutional "hysteresis" and tries to explain why this last outcome, institutional change, is unusual.

A person's response when his beliefs are in distress can take a number of forms. In some cases, the end comes suddenly, completely, and without warning, like the "brittle" failure of a piece of glass. In other cases, the belief slowly changes shape, like the "ductile" failure of a piece of mild steel on a blacksmith's anvil, perhaps beaten into unrecognizability before the final rupture. Sometimes a belief doesn't adapt or can't be adapted to new and embarrassing information, and its owner responds instead by what science fiction fans call "gafiating," or Getting Away From It All. Formal repudiation may follow a long period of backpedaling, lowering bets, and trying to keep an increasingly low profile. Such a process may resemble corrosion more than hammering.

There are other scenarios in which a belief system seems to go unstable, like the buckling of a long, slender column, where a small eccentricity causes an increase in bending moment that causes an increase in eccentricity.... Steven Den Beste called this "escalation of failure."

When someone tries to use a strategy which is dictated by their ideology, and that strategy doesn't seem to work, then they are caught in something of a cognitive bind. If they acknowledge the failure of the strategy, then they would be forced to question their ideology. If questioning the ideology is unthinkable, then the only possible conclusion is that the strategy failed because it wasn't executed sufficiently well. They respond by turning up the power, rather than by considering alternatives.

This behavior is rational on one level, in being logically consistent, but it is irrational on another level, in the self-defeating unwillingness to question beliefs. This leaves me unsatisfied, waiting for a story about why the questionable belief goes unquestioned.

I am inclined to see escalation of failure less in terms of cognitive errors and more in terms of emotional needs: people often cling to their dogmas because those dogmas give them emotional comfort, and they don't necessarily care about the logical consistency of ideas that bring them comfort. When people get egg on their faces, they tend to seek emotional comfort from their favorite sources. If one of the sources to which they turn for comfort is the dogma that got them in trouble in the first place, we have a feedback loop.

My explanation also implies that people are irrational, but in the sense of being unable to exercise the self-discipline necessary to put their short-term emotional needs aside for the sake of their longer-term interests, even though those longer-term interests are reasonably obvious. Robert Bly has described this problem using the metaphor of "shame tanks," which fill up and cause their owners to lose the ability to think clearly. When people are embarrassed, they end up doing things that they know will get them in even more trouble later on.

"A lie is a debt, and interest builds up." — Mencius Moldbug

In some cases (some cultures?), a third explanation may be simply that an authority figure's reputation for telling the truth is less important than his reputation for not backing down. An initial minor embarrassment changes the question from whether he has some technical detail right to whether he is powerful enough to punish insubordination.

A fourth possibility is that sanctimony is similar to what economists call a Giffen good, something that people buy more of when its price goes up. The classic (hypothetical) example of this is potatoes. If the price of potatoes goes up, but they are still cheaper than any of the alternatives (i.e., meat), I may have to economize by buying even more potatoes and eliminating meat from my diet entirely. This causes the price of potatoes to go up even more, and we have a feedback loop.

Suppose that my craving for social standing is nearly fixed (inelastic), and it initially takes me four hours/week to make the arguments I need to make to feel righteous. If my belief system suffers a major loss of credibility, but I have no easy substitute for it, I may then have to invest eight hours/week in order to feel righteous. The efficiency of my arguing has dropped. But the credibility of my belief system may depend in part on how much hostile attention ("pushback") I and my fellow believers draw to ourselves through our moral posturing. Like buying more potatoes, the additional time and stridency we put into our arguments exacerbates the problem by drawing more hostile attention to them, and we again have a feedback loop.
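Here is a toy simulation of that feedback loop (the equations and constants are entirely invented, chosen only to show the shape of the spiral): a fixed craving for righteousness, argument efficiency that falls as credibility erodes, and pushback that grows with the effort invested:

    needed_righteousness = 4.0   # felt-righteousness "units" needed per week (inelastic)
    credibility = 1.0            # righteousness produced per hour of arguing
    for week in range(6):
        hours = needed_righteousness / credibility            # argue until I feel righteous
        pushback = 0.1 * hours                                # stridency draws hostile attention
        credibility = max(0.2, credibility - 0.2 * pushback)  # pushback erodes credibility
        print(f"week {week}: {hours:.1f} hours of arguing")
    # Hours climb from 4.0 toward 8.0 over these six weeks: buying more potatoes
    # as the price of potatoes rises.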

The "shame tank" explanation seems more plausible to me than the "Giffen good" explanation, but the latter seems more robust. It works over a longer term, even after people have had time to think about the consequences of their behavior, and it doesn't really require people to actually believe in what they are claiming, just that it is socially beneficial to be seen making the claim. In this "Giffen good" case, the behavior is not necessarily self-defeating, but could be a rational response to an unenviable situation. This is especially true if the pushback is driven largely by current events, which the nominal believer may reasonably hope will soon pass.

This may seem stupider than it really is because of sample bias. Maybe 90% of the time, people win when they gamble that they can get away with flattering nonsense, and that any storm they encounter will blow over soon enough. We only notice the 10% of the time when the gamble fails.

What seems strangest about the Giffen explanation to me is the assumption that the demand for social standing is inelastic. Why would the benefits of social standing be nonlinear? One example might be if I am trying to impress a woman in order to get a date. Being turned down after "reinforcing a defeat" may be the same in terms of the practical outcome as being turned down after "cutting my losses." With an energetic whitewash effort, I may still have hopes of getting lucky. A political candidate may similarly feel that losing by 0.1% of the vote is as bad as losing by 30%. But this doesn't explain the behavior of a typical political activist, who can have little hope of tipping the outcome of an election. However, as suggested earlier, such an activist may feel that his membership in the believers' club, or the suspension of disbelief he needs in order to enjoy it, are in doubt if he doesn't deliver an adequate whitewash effort. "Pushback" raises the bar on what is an adequate whitewash effort, and a more energetic whitewash effort generates more pushback.

The existence of a "believers' club" or "echo chamber" makes a huge difference in the life and death of a belief. To a great extent, it converts plausibility problems from mortal threats into nuisances. An institution such as a church, political party, or university phrenology department can create an illusion of consensus among "knowledgeable" people (i.e., members of the institution) behind almost any belief that they feel is sufficiently important. These institutions survive not because their doctrines are true, but because their members are nice to one another. Practicing good etiquette is thus more important than making sense.

How does this end? Neo-neocon, in her essay, "A Mind is a Difficult Thing to Change," recalls an uncle who was a doctrinaire leftist. Her uncle never recanted his beliefs, but he was instrumental in driving her away from the "ring dance." She vowed not to be like him, but to keep her mind open. Perhaps her tolerance for cognitive dissonance was lower than his, or she was less sensitive to social status. Perhaps she was less invested in her political views at various points in her life when she was presented with conflicting information, or her standards of etiquette were different. Ultimately, she "left the fold." She discovered, predictably enough in retrospect, that apostates are a threat to the illusion of consensus that quasi-religious institutions like political movements depend on, and was treated accordingly by many of her former friends.

It turns out that there isn't really that much that separates the "faith based" from the "reality based" communities. It would be more accurate to describe both of them as competing "social status based" communities. Both groups claim to believe in the brotherhood of man, but neither group can ever really be universal because social status is a zero-sum game. The details of their specific factual claims are really irrelevant to the underlying logic of competition. We all want our group to be seen by more people as better than competing groups. We all have to wrestle with uncertainty. And no one is entirely honest.
 


 

Addendum on escalation of failure, 6-27-2009:

Eliezer Yudkowsky describes one process in which a group (e.g. Jehovah's Witnesses in 1975, the "Unarian" cult in 1975 and 2001) becomes more radical in the face of contrary evidence (e.g. failure of the world to end or an alien space fleet to arrive) as evaporative cooling. The idea is that group consensus includes both radical and moderate elements. When contrary evidence appears, some of the less committed, moderate members of the group drop out. The consensus among the remaining members is more radical.
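The arithmetic behind this is simple enough to sketch (the commitment scores below are invented): drop the members below some tolerance threshold, and the average commitment of whoever remains necessarily rises:

    # Each member's level of commitment to the group's doctrine (hypothetical).
    commitment = [0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]
    mean_before = sum(commitment) / len(commitment)

    # Contrary evidence appears; members below a tolerance threshold drop out.
    threshold = 0.5
    remaining = [c for c in commitment if c >= threshold]
    mean_after = sum(remaining) / len(remaining)

    print(round(mean_before, 2), round(mean_after, 2))   # 0.55 0.7: the consensus radicalizes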

An explanation I like better than any I have listed so far occurs to me as I read Jonah Goldberg's recent book, Liberal Fascism. Escalation of failure is usually accompanied by ad hominem* attacks. Maybe that's the whole point. Suppose that, while I argue with you, I am suffering from cognitive dissonance (an indication that I am losing the argument). I may not be able to come up with an argument that is persuasive to anyone outside of my ideological tribe, so I can't avoid looking like a fool to outsiders, but I still want to treat the symptoms. I want to avoid experiencing further cognitive dissonance, and I want to create a pleasant environment for my friends, so I want to make it easy for them to avoid cognitive dissonance, too. But I don't want to admit that I am losing the argument, so I want to give the appearance that I am still actively engaged in it. The solution is to engage in over-the-top ad hominem attacks. I call you a NAZI. That way, I can pretend to be engaging in dialog with my critics ("speaking truth to power" or whatnot) while actually isolating myself from it. I rely on Godwin's Law. I am, in effect, sticking my fingers in my ears and yelling "Neener, neener, neener, I'm not listening!" while pretending to be doing the exact opposite. I look like an idiot to outsiders (people I don't care much about), but the cognitive dissonance is reduced because I'm not really listening. If you're the Antichrist, then I have a perfect excuse for not listening to anything you say. And if my credibility is already shredded, I may have nothing to lose.

 
Self-righteousness is a loud din raised to drown the voice of guilt within us.

There is a guilty conscience behind every brazen word and act and behind every manifestation of self-righteousness.

— Eric Hoffer, The True Believer

 
A feeling of moral superiority is much too great a pleasure for the morally wretched to forbear. What is this — a cynical word against moral superiority? No: a truthful word against pleasure-seeking wretchedness.

— Deogolwulf, Fewtril #281

*(3-29-2016) Or are these really ad baculum arguments? Consider Godwin's Law as an implicit threat. "Go along with what I say or else my friends who buy ink in barrels will start a massive campaign of slander against you." Maybe the threat isn't even directed at you. Maybe I'm making an example of you in order to implicitly threaten other people.




Addendum, 6-18-2012:

James V. DeLong hints at another reason for escalation of failure: the "last-period problem" in game theory.

Some commentators are under the illusion that the current national crisis will sober up the special interests. But that is not how it works, because a crisis makes special interests less, not more, responsible. The situation becomes, in the language of game theorists, a "last-period problem." As a game approaches an end, the players have no need to cooperate for the sake of protecting long-term relationships. Their incentive is to grab as much as possible before the game ends, or, to translate to the real world, before the society collapses. Do not look for crisis to bring out a sense of responsibility in the advocates for the interests.

Maybe the reason people don't put their short-term emotional needs aside to preserve their long-term interests is that they have no long-term interests. Things have already gone too far. Whatever consequences they expect to experience for their obstinacy have already become inevitable.
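A stripped-down sketch of the last-period logic (the payoffs are invented, and this compresses the full backward-induction argument into a single comparison): cooperating is worth it only while enough future rounds remain, and with no future left, grabbing dominates:

    COOPERATE_EACH_ROUND = 3   # per-round payoff while the relationship lasts
    GRAB_NOW = 5               # one-time payoff for defecting and taking what you can
    PUNISHMENT = 1             # per-round payoff after cooperation collapses

    def best_move(rounds_remaining):
        stay_honest = COOPERATE_EACH_ROUND * rounds_remaining
        grab = GRAB_NOW + PUNISHMENT * (rounds_remaining - 1)
        return "cooperate" if stay_honest > grab else "defect"

    for rounds in (10, 3, 1):
        print(rounds, best_move(rounds))   # 10: cooperate, 3: cooperate, 1: defect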




Addendum, 1-4-2013:

On pp. 86-8 of The Righteous Mind, Jonathan Haidt describes an experiment by Drew Westen using fMRI to study the mental processes of political partisans. Westen showed his test subjects slides that threatened their beliefs (in Haidt's terms, putting the partisans in mental "handcuffs" of having to believe something they didn't want to believe), followed by a slide that countered the threat (releasing the "handcuffs"). The final slide resulted in a dopamine release in the ventral striatum, a major reward center in the brain.

Heroin and cocaine are addictive because they artificially trigger this dopamine response. Rats who can press a button to deliver electrical stimulation to their reward centers will continue pressing until they collapse from starvation.

Westen found that partisans escaping from handcuffs (by thinking about the final slide, which restored their confidence in their candidate) got a little hit of that dopamine. And if this is true, then it would explain why extreme partisans are so stubborn, closed-minded, and committed to beliefs that often seem bizarre or paranoid. Like rats that cannot stop pressing a button, partisans may be simply unable to stop believing weird things. The partisan brain has been reinforced so many times for performing mental contortions that free it from unwanted beliefs. Extreme partisanship may be literally addictive.




Addendum, 12-22-2014:

Scott Alexander has an essay up, The Toxoplasma Of Rage, that explains insane behavior on the part of political activists in terms of drawing attention and signaling loyalty. Examples are PETA and radical feminists who deliberately champion dubious causes (e.g. Tawana Brawley, the Duke lacrosse team rape accusation). Anyone can champion a clearly just cause, but it takes someone who is extra bonus holy/loyal to The Cause to take up a highly dubious cause. What's really going on is status competition (proving one's holiness) internal to these groups. This may be rational at the individual level, but for the overall movement, it means escalation of failure.




Addendum, 8-22-2016:

Another point that Robert Bly makes on his "The Power of Shame" recording is the use of contempt as a defense against shame. I can often protect myself against feeling shame by looking down on various classes of people as being so contemptible that I don't need to take anything they say seriously. But once I admit that I got something significant wrong, and my contemptible critics got it right, then we meet as equals, and I can't use the contempt defense any more. A small admission of error is as bad as a large one. It only takes a little bit of error to dispel the glamour. This creates a strong incentive to try to whitewash small mistakes, leading again to escalation of failure.


