A little while ago, I was invited to dinner out in the country. The couple who invited me are impressive: very smart; very senior; very well informed. As these things do, one of the great contentious issues of the day came up in conversation.

At the time I was still chief economist at the German Stock Exchange, a position which, thankfully, meant there were issues, particularly those that had implications for finance and monetary policy, upon which it was inappropriate for me to voice an opinion. In such circumstances, I tended to adopt a sphinx-like demeanour and deploy deliberately ambiguous facial expressions, hand gestures and quasi-linguistic noises. Conveniently, this routine was usually taken for agreement by those possessed of passionate views.

In the course of the conversation, I was interested in my host’s characterisation of those who didn’t share her opinion. She described them as having a “religious commitment”. In other words, these nogoodniks’ opinions were based on such things as blind faith, or wishful thinking, but not upon reason.

This was very interesting; all the more so because just a few hours earlier, I’d been with a politician who was firmly on the other side. In a striking coincidence, in referring to the people he didn’t agree with, he said: “There’s absolutely no point in discussing anything with them. They’re like brainwashed zealots; not open to reason at all. It’s like a weird cult, some kind of powerful religious sect.”

The politician and my dinner host each saw the “other” side’s opinions as immune to reason. And of course, it’s not only safe to disregard opinions based on a-rational foundations; it’s essential. There are sound epistemic reasons at play here. The crucial first step of the Scientific Revolution was the subordination of emotion, and of “argument from authority”, to reason. An opinion based upon reason and evidence is one that can change. Indeed, an essential feature of genuine science is its ability to force us to accept things we do not want to be true.

On the other hand, an opinion based on nothing more than emotion is not susceptible to rational revision. In fact, as so brilliantly detailed in Tavris and Aronson’s book Mistakes Were Made (But Not by Me), an opinion based on nothing more than emotion or authority actually grows stronger when confronted with disconfirming evidence. This, of course, is a phenomenon we see at play when asset bubbles form.

Yes, a-rational opinions are to be avoided. 

However, there’s a dangerous flip side to all this: how can we tell whether an opinion genuinely is a-rational? That might seem straightforward, but it isn’t. Some of the most important scientific breakthroughs were initially dismissed as a-rational nonsense. A good example is Alfred Wegener’s theory, first proposed in 1912, that landmasses aren’t – as everyone assumed – stationary. Wegener hypothesised that continents continually move across the surface of the planet. This apparently a-rational theory was roundly rejected by the geological establishment and Wegener suffered years of ridicule. He died long before the essence of his theory of “Continental Drift” was finally accepted in 1963.

In reality, it’s all too easy for us to dismiss any opinion contrary to our own as foolish and a-rational. If you want to see this process in action, just log on to Twitter. If a position is de facto a-rational, we don’t have to engage with it, and it’s a lot less work simply to dismiss an argument you don’t like than to understand it. Yet we all tend to behave as if it’s entirely obvious what is, and what is not, a rational opinion, and that is a grave error. We must remain vigilant to the possibility that an opinion which prima facie seems preposterous might nonetheless be correct.

We should also remain vigilant against adopting a-rational opinions ourselves. 

In my view, the greatest danger here lies in “moral creep”: the tendency to frame ever more issues in terms of good versus evil; goodies and baddies; the right side and the wrong side. The question of what exactly constitutes a moral issue is extraordinarily difficult to answer in a non-trivial way. The orbit of morality is not set in stone; what counts as a moral issue forever ebbs and flows. For example, few of us now see the presence of an altar rail as a burning moral dilemma. However, in seventeenth-century England, this was an incandescent moral issue. 

What we can say is that it’s in the essential nature of a moral issue that we do not need to use reason – we can simply choose the “good” side. And as the orbit of morality enlarges, some use that to their advantage. If you can successfully reframe an issue in moral terms, rational investigation is banished; we don’t investigate a moral issue, we take a side. Worse, we feel justified in dismissing, vilifying, or silencing those we see as being on the wrong side. 

Bodies such as the government’s Nudge Unit, which uses behavioural science to study and influence human conduct, are well aware of the power of moral framing. Why should I do this? Because everybody else thinks it’s the right thing to do. So when something is presented as a moral issue, watch out: somebody wants you to override your powers of reason.

Peter Lawlor is a trustee of the John Hicks Foundation in Oxford. He was formerly the Principal Economic Advisor to the German Stock Exchange (Deutsche Börse), and continues to act as an adviser to senior Wall St figures and political leaders. These are his own views and should not be imputed to any organisations with which he is, or has been, affiliated.
