The BBC's Psychology Behind Conspiracy Theories

To anyone watching the BBC’s recent REEL segment, The Psychology Behind Conspiracy Theories, it must soon have become apparent that the BBC was not dealing with science but had instead wandered into the realm of fantasy. Unfortunately, experimental psychology investigating alleged “conspiracy theory” has been disconnected from objectivity for many years.

While psychology itself has a solid empirical foundation, experimental psychology often falls short of basic scientific standards. In 2015, the Open Science Collaboration found that, of 100 published experimental psychology studies, the results of only 39 could be replicated, and just 36 of the replication attempts yielded statistically significant findings.

Such a high degree of subjectivity frequently leads to woolly conclusions, promoted as scientific fact in the BBC’s REEL segment. Shortly after the introduction, we are offered the expert opinion of psychologists that so-called “conspiracy theorists” are likely both to be extreme narcissists and to hold “beliefs” driven by a sense of powerlessness.

Narcissists can be broadly characterised as people with a perceived, and potentially misplaced, sense of higher social status. They often have expectations that they should be treated more favourably as a result.

While narcissists possess delicate egos, they certainly don’t suffer from a sense of powerlessness. Quite the opposite: narcissists frequently have a grandiose sense of self-importance, and the expectations to go with it.

This prima facie mutual exclusion in the double definition of “conspiracy theorists”, offered near the beginning of the BBC’s short report, gave us an early clue to the epistemological failure at the heart of nearly all academic research on the subject. Indeed, when we look more closely at the research claiming to reveal the “psychological traits” of the alleged conspiracy theorists, we frequently encounter the worst kind of pseudo-scientific babble.


A Loaded Question

The BBC began its “investigation” by asking:

Are some people more vulnerable to conspiracy theories, or are we all at risk?


We were immediately told that “conspiracy theories” present some sort of psychological threat to our mental health. Apparently, they harm or damage us in some way, hence the BBC’s declaration that we might be “vulnerable” to their discourse.

Which prompts the question: what is it about supposed conspiratorial thinking that causes us harm?

The BBC didn’t say, but it did air the views of a number of experts who claimed to know.

Jonas Kaplan is an assistant research professor of psychology at the University of Southern California and co-director of its Dornsife Neuroimaging Center. He studies the link between neurological activity, thought and emotion.

As an example of his work, in 2016 he co-authored a paper which monitored neural activity in a region of the brain called the default mode network (DMN). He and his fellow researchers presented a cohort of forty people, each of whom had expressed strongly “liberal” political opinions, with so-called “counter-evidence” that was intended to contradict their beliefs.

The team monitored the effect of this supposed cognitive challenge upon the subjects' neural response. Specific neural activity was observed, indicating that the DMN region of the brain—associated with identity—was stimulated when personal beliefs were allegedly challenged. This was interesting but, from this point forward, the research started to go wildly astray.

From their observations, Kaplan and his colleagues concluded that resistance to changing beliefs, in the face of this suggested “contradictory evidence”, was stronger for political beliefs than it was for non-political convictions. They consequently inferred that political opinions were more strongly associated with our sense of self than other kinds of beliefs we hold.

Unfortunately, the researchers overlooked the gaping hole in their own methodology. They mentioned it, but did not seem to grasp the full implications of what they had done.

Rather than actually “challenge” their subjects' beliefs with genuine contradictory evidence, they decided to make most of it up. They said:

In order to be as compelling as possible, the challenges often contained exaggerations or distortions of the truth.


For example, they told the subjects that Russia had a larger nuclear arsenal than the US. This wasn’t a “distortion” of the truth; it was a false statement.

More importantly, the neuroscientists failed to ascertain whether the subjects knew it was a lie. If a subject knew the information was false—and we don’t know how many did—their views had not actually been “challenged”. This massive oversight utterly undermined the paper’s primary conclusions.

The researchers stated:

Our political participants may have been more likely to identify these distortions for the political issues, especially if they were more familiar with these issues. [. . .] We did find that participants who rated the challenges as more credible were more likely to change their minds, and it is well known that source credibility influences persuasion.


Following their extensive experimental research, Kaplan et al. “discovered” that people were more likely to believe information if it was credible. Conversely, they were less likely to believe information if it was evidently wrong—because the researchers had made it up.

Beyond stating the obvious, Kaplan et al. then delivered subjective conclusions that were not substantiated by their own experimental data:

Our data [. . .] support the role of emotion in belief persistence. [. . .] The brain’s systems for emotion, which are purposed toward maintaining homeostatic integrity of the organism, appear also to be engaged when protecting the aspects of our mental lives with which we strongly identify, including our closely held beliefs.


The problem is that the researchers didn’t know what those emotions were. People might simply have been angry because they were lied to.

Kaplan and his colleagues did not establish that the perceived resistance to changing a belief was the result of any defensive psychological mechanism, as claimed. There was nothing in their research that distinguished between that possibility and the equally plausible explanation that the subjects rejected the “challenging information” because they knew it was wrong.

The researchers' ostensible finding—that the subjects’ resistance to change in the face of counter-evidence was linked to identity, and therefore demonstrated an emotional attachment that could potentially overcome rational thought—was an assumption unsupported by their own experimental data. Kaplan et al. noted where neurological activity occurred, but they did not demonstrate what the associated cognitive processes were.


Building Narratives Based Upon Flawed Assumptions

The press release that accompanied publication of the Kaplan et al. paper offered no such caveats. It claimed, without justification, that Kaplan’s research had effectively proven an alleged sociological and psychological truth:

A USC-led study confirms what seems increasingly true in American politics: People are hardheaded about their political beliefs, even when provided with contradictory evidence. [. . .] The findings from the functional MRI study seem especially relevant to how people responded to political news stories, fake or credible.


The above statement represented a huge leap of logic that the paper itself didn’t justify. There was little evidence that the study subjects had been “provided with contradictory evidence”.

Rather, they were given so-called “distortions” and highly questionable opinions. Their reasons for rejecting these had not even been ascertained.

In the same press release, Kaplan declared:

Political beliefs are like religious beliefs in the respect that both are part of who you are and important for the social circle to which you belong. [. . .] To consider an alternative view, you would have to consider an alternative version of yourself.


This is similar to the statement he later made in the BBC REEL piece on the psychology of conspiracy theory:

One of the things we see with conspiracy theories is that they are very difficult to challenge. [. . .] One of the advantages of having a belief system that’s resistant to evidence is that the belief system is going to be very stable across time. If you have to constantly update your beliefs with new evidence, there’s a lot of uncertainty. [. . .] Conspiracy theories are a way of making sense of an uncertain world.


Where did Kaplan get his opinion from? It wasn’t evident from his work. Nor did it bring us any closer to understanding the allegedly harmful nature of the suggested conspiratorial thinking.


What Is Conspiratorial Thinking?

While a definition of “conspiracy theory” isn’t mentioned directly in the BBC REEL segment, we do at least find one cited by another contributor, Anni Sternisko, a PhD candidate at New York University who researches conspiracy theories. In a co-authored paper, she cites Understanding Conspiracy Theories (Douglas et al., 2019), which does offer some definitions:

Conspiracy theories are attempts to explain the ultimate causes of significant social and political events and circumstances with claims of secret plots by two or more powerful actors.


This ludicrous premise supposedly informs the universally-accepted working definition of “conspiracy theory”. It pervades nearly all academic research on the subject, including the alleged psychological studies of those labelled as “conspiracy theorists”; and, as we are seeing with the BBC, it is being accepted unquestioningly in the mainstream media, too.

Back in the real world, no-one tries to explain “significant social and political events” with “claims of secret plots”. It is, on its face, a ridiculous notion. It might happen with regularity in BBC sitcoms, but does it happen in your social circle?

How can anyone, other than the conspirators themselves, know what a “secret plot” entails? The clue is in the wording; it’s a secret.

Generally, the people who are labelled “conspiracy theorists” by academics, politicians, the mainstream media and other interested parties are eager to highlight the evidence that exposes real plots that actually happened or are currently underway. Examples which made it to full-scale parliamentary or congressional inquiries in various Western countries include Operation Gladio, Watergate and the Iran-Contra affair. These aren’t “secrets”. If they were, no-one would know about them.

The so-called conspiracy theorists of the real world also point to evidence which appears to expose real plots that are yet to be officially acknowledged. For example, the study by the Department of Civil Engineering at the University of Alaska Fairbanks seems to show that the official account of 9/11 cannot possibly be true.

Taking this example, the only way to determine whether the stories we have been told about 9/11 are true or not is to examine the evidence. Again, this evidence is not and indeed cannot be a “secret”. It can be obfuscated, hidden or denied—but it cannot be known at all if it remains “secret”.

There are many reasons why we might hypothesise that 9/11 was, in fact, some form of false-flag attack. None of the evidence suggesting this possibility is “secret”, either. It is all in the public domain.

The logical exploration of evidence is the best way yet devised to find the truth, and has been acknowledged as such since at least Socrates' day. Inductive, deductive and abductive reasoning all rely upon this basic approach. The key factor here is the evidence, without which the facts cannot be known.

While we can, and should, question all theories, the only way to discover the truth is first to identify and then rigorously to examine the evidence, ideally ascertaining some facts along the way.

We are at liberty to argue incessantly about various explanations of events, but there is one absolute certainty: we will never know what the truth is if we don’t explore the evidence, that very activity which is now being presented to us as suspect.


Descent Into Bathos

The Douglas et al. paper continues:

Conspiracies such as the Watergate scandal do happen, but because of the difficulties inherent in executing plans and keeping people quiet, they tend to fail. [. . .] When conspiracies fail—or are otherwise exposed—the appropriate experts deem them as having actually occurred.


As incredible as this may be, as far as these academics and researchers are concerned, unless the conspiracy is officially acknowledged by the “appropriate experts”, it remains a “secret” and therefore cannot be known. We are being sold the line that conspiracies only come into existence once they have been officially admitted.

This is, then, the completely illogical basis for academia’s alleged research of conspiracy theory: conspiracies are only identifiable when they fail or are otherwise “officially” exposed. Yet by these various “experts’” own acknowledgement, conspiracies are often real and not “secrets” at all, which renders their offered definition of “conspiracy theory” self-contradictory rubbish.

If you come to the matter with the worldview that “conspiracy theorising” is an attempt to explain events in terms of “secret plots”, then it is reasonable to deduce that said “conspiracy theory” is rather silly. If, however, you concede that these allegedly “secret plots” are not secrets at all and can be discovered by examining the evidence that exposes them, then your original premise, upon which your definition of “conspiracy theory” is based, is complete junk.

It is difficult to express the monumental scale of the idiocy entailed in the experimental psychologists’ definition of “conspiracy theory”. It is exactly the same as asserting that any evidence offered to indicate that a crime has been committed is completely irrelevant unless the police have already caught the perpetrators and their guilt has been proven in court.

Sure, your front door has been kicked in, your property ransacked and your possessions stolen, but—according to the psychologists of conspiracy theory—this is not evidence of a crime. The facts have yet to be established by the “appropriate experts”, and consequently the alleged crime remains a “secret” and is unknowable.

This absurd contention, based upon the logical fallacy of appeal to authority (argumentum ad verecundiam), is the foundation for all of the pseudo-scientific gibberish about conspiracy theory and theorists that follows. Douglas et al. also reveal some of the other terms often used in this so-called psychological research.

“Conspiracy belief”, “conspiracy thinking”, “conspiracy mindset”, “conspiracy predispositions”, “conspiracist ideation”, “conspiracy ideology”, “conspiracy mentality” and “conspiracy worldview”—most of these apparently serving no distinct purpose other than an attempt at elegant variation—are all terms based upon the psychologists’ own delusional beliefs. All those researching the psychology of those they have labelled conspiracy theorists imagine, without reason, that the so-named “conspiracists” don’t have any evidence to back up their arguments.

In a moment of self-conscious admission, the Douglas et al. paper adds:

It is important for scholars to define what they mean by “conspiracy theorist” and “conspiracy theory” because—by signalling irrationality—these terms can neutralize valid concerns and delegitimize people. These terms can thus be weaponized. [. . .] Politicians sometimes use these terms to deflect criticism because it turns the conversation back onto the accuser rather than the accused.


As noted above, the scholars’ definition of “conspiracy theory” is logically incoherent. The associated—and empty—pejorative of “conspiracy theorist” has consequently seeped into the lexicon, and it is based upon nothing but assumption and imagination.

The term “conspiracy theorist” has indeed been weaponised. Wherever it is applied, it is designed to ensure that people don’t look at the evidence.

Politicians, the mainstream media, the scientific and medical authorities, and many other representatives of the establishment, right down to neighbourhood level, frequently use it to “deflect criticism” (in Douglas’ apt phrase) and to level unwarranted accusations at their critics. As outlined in Document 1035-960, this is precisely how the CIA envisaged that the “conspiracy theorist” label would function.

Regrettably, for most people, it is enough for someone just to be called a “conspiracy theorist” for anything subsequently proceeding from their mouth to be ignored. It doesn’t matter how much evidence they provide to support their views. The labelling system has done its job.

We might expect scientists, academics and psychologists to maintain higher standards. Unfortunately, BBC REEL’s The Psychology Behind Conspiracy Theories demonstrates that this is often not the case.


Who Is It That Is “At Risk” From Conspiracy Theories?

This reliance upon an illogical presupposition leads to profound confusion. During The Psychology Behind Conspiracy Theories, Anni Sternisko commented:

Conspiracy theories are not necessarily irrational or wrong. And I think what we are talking about in society at the moment—what is frightening us—are better explained, or better labelled, as conspiracy narratives; that is, ideas that are irrational to believe, or at least unlikely to be true—that are not necessarily theories, such that they are not falsifiable.


Sternisko appears to have been talking to her BBC interviewer about two completely different things: evidence-based arguments on one hand and irrational beliefs on the other. 

Sternisko’s problem is that both the rational and the irrational are indiscriminately referred to as “conspiracy theories” in today’s academe and media. Thus, in searching for a unifying psychology to account for two diametrically opposed thought processes, the doctoral researcher cannot avail herself of suitable terminology accepted in her professional environment, and is forced by her own intellectual honesty to coin spontaneous distinctions between alleged conspiracy “theories” and “narratives”.

This may be welcome insight, but it has become necessary only because the psychologists in her field are floundering around with a working definition of “conspiracy theory” that is itself irrational. Again, we can look to the paper by Douglas et al. to appreciate just how incoherent it is:

While a conspiracy refers to a true causal chain of events, a conspiracy theory refers to an allegation of conspiracy that may or may not be true. [. . .] To measure belief in conspiracy theories, scholars and polling houses often ask respondents—through surveys—if they believe in particular conspiracy theories such as 9/11, the assassination of JFK, or the death of Princess Diana.


This reconfirms that the only benchmark the academics concerned have for “measuring” what they call “conspiracy theory” is the extent to which the subject agrees or disagrees with the official account of any given event. As long as their subjects unquestioningly accept the official “narrative”, they aren’t considered to be “conspiracy theorists”. If they do question it, they are.

Consequently, all of the related experimental psychology is completely meaningless, because the researchers never investigate whether what they call conspiracy theory “may or may not be true”. There is no basis for their claim that “conspiracist ideation” is irrational, or even that it exists.

Without establishing the credibility of the propounded theory, the psychologists, sociologists and other researchers and scientists involved have based their entire field of research upon their own opinions. This cannot be considered science.

In this light, Anni Sternisko’s statement at last reveals something about what the BBC called the “risk” of conspiracy theory. It seems that these alternative explanations of events are not dangerous to the conspiracy theorists themselves, but rather to people like Sternisko, who find them “frightening”.

Questioning power is a fundamental democratic ideal, yet this PhD candidate would appear to be one of millions in Western societies who have come to feel that doing so is scary. Fear, and the resultant stress and anxiety it produces, can be very damaging to our mental health. So the BBC is right, in a sense, to highlight potential risks in this domain.

It is just that the BBC, and the groundless psychological theories it promotes, are wrong about who is at risk. It isn’t the purported “conspiracy theorists”, but rather the people who unquestioningly accept official accounts who are “vulnerable”.

What the BBC presented with its REEL segment was not an exploration of the psychology behind conspiracy theory. It was instead an exposé of the deep-rooted terror of those who apparently dare not look at the evidence cited by the people they label “conspiracy theorists”. 

If their government is lying to them, then, for some reason, it seems they do not want to know. The mere thought of it petrifies them.

The researchers—who insist that it is the “conspiracy theorists” who are deluded—have constructed a mythology masquerading as scientific knowledge. Their resultant research, founded upon this myth, isn’t remotely scientific. Inevitably, the psychologists who expounded upon their own apparent delusions for the BBC soon descended into farce.


It’s Science, Don’t Laugh

Professor Sarah Gorman authoritatively informed the BBC audience that “conspiracy theorists” are so irrational they can believe two contradictory statements at the same time. We have already discussed why so much of this psychological research is flawed, but Gorman was most likely referring to a paper that isn’t just based upon assumptions; it is appallingly bad science for numerous other reasons besides.

Gorman told the BBC audience:

People are very often able to hold in their heads two conspiracy theories that are directly in conflict. So, for example, people will simultaneously believe that Princess Diana’s death was staged, and that she’s still alive and also that she was murdered.

And, on the face of it this doesn’t make much sense, but the underlying principle here is that they believe that something is just not right about the official story, and it almost doesn’t matter exactly what the alternative is; just that there has to be an alternative that’s being suppressed.


Professor Gorman was almost certainly referring here to one of the formative papers in the field of experimental conspiracy theory research, Dead and Alive: Beliefs in Contradictory Conspiracy Theories (Wood, Douglas & Sutton, 2012).

Presumably, she has read it, so why she would make this statement is difficult to say. The paper is a joke.

Wood et al. conducted experiments in an effort to identify what they had already judged to be the psychological weakness of “conspiracy theorists”. They presented the subjects with a series of statements and had them rate their responses on a Likert-type scale (1 = strongly disagree, 4 = neutral, 7 = strongly agree).

The psychologists conducting this research presented deliberately contradictory statements. For example, one arm of the study asked the subjects to indicate their level of agreement with the idea that Princess Diana was murdered and also with the suggestion that she faked her own death. Similarly, another arm asked the subjects the extent of their agreement with the notion that Osama bin Laden was killed by US Navy SEALs but also that he was still alive in captivity.

They collected the responses, analysed the results and, from this, deduced:

While it has been known for some time that belief in one conspiracy theory appears to be associated with belief in others, only now do we know that this can even apply to conspiracy theories that are mutually contradictory. This finding supports our contention that the monological nature of conspiracism is driven not by conspiracy theories directly supporting one another but by the coherence of each theory with higher-order beliefs that support the idea of conspiracy in general.


It seems that Professor Gorman, at least, is convinced by this pabulum and was willing to present it to the BBC as scientific fact. Alas—rather as with Kaplan’s paper—these scientists’ conclusions, seemingly referenced by Gorman, were not supported by their own experimental results.

Had the participants been asked to consider exclusivity, and subsequently indicated that they agreed with two or more contradictory theories, then the Wood et al. conclusion would have been substantiated. But they weren’t, so it wasn’t.

All that the participants were asked to do was to indicate their relative level of agreement with each statement in isolation. Under such a design, it is entirely possible, and logical, for a research participant of sound mind to agree strongly with one statement while agreeing somewhat with another, even if the two are “mutually contradictory”.

To illustrate this: the official account of Osama bin Laden’s death claims that he was assassinated by the US military. There is no video, forensic or photographic evidence, no public witness testimony—and a number of SEAL Team Six members died in a helicopter crash within months of the operation—nor indeed anything, beyond the proclamation of politicians, to lend this tale any credibility at all. There isn’t even any evidence of a body, as bin Laden was allegedly buried at sea.

Consequently, if you doubt the official account (and what sane person wouldn’t?), a whole range of possibilities exists. It all depends upon your evaluation of the available evidence—which, by definition, cannot come from the academically-vaunted official sources, because they haven’t presented any.

In such circumstances, it is perfectly legitimate to agree strongly that bin Laden died in 2011 and simultaneously to agree somewhat with the proposition that he was subjected to extraordinary rendition to a black-ops site somewhere. Nothing can be ruled out. There is insufficient evidence to draw any firm conclusion.

Wood et al. did not ask the study participants to exclude contradictory accounts, only to rate such accounts on a scale of plausibility. The paper’s conclusion—that their experimental results proved “the monological nature of conspiracism” to be driven by some assumed “higher-order” belief system—was pseudo-scientific claptrap.
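The methodological point can be shown with a toy sketch. All numbers here are hypothetical, and the linear mapping between subjective credence and Likert rating is an assumption made purely for illustration, not part of the Wood et al. methodology. A participant whose beliefs form a perfectly coherent probability distribution over mutually exclusive explanations will still register partial agreement with more than one of them when each item is rated in isolation:

```python
# Toy sketch: why per-item Likert ratings cannot detect contradiction.
# All numbers are hypothetical; the credence-to-rating mapping is an
# illustrative assumption, not drawn from the Wood et al. paper.

# A participant who merely distrusts the official story and spreads
# residual belief across the alternatives. The credences sum to 1,
# so this belief state is perfectly coherent.
credences = {
    "official account": 0.10,
    "murdered":         0.45,  # mutually exclusive alternatives
    "death faked":      0.35,
    "other / unknown":  0.10,
}
assert abs(sum(credences.values()) - 1.0) < 1e-9  # coherent beliefs

def likert(credence: float) -> int:
    """Map a 0-1 credence onto a 1-7 agreement rating (linear, rounded)."""
    return round(1 + 6 * credence)

ratings = {hypothesis: likert(c) for hypothesis, c in credences.items()}
# "murdered" rates 4 and "death faked" rates 3: the questionnaire records
# partial agreement with two contradictory theories, yet the underlying
# belief state contains no contradiction at all.
print(ratings)
```

Only if participants were explicitly asked to treat the items as exclusive (for instance, by distributing a fixed budget of agreement across them) would jointly high ratings indicate a genuine contradiction in belief.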

The BBC duly conveyed Professor Gorman’s “expert” opinion that all of this somehow made sense. This is standard fare at White City. Anyone who questions the state or its narratives is a “conspiracy theorist”, as far as the BBC is concerned.

So, before we suffer any more of this nonsense, let’s politely ask these experimental psychologists to examine the evidence behind so-called conspiracy theories before they rush into making assumptions about the supposed psychology behind them. Hopefully, they won’t find the experience too frightening.