Beware, neuroscience PhD students: are we running fMRI experiments on dead Atlantic salmon?

by Iciar Iturmendi Sabater

Graphic design by Livia Nguyen

Once upon a time, doctors were called ‘magnetizers’. They were thought to heal through ‘magnetic hypnotism’ – a hypnotic method that used magnets to re-orient living creatures’ drive towards life. This is not fiction, but the medical reality of the 18th century. As unscientific as it may sound today, German doctor Franz Mesmer, the father of this theory called “mesmerism”, may have been onto something about the usefulness of measuring magnetic forces to understand biological processes. 

In 1990, Japanese biophysicist Seiji Ogawa discovered that functional magnetic resonance imaging (fMRI) could be used to detect brain activation by measuring magnetic field changes driven by the brain’s blood oxygenation levels, which increase with brain activity.  For over 30 years, this finding has enabled scientists to link changes in brain function to psychological phenomena, such as memory, perception, or emotions.

Yet my undergraduate advisor warned me against conducting fMRI research for my MSc or PhD thesis, as if fMRI were as fallacious as mesmerism. Perhaps he did not want me to be ‘fooled by the brain’. He sensed my naïve excitement about producing the colourful brain figures found in highly cited neuroimaging publications, images that research suggests actually bias scientific judgement.1

My advisor mostly wanted to make me aware of the false-positive risks of fMRI research. He showed me the Dead Salmon Study: brain activation was ‘found’ in a dead Atlantic salmon that was shown pictures of emotional faces during an fMRI scan.2 The Atlantic salmon paper did not imply that dead fish can feel or think about human emotions; rather, it warned that fMRI research is at high risk of false positives. If fMRI can find activation where there is certainly none, it can do the same in living human brains. How, then, should you deal with such a false-positive risk?

“Sample size, sample size, sample size”

In March 2022, Nature published two papers insisting on the importance of larger sample sizes to reduce the risk of false positives and inflated effect sizes, and to improve the replicability of neuroimaging research.3, 4 The first found that the false-positive risk in MRI studies only begins to fall once sample sizes reach into the thousands. Yet the median sample size in neuroimaging studies is currently around 25.3

From Ogawa’s fMRI discovery in 1990 until 2018, sample sizes in fMRI studies increased at a rate of roughly 0.74 participants per year.5 Dr. Lucina Uddin, a professor at the Center for Cognitive Neuroscience Analysis Core at the University of California, Los Angeles, recently tweeted: “My first fMRI paper in grad school had 10 subjects. Considering how drastically the standards for publication have changed since then, I’m in awe of students who manage to publish multiple neuroimaging papers before completing their PhD”.

Increasing sample sizes entails greater analytical load, costs, and recruitment effort. These resources are limited for graduate students, all the more so after the pandemic, since non-essential in-person experiments have been restricted or moved online. It is already challenging for students conducting MRI research to reach sample sizes of even 30 participants. Are we then expected to avert false positives by testing thousands?

Big Data for Big Problems

Big data could be a solution for graduate students conducting fMRI research. The Adolescent Brain Cognitive Development Study (ABCD, N = 11,874), the Healthy Brain Network (HBN, N = 1,200), and the UK Biobank (N = 35,735) are examples of large fMRI datasets accessible worldwide.

In a collaborative effort by neuroimaging researchers across the globe, the second March 2022 Nature paper brought hope to the field by aggregating more than 100,000 structural MRI brain scans collected across over 100 studies (including ABCD, HBN, and UK Biobank) from participants ranging from 115 days post-conception to 100 years of age.4 The study found that, just as height and weight can be plotted on growth charts and compared across individuals, the integration of brain scans could soon yield a standardized scale along which individual MRI scans can be compared against population norms.4 Findings from individual studies with small sample sizes could, in the near future, be evaluated against these ‘brain charts’ to test whether they are consistent with big-data trajectories.
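To make the growth-chart analogy concrete, here is a minimal sketch of the idea. The normative mean and standard deviation below are invented for illustration (they are not values from the brain-charts paper): an individual measurement is converted to a centile against a hypothetical age-matched reference distribution, much like reading a point off a paediatric growth chart.

```python
from math import erf, sqrt

def centile(value, norm_mean, norm_sd):
    """Percentile of `value` under a normal reference distribution,
    analogous to reading a point off a growth chart."""
    z = (value - norm_mean) / norm_sd
    return 0.5 * (1 + erf(z / sqrt(2)))  # standard normal CDF

# Hypothetical example: a grey-matter volume of 710 cm^3 scored against
# a made-up age-matched norm of 700 +/- 50 cm^3
print(f"{centile(710, 700, 50):.2f}")  # roughly the 58th centile
```

In practice, the brain-charts models are far more sophisticated (non-Gaussian, age- and site-adjusted), but the centile logic is the same: an individual scan is located within a normative distribution rather than interpreted in isolation.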

“Pre-registration, pre-registration, pre-registration”

In the meantime, PhD students may dive into existing datasets (ABCD, HBN, UK Biobank, etc.) to conduct research. A danger of resorting to big datasets is that the researcher conducting the (secondary) analysis did not design the study themselves. As such, researchers are not bound by pre-established hypotheses and are free to “torture” the data until they find interesting results, which can then be claimed as hypothesized from the beginning. This takes us back to where we started: data torturing increases false positives, because as the number of statistical tests rises, so does the probability of finding an effect that does not exist.
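The inflation of false positives with repeated testing is easy to see with a little arithmetic. The sketch below is a generic illustration (not code from any of the cited studies): it computes the family-wise error rate for m independent tests at a per-test threshold of α = 0.05, alongside the Bonferroni-corrected per-test threshold that would hold the overall rate at 5%.

```python
# Illustration only: how the chance of at least one false positive
# grows with the number of independent statistical tests.
alpha = 0.05  # per-test false-positive rate

for m in (1, 10, 100, 10000):
    # Family-wise error rate: P(at least one false positive in m tests)
    fwer = 1 - (1 - alpha) ** m
    # Bonferroni correction: test each comparison at alpha / m instead
    bonferroni = alpha / m
    print(f"{m:>6} tests: FWER = {fwer:.3f}, corrected threshold = {bonferroni:.1e}")
```

A whole-brain fMRI analysis can involve tens of thousands of voxel-wise tests, which is exactly why the dead-salmon paper argues for multiple-comparisons correction.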

The counterpart to the “sample size” mantra could be: “pre-registration, pre-registration, pre-registration.” If you are using big data to test your hypotheses and overcome sample-size limitations, share those hypotheses with the scientific community before running analyses or accessing the data. This can be done through platforms such as the Open Science Framework, and it ensures that findings are not selectively reported.

fMRI findings can be mesmerizing, and they can cloud scientific judgment. But despite the scepticism looming over fMRI research, we may avoid being ‘fooled by the brain’ through larger sample sizes and responsible open-science practices.


  1. Schweitzer NJ, Baker DA, Risko EF. Fooled by the brain: Re-examining the influence of neuroimages. Cognition. 2013;129(3):501–11.
  2. Bennett CM, Baird AA, Miller MB, Wolford GL. Neural correlates of interspecies perspective taking in the post-mortem Atlantic salmon: An argument for multiple comparisons correction. J Serendipitous Unexpected Results. 2009;1(1):1–5.
  3. Marek S, Tervo-Clemmens B, Calabro FJ, et al. Reproducible brain-wide association studies require thousands of individuals. Nature. 2022;603(7902):654–60.
  4. Bethlehem RAI, Seidlitz J, White SR, et al. Brain charts for the human lifespan. Nature. 2022;1–11.
  5. Szucs D, Ioannidis JP. Sample size evolution in neuroimaging research: An evaluation of highly-cited studies (1990–2012) and of latest practices (2017–2018) in high-impact journals. Neuroimage. 2020;221.