Article by Madhumitha Rabindranath
Graphic design by Amy Assabgui
During my undergraduate degree, I took only two statistics-related courses out of more than 40 prerequisites and electives. As someone interested in scientific research, I find it concerning that I may not have a sufficient statistical foundation. But I am not alone. Most research-stream students enter graduate school with little to no statistical training and may not be required to take research methods or statistics courses during their degree.1 This gap in statistical comprehension can span entire research careers and stems largely from a fear of the discipline born of limited exposure. This apprehension can prevent scientists from seeking appropriate training, and a scientific field without statistical expertise is limited in its exploration and progress. Notably, the current reproducibility crisis in scientific literature, in which many published results cannot be reproduced by other research groups, floods the literature with unreliable findings that hinder scientific progress. To combat this issue, we must prioritize the integration of statistics into scientific training.
Science is predominantly an exploratory endeavour: we observe a specific phenomenon and describe what we see. In practice, this means that experimental design and analyses are data-driven and subject to change. For example, if there are three ways to quantify a biomarker in blood samples, I would run all three and pick the method that most accurately determines the biomarker’s concentration as reported by previous research groups. Formal statistical inference, however, is not compatible with such an exploratory research design.2 Valid statistical modeling to predict outcomes requires a pre-planned study design and pre-specified analyses to confirm our findings. Many studies are far from this confirmatory stage and are essentially documenting observations.2 For example, when screening novel biomarkers for diagnosing cancer, the aim should be to identify as many potential candidates as possible and to understand each biomarker’s potential diagnostic role before testing their performance and clinical applications. Ultimately, we cannot use such a study to claim specific associations, but statistical analyses at this stage can help design further studies to confirm our findings.
The accurate use of statistics also depends on understanding that statistics is not equivalent to calculation. A statistical framework is an integral part of the entire scientific process.3 Contrary to popular belief, the bulk of statistical thinking does not occur at the end of a study (i.e., statistical inference) but during study design and execution.4 It is at this early stage that methodological biases can be introduced and must be appropriately accounted for. Sampling and measurement biases include admission-rate, non-response, recall, and family information biases, which can distort odds ratio calculations depending on the type of study (e.g., case-control versus cohort studies).4 Without foresight in the study design, bias can plague the results and misrepresent the research findings, as the sketch below illustrates.
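To make this concrete, here is a minimal sketch, in Python with entirely hypothetical counts, of how an admission-rate bias in a case-control study can manufacture an association where none exists. The function and numbers are illustrative assumptions, not figures from the cited studies.

```python
# Hypothetical illustration of admission-rate (Berkson-type) bias in a
# case-control study: differential sampling alone can shift the odds ratio.

def odds_ratio(exposed_cases, unexposed_cases, exposed_controls, unexposed_controls):
    """Odds ratio from a 2x2 table: (a/b) / (c/d)."""
    return (exposed_cases / unexposed_cases) / (exposed_controls / unexposed_controls)

# "True" population counts (hypothetical): no association between exposure and disease.
print(odds_ratio(100, 100, 100, 100))  # OR = 1.0

# Now suppose 80% of exposed cases but only 40% of unexposed cases are admitted
# to the study hospital, while controls are sampled evenly.
print(odds_ratio(80, 40, 50, 50))      # OR = 2.0, a spurious association
```

The apparent doubling of risk here comes purely from who ended up in the sample, which is exactly the kind of error that must be anticipated at the design stage rather than repaired afterwards.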
Misinterpretation of results is also driven by one statistical concept in particular: the p-value. Despite being one of the most misused and misunderstood tools in statistics, the p-value has come to dominate scientific interpretation. Results that are “statistically insignificant” are deemed to show no effect, while “statistically significant” results are taken as evidence of a strong association.5 This dichotomous view of p-values invites misuse, such as manipulating analyses until a statistically significant association appears (i.e., p-hacking). A recent commentary in Nature calls for scientists to retire statistical significance in all contexts and rely instead on other metrics such as point estimates.5 While this approach may not be well received, it highlights the importance of statistical training in science, as continued misuse has dire effects. Statistical misinterpretation floods the literature with inaccurate studies, which can stall scientific progress and clinical translation.5 The scientific community must contend with the continued misuse of statistics and meaningfully consider potential solutions.
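As an illustration of why dichotomizing at p < 0.05 is so easy to exploit, here is a minimal simulation sketch in Python (assuming NumPy and SciPy are available). Two groups are drawn from the same distribution, so there is no real effect, yet repeatedly “peeking” at the data and re-testing until significance is reached inflates the false-positive rate well beyond the nominal 5%.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_experiments = 1000
false_positives = 0

for _ in range(n_experiments):
    # Two groups drawn from the SAME distribution: no true effect exists.
    a = rng.normal(0, 1, 10)
    b = rng.normal(0, 1, 10)
    for _ in range(5):  # keep adding samples and re-testing ("peeking")
        p = stats.ttest_ind(a, b).pvalue
        if p < 0.05:
            false_positives += 1
            break
        a = np.append(a, rng.normal(0, 1, 10))
        b = np.append(b, rng.normal(0, 1, 10))

# With a single pre-planned test the false-positive rate would be about 5%;
# optional stopping pushes it well above that.
print(f"False-positive rate: {false_positives / n_experiments:.1%}")
```

This is one simple form of p-hacking; the same inflation arises from trying many outcome measures, subgroups, or model specifications and reporting only the “significant” one.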
The most important step in addressing the misuse of statistics in science is to better integrate statistical courses into research training. This means requiring graduate students to take field-relevant statistics courses as part of their degrees so that they have a foundation for applying statistical principles to their projects.1 Institutions may need to revise their science curricula to emphasize a statistical framework. For example, a statistics course offered in Seattle for undergraduate and graduate students in the sciences focuses on overarching statistical concepts rather than calculations, helping students navigate statistics with confidence.3 Addressing apprehension towards statistical methods through education is an effective way to ensure that, as students progress in their careers, they are capable of making accurate interpretations.
Additional statistical training, however, cannot replace the value of consulting statisticians.1 As trainees and scientists, we have limited time and resources to thoroughly master statistical concepts. Including a statistician on the research team can help mitigate statistical errors by providing fresh perspectives and guidance, especially during study design. Faculty and students should consult the resources available at their institutions to enable collaborations with statistical experts, and institutions should encourage and facilitate such interdisciplinary collaborations or provide appropriate support to improve research output.
Although statistics is widely used in the sciences, statistical analyses remain underappreciated and frequently misused. This has severe consequences for the scientific community, as incorrect interpretations of data can hinder scientific progress and its potential applications. Research trainees, scientists, and institutions have an obligation to upgrade their statistical training and to seek collaborations with their statistics departments to change the current landscape of the scientific literature.
References:
1. Weissgerber TL, Garovic VD, Milin-Lazovic JS, Winham SJ, Obradovic Z, Trzeciakowski JP, et al. Reinventing Biostatistics Education for Basic Scientists. PLoS Biol. 2016;14:e1002430.
2. Tong C. Statistical Inference Enables Bad Science; Statistical Thinking Enables Good Science. Am Stat. 2019;73:246–61.
3. Steel EA, Liermann M, Guttorp P. Beyond Calculations: A Course in Statistical Thinking. Am Stat. 2019;73:392–401.
4. Sackett DL. Bias in analytic research. J Chronic Dis. 1979;32:51–63.
5. Amrhein V, Greenland S, McShane B. Scientists rise up against statistical significance. Nature. 2019;567:305–7.