The Curious Case of the p-value

Anyone who has taken a research-based course in their university curriculum has likely learned about the p-value. At its core, the p-value is a statistical measure that helps researchers assess the strength of evidence against the null hypothesis, which posits that there is no difference or effect. A p-value quantifies the probability of observing results at least as extreme as the ones obtained in a study if the null hypothesis were true. In simpler terms, it tells us how surprising the observed results would be if chance alone were at work.

Consider a coin toss: if you flip a coin 10 times and it lands on heads 8 times, you may question whether this outcome is simply due to random chance or if the coin is biased towards heads. The null hypothesis suggests the coin is fair (with a 50% chance of heads or tails), while the alternative hypothesis proposes bias towards heads. Calculating the p-value reveals the likelihood of observing 8 or more heads out of 10 flips if the coin were truly fair. In this case, the one-sided p-value is about 0.055, indicating roughly a 5.5% chance of obtaining a result this extreme by random chance alone. Similarly, in social sciences research, the p-value quantifies the likelihood of observing results at least as extreme as those obtained in a study, under the assumption that the null hypothesis, which proposes no relationship or effect, holds true.
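As a quick check of that number, here is a minimal Python sketch of the binomial calculation (the coin and the counts are hypothetical, as above):

```python
from math import comb

# Probability of seeing k or more heads in n flips of a fair coin:
# sum the binomial probabilities C(n, i) / 2**n for i = k, ..., n.
n, k = 10, 8
p_value = sum(comb(n, i) for i in range(k, n + 1)) / 2**n

print(round(p_value, 4))  # 0.0547 -> about a 5.5% chance under a fair coin
```

Notice that this one-sided value sits just above the conventional 0.05 cutoff, exactly the kind of borderline case where a rigid threshold can mislead.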

Formally, the concept of the p-value was first introduced by Karl Pearson in 1900 in his chi-square paper. However, it gained significant attention and popularity through the work of Ronald Fisher, especially his book “Statistical Methods for Research Workers.” Fisher advocated for the p-value as a tool to assess the significance of research findings, and the commonly employed threshold of 0.05 was suggested by Fisher himself as a convenient boundary for deciding whether observed results are statistically significant. Following Fisher's advocacy, 0.05 became widely adopted as the standard for assessing statistical significance, a convention that has persisted across fields and disciplines and gives researchers a clear benchmark for evaluating the robustness of their findings.

However, in 2016, the American Statistical Association (ASA) published a statement on the use of p-values in scientific research in response to their widespread misinterpretation and misuse. The statement reiterated that a p-value on its own should not be the basis for conclusions or policy decisions, and that it cannot prove a hypothesis true. Instead, researchers should practice full transparency and consider the whole picture when drawing conclusions from their research questions.

Moreover, the pressure to obtain low p-values can incentivize researchers to engage in questionable research practices, such as data dredging, p-hacking, and selective reporting, in pursuit of publishable results. Data dredging involves analyzing data repeatedly until a significant result is found, without a clear hypothesis driving the analysis. P-hacking refers to manipulating the data or analysis methods to push a p-value below the threshold of significance. Selective reporting involves highlighting only the statistically significant findings while disregarding or downplaying non-significant ones. Used this way, p-values can lead to inflated effect sizes, where observed effects appear larger or more important than they truly are, and to false-positive results, where the data show a significant effect even though none exists in reality. Both outcomes misinform subsequent research and decision-making.
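To make the false-positive problem concrete, here is a small simulation sketch (the data are hypothetical, assuming NumPy and SciPy are available): every “study” below compares two groups drawn from the same distribution, so any significant result is a false positive.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Run 100 "studies" in which the null hypothesis is true by construction:
# both groups are sampled from the same normal distribution.
false_positives = 0
for _ in range(100):
    group_a = rng.normal(loc=0.0, scale=1.0, size=30)
    group_b = rng.normal(loc=0.0, scale=1.0, size=30)
    _, p = stats.ttest_ind(group_a, group_b)
    if p < 0.05:
        false_positives += 1

# With a 0.05 threshold we expect ~5 of the 100 tests to come out
# "significant" by chance alone -- fertile ground for selective reporting.
print(false_positives)
```

A researcher who runs many such analyses and reports only the significant ones is, in effect, publishing the noise.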

Such practices undermine research integrity and harm reproducibility. Therefore, while p-values can provide valuable information about the likelihood of observing the data under the null hypothesis, their misuse or overemphasis can have detrimental effects on the scientific community and the validity of research outcomes.

Consider a hypothetical situation where a new drug is being tested for its effectiveness against a particular medical condition, and the p-value comes out slightly below 0.05, indicating statistical significance. If policymakers rely solely on this value without considering factors like effect size, clinical relevance, or potential side effects, they might hastily approve the drug. Such over-reliance can lead to inefficient resource allocation, inappropriate treatment decisions, and potential harm to patients if the benefits are marginal or outweighed by the risks.

Now, let's shift our attention to alternative approaches that aim to complement or supplement the p-value, offering insights beyond traditional significance testing. One such approach is the estimation of effect sizes: whereas statistical significance only indicates whether an outcome is plausibly due to chance, an effect size measures the magnitude of the observed effect, offering valuable context for assessing the practical relevance of findings. Another alternative is the use of confidence intervals, which provide a range of plausible values for the true effect size at a specified level of confidence. Unlike p-values, which only indicate whether an effect is statistically significant or not, confidence intervals offer a more nuanced picture of the uncertainty inherent in statistical estimates. Bayesian methods represent yet another paradigm, offering a more flexible framework for hypothesis testing and model estimation. By incorporating prior beliefs and updating them in light of observed data, Bayesian analysis provides a coherent and intuitive approach to statistical inference that addresses many of the shortcomings of classical statistics.
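To illustrate the first two alternatives, here is a minimal Python sketch (the two groups and their scores are hypothetical, invented for this example) that reports Cohen's d, a common standardized effect size, alongside a 95% confidence interval for the difference in means:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
treatment = rng.normal(loc=0.5, scale=1.0, size=40)  # hypothetical scores
control = rng.normal(loc=0.0, scale=1.0, size=40)

# Cohen's d: difference in means divided by the pooled standard deviation.
n1, n2 = len(treatment), len(control)
pooled_var = ((n1 - 1) * treatment.var(ddof=1) +
              (n2 - 1) * control.var(ddof=1)) / (n1 + n2 - 2)
d = (treatment.mean() - control.mean()) / np.sqrt(pooled_var)

# 95% confidence interval for the mean difference (pooled-variance t).
diff = treatment.mean() - control.mean()
se = np.sqrt(pooled_var * (1 / n1 + 1 / n2))
margin = stats.t.ppf(0.975, df=n1 + n2 - 2) * se
print(f"Cohen's d = {d:.2f}, "
      f"95% CI = ({diff - margin:.2f}, {diff + margin:.2f})")
```

And here is a Bayesian take on the earlier coin example, using a Beta-Binomial model (the uniform prior is an assumption made purely for illustration):

```python
from scipy import stats

# Start from a uniform Beta(1, 1) prior on the probability of heads;
# after observing 8 heads and 2 tails, the posterior is Beta(1 + 8, 1 + 2).
posterior = stats.beta(a=1 + 8, b=1 + 2)

# Posterior probability that the coin is biased towards heads (> 0.5).
print(round(posterior.sf(0.5), 3))  # ~0.967
# 95% credible interval for the probability of heads.
print(posterior.interval(0.95))
```

Each of these outputs carries information that a bare “p < 0.05” hides: how large the effect is, how uncertain the estimate is, and how the evidence updates a prior belief.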

To this day, the p-value remains a widely used statistic in hypothesis testing. While it provides a convenient summary of statistical evidence, its misuse and misinterpretation have raised serious concerns about its reliability and validity. There is no “one-size-fits-all” solution in a situation like this. As such, researchers should approach p-values with caution, supplementing them with alternative measures and adopting a more nuanced understanding of statistical inference.

Apoorva Thakur
