Stats (Dec 2023)

Revisiting the Large n (Sample Size) Problem: How to Avert Spurious Significance Results

  • Aris Spanos

DOI
https://doi.org/10.3390/stats6040081
Journal volume & issue
Vol. 6, no. 4
pp. 1323–1338

Abstract


Although large data sets are generally viewed as advantageous for their ability to provide more precise and reliable evidence, it is often overlooked that these benefits are contingent upon certain conditions being met. The primary condition is the approximate validity (statistical adequacy) of the probabilistic assumptions comprising the statistical model M_θ(x) applied to the data. In the case of a statistically adequate M_θ(x) and a given significance level α, as n increases, the power of a test increases, and the p-value decreases due to the inherent trade-off between type I and type II error probabilities in frequentist testing. This trade-off raises concerns about the reliability of declaring ‘statistical significance’ based on conventional significance levels when n is exceptionally large. To address this issue, the author proposes that a principled approach, in the form of post-data severity (SEV) evaluation, be employed. The SEV evaluation represents a post-data error probability that converts unduly data-specific ‘accept/reject H0 results’ into evidence either supporting or contradicting inferential claims regarding the parameters of interest. This approach offers a more nuanced and robust perspective in navigating the challenges posed by the large n problem.
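The Python sketch below (not from the paper) illustrates both points of the abstract for the simple one-sided Normal-mean test H0: μ ≤ μ0 vs. H1: μ > μ0 with known σ: as n grows, a substantively negligible discrepancy yields an ever smaller p-value, while the post-data severity SEV(μ > μ1) = P(d(X) ≤ d(x0); μ = μ1) = Φ(d(x0) − √n·γ/σ), with μ1 = μ0 + γ, indicates which discrepancies γ the rejection actually warrants. The sample sizes, the true discrepancy, and the values of γ are illustrative assumptions.

```python
# Minimal sketch (assumed setup, not the paper's code): large-n spurious
# significance and post-data severity for H0: mu <= mu0 vs H1: mu > mu0,
# Normal data with known sigma.
import numpy as np
from scipy.stats import norm

mu0, sigma = 0.0, 1.0      # null value and (assumed known) std. deviation
true_mu = 0.02             # substantively negligible true discrepancy
alpha = 0.05

rng = np.random.default_rng(12345)

for n in (100, 10_000, 1_000_000):
    x = rng.normal(true_mu, sigma, size=n)
    d = np.sqrt(n) * (x.mean() - mu0) / sigma   # test statistic d(x0)
    p = norm.sf(d)                              # one-sided p-value
    print(f"n={n:>9,}  d(x0)={d:6.2f}  p={p:.2e}  "
          f"reject H0 at alpha={alpha}: {p < alpha}")

    # Post-data severity of the claim mu > mu1 = mu0 + gamma given d(x0):
    # SEV(mu > mu1) = P(d(X) <= d(x0); mu = mu1) = Phi(d(x0) - sqrt(n)*gamma/sigma)
    for gamma in (0.0, 0.02, 0.1):
        sev = norm.cdf(d - np.sqrt(n) * gamma / sigma)
        print(f"    SEV(mu > {mu0 + gamma:.2f}) = {sev:.3f}")
```

Under these assumptions, the million-observation sample rejects H0 with a vanishingly small p-value, yet the severity for the claim μ > 0.1 is essentially zero: the rejection provides evidence only for discrepancies near the true, negligible one, which is the sense in which the SEV evaluation converts an accept/reject result into evidence for or against specific inferential claims.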

Keywords