
Importance of checking the signal-to-noise ratio in the big data era

Updated: Mar 21, 2023

In this article, I demonstrate how important it is to evaluate or test the signal-to-noise ratio in statistical analysis, especially when the sample size is large or massive.


What is the signal-to-noise ratio?


Consider, as an example, a linear regression model of the following form:

$$y_t = \beta_0 + \beta_1 X_{1t} + \cdots + \beta_K X_{Kt} + u_t, \qquad t = 1, \ldots, T.$$
We often test for a linear hypothesis such as

$$H_0: \beta_1 = \beta_2 = \cdots = \beta_J = 0$$
against H1, under which H0 is violated. The test is conducted using the F-test, which can be written as

$$F = \frac{T - K - 1}{J} \times m,$$

where T is the sample size, K is the number of X variables, J is the number of restrictions under H0, and

$$m = \frac{R_1^2 - R_0^2}{1 - R_1^2},$$

with $R_1^2$ and $R_0^2$ denoting the R-squared values of the unrestricted and restricted models, respectively.
The quantity m is called the signal-to-noise ratio, and it measures how much the restriction under H0 (or the corresponding X variables) contributes to the goodness-of-fit of the model.


Taking a simple case as an example, suppose we test a single restriction (J = 1):

$$H_0: \beta_1 = 0.$$

The signal-to-noise ratio m in this case measures the contribution of the variable X1 to the model's fit or in-sample predictability, relative to its noise component.
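
To make the definitions concrete, here is a minimal sketch (my own illustration, not code from the working paper) that computes m and the F-statistic for simulated data, using Python and statsmodels:

```python
# A minimal sketch: the signal-to-noise ratio m and the F-statistic
# for H0: beta1 = 0 in a two-regressor model, on simulated data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
T = 500
x1 = rng.normal(size=T)
x2 = rng.normal(size=T)
y = 0.2 * x1 + 0.5 * x2 + rng.normal(size=T)

# Unrestricted model: y on (const, X1, X2); restricted model drops X1.
fit_u = sm.OLS(y, sm.add_constant(np.column_stack([x1, x2]))).fit()
fit_r = sm.OLS(y, sm.add_constant(x2)).fit()

# Signal-to-noise ratio: gain in R-squared relative to the unexplained variance.
m = (fit_u.rsquared - fit_r.rsquared) / (1 - fit_u.rsquared)

# The F-statistic is m scaled by a factor driven by the sample size.
K, J = 2, 1  # number of X variables, number of restrictions
F = m * (T - K - 1) / J
print(f"m = {m:.4f}, F = {F:.2f}")
print(fit_u.compare_f_test(fit_r))  # cross-check: (F, p-value, J)
```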


Why is it important?

Notice that the F-test statistic given above is the signal-to-noise ratio multiplied by a factor. This factor is driven by the sample size (T), given the values of K and J. In fact, many other statistical tests (such as the t-test) can be expressed similarly as a scaled version of the signal-to-noise ratio.


The problem occurs when the sample size is large or massive. Because the scaling factor grows with T, the F-test statistic can be large enough to reject the null hypothesis even if the value of m is small or negligible.


Hence, even if the contribution of the variables being tested is negligible, your t-test and F-test can reject the null hypothesis. This is a serious limitation of hypothesis testing in the big data era. That is, when the sample size is large enough, any practically negligible deviation from H0 is rejected with an infinitesimal p-value.
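
This point is easy to verify by simulation. In the sketch below (again my own illustration), the slope on X is fixed at a practically negligible 0.01; as T grows, m stays essentially zero while the t-statistic and p-value drift toward overwhelming "significance":

```python
# A negligible slope (beta1 = 0.01) becomes "significant" as T grows,
# even though the signal-to-noise ratio m remains essentially zero.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
for T in [100, 10_000, 1_000_000]:
    x = rng.normal(size=T)
    y = 0.01 * x + rng.normal(size=T)      # practically negligible signal
    fit = sm.OLS(y, sm.add_constant(x)).fit()
    m = fit.rsquared / (1 - fit.rsquared)  # restricted model: intercept only
    print(f"T = {T:>9,}: m = {m:.6f}, t = {fit.tvalues[1]:.2f}, p = {fit.pvalues[1]:.2g}")
```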


In fact, the signal-to-noise ratio is also known as Cohen's f² in behavioral science, where it is used as a measure of effect size. According to Cohen (2013), m values of 0.02, 0.15, and 0.35 serve as thresholds for a small, medium, and large effect, respectively.


Hence, it is imperative to check the value of m (or effect size) in practical applications, especially when the sample size is large, using the values suggested by Cohen (2013) as benchmarks.
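
As a quick aid, a small helper (the function name and the "negligible" label below the small threshold are my own choices) can classify an observed m against the Cohen (2013) benchmarks:

```python
# Label an observed signal-to-noise ratio m against Cohen's (2013)
# benchmarks of 0.02 (small), 0.15 (medium), and 0.35 (large).
def effect_size_label(m: float) -> str:
    if m < 0.02:
        return "negligible"
    if m < 0.15:
        return "small"
    if m < 0.35:
        return "medium"
    return "large"

print(effect_size_label(0.001))  # negligible
print(effect_size_label(0.20))   # medium
```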


A testing procedure for m has been proposed in my working paper, which is posted here.


Example


The above table reports the regression results from a paper published in a top journal in finance, where T = 119,785. Each column represents an alternative regression model for the same dependent variable. Compare regressions (1) and (2). The variable (UEHIGH × HISR) in (2) shows a t-statistic of 4.58 (= 0.55/0.12), but the value of m is nearly 0 (= (0.043 − 0.042)/(1 − 0.043) ≈ 0.001). The statistical significance of (UEHIGH × HISR) is driven almost entirely by the large sample size, while the variable adds virtually nothing to the explanatory power of the model.
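
Both numbers are easy to reproduce from the reported table entries:

```python
# Back-of-the-envelope check of the figures quoted from the published table.
t_stat = 0.55 / 0.12               # t-statistic: about 4.58
m = (0.043 - 0.042) / (1 - 0.043)  # signal-to-noise ratio: about 0.001
print(f"t = {t_stat:.2f}, m = {m:.4f}")  # t = 4.58, m = 0.0010
```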







