Why does the “linear regression t-test” return a p-value (two-tailed) from regression that is twice the p-value from ANOVA? (Binary predictor)

I'm using the "linear regression t-test" guide at https://stattrek.com/regression/slope-test.aspx

The guide computes t = b1/SE, where b1 and SE come from the regression output (here from lm() in R). It then doubles the one-tailed p-value because the test is two-sided (null hypothesis: the slope is zero).
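To make the guide's recipe concrete, here is a minimal R sketch on simulated data (the data and variable names are illustrative, not from the guide):

```r
# Simulate a simple regression, then redo the guide's t-test by hand.
set.seed(1)
x <- rnorm(30)
y <- 2 + 0.5 * x + rnorm(30)
fit <- lm(y ~ x)

b1 <- coef(summary(fit))["x", "Estimate"]    # slope estimate
se <- coef(summary(fit))["x", "Std. Error"]  # its standard error
t_stat <- b1 / se
df <- fit$df.residual                         # n - 2 in simple regression
p_two_sided <- 2 * pt(-abs(t_stat), df)       # the guide's "doubling" step
```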

With a binary predictor, R's output gives a p-value for the non-intercept coefficient that equals the p-value of the reported F-statistic. This is the same F-statistic returned by aov().
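A small R sketch of what I'm seeing, using a simulated binary predictor (the data are made up for illustration):

```r
# Binary (two-level) predictor: the coefficient's t-test p-value from
# summary(lm()) matches the F-test p-value from anova()/aov().
set.seed(1)
g <- factor(rep(c("A", "B"), each = 15))
y <- 2 + 0.5 * (g == "B") + rnorm(30)
fit <- lm(y ~ g)

p_t <- coef(summary(fit))["gB", "Pr(>|t|)"]  # coefficient t-test
p_F <- anova(fit)[["Pr(>F)"]][1]             # F-test (same as aov())
all.equal(p_t, p_F)                          # TRUE
```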

In the guide, the p-value associated with the coefficient gets doubled (at least in their continuous-variable example).

Doesn't ANOVA measure the significance of the predictor in this case? So why does the "linear regression t-test" say the p-value (for the null hypothesis of zero slope) is twice this?

Also, I can't get math tags to work. This, with curly braces: {math}b_1{math}, gives \(b_1\)
Apparently, the Pr(>|t|) values that lm() reports for the coefficients are already two-tailed, while the guide assumes a regression package that reports a one-tailed p-value you must double yourself. And since F = t² for a single predictor, the F-test p-value coincides with the two-tailed t-test p-value, so doubling lm()'s output again gives twice the ANOVA p-value.
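A quick R check of both claims on simulated data (simulated here for illustration): Pr(>|t|) from summary(lm()) already equals 2 * pt(-abs(t), df), and F equals t² with one predictor.

```r
# Verify that summary(lm())'s Pr(>|t|) is already two-tailed,
# and that F = t^2 for a single (binary) predictor.
set.seed(1)
g <- factor(rep(c("A", "B"), each = 15))
y <- 2 + 0.5 * (g == "B") + rnorm(30)
fit <- lm(y ~ g)

t_stat <- coef(summary(fit))["gB", "t value"]
df <- fit$df.residual
p_reported <- coef(summary(fit))["gB", "Pr(>|t|)"]

all.equal(p_reported, 2 * pt(-abs(t_stat), df))  # TRUE: already two-tailed
all.equal(t_stat^2, anova(fit)[["F value"]][1])  # TRUE: F = t^2
```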