Confusion over the F-ratio in a hierarchical multiple regression

#1
Hey all,

I'm not used to working with multiple regression analysis, so I'd be glad if anyone could help me interpret this result:

I performed a hierarchical multiple regression in SPSS (forced entry) in two blocks to predict the variable SM:
Block 1: Predictors: IQ, AGE
Block 2: Predictors: PT

Results for Block 1:
R: .730
R Square / Adjusted R Square: .533 / .492
F Change: 13.125
Sig. F Change: .000


Results for Block 2:
R: .783
R Square / Adjusted R Square: .614 / .561
R Square Change: .081
F Change: 4.602
Sig. F Change: .043


I'm a little confused by the F Change: Model 2 obviously explains more variance than Model 1 (that's the R Square Change), but its F-ratio is lower. Or does F Change mean that the model's F-ratio improved by 4.602?
(The ANOVA table gives me an F-ratio of 13.125 for Model 1 and 11.654 for Model 2.)

So is Model 2 really better than Model 1?

Thanks in advance,

noetsi

Fortran must die
#2
Hierarchical regression does not tell you whether one model is better than another. It tells you whether adding variables to a set of variables already in the model adds predictive value.

The F-ratio will logically be smaller in later blocks if your theory of which variables are most important is correct, because the F Change test essentially asks whether the new variables add predictive value to the model, and the first set of variables entered will usually add more predictive value than later ones. What matters is that the F Change test is statistically significant at the .05 level for the second block. This means the second set of variables added predictive value.
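To make the distinction concrete, SPSS's F Change can be recomputed from the R Square values in the post. The sample size isn't given, but n = 26 reproduces the reported adjusted R Square and F values, so it is assumed here; small discrepancies against the printed output are rounding in the three-decimal R² figures.

```python
# Recompute SPSS "F Change" from the reported R-square values.
# Assumption: n = 26 (inferred; it reproduces the reported adjusted R² and F).
n = 26

# Block 1: IQ and AGE (k1 = 2 predictors)
r2_1, k1 = 0.533, 2
f_block1 = (r2_1 / k1) / ((1 - r2_1) / (n - k1 - 1))
print(f"Block 1 F        = {f_block1:.3f}")   # ~13.1, the ANOVA F for Model 1

# Block 2 adds PT (k2 = 3 predictors total, 1 variable added)
r2_2, k2 = 0.614, 3
added = k2 - k1
f_change = ((r2_2 - r2_1) / added) / ((1 - r2_2) / (n - k2 - 1))
print(f"F Change         = {f_change:.3f}")   # ~4.6, tests only the added PT

# F for the full Model 2 itself (a different hypothesis: all 3 slopes = 0)
f_model2 = (r2_2 / k2) / ((1 - r2_2) / (n - k2 - 1))
print(f"Model 2 F (ANOVA) = {f_model2:.3f}")  # ~11.7
```

The point of the sketch: F Change (~4.6) and the Model 2 ANOVA F (~11.7) test different hypotheses, so the drop from 13.125 to 11.654 across models does not mean Model 2 is worse.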

You don't use hierarchical regression to tell you whether one set of variables adds more predictive value than the first set (or earlier sets), or at least I have not seen it done this way. You use it to tell you whether adding the variables in a given block adds predictive value, suggesting that they add value even after taking the earlier variables into account. I suppose you could enter the variables in reverse order, that is, run the second set first in another hierarchical regression, and compare the change in adjusted R square between the runs, but I don't know whether that is a good test of which variables are relatively more important, if that is what you want.
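That reverse-order idea can be sketched on synthetic data. Everything below is hypothetical (made-up coefficients and noise, not the poster's data); it only illustrates that the R Square change attributed to a block depends on entry order, while the full-model R Square does not.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
# Hypothetical synthetic predictors mimicking the setup: IQ, AGE, PT -> SM
iq, age, pt = rng.normal(size=(3, n))
sm = 0.5 * iq + 0.3 * age + 0.4 * pt + rng.normal(scale=0.8, size=n)

def r_square(y, *preds):
    """R² of an OLS fit with intercept, via least squares."""
    X = np.column_stack([np.ones(len(y)), *preds])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

r2_full = r_square(sm, iq, age, pt)

# Order A: IQ + AGE entered first, PT added second
r2_block1_a = r_square(sm, iq, age)
print(f"ΔR² for PT after IQ, AGE: {r2_full - r2_block1_a:.3f}")

# Order B (reversed): PT entered first, IQ + AGE added second
r2_block1_b = r_square(sm, pt)
print(f"ΔR² for IQ, AGE after PT: {r2_full - r2_block1_b:.3f}")
```

The two ΔR² values differ because shared variance is credited to whichever block enters first, which is exactly why order-swapping is a shaky measure of relative importance.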