I have two dependent variables, Y1 and Y2, which are related to each other (they are two measures from the same sample). I have a regression model with three independent predictors, A, B, and C, and all their two- and three-way interactions entered. I would like to run this regression on each of my two dependent variables separately. I would then like to take the regression coefficient for the A*B interaction term (which is the one of interest) from each model and compare them (by dividing the difference between the betas by the square root of the sum of the squared SEs), to show that the A*B term predicts significantly more of the variance of dependent variable Y1 than it does of dependent variable Y2.
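To make the comparison concrete, here is a minimal sketch of the test I have in mind, using made-up coefficients and standard errors (note the formula treats the two estimates as independent, which may not hold since Y1 and Y2 come from the same sample):

```python
import math

def coef_diff_z(b1, se1, b2, se2):
    """Z-statistic for the difference between two regression coefficients,
    z = (b1 - b2) / sqrt(se1**2 + se2**2), treating the estimates as independent."""
    z = (b1 - b2) / math.sqrt(se1**2 + se2**2)
    # Two-tailed p-value from the standard normal CDF (via the error function)
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p

# Hypothetical A*B coefficients and SEs from the Y1 and Y2 models
z, p = coef_diff_z(b1=0.45, se1=0.12, b2=0.10, se2=0.15)
print(z, p)
```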

My questions are:

a) Is running two identical regressions on Y1 and Y2 an acceptable method of comparing the effect of a specific term on Y1 and Y2?

b) The overall model (and the A*B term) for Y1 is significant, whereas the overall model (and the A*B term) for Y2 is not significant (which follows our predictions). When comparing the two A*B regression coefficients, do I take them from these two full regressions with all the A, B, C, and interaction terms included? Or can I reduce the model for Y1 by removing all C terms (given that C is not significant, and neither are its interactions) and use the reduced (still overall significant) model to provide my A*B regression coefficient?

c) And finally, if I do take the A*B regression coefficient from the reduced model for Y1, do I also reduce the model for Y2 so I can compare regression coefficients from identical models?

Many thanks!