Testing whether two related slopes/intercepts are different from each other

#1
Hi, I'm new and stuck.

I want to test whether the intercepts and slopes of two linear regression lines are significantly different. I thought I had found a solution, but it may not work because the data sets overlap. Specifically:
I have data for X, Y1, and Y2. I can fit regressions for both (X, Y1) and (X, Y2) and determine whether each is significant. But now I want to test whether the intercepts/slopes differ between the two regressions. Alternatively, I could test whether they equal fixed values, i.e. intercept = 0 and slope = 1. Mainly I'm interested in the intercepts: whether, say, 3 and 5 are significantly different, given the variances of the data. The slope is ~1 for both, and r is very high (>0.98) for both.

Is there any way I can test this significance, either using the raw data or given standard errors etc.? Information about this for linear regression is appreciated, as well as for higher-order regression (though that is less crucial and might be less meaningful). Also, I don't know whether ANCOVA/MANCOVA is appropriate or how to implement it. I can work mainly by hand/Excel, SPSS, or MATLAB (R etc. I might be able to translate).
Much appreciated.
 

Dason

Ambassador to the humans
#2
Shouldn't be too bad. Use the difference of Y1 and Y2 as your response and fit a regression on X. Then test whether the slope and intercept of this difference regression are different from 0. If they are, you have evidence that the regression lines are different.
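Dason's suggestion can be sketched numerically. This is a hypothetical illustration with simulated data (the intercepts 3 and 5 echo the OP's example; all other numbers are made up), fitting OLS to the difference D = Y1 − Y2 and computing the usual t statistics by hand:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 30
x = rng.uniform(0, 10, n)                    # shared predictor (same X for both)
y1 = 3 + 1.0 * x + rng.normal(0, 0.5, n)     # line with intercept 3, slope 1
y2 = 5 + 1.0 * x + rng.normal(0, 0.5, n)     # line with intercept 5, slope 1

d = y1 - y2                                  # difference as the new response
X = np.column_stack([np.ones(n), x])         # design matrix [1, x]

beta, *_ = np.linalg.lstsq(X, d, rcond=None) # OLS fit of d on x
resid = d - X @ beta
s2 = resid @ resid / (n - 2)                 # residual variance
se = np.sqrt(np.diag(s2 * np.linalg.inv(X.T @ X)))
t = beta / se                                # t statistics with n - 2 df

print("intercept diff: est=%.3f, t=%.2f" % (beta[0], t[0]))
print("slope diff:     est=%.3f, t=%.2f" % (beta[1], t[1]))
```

If the lines truly differ only in intercept, the fitted intercept of the difference regression should sit near −2 with a large |t|, while the slope term should be near 0.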
 
#5
Dason said:
> Shouldn't be too bad. Use the difference of the Y1 and Y2 as your response and fit a regression on X. Then test if the slope and intercept of this are different from 0. If they are then you have evidence that the regression lines are different.
Thanks. Sounds deceptively simple. Just a t-test type thing?

> The OP's question/problem appears to be a "Chow test" scenario.
>
> Check by looking here: http://en.wikipedia.org/wiki/Chow_test
Heh. That's funny. Not 10 minutes after posting the OP I came across that page while searching for a completely unrelated stats test. I didn't know if it would work because it says, "different data sets." All the data is within-subjects/paired, with X as the same pretest.
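For reference, the Chow statistic linked above is easy to compute directly: fit the two regressions separately and once pooled, and compare the residual sums of squares. Note that the standard Chow test assumes the two samples are independent, which the paired/within-subjects design here may violate. A sketch on simulated data (all numbers hypothetical):

```python
import numpy as np

def ssr(x, y):
    """Residual sum of squares from an OLS fit of y on x (with intercept)."""
    X = np.column_stack([np.ones(len(x)), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return r @ r

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 40)
y1 = 3 + x + rng.normal(0, 0.5, 40)          # "group" 1
y2 = 5 + x + rng.normal(0, 0.5, 40)          # "group" 2

k = 2                                        # parameters per regression
n1 = n2 = len(x)
s1, s2 = ssr(x, y1), ssr(x, y2)              # separate fits
sp = ssr(np.concatenate([x, x]), np.concatenate([y1, y2]))  # pooled fit

# Chow F statistic: does pooling cost significantly more error?
F = ((sp - (s1 + s2)) / k) / ((s1 + s2) / (n1 + n2 - 2 * k))
print("Chow F = %.2f (compare to F(%d, %d))" % (F, k, n1 + n2 - 2 * k))
```

A large F relative to the F(k, n1 + n2 − 2k) reference distribution indicates the two lines cannot be pooled into one.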
 

Dragan

Super Moderator
#6
> Thanks. Sounds deceptively simple. Just a t-test type thing?


> Heh. That's funny. Not 10 minutes after posting the OP I came across that page while searching for a completely unrelated stats test. I didn't know if it would work because it says, "different data sets." All the data is within-subjects/paired, with X as the same pretest.

I can't quite tell.

Are Y1 and Y2 different response variables, while X is the same for both? If that is the case, then a Chow test would not be appropriate, and Dason's suggestion is the way to go.

On the other hand, if Y1 and Y2 are, say, time period 1 and time period 2 of the same response variable, then the Chow test could be appropriate.
 

Lazar

Phineas Packard
#7
You could do it in R in a deliberately elaborate manner as follows (I was a bit bored while waiting for a model to converge):
Code:
data(iris)
library(lavaan)
model <- "
Petal.Width ~ s1*Sepal.Length
Petal.Length ~ s2*Sepal.Length
Petal.Width ~ i1*1
Petal.Length ~ i2*1

diffInt:= i1 - i2
diffSlope:= s1 - s2
"
fit <- sem(model, data = iris)
summary(fit)
Which ends up just giving you what Dason suggested, but meh.
 
#8
Stupid example, because it's easier than explaining the real thing:
The task is a driving game. Subjects do the exact same task 3 times (counterbalanced etc.): 1) X = sober, 2) Y1 = drunk, 3) Y2 = caffeine or something innocuous. I can determine which coefficients are different from 0 and 1 between each Y and X; now I want to know whether the intercepts (and possibly slopes) of (X, Y1) are different from those of (X, Y2). The expectation is that the Y1 regression differs in one or both coefficients, while Y2 has slope = 1 and intercept = 0 (or no significant effect). I don't think the first test implies the second.
 

Dragan

Super Moderator
#9
> Stupid example, because it's easier than explaining the thing:
> The task is a driving game. They do the exact same task 3 times (counterbalancing etc.): 1) X = Sober, 2) Y1 = Drunk, 3) Y2 = Caffeine or something innocuous. I can determine which coefficients are different from 0 and 1 between each Y and X, now I want to know whether the intercepts (and possibly slopes) (X,Y1) are different from (X,Y2). The goal is that the Y1 comparison is different in 1 or both coefficients while Y2 is slope = 1 int = 0 or not significant as it has no effect. I don't think the first test implies the second.
Okay I see now.

I think, perhaps, that one efficient way to do this is through the bootstrap. That is, you run bootstrap regression models of Y1 on X and of Y2 on X. Each of the two regression models would provide confidence intervals on the intercept and slope coefficients. The "easy" way out is to look and see whether the confidence intervals satisfy your null hypotheses in both cases. Note that they are related, so you might want to use a Type I error rate of something like 0.025 instead of 0.05.
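A rough sketch of the bootstrap idea on simulated data (hypothetical numbers; here the cases are resampled as pairs, which preserves the within-subjects structure):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 30
x = rng.uniform(0, 10, n)
y1 = 3 + x + rng.normal(0, 0.5, n)
y2 = 5 + x + rng.normal(0, 0.5, n)

def fit(x, y):
    """Return (intercept, slope) from a simple OLS fit."""
    X = np.column_stack([np.ones(len(x)), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

B = 2000
boots = np.empty((B, 2, 2))                  # B x {Y1, Y2} x {intercept, slope}
for b in range(B):
    idx = rng.integers(0, n, n)              # resample subjects with replacement
    boots[b, 0] = fit(x[idx], y1[idx])       # both fits use the same resample,
    boots[b, 1] = fit(x[idx], y2[idx])       # keeping the pairing intact

# 97.5% percentile intervals (Type I error rate 0.025, as suggested)
for j, name in enumerate(("intercept", "slope")):
    for m in range(2):
        lo, hi = np.percentile(boots[:, m, j], [1.25, 98.75])
        print("Y%d %s: [%.3f, %.3f]" % (m + 1, name, lo, hi))
```

With intercepts of 3 and 5 and small residual noise, the two intercept intervals should not overlap, while both slope intervals should cover 1.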
 

Dason

Ambassador to the humans
#10
Dragan said:
> Okay I see now.
>
> I think, perhaps, that one efficient way to do this is through the Bootstrap. That is, you run Bootstrap regression models on Y1 and X; and Y2 on X. Each of the two regression models would provide confidence intervals on the intercepts and slope coefficients. The "easy" way out is to look and see if the confidence intervals satisfy your null hypotheses for both cases. Note that they are related so you might want to use a Type I error rate of something like 0.025 instead of 0.05.
What's wrong with the method I suggested? I mean, ideally you'd do a full-versus-reduced test against the model E(y) = 0, but either way should be fine.
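The full-versus-reduced comparison Dason mentions can be written out explicitly: the full model fits both coefficients of the difference regression, the reduced model is E(d) = 0 with no parameters, and an F statistic tests both coefficients jointly. A sketch on simulated data (hypothetical numbers matching the earlier example):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 30
x = rng.uniform(0, 10, n)
d = (3 + x + rng.normal(0, 0.5, n)) - (5 + x + rng.normal(0, 0.5, n))

X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, d, rcond=None)
ssr_full = np.sum((d - X @ beta) ** 2)       # full model: d = b0 + b1*x
ssr_red = np.sum(d ** 2)                     # reduced model: E(d) = 0

# Joint F test of intercept = 0 and slope = 0 in the difference regression
F = ((ssr_red - ssr_full) / 2) / (ssr_full / (n - 2))
print("F = %.2f (compare to F(2, %d))" % (F, n - 2))
```

This tests both coefficients at once, rather than two separate t-tests, which is what makes it the tidier version of the difference-regression approach.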
 

Dragan

Super Moderator
#11
Dason said:
> What's wrong with the method I suggested? I mean you'd ideally do a full vs reduced test against the model E(y) = 0 but either way should be fine.
Yeah, sounds good to me... now that I understand what's being attempted. I guess I'm becoming too much of a computational statistician, trying to avoid assumptions like normality :=) ... I don't know what the OP's sample size is.