Effect of a variable in this regression

#1
I am running a logit regression on some football stats. I am using a rating that I created, plus another statistic that is a 0 or a 1 (home or away).

wdpts is my statistic: it is simply the winning team's rating minus the losing team's rating. AWin is 1 if the away team won and 0 otherwise, and HWin is 1 if the home team won and 0 otherwise. This means (and I have checked) that HWin = 1 - AWin.
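
For concreteness, here is a minimal sketch of how I build these variables, assuming a pandas DataFrame; the column names home_rating, away_rating, and home_won are illustrative stand-ins, not my actual data:

import pandas as pd

# Hypothetical example rows; the ratings and outcomes are made up.
games = pd.DataFrame({
    "home_rating": [1.2, 0.4, 0.9],
    "away_rating": [0.8, 1.1, 0.3],
    "home_won":    [1, 0, 1],        # 1 if the home team won
})

# HWin = 1 when the home team won; AWin is its complement.
games["HWin"] = games["home_won"]
games["AWin"] = 1 - games["home_won"]

# wdpts: the winning team's rating minus the losing team's rating.
winner = games["home_rating"].where(games["home_won"] == 1, games["away_rating"])
loser = games["away_rating"].where(games["home_won"] == 1, games["home_rating"])
games["wdpts"] = winner - loser

# The identity I checked: HWin is always exactly 1 - AWin.
assert (games["HWin"] == 1 - games["AWin"]).all()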

THE PROBLEM: When I run a logit regression with wdpts and AWin, AWin's p-value is .0016, which seems right, since being the away team should hurt you (in theory). However, when I run the same regression with HWin instead, HWin is insignificant with a p-value of .655. Since each is simply 1 minus the other, should their coefficients not just be opposites?
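
To spell out that intuition: in a specification that includes a constant term, substituting HWin = 1 - AWin only reshuffles the intercept,

\(
\beta_0 + \beta_1\,\mathrm{wdpts} + \beta_2\,\mathrm{AWin} = (\beta_0 + \beta_2) + \beta_1\,\mathrm{wdpts} - \beta_2\,\mathrm{HWin},
\)

so the HWin coefficient should come out as exactly the negative of the AWin coefficient, with the same standard error and p-value.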

Here are the regressions:

_________________________________________________________________

Convergence achieved after 4 iterations

Model 8: Logit, using observations 1-1280
Dependent variable: both

              coefficient   std. error     z       p-value
  ---------------------------------------------------------
  wdpts         1.90892      0.173985    10.97    5.22e-028  ***
  AWin         -0.307525     0.0974326   -3.156   0.0016     ***

Mean dependent var 0.557813 S.D. dependent var 0.245940
McFadden R-squared 0.070984 Adjusted R-squared 0.068708
Log-likelihood -816.2828 Akaike criterion 1636.566
Schwarz criterion 1646.875 Hannan-Quinn 1640.437

Number of cases 'correctly predicted' = 825 (64.5%)
f(beta'x) at mean of independent vars = 0.246
Likelihood ratio test: Chi-square(2) = 124.74 [0.0000]
           Predicted
              0     1
  Actual 0  281   285
         1  170   544


________________________________________________________________


Convergence achieved after 4 iterations

Model 9: Logit, using observations 1-1280
Dependent variable: both

              coefficient   std. error     z       p-value
  ---------------------------------------------------------
  HWin          0.0366679    0.0821187    0.4465   0.6552
  wdpts         1.67870      0.166850    10.06     8.21e-024  ***

Mean dependent var 0.557813 S.D. dependent var 0.241935
McFadden R-squared 0.065361 Adjusted R-squared 0.063085
Log-likelihood -821.2229 Akaike criterion 1646.446
Schwarz criterion 1656.755 Hannan-Quinn 1650.317

Number of cases 'correctly predicted' = 817 (63.8%)
f(beta'x) at mean of independent vars = 0.242
Likelihood ratio test: Chi-square(2) = 114.86 [0.0000]

           Predicted
              0     1
  Actual 0  224   342
         1  121   593
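
In case anyone wants to poke at this without gretl, here is a minimal sketch that reproduces the comparison in Python with statsmodels; the data are simulated stand-ins, and the two specifications mirror the output above (no constant term, just one dummy plus wdpts):

import numpy as np
import statsmodels.api as sm

# Simulated stand-in data; the real dataset is not reproduced here.
rng = np.random.default_rng(0)
n = 1280
wdpts = rng.normal(size=n)
AWin = rng.integers(0, 2, size=n)
HWin = 1 - AWin
p = 1 / (1 + np.exp(-(1.8 * wdpts - 0.3 * AWin)))
y = rng.binomial(1, p)

# Same two specifications as the gretl models above: no constant term.
m_away = sm.Logit(y, np.column_stack([wdpts, AWin])).fit(disp=0)
m_home = sm.Logit(y, np.column_stack([wdpts, HWin])).fit(disp=0)

print(m_away.params, m_away.llf)  # coefficients and log-likelihood, AWin version
print(m_home.params, m_home.llf)  # coefficients and log-likelihood, HWin version

If a constant is added to both (sm.add_constant), the substitution algebra above applies directly; without one, the two regressions are not just reparameterizations of each other.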
 
#2
Is there no statistics guru to help me?

I am really struggling to understand why this happens. It seems the regression should just produce the same coefficient with the opposite sign, but it does not.

*BUMP*

selfreply++;