Can I calculate covariance for binary logistic output from SPSS?

Hi, I am using SPSS to identify moderation relationships. I am using binary logistic regression because the outcome variable could not be transformed to normality, so I have recoded it into high and low groups. I am then graphing the moderation relationship with a very cool program I found on the internet called modgraph:

This program also calculates simple slopes. My problem is that to calculate the simple slopes I need to provide the covariance of the variables, and SPSS binary logistic regression doesn't output covariances. It does provide an estimate of the correlation, and I wonder if there is any way to calculate the covariance from that, or whether this is not possible/sensible with binary logistic regression?

Advice would be very much appreciated.



Super Moderator
Hi, I am using SPSS to identify moderation relationships. I am using binary logistic regression because the outcome variable could not be transformed to normality
Back up for a second. Linear regression doesn't require your outcome variable to be normally distributed. It only requires the errors to be normally distributed, and even that assumption becomes less and less important the larger your sample. By categorising your outcome into a binary variable you're deleting variation, and reducing power. Are you sure you need to do this?
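To give a feel for how much a median split can cost, here is a small simulation (plain Python/NumPy rather than SPSS, with made-up numbers, purely illustrative): it compares the correlation of a predictor with a continuous outcome against its correlation with the same outcome dichotomised into high/low groups at the median.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000  # large n so the comparison isn't clouded by sampling noise
x = rng.normal(size=n)
y = 0.3 * x + rng.normal(size=n)  # continuous outcome related to x

# median-split y into "high" vs "low" groups, as in the post
y_bin = (y > np.median(y)).astype(float)

r_cont = np.corrcoef(x, y)[0, 1]      # correlation with the original outcome
r_bin = np.corrcoef(x, y_bin)[0, 1]   # correlation with the dichotomised outcome
print(r_cont, r_bin)  # the binary version is noticeably attenuated
```

For a roughly normal outcome, a median split attenuates the correlation by about a fifth, which translates directly into lost power.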
Thanks for taking the time to respond. I'm using a coping inventory, and for some strategies, such as denial, the responses are stacked at the 'don't use' end of the scale, so the variable has a heavy positive skew. The scale only ranges from 4 (don't use, which is the sum of four questions) to 16. Unfortunately, the residuals in linear regression were far from normally distributed. I have a sample size of 107.


Super Moderator
Great that you took the time to check out the residuals. However, your sample size provides some protection against error non-normality. If we assume normal errors, the sampling distribution of the regression coefficients is exactly normal - but even if the errors aren't normal, the sampling distribution of the coefficients converges toward a normal distribution as the sample size grows.
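That convergence is easy to see in a simulation (sketched in Python/NumPy rather than SPSS; the slope of 0.5 and the exponential errors are made up for illustration): even at n = 107 with heavily skewed errors, the sampling distribution of the slope estimate is close to symmetric and centred on the true value.

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps, true_slope = 107, 2000, 0.5
slopes = np.empty(reps)
for i in range(reps):
    x = rng.normal(size=n)
    # heavily right-skewed errors, centred at zero (like a floor-effect scale)
    e = rng.exponential(scale=1.0, size=n) - 1.0
    y = 2.0 + true_slope * x + e
    slopes[i] = np.polyfit(x, y, 1)[0]  # OLS slope for this replication

print(slopes.mean())  # close to the true slope despite skewed errors
# skewness of the sampling distribution of the slope - near zero
skew = np.mean(((slopes - slopes.mean()) / slopes.std()) ** 3)
print(skew)
```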

Normality of the errors also isn't required for OLS estimates to be consistent, unbiased, and efficient in the sense of being best linear unbiased estimates. I.e. it's only needed for significance testing and confidence intervals. So a compromise that would allow you to avoid throwing away variation in your original data could be to use OLS regression, but compute confidence intervals via bootstrapping instead of assuming normal errors.
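A case-resampling percentile bootstrap for a regression slope could look like the sketch below (Python/NumPy as an illustration, with made-up data; in SPSS itself, the optional bootstrapping add-on exposes a bootstrap option for the linear regression procedure, so you wouldn't need to code this by hand).

```python
import numpy as np

def bootstrap_slope_ci(x, y, n_boot=5000, alpha=0.05, seed=0):
    """Percentile-bootstrap CI for the OLS slope: resample cases with
    replacement, refit, and take empirical quantiles of the slopes."""
    rng = np.random.default_rng(seed)
    n = len(x)
    slopes = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)          # resample rows with replacement
        slopes[b] = np.polyfit(x[idx], y[idx], 1)[0]
    return np.quantile(slopes, [alpha / 2, 1 - alpha / 2])

# illustrative data: n = 107 with skewed errors, like a floor-effect coping scale
rng = np.random.default_rng(1)
x = rng.normal(size=107)
y = 2.0 + 0.5 * x + (rng.exponential(size=107) - 1.0)
lo, hi = bootstrap_slope_ci(x, y)
print(lo, hi)
```

The percentile interval makes no normality assumption about the errors; it simply reads off the spread of the slope across resampled datasets.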
Thank you for this advice. I'm not sure what OLS regression is. What would it be called in SPSS? Sorry, my stats knowledge is limited to 3rd and 4th year psychology.


Super Moderator
Sorry. I think you'll find it listed as linear regression. OLS = ordinary least squares, i.e. the usual way of estimating regression models.
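For completeness, "ordinary least squares" just means choosing the coefficients that minimise the sum of squared residuals, which has a closed-form solution. A minimal sketch in Python/NumPy (illustrative data, not from the thread) showing the normal-equations solution agreeing with a library least-squares solver:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=50)
y = 1.0 + 2.0 * x + rng.normal(size=50)  # true intercept 1, slope 2

# design matrix with a column of ones for the intercept
X = np.column_stack([np.ones_like(x), x])

# normal equations: solve (X'X) b = X'y
b_normal = np.linalg.solve(X.T @ X, X.T @ y)
b_lstsq = np.linalg.lstsq(X, y, rcond=None)[0]
print(b_normal, b_lstsq)  # the two solutions coincide
```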
Oh, thanks, I'm with you now. I'm doing this research for my PhD so wonder whether you know of a solid reference that I can use to support using linear regression when assumptions have been violated? Otherwise I fear I will risk being criticised for such a move.


Super Moderator
The reference below can be used as a source for the fact that the OLS estimator is the best linear unbiased estimator (and is consistent) even when the errors are not normal, as long as the other assumptions are met:
Wooldridge, J. M. (2009). Introductory Econometrics: A Modern Approach. Mason, OH: South-Western Cengage Learning.

If you did use bootstrapping for confidence intervals and/or hypothesis tests, to make these robust against non-normality, a useful ref could be: