# Exp(B) or CI values with the letter "E"

#### rezzak

##### New Member
Dear all,

I am running a multinomial logistic regression. Everything seems fine, but I do not understand Exp(B) or CI values like 1.000E-13. I see this result even though the goodness of fit of the overall model is appropriate.

For example: B = -107.4, p = 0.031, Exp(B) = 1.000E-13. It looks significant, but I cannot interpret the odds. Can I trust this p value?

Thank you very much for your help!

Rezzak

#### hlsmith

##### Less is more. Stay pure. Stay poor.
Can you post the full output from the model?

I am guessing you have an extremely small OR; you could flip the reference group to make it greater than 1. I would also guess you have a very large SE, so it looks like there is a huge association that is marginally significant, but once sampling variability is taken into account your confidence is much weaker, meaning one bound of the interval is close to 1, the null value for an OR. Reasons could include overparameterization of the model with multicollinear terms, or perhaps a small sample size.

Though I am just speculating based on the little information provided.
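To make the scale of that Exp(B) concrete, here is a quick arithmetic sketch (the B = -107.4 is taken from the post above; the rest is just the definition of an odds ratio):

```python
import math

# Exp(B) is literally e**B, so a large negative coefficient gives a
# vanishingly small odds ratio.
b = -107.4
or_small = math.exp(b)      # on the order of 1e-47, effectively zero

# Flipping the reference group flips the sign of B, which inverts the OR.
or_flipped = math.exp(-b)   # equal to 1 / or_small, a huge number

print(or_small, or_flipped)
```

Either way the estimate is absurdly extreme, which is itself the warning sign.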

#### hlsmith

##### Less is more. Stay pure. Stay poor.
What is up with your 66% of data missing?

So is this the output for two models? The last table at the bottom reads as though there are two. Also, can you write out what is in each model? Age and gender are apparent, but there is also a categorical variable with 3 or 4 groupings.

Perhaps you can drop gender if you included it only to reassure readers who might suspect its influence, rather than because you actually need to control for it.

What you need to do is create a contingency table: a 2x2 table with IL-3 status as one margin and the outcome group (versus the reference) as the other. Then you will be able to see whether IL-3 almost perfectly predicts the second outcome group.
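As a sketch of that check, with entirely made-up data and hypothetical column names, pandas' `crosstab` is one way to build the table:

```python
import pandas as pd

# Hypothetical data: a binary recoding of the suspect predictor against
# the outcome (non-reference group = 1). All names and values are invented.
df = pd.DataFrame({
    "predictor_high": [1, 1, 1, 1, 0, 0, 0, 0],
    "outcome":        [1, 1, 1, 1, 0, 0, 0, 1],
})

table = pd.crosstab(df["predictor_high"], df["outcome"])
print(table)
# A zero (or near-zero) cell means the predictor almost perfectly
# determines the outcome, which blows up the coefficient estimate.
```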

#### rezzak

##### New Member
Thank you very much for your answer. The 66% figure means there are no observations in 66% of the possible combinations of variable values; I believe that is very common in multinomial logistic regression. My dependent variable has 3 categories (benign motor, benign motor-cog and poor motor-cog). Age, LEDD, disease duration and IL-3 are continuous predictors, and gender is a categorical predictor. I took the first group (benign motor) as the reference group, so with 3 outcome groups the model compares group 1 vs. 2 and group 1 vs. 3.

As you suggested, I dropped gender, but it did not make a difference. Also, I am not sure I can make a contingency table, since IL-3 is a continuous variable. It does seem that IL-3 predicts group membership between the first and third groups, but the odds ratio Exp(B) looks crazy, and I really do not know how to interpret it. From your first answer I understood that the effect could be small, but the B value is also very large.

#### hlsmith

##### Less is more. Stay pure. Stay poor.
My oversight, I had forgotten that you wrote that this is multinomial. So instead of a contingency table, since IL-3 is continuous, you could possibly create a box plot: one box for the reference group's values and one for the non-reference group's values. If you can do this, then please post the output; I am interested to see what it looks like.

Also, include a count of how many observations were in each group. I am guessing there may be a clear linear separation of the continuous values between the two outcome groups.
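The numeric version of that box-plot check looks something like this (the IL-3 values here are invented purely for illustration):

```python
# Invented IL-3 values for the two outcome groups being compared.
group_ref  = [2.1, 2.5, 3.0, 2.8]   # reference group (benign motor)
group_poor = [7.4, 8.1, 9.0, 7.9]   # comparison group (poor motor-cog)

# Complete separation: the two ranges do not overlap at all, i.e. every
# value in one group lies above every value in the other.
separated = (min(group_poor) > max(group_ref)
             or min(group_ref) > max(group_poor))
print(separated)
```

A box plot of the same two lists (e.g. matplotlib's `plt.boxplot([group_ref, group_poor])`) would show the two boxes with no overlap at all.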

#### ondansetron

##### TS Contributor
> My oversight, I had forgotten that you wrote that this is multinomial. So instead of a contingency table, since IL-3 is continuous, you could possibly create a box plot: one box for the reference group's values and one for the non-reference group's values. If you can do this, then please post the output; I am interested to see what it looks like.
>
> Also, include a count of how many observations were in each group. I am guessing there may be a clear linear separation of the continuous values between the two outcome groups.
I think you're on track with this (maybe collinearity, maybe a complete or quasi-complete separation issue, possibly due to all that missing data reducing the effective sample size).

For future reference: very large or very small coefficients (and giant standard errors) mean that something is probably whack. The same can be said of confidence intervals that tend toward infinity (positive, negative, or both, depending on what you're doing) or that are otherwise extremely wide.
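To see why separation produces those runaway coefficients, here is a toy sketch (all data invented): under complete separation the log-likelihood keeps improving as the slope grows, so maximum likelihood never settles on a finite estimate.

```python
import math

# Toy separated data: the predictor perfectly matches the outcome.
x = [0, 0, 0, 1, 1, 1]
y = [0, 0, 0, 1, 1, 1]

def loglik(b):
    """Log-likelihood of a no-intercept logistic model with slope b."""
    total = 0.0
    for xi, yi in zip(x, y):
        p = 1.0 / (1.0 + math.exp(-b * xi))
        total += math.log(p) if yi else math.log(1.0 - p)
    return total

# The likelihood is strictly increasing in b: there is no finite maximum,
# so software either fails to converge or reports a huge b with a huge SE.
print(loglik(1), loglik(5), loglik(10))
```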