Reporting ordinal regression results

ABN

New Member
#1
Hi all - some help with reporting results of ordinal regression please:
I have two groups of variables (two separate groups of independent variables) and one ordinal dependent variable. I ran an ordinal regression for each group separately. After a lot of work with a couple of guides I got the results (which mostly confirm what was expected), and I have a couple of questions:
  1. Can I use the results to determine which group of variables better predicts the dependent variable? Which of the measures should I use to show this? The chi-squares? The Nagelkerke? The beta/estimate values?
  2. How do I report this? I found it hard to find examples of ordinal regression reporting. APA was unhelpful as well. If you have a few examples, preferably in social science, or a guide, that would be great.
Thanks in advance, A.
 
#2
1. You can use the coefficients from the model to show which group of variables better predicts the dependent variable.

The output of an ordinal regression model is usually difficult to interpret as it involves logits (logs of odds). If the dependent variable has 5 levels, then you will have 4 cumulative logits to interpret. You can exponentiate the estimated coefficients to get odds ratios, which are easier to interpret.
You can refer to the following paper:
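To make the odds-ratio interpretation concrete, here is a minimal sketch. The coefficient value and predictor name are hypothetical, not from the poster's data; under a proportional-odds model a single coefficient applies to all the cumulative logits, so one odds ratio summarizes the effect:

```python
import math

# Hypothetical coefficient (log-odds scale) from an ordinal logistic
# regression, e.g. for a predictor "hours_studied".
beta = 0.45

# Exponentiating a log-odds coefficient gives an odds ratio.
odds_ratio = math.exp(beta)

# Interpretation: a one-unit increase in the predictor multiplies the
# odds of being in a higher outcome category by this factor.
print(round(odds_ratio, 3))  # 1.568
```

An odds ratio above 1 means higher predictor values are associated with higher outcome categories; below 1, with lower ones.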

Harrell, F. E. (2015). Ordinal logistic regression. In Regression modeling strategies (pp. 311-325). Springer, Cham.
 

ABN

New Member
#3
Thank you very much.
As I now understand, after further reading, I can use the Nagelkerke's Pseudo r-squared to compare the models.
I am still not sure though which indicator can suggest the model is at all adequate?
 

hlsmith

Not a robit
#4
R-squared values in nonlinear regression are usually frowned upon. Pseudo R-squared is just a metric meant to make people familiar with linear regression comfortable, but it doesn't have a comparable meaning. You can use AICc to compare models, or plot observed versus predicted values à la Hosmer-Lemeshow to get an idea of fit. Many people dislike the vagueness of the AUC, but calculating it along with confidence intervals can also help you understand fit.
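For what AICc comparison looks like in practice, here is a small sketch. The log-likelihoods, parameter counts, and sample size are made up for illustration; in real use you would read them off each model's output:

```python
def aic(log_lik, k):
    """Akaike information criterion: 2k - 2 * log-likelihood."""
    return 2 * k - 2 * log_lik

def aicc(log_lik, k, n):
    """Small-sample corrected AIC: adds a penalty of 2k(k+1)/(n-k-1)."""
    return aic(log_lik, k) + (2 * k * (k + 1)) / (n - k - 1)

# Hypothetical fitted log-likelihoods for two competing models on n = 120 cases.
m1 = aicc(log_lik=-151.3, k=5, n=120)   # model with 5 parameters
m2 = aicc(log_lik=-149.8, k=8, n=120)   # model with 8 parameters

# Lower AICc is preferred; the penalty guards against the model with more
# predictors winning just because it has more parameters.
print(round(m1, 2), round(m2, 2))  # 313.13 316.9
```

Here the larger model fits slightly better (higher log-likelihood) but its extra parameters cost it more than the fit gain, so the smaller model has the lower AICc.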
 

ABN

New Member
#5
Thank you very much for the answer,
I understand that pseudo R-squared can be used to compare two models - is that true?
Beyond the question of "which model is better?", what would be the best way to determine whether either model is adequate at all? The pseudo R-squared values are lower than 0.1, but from what I read on IBM's site, this is not relevant (as they should only be used to compare models) - so what is? Is the model chi-square relevant? The -2LL? What values do I need?
 

noetsi

Fortran must die
#7
I think the AIC might be your best bet for choosing among models. There are something like 34 pseudo R-squareds for logistic regression, and they vary a lot. None that I have seen are easy to explain; they are not the percentage of explained variance, and often their maximum is less than 100 percent (that is, less than 1).
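To illustrate how different the pseudo R-squareds can be, here is a sketch of two common ones computed from the null and fitted log-likelihoods (the numbers are hypothetical). Nagelkerke's version exists precisely because Cox-Snell's maximum is below 1:

```python
import math

def mcfadden_r2(ll_model, ll_null):
    """McFadden: 1 - LL(model) / LL(null)."""
    return 1 - ll_model / ll_null

def cox_snell_r2(ll_model, ll_null, n):
    """Cox-Snell: maximum is below 1, which motivates Nagelkerke's rescaling."""
    return 1 - math.exp(2 * (ll_null - ll_model) / n)

def nagelkerke_r2(ll_model, ll_null, n):
    """Nagelkerke: Cox-Snell divided by its maximum attainable value."""
    return cox_snell_r2(ll_model, ll_null, n) / (1 - math.exp(2 * ll_null / n))

# Hypothetical log-likelihoods for an intercept-only and a fitted model.
ll_null, ll_model, n = -160.2, -151.3, 120
print(round(mcfadden_r2(ll_model, ll_null), 3))      # 0.056
print(round(nagelkerke_r2(ll_model, ll_null, n), 3)) # 0.148
```

The same fit gives 0.056 by one formula and 0.148 by another, which is why a pseudo R-squared on its own says little about adequacy.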

You can report the overall model significance (there are three common ways this is measured, although the results are usually similar). The individual slopes are not very obvious in their meaning; I think the norm is to report odds ratios instead, along with the Wald chi-square test (which serves the same purpose as the t test in linear regression).
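For reporting purposes, the Wald chi-square and the odds-ratio confidence interval both come straight from the slope and its standard error. A minimal sketch with hypothetical numbers:

```python
import math

# Hypothetical slope and standard error from an ordinal regression output.
beta, se = 0.45, 0.18

# Wald chi-square with 1 df: (estimate / SE)^2 - the analogue of the
# t test on a slope in linear regression.
wald = (beta / se) ** 2

# Two-sided p-value for a chi-square(1) statistic via the normal tail.
p_value = math.erfc(abs(beta / se) / math.sqrt(2))

# Odds ratio with a 95% CI: exponentiate beta +/- 1.96 * SE.
or_lo = math.exp(beta - 1.96 * se)
or_hi = math.exp(beta + 1.96 * se)
print(round(wald, 2), round(p_value, 4))  # 6.25 0.0124
print(round(or_lo, 2), round(or_hi, 2))   # 1.1 2.23
```

A typical APA-style write-up would then read something like: OR = 1.57, 95% CI [1.10, 2.23], Wald chi-square(1) = 6.25, p = .012.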