AIC models

Q&A

New Member
#1
Hello,
If I have a model with an AIC of 8765 and a reduced model with two fewer variables and an AIC of 8823, can I accept the reduced model with the AIC of 8823?

How large can the difference between the two AICs be before the reduced model should not be accepted?

Thank you
 

Buckeye

Active Member
#2
In addition to looking at AIC, I would do a test for model comparison. In R:
Code:
anova(reduced_model,full_model)
A significant result suggests that the full model provides a better fit to the data than the reduced model. Lower AIC is better.
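For example, a minimal sketch in R (mydata and the variable names are made up; substitute your own fitted models):
Code:
# fit a full model and a nested reduced model on the same data
full_model    <- lm(y ~ x1 + x2 + x3 + x4, data = mydata)
reduced_model <- lm(y ~ x1 + x2, data = mydata)

anova(reduced_model, full_model)  # F-test for the dropped terms (nested models only)
AIC(reduced_model, full_model)    # lower AIC indicates the preferred model
Note that this comparison is only valid when the reduced model is nested in the full model and both are fit to the same observations.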
 

Q&A

New Member
#3
In addition to looking at AIC, I would do a test for model comparison. In R:
Code:
anova(reduced_model,full_model)
A significant result suggests that the full model provides a better fit to the data than the reduced model. Lower AIC is better.
But I am looking for a reduced model, because a model with many variables does not appeal to me in practice.
What should I do?
 

Miner

TS Contributor
#4
But I am looking for a reduced model, because a model with many variables does not appeal to me in practice.
What should I do?
Look at the prediction intervals of the reduced model. Does that model predict "close enough" for your needs?

In industrial statistics, we often have to balance the benefits of a more complete model against the cost of the added complexity. Depending on the application and that cost, we sometimes go with the simpler model; other times, we have to use the more complete model because more precision in prediction is required.
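As a rough illustration in R (reduced_model and the predictor values are placeholders), you could check a prediction interval like this:
Code:
# predict at a hypothetical new observation and ask for a 95% prediction interval
new_obs <- data.frame(x1 = 10, x2 = 3)
predict(reduced_model, newdata = new_obs, interval = "prediction", level = 0.95)
# if the (lwr, upr) width is acceptable for your decision,
# the simpler model may be "close enough"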
 

hlsmith

Less is more. Stay pure. Stay poor.
#5
@Buckeye presented the standard approach to get an estimated out-of-sample "deviance"; you just use that R code. @Miner described the rationales. I will add that, if you have enough data, you can also hold out some data to examine accuracy.
 

Q&A

New Member
#6
Look at the prediction intervals of the reduced model. Does that model predict "close enough" for your needs?

In industrial statistics, we often have to balance the benefits of a more complete model against the cost of the added complexity. Depending on the application and that cost, we sometimes go with the simpler model; other times, we have to use the more complete model because more precision in prediction is required.
Yes, that's right, I need a reduced model. So take the case where we have three models, M1, M2 and M3.
M1 has 6 variables and the smallest AIC (5676).
I would like a model with fewer variables. If I remove the variable with the highest p-value of them all (say a p-value of 0.004), my AIC becomes 5701. What should I do, knowing that I still want to reduce M1 because 6 variables is a lot?

I can look at the confidence interval instead, right? If the interval is wide, can I remove the variable even if the AIC is larger afterwards?
 

Q&A

New Member
#7
@Buckeye presented the standard approach to get an estimated out-of-sample "deviance"; you just use that R code. @Miner described the rationales. I will add that, if you have enough data, you can also hold out some data to examine accuracy.
So if my p-value is significant, does that mean we should keep the full model?

But if the full model contains a lot of variables, isn't there a way to remove some of them?
 

hlsmith

Less is more. Stay pure. Stay poor.
#8
You need to create a table listing the variables in each model along with the AIC values and share it - selection based on p-values isn't always great. Model reduction can also be investigated using least absolute shrinkage and selection operator (LASSO) regression, though it is important to understand the relationships between the variables when removing or adding them.
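A minimal sketch of a LASSO fit in R, assuming the glmnet package and made-up variable names:
Code:
library(glmnet)
# build the predictor matrix (drop the intercept column) and the response
x <- model.matrix(y ~ x1 + x2 + x3 + x4 + x5 + x6, data = mydata)[, -1]
y <- mydata$y
cv_fit <- cv.glmnet(x, y, alpha = 1)   # alpha = 1 gives the lasso penalty
coef(cv_fit, s = "lambda.1se")         # variables shrunk to exactly zero are dropped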
 

hlsmith

Less is more. Stay pure. Stay poor.
#9
P.S. This concept falls under the bias/variance trade-off. Fewer variables mean more potential bias, and more variables (given a finite sample) mean greater standard errors.

If you have a loss function or value you are trying to optimize (e.g., MSE or accuracy), using a random holdout set is the best approach for finding the subset that generalizes best.
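Something along these lines in R (the data and formulas are hypothetical) is one way to compare candidate subsets on a random holdout set:
Code:
set.seed(1)
idx   <- sample(nrow(mydata), size = round(0.7 * nrow(mydata)))
train <- mydata[idx, ]
test  <- mydata[-idx, ]

m_full    <- lm(y ~ x1 + x2 + x3 + x4 + x5 + x6, data = train)
m_reduced <- lm(y ~ x1 + x2 + x3, data = train)

# holdout MSE for each candidate; the lower value generalizes better on this split
mse <- function(m) mean((test$y - predict(m, newdata = test))^2)
c(full = mse(m_full), reduced = mse(m_reduced))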
 

Miner

TS Contributor
#10
Yes, that's right, I need a reduced model. So take the case where we have three models, M1, M2 and M3.
M1 has 6 variables and the smallest AIC (5676).
I would like a model with fewer variables. If I remove the variable with the highest p-value of them all (say a p-value of 0.004), my AIC becomes 5701. What should I do, knowing that I still want to reduce M1 because 6 variables is a lot?

I can look at the confidence interval instead, right? If the interval is wide, can I remove the variable even if the AIC is larger afterwards?
There are two unrelated questions being asked. Your question about AIC is directed toward finding the optimum "statistical" model. Your question about removing terms is directed toward how far from that optimum you can simplify the model for purely practical reasons and still have a model that works. The answer to the second question implies that you will ignore AIC and focus on whether the predictive ability of successively simpler models is acceptable. The best way to accomplish that goal is to evaluate the prediction intervals of each simplified model to determine whether it predicts "good enough" for your purpose.
 

Q&A

New Member
#11
There are two unrelated questions being asked. Your question about AIC is directed toward finding the optimum "statistical" model. Your question about removing terms is directed toward how far from that optimum you can simplify the model for purely practical reasons and still have a model that works. The answer to the second question implies that you will ignore AIC and focus on whether the predictive ability of successively simpler models is acceptable. The best way to accomplish that goal is to evaluate the prediction intervals of each simplified model to determine whether it predicts "good enough" for your purpose.
The prediction intervals are the confidence intervals, right?