It would help to know, if you cannot readily do multilevel modeling with a 4-point DV, what can analyze this. I am testing the effect of area on satisfaction.

- Thread starter noetsi


OK, I have a different question. OLS regression is very common, including among academics. But the MLM literature argues, essentially, that if observations are nested inside something else, single-level models will generate errors; that is, the chance of a type 1 or type 2 error is much greater. My question is how you know when this is a problem or not. They do linear regression in my area (vocational rehabilitation), yet virtually all the data are nested (customers inside units inside areas).
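To see why nesting matters, here is a small simulation sketch (pure Python, invented numbers): when observations share a group effect they are correlated, so the naive iid standard error of even a simple mean is far too small, which is exactly what inflates type 1 error rates.

```python
import random
import statistics

random.seed(3)

# Simulate many samples of 7 groups x 30 observations, where everyone in a
# group shares a random group effect. Compare the actual sampling variability
# of the mean to what the iid (OLS-style) formula reports.
reps = 400
means, naive_ses = [], []
for _ in range(reps):
    data = []
    for _g in range(7):                       # 7 groups, as in the thread
        u = random.gauss(0, 1.0)              # shared group effect
        data += [u + random.gauss(0, 1.0) for _ in range(30)]
    means.append(statistics.mean(data))
    naive_ses.append(statistics.stdev(data) / len(data) ** 0.5)

true_se = statistics.stdev(means)             # actual sampling variability
avg_naive = statistics.mean(naive_ses)        # what the iid formula reports
# here true_se is several times larger than avg_naive
```

With these (made-up) variance components, confidence intervals built from the naive SE would be far too narrow.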

How do you know if the linear method is valid in this case, as compared to having to use multilevel models? In honesty, I am not sure statistical tests are valid here at all, because I usually have at least 60 percent of the population and often 100 percent.

To make things more confusing, what I am interested in is the nesting inside areas. There are only 7 of these. Based on this comment I am not sure it is even valid to analyze this few groups with multilevel models.

"Guidelines for sample-size requirements and their implications for model complexity, the regression coefficients, variance components, and their standard errors are given in various studies and texts. For example, models with fewer than 20–25 groups may not provide accurate estimates of the regression coefficients and their standard errors, or of the variance components and their standard errors."

https://ies.ed.gov/ncee/edlabs/regions/northeast/pdf/REL_2015046.pdf

I do have a lower level called units, but no one cares about units, and there are issues because some units overlap each other while being administratively separate.



"Traditionally, researchers tended to use model results at one level to draw statistical inference at another level [individual to group]. This has proven incorrect. The results from the two single level models frequently differ either in magnitude or in sign. The relationships found at the group level are not reliable predictors for relationships at the individual level. "

Individual variables operate at the individual level; group variables operate at a higher level, like a school. So my question would be: ignoring wrong SEs, which can be dealt with via robust SEs, can you run OLS when some units are nested inside others, like persons in schools? I know this is done a lot; formally it violates independence. But does it seriously bias the results?
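A tiny made-up example of the quoted point that group-level and individual-level relationships can even differ in sign:

```python
# Toy numbers (invented) showing the quote's point: within every group the
# slope of y on x is negative, but a regression on the group means is positive.
group_a = [(0, 10.0), (1, 9.0), (2, 8.0)]    # within-group slope: -1
group_b = [(5, 20.0), (6, 19.0), (7, 18.0)]  # within-group slope: -1

def slope(points):
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    num = sum((x - mx) * (y - my) for x, y in points)
    den = sum((x - mx) ** 2 for x, _ in points)
    return num / den

within = slope(group_a)                       # -1.0
means = [(sum(x for x, _ in g) / len(g), sum(y for _, y in g) / len(g))
         for g in (group_a, group_b)]
between = slope(means)                        # 2.0: opposite sign
```

So inferring the individual-level relationship from the group-level one (or vice versa) can get even the direction wrong.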

And if it does, does this mean all OLS with a variable that can be placed in a hierarchy is wrong?


An entirely different question, so as not to create a new thread.

I want to run an ICC test to see if group matters at all. I have a grouping variable with only 7 levels, which is normally not enough for multilevel analysis. There are procedures to correct for this, such as bootstrapping, but I am not sure you need them simply to run the ICC (which comes from an empty model).

Does anyone know if the ICC is valid with a very small number of groups, or do you have to transform the data first to run it?

And let me go another step. Say you have only 7 groups but you are very interested in differences between groups on some variable (that is, how some variable varies across groups). Should you just stick to linear regression on those variables? If you want to see how slopes vary by group on some variable (controlling for others), what is the best way to address this?
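With only 7 groups, one pragmatic alternative to random slopes is simply fitting the regression within each group and comparing the estimates (equivalently, a fixed-effects model with group-by-x interactions). A minimal sketch with fabricated data:

```python
import random
import statistics

random.seed(1)

def slope(xs, ys):
    # plain OLS slope of y on x
    mx, my = statistics.mean(xs), statistics.mean(ys)
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# fake data: 7 groups whose true slopes differ a little
true_slopes = [1.0, 1.2, 0.8, 1.5, 0.9, 1.1, 1.3]
estimates = []
for b in true_slopes:
    xs = [float(i) for i in range(25)]
    ys = [b * x + random.gauss(0, 1) for x in xs]
    estimates.append(slope(xs, ys))
# plotting the 7 estimates side by side shows how much slopes vary by group
```

With so few groups this descriptive comparison is often more defensible than estimating a random-slope variance from 7 level-2 units.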



Did you get the Wang book?


If you only have 7 groups, too few I know, will this impact the ICC test? I am far from sure. Normally bootstrapping is recommended for ML analysis with so few groups, but I am not sure you even need this just for an ICC to see if groups matter.

The bootstrapping approaches I have require level-2 residuals. I am simply using an empty model to determine how much group matters, as a very preliminary analysis. Not sure if I need bootstrapping to deal with the very small number of groups or not.

While I am at it, can you even compute an ICC if your DV is categorical (some of my DVs are interval, some categorical)?
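One common way to bootstrap with so few groups is the cluster (case) bootstrap: resample whole groups with replacement and recompute the ICC each time. Unlike a residual bootstrap, this does not need level-2 residuals. A sketch using a simple one-way ANOVA ICC(1) estimator and invented balanced data:

```python
import random
import statistics

random.seed(2)

def anova_icc(groups):
    """One-way ANOVA ICC(1) for balanced groups (list of equal-length lists)."""
    k = len(groups)
    n = len(groups[0])
    means = [statistics.mean(g) for g in groups]
    msb = n * statistics.variance(means)                           # between mean square
    msw = statistics.mean(statistics.variance(g) for g in groups)  # within mean square
    return (msb - msw) / (msb + (n - 1) * msw)

# fake balanced data: 7 groups, 20 obs each, modest group effect
groups = [[random.gauss(g_eff, 1.0) for _ in range(20)]
          for g_eff in [0, 0.2, -0.1, 0.3, 0.0, -0.2, 0.1]]

point = anova_icc(groups)

# cluster bootstrap: resample whole groups with replacement
boot = []
for _ in range(500):
    sample = [random.choice(groups) for _ in groups]
    boot.append(anova_icc(sample))
boot.sort()
lo, hi = boot[12], boot[487]   # rough 95% percentile interval
```

With only 7 clusters the interval will be very wide (and the ANOVA estimator can go negative), which is itself informative: it shows how little the data say about the between-group variance.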



This is the empty model code (SAS PROC MIXED; "mydata" is a placeholder for the actual data set):

proc mixed data=mydata covtest; /* mydata is a placeholder; covtest prints SEs for the variance components */
  class district_pri;
  model q2w = / solution;
  random intercept / subject=district_pri;
run;

Part of the results:

My understanding is the ICC is 65073 / (65073 + 11619424), which is small, about half a percent. I don't know whether this small effect could be much larger if I bootstrapped (apart from the macro, which I cannot run, I do not know how to bootstrap the data).
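The arithmetic, using the two variance components quoted above:

```python
# ICC from the empty model: between-group variance over total variance.
var_between = 65073.0      # intercept (level-2) variance component
var_within = 11619424.0    # residual (level-1) variance component

icc = var_between / (var_between + var_within)
# about 0.0056, i.e. roughly half a percent of the variance is between areas
```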

Seems like the smallest ICC value ever. What happens if you run a leave-one-out analysis? Run your model with all groups and no random effects. Then rerun it, each time dropping one of the groups: so you run it with all groups, then seven more times, each time dropping a single group. This sensitivity analysis will tell you how sensitive the results are to any single group. If the effect estimates are comparable, you could plot them; then perhaps random effects are not overly relevant here.
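A sketch of that leave-one-group-out loop (pure Python, fabricated data; a plain OLS slope stands in for whatever model is actually used):

```python
import random
import statistics

random.seed(0)

# fake data: (group, x, y) for 7 groups sharing a common slope of 2
groups = list(range(7))
data = [(g, float(x), 2.0 * x + random.gauss(0, 1))
        for g in groups for x in range(30)]

def ols_slope(rows):
    xs = [r[1] for r in rows]
    ys = [r[2] for r in rows]
    mx, my = statistics.mean(xs), statistics.mean(ys)
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

full = ols_slope(data)                                     # all groups
loo = {g: ols_slope([r for r in data if r[0] != g])        # drop group g
       for g in groups}
# if every leave-one-out estimate sits near the full-sample estimate,
# no single group is driving the result
```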

Did you model each group independently to see differences in intercepts and slopes? I got the email, but I am up against a bunch of deadlines the next couple of weeks. That, and deconstructing the macro's matrix algebra wasn't as easy as I thought.