# Understanding multilevel modeling

#### trinker

##### ggplot2orBust
I'm trying to learn a bit more about multilevel modeling, so I'm going to start this thread and post questions about the process, as I know there are several regulars here who use multilevel models frequently. I know the nlme package exists, but I'm mostly trying to figure out how lme4 works.

My goals are:
1. Understand vocabulary associated with multilevel models
2. Understand concepts including what models to test and select and why
3. Understand how to input formulas in lme4 that correspond to different models
4. Understand what the output is telling me

I'll also link to resources as I find them. Please critique my thinking, any language I'm using inappropriately, whatever.

===================================================================
Resources

#### trinker

##### ggplot2orBust
I'm working through an example and I see the following output and statement (p. 8):

Code:
> anova(m1, m4, m5, m6)
Data: data
Models:
m1: RT ~ Offer + (1 | Subject)
m4: RT ~ Offer + I(Offer^2) + (1 | Subject)
m5: RT ~ Offer * Condition + I(Offer^2) + (1 | Subject)
m6: RT ~ Offer * Condition + I(Offer^2) * Condition + (1 | Subject)
   Df   AIC   BIC  logLik   Chisq Chi Df Pr(>Chisq)
m1  4 10921 10939 -5456.4
m4  5 10898 10920 -5444.1 24.5790      1  7.132e-07 ***
m5  7 10901 10932 -5443.4  1.4566      2    0.48272
m6  8 10897 10933 -5440.6  5.6165      1    0.01779 *
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
After completing the exercise, we can conclude that the results we reported earlier in the regression section were accurate, even when modeling for the repeated structure of the data.
This statement is in response to another chapter where the lm function was used (p. 11).

Questions:
• In this case is m6 the best model to describe the data? I assume so because it is significant, meaning it explains significantly more variation than the previous model.
• If it was not significant, we'd opt for m4. Is this statement correct?
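For what it's worth, the Chisq column in that output is just a likelihood-ratio statistic: twice the difference in log-likelihoods between each model and the one above it. A quick check in base R, using the (rounded) logLik values printed in the table:

```r
# log-likelihoods copied from the anova() table above
logLik <- c(m1 = -5456.4, m4 = -5444.1, m5 = -5443.4, m6 = -5440.6)

# LR chi-square of each model against the previous one: 2 * (logLik_new - logLik_old)
chisq <- 2 * diff(logLik)
round(chisq, 1)
#   m4   m5   m6
# 24.6  1.4  5.6   (matches 24.5790, 1.4566, 5.6165 up to rounding of the logLiks)

# p-values against chi-square with "Chi Df" degrees of freedom (1, 2, 1 in the table)
pchisq(chisq, df = c(1, 2, 1), lower.tail = FALSE)  # close to the table's Pr(>Chisq)
```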

#### noetsi

##### No cake for spunky
I took a course in this about two years ago, geared towards the way education approaches it. I used HLM7 so I can't comment on the R code at all, but I will be interested in this and maybe I can help confuse you a little more.

Of course R will likely generate output that is totally different from what I am used to seeing...

One critical element of multilevel analysis to me is that you are really trying to explain variation (random effects) between predictors at the highest level of the equation - that is, what causes this variation. Commonly the higher-level equation itself, what caused the Y, seems less important in such analysis than explaining why the X varied within the nesting factor.

Another important element is that methods such as OLS will generate incorrect SEs because the observations are nested inside other variables. The multilevel software I used incorporated a form of WLS to address this, although I don't know if this is generally true of all multilevel analysis.

#### Jake

Questions:
• In this case is m6 the best model to describe the data? I assume so because it is significant, meaning it explains significantly more variation than the previous model.
• If it was not significant, we'd opt for m4. Is this statement correct?
Actually I think on the basis of AIC/BIC it looks like m4 is the best.

#### spunky

##### Can't make spagetti
Actually I think on the basis of AIC/BIC it looks like m4 is the best.
i'd argue that m4 and m6 are a little too close to each other in terms of BIC to be able to safely claim one is better than the other...

#### Dason

i'd argue that m4 and m6 are a little too close to each other in terms of BIC to be able to safely claim one is better than the other...
Sure but one model is quite a bit simpler...

#### CB

##### Super Moderator
Well, the AIC and BIC take the difference in simplicity into account... But then again, the difference in BIC between m4 and m6 is 13, which means:

BF = exp(BICdiff/2) = exp(13/2) ≈ 665 (the BIC approximation to the Bayes factor, from Wagenmakers).

So it's not really such a small difference: that's one big mother of a Bayes factor.
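Spelled out in base R, using the BIC values from the anova() table above (keep in mind this is only Wagenmakers' BIC approximation to the Bayes factor, not an exact one):

```r
# BIC values for m4 and m6 from the anova() table
bic_m4 <- 10920
bic_m6 <- 10933

# approximate Bayes factor in favour of m4 over m6
bf <- exp((bic_m6 - bic_m4) / 2)
round(bf)
# 665
```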

#### trinker

##### ggplot2orBust
From this tutorial...

Question 1
What are the differences between the following two model notations?

Code:
# red:
lmer(frequency ~ attitude + (1|subject) + (1|scenario), data=politeness)
# blue:
lmer(frequency ~ attitude + (1|scenario/subject), data=politeness)
My thought is you use the blue one when subject is nested in scenario, and the red one when there is no nesting but you still want a random intercept for both subject and scenario. Those are my thoughts, but they're murky.

Question 2
Code:
lmer(frequency ~ attitude + (1|subject) + (1|scenario), data=politeness)
In this model does the order of (1|subject) and (1|scenario) matter?

#### Jake

Question 1
What are the differences between the following two model notations?

Code:
# red:
lmer(frequency ~ attitude + (1|subject) + (1|scenario), data=politeness)
# blue:
lmer(frequency ~ attitude + (1|scenario/subject), data=politeness)
My thought is you use the blue one when subject is nested in scenario, and the red one when there is no nesting but you still want a random intercept for both subject and scenario. Those are my thoughts, but they're murky.
First I want to point to the following FAQ section, which you may find helpful: http://glmm.wikidot.com/faq#toc27

As explained in the FAQ, the blue model is equivalent to:

lmer(frequency ~ attitude + (1|scenario) + (1|subject:scenario), data=politeness)

Which estimates random effects for each scenario, and for each subject within each scenario.

There is an issue of "implicit nesting" versus "explicit nesting" of factor levels that you should be aware of. If subjects are nested in scenarios, then the data file could be written so that this nesting is either implicit or explicit.

In explicit nesting, every unique subject gets a unique ID. So if I have 10 subjects per scenario, the subjects in the 1st scenario might be 1, 2, ..., 10, and the subjects in the 2nd scenario might be 11, 12, ..., 20. The IDs are different.

In the case of implicit nesting, different subjects can actually have the same ID. So the subjects in scenario 1 might be 1, 2, ..., 10, but then the subject IDs might start over again in scenario 2, going again from 1 to 10, even though it is really a different 10 subjects. In this latter case we call the nesting "implicit" because it is not clear just from looking at the data file that subjects are nested in scenarios. It looks kind of like they are crossed with scenarios (but they aren't). With "explicit" nesting it is clear just from looking at the data file that we have different subjects in each scenario, i.e., they are nested.

I bring all this up because, if in your data file subjects are explicitly nested in scenarios, then your red and blue models are exactly equivalent. But if they are implicitly nested in scenarios, only the blue model fits the correct model. The red model will attempt to fit a crossed random effects model, which is not appropriate for this dataset.
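To make the bookkeeping concrete, here is a toy version of the two coding schemes in base R (made-up data: 2 scenarios with 3 distinct subjects each; the column names are just illustrative):

```r
# implicitly nested data: subject IDs restart within each scenario,
# so the file *looks* crossed even though each "1" is a different person
d <- data.frame(
  scenario = rep(c("s1", "s2"), each = 3),
  subject  = rep(1:3, times = 2)
)
table(d$subject, d$scenario)  # every subject ID appears under every scenario

# explicit nesting: build a globally unique subject ID by combining
# scenario and subject
d$subject_uid <- interaction(d$scenario, d$subject, drop = TRUE)
table(d$subject_uid, d$scenario)  # each unique subject appears under one scenario only
```

With the implicit coding, only (1|scenario/subject), equivalently (1|scenario) + (1|subject:scenario), fits the nested model; with the explicit subject_uid column, (1|scenario) + (1|subject_uid) specifies the same model.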

In this model does the order of (1|subject) and (1|scenario) matter?
No.

#### spunky

##### Can't make spagetti
wonderful explanation Jake. i honestly had no idea that there was an issue of explicit VS implicit nestedness.

i'm having a little bit of trouble wrapping my head around this "implicit nestedness" issue (the explicit one is much more straightforward, at least in my mind). lemme see if i get it right. so if i have (1) Suzie, (2) John and (3) Ed assigned to "Scenario 1" and (1) Martha, (2) Emma, (3) Ian get "Scenario 2", from just looking at the labels (1, 2 and 3) i would say "oh, cool. the same people 1, 2 and 3 get Scenario 1 and Scenario 2 so they must be crossed", but if i see the names i would say "Suzie only gets Scenario 1 and Martha only gets Scenario 2, so they cannot possibly be crossed since there is no Suzie in Scenario 2 and no Martha in Scenario 1"... something like that, right?

now, what i don't quite get is why do you say the red model fits crossed random effects. i thought the (1|scenario) + (1|subject) meant you are allowing the intercept to vary across the various scenarios and across the various subjects. so... it is nested within subject AND within scenario... isn't it?

#### Jake

i'm having a little bit of trouble wrapping my head around this "implicit nestedness" issue (the explicit one is much more straightforward, at least in my mind). lemme see if i get it right. so if i have (1) Suzie, (2) John and (3) Ed assigned to "Scenario 1" and (1) Martha, (2) Emma, (3) Ian get "Scenario 2", from just looking at the labels (1, 2 and 3) i would say "oh, cool. the same people 1, 2 and 3 get Scenario 1 and Scenario 2 so they must be crossed", but if i see the names i would say "Suzie only gets Scenario 1 and Martha only gets Scenario 2, so they cannot possibly be crossed since there is no Suzie in Scenario 2 and no Martha in Scenario 1"... something like that, right?
Yeah, exactly.

now, what i don't quite get is why do you say the red model fits crossed random effects. i thought the (1|scenario) + (1|subject) meant you are allowing the intercept to vary across the various scenarios and across the various subjects. so... it is nested within subject AND within scenario... isn't it?
The things that are "crossed" in a crossed random effects model are the grouping factors. I think you are referring to the intercept terms being nested within subjects and scenarios (i.e., nested within the subject*scenario interaction), which is true. But it is a "crossed random effects model" because it has random effects for multiple grouping factors, and these grouping factors are crossed with one another. See what I mean? The crossing refers to the grouping factors, not to the terms of the model.
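A quick way to see whether two grouping factors are crossed or nested is to cross-tabulate them. Borrowing spunky's made-up subjects:

```r
# crossed grouping factors: every subject appears under every scenario
crossed <- expand.grid(subject  = c("Suzie", "John", "Ed"),
                       scenario = c("Scenario 1", "Scenario 2"))
table(crossed$subject, crossed$scenario)  # all cells non-zero -> crossed

# nested grouping factors: each subject appears under exactly one scenario
nested <- data.frame(
  subject  = c("Suzie", "John", "Ed", "Martha", "Emma", "Ian"),
  scenario = rep(c("Scenario 1", "Scenario 2"), each = 3)
)
table(nested$subject, nested$scenario)    # one non-zero cell per subject -> nested
```

(1|subject) + (1|scenario) treats the grouping factors as crossed, which fits the first layout; for the second layout it only gives the nested model because each subject's name is globally unique.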

#### GretaGarbo

##### Human
I was wondering where Trinker got his data from, so that it would be possible to run the code.

On the same site I found another paper showing how to load the data:

http://www.u.arizona.edu/~ljchang/NewSite/papers/FileIO_HO.pdf

So now it seems to be possible to run the models from the paper that Trinker was referring to.

Here is the code:

Code:
website <- "http://sites.google.com/site/uarworkshop/file-cabinet/"

library(lme4)

# (the data file name was cut off in the original post)
data <- read.table(paste(website, "...",
                   sep = ""), header = TRUE, na.strings = 999999)
data$Condition <- relevel(data$Condition, ref = "Computer")

ls()
summary(data)

m1 <- lmer(RT ~ Offer + (1 | Subject), data = data)
summary(m1)

#### spunky

##### Can't make spagetti
(i.e., nested within the subject*scenario interaction)
OMG! you are *SO* right! thank you! i didn't realize that when i said "nested within subject AND scenario" it's kind of like saying they're nested *within* the interaction. that's why i wasn't getting the crossed part of it because i was just focusing on the interaction, where they are indeed nested.

thank you Jake!

ps- so... when is it that you're teaching your virtual seminar on linear mixed models? you know, the one you PROMISED me would be uploaded to YouTube?

#### trinker

##### ggplot2orBust
spunky said:
ps- so... when is it that you're teaching your virtual seminar on linear mixed models? you know, the one you PROMISED me would be uploaded to YouTube?
Yeah Jake, this sounds like an excellent idea.

PS you know you're using knitr too much when you go to include comment tags and begin to type:

Code:
{r