18. Multilevel Models#

Contributed by Jason Naramore.

In multilevel models, we can model parameters as varying by group. Random effects models and mixed effects models are both examples of multilevel models.

For example, we could have \(J\) intercepts in the following linear model, one associated with the students at each school \(j\):

\[\begin{split}\begin{align*} y_i & = \alpha_{j(i)} + \beta x_i + \epsilon_i, \text{ for students } i = 1, \ldots , n\\ \alpha_j & = a + b \mu_j + \eta_j, \text{ for schools } j = 1, \ldots, J \\ \end{align*}\end{split}\]
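As a sketch, we can simulate data from this two-level model with NumPy. All of the sizes and coefficient values below are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

J, n = 8, 200                    # hypothetical: 8 schools, 200 students
a, b, beta = 1.0, 0.5, 2.0       # hypothetical coefficients
mu = rng.normal(size=J)          # school-level predictor mu_j

# School level: alpha_j = a + b * mu_j + eta_j
alpha = a + b * mu + rng.normal(scale=0.3, size=J)

# Student level: y_i = alpha_{j(i)} + beta * x_i + epsilon_i
school = rng.integers(0, J, size=n)   # school j(i) for each student
x = rng.normal(size=n)
y = alpha[school] + beta * x + rng.normal(scale=1.0, size=n)
```

Note how `alpha[school]` indexes the \(J\) school-level intercepts by each student's school, which is exactly the \(\alpha_{j(i)}\) notation in the equations above.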

One advantage of multilevel models is that we can learn about treatment effects that vary by group. This type of model also lets us “borrow strength” from all of the data to make inferences about groups with smaller sample sizes.

We may also achieve better-fitting models by accounting for uncertainty at different levels. Another example is \(Poisson(\lambda)\) regression:

\[\begin{split} \begin{align*} y_{i} & \sim Poisson(\lambda_i)\\ \lambda_i & = \text{exp}(\beta_{0} + \beta_{1} x_{i1} + \ldots + \beta_k x_{ik}) \end{align*}\end{split}\]

One of the model's assumptions is that the expectation and variance of \(y_i\) are equal. In real-life data, however, the variance is often larger, a situation known as overdispersion. To account for the higher-than-expected variance, we can add a random effect term \(\xi_{j(i)}\) to the linear combination of covariates. This new term can act as a “sponge” that soaks up the excess variance, possibly yielding a better-fitting model.