# Observation oriented modeling & Procrustes rotation of matrices of binary data

#### CowboyBear

##### Super Moderator
Hi all,

I've been reading a book by James Grice about observation oriented modelling (an approach to data analysis where one focuses on single cases rather than variables and aggregates). This article summarises the approach.

Grice suggests an unusual way of analysing data based on Procrustes rotation, with the goal of determining the causes of the behaviour of individuals. In this analysis type, the data describing individuals is always recorded in dichotomous/binary form (i.e., whether or not each person possesses some specific attribute).

For example, imagine we have a group of people who either do or don't have the attributes depression and anxiety, as recorded in the matrix below (the "target" matrix - one which basically shows the DVs):

| Depression | Anxiety |
|------------|---------|
| 0          | 1       |
| 1          | 0       |
| 1          | 1       |

We also know each person's gender, and whether they're receiving behavioural therapy or not, and we record this in the "conforming" matrix (the IVs):

| BehTherapy | Gender |
|------------|--------|
| 0          | 0      |
| 0          | 1      |
| 1          | 1      |

Now, to determine if behavioural therapy and gender cause whether or not these individuals have depression and anxiety, we use a binary Procrustes rotation to rotate the conforming matrix towards the target matrix (with the goal of minimising the sum of squared differences between the target and rotated conforming matrix). This produces a rotated or "conformed" version of the original matrix of IVs.
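To make the mechanics concrete, here's a rough Python sketch of the rotation step, using scipy's orthogonal Procrustes solver as a stand-in - I'm not certain Grice's "binary Procrustes" variant is computed exactly this way, so treat it as illustrative only:

```python
import numpy as np
from scipy.linalg import orthogonal_procrustes

# Target matrix (the DVs): depression, anxiety
target = np.array([[0, 1],
                   [1, 0],
                   [1, 1]], dtype=float)

# Conforming matrix (the IVs): behavioural therapy, gender
conforming = np.array([[0, 0],
                       [0, 1],
                       [1, 1]], dtype=float)

# Find the orthogonal matrix R minimising ||conforming @ R - target||_F
R, _ = orthogonal_procrustes(conforming, target)

# The "conformed" matrix: the rotated version of the IVs
conformed = conforming @ R
```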

We then tally up the number of hits - e.g., was the rotation able to predict that participant 1 did not have anxiety, that participant 3 did have anxiety, etc? (IIRC this is done simply by looking at whether each value in the rotated/conformed matrix is closer to 1 or closer to zero, and then comparing this dichotomised prediction to the true value for that cell in the target matrix). You can then count up the overall percentage of correct classifications.
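If I've understood the hit-counting right, it would look something like this - the 0.5 cut-off just operationalises "closer to 1 or closer to zero", and again this is my sketch rather than Grice's actual implementation:

```python
import numpy as np
from scipy.linalg import orthogonal_procrustes

target = np.array([[0, 1], [1, 0], [1, 1]], dtype=float)
conforming = np.array([[0, 0], [0, 1], [1, 1]], dtype=float)

# Rotate the conforming matrix towards the target
R, _ = orthogonal_procrustes(conforming, target)
conformed = conforming @ R

# Dichotomise each conformed value to whichever of 0/1 it is closer to
predicted = (conformed >= 0.5).astype(float)

# Percentage of cells where the dichotomised prediction matches the target
pcc = (predicted == target).mean() * 100
```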

After this, a permutation test is performed to see whether the percentage of correct classifications is greater than one would expect by chance alone.
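A permutation test along these lines would do the job - note I'm assuming the null distribution comes from shuffling the rows (i.e., people) of the conforming matrix, which may or may not be the exact randomisation scheme Grice uses:

```python
import numpy as np
from scipy.linalg import orthogonal_procrustes

def hit_rate(conforming, target):
    """Rotate, dichotomise at 0.5, and return the proportion of hits."""
    R, _ = orthogonal_procrustes(conforming, target)
    predicted = (conforming @ R >= 0.5).astype(float)
    return (predicted == target).mean()

target = np.array([[0, 1], [1, 0], [1, 1]], dtype=float)
conforming = np.array([[0, 0], [0, 1], [1, 1]], dtype=float)

observed = hit_rate(conforming, target)

# Null distribution: shuffle the rows of the conforming matrix to break the
# person-level link between IVs and DVs, then recompute the hit rate
rng = np.random.default_rng(0)
n_perm = 1000
null = np.array([hit_rate(rng.permutation(conforming), target)
                 for _ in range(n_perm)])
p_value = (np.sum(null >= observed) + 1) / (n_perm + 1)
```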

Grice seems to regard this analysis as something completely different from traditional variable-focused statistical analyses and significance tests. But I look at all this and think to myself that this all seems very similar to perfectly standard analyses.

I'm especially interested in whether this process of Procrustes rotation shares any specific similarities with other established statistical methods - I have a hunch that it's actually just a subtype of canonical correlation or multivariate regression. I wondered whether some of you with better mathematical-statistics knowledge than me might have more sophisticated thoughts on the differences between Procrustes rotation and other methods for relating two matrices? Is this just old wine in new bottles, or is the technique doing something genuinely novel?


#### spunky

##### Doesn't actually exist
Well, to be honest with you, I didn't really know much about this Grice person until you mentioned him, and I think I concur with you. I honestly don't find much difference between what he is proposing and most of the stuff we keep using.

Here’s my take (down?) from reading the article you linked to (I haven’t read the book so maybe some of my concerns are answered there).

First, if he really is interested in moving the analysis from the variable space to the subject space, we've been able to do that for a while now. Latent Class Analysis and Latent Profile Analysis are all about that stuff, but I guess that's neither here nor there.

Second, Procrustes rotation (it used to be big in factor analysis, which is why I know it) is, like any other type of rotation (varimax, oblimin, etc.), a straight-up matrix multiplication. I have trouble seeing why, just because you rotate one dataset towards another, it somehow implies that one causes the other - especially because you don't know whether the space of the target matrix and the space of this "conforming matrix" are even comparable. I get it in the case of latent variable models, because there you're really just making up this thing called the latent space, scaling it in some convenient way and rotating it in whichever way you want. For an actual variable-to-variable Procrustean rotation… I dunno.
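Just to show what I mean by "straight-up matrix multiplication": the whole orthogonal Procrustes solution is one SVD plus one matrix product. This is the textbook closed form, not anything specific to Grice, and the toy matrices are hypothetical:

```python
import numpy as np

def procrustes_rotation(A, B):
    """Closed-form orthogonal Procrustes: R = U @ Vt from the SVD of A.T @ B."""
    U, _, Vt = np.linalg.svd(A.T @ B)
    return U @ Vt

# Toy matrices in the shape of the example above (hypothetical data)
A = np.array([[0., 0.], [0., 1.], [1., 1.]])  # "conforming" IVs
B = np.array([[0., 1.], [1., 0.], [1., 1.]])  # "target" DVs

R = procrustes_rotation(A, B)
rotated = A @ R  # applying the "rotation" is literally one matrix multiplication
```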

You’re also right that this is nothing new when you compare it with other analytic methods. I’m pretty sure I read somewhere back in the day that doing an orthogonal Procrustes rotation on your variables is equivalent to performing a multivariate regression analysis (something about the equivalence of the least-squares solutions). There’s also the issue of him forcing all his variables to be binary - not only because I seriously doubt you can force most problems to be recorded purely in 0/1 terms, but because binary data itself artificially restricts the range of the eigenvalues needed for the Procrustean rotation matrix. Does he say whether orthogonal or oblique rotation should be preferred?
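On the regression point: both orthogonal Procrustes and multivariate OLS minimise the same Frobenius-norm criterion ||XT − Y||; Procrustes just adds the constraint that T be orthogonal, so its fit can never beat the unconstrained regression. A quick sketch of that comparison (my own, not something from the article, with hypothetical data):

```python
import numpy as np
from scipy.linalg import orthogonal_procrustes

X = np.array([[0., 0.], [0., 1.], [1., 1.]])  # IVs (hypothetical)
Y = np.array([[0., 1.], [1., 0.], [1., 1.]])  # DVs (hypothetical)

# Multivariate OLS: unconstrained T minimising ||X @ T - Y||_F
B, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Orthogonal Procrustes: same criterion, but T constrained to T.T @ T = I
R, _ = orthogonal_procrustes(X, Y)

ols_loss = np.linalg.norm(X @ B - Y)
proc_loss = np.linalg.norm(X @ R - Y)
# The constrained solution can never fit better than the unconstrained one
assert ols_loss <= proc_loss + 1e-12
```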

I dunno; in all honesty, I kinda feel that with all the advances we’ve made in classification algorithms, relying on classical, linear-transformation approaches to a well-known problem like cross-tabulation (because at the end of the day it seems like that’s the core of this OOM thing) is like using your dad’s computer in the age of the iPhone. I’m pretty sure that because he’s dressing the whole thing up with philosophical platitudes and mumbo-jumbo like a "philosophy of moderate realism in the tradition of Aristotle and St. Thomas Aquinas", some people may buy into it, but I think I’m gonna pass.

PS - I tend to get really suspicious when people say things like "In this example, the observations (ordered as male/female and sexual/emotional infidelity) are clearly non-parametric". Because… and seriously, like OMG, how many times have we said this on the forum: the parametric vs non-parametric distinction has to do with THE STATISTICAL METHOD, NOT THE DATA. Saying “my data is non-parametric” makes absolutely no sense! If this guy can’t get that basic distinction right, it makes me wonder whether what he’s doing can be substantiated theoretically, or whether he’s one of those people who finds a method that hasn’t been used in a while and tries to re-package it and sell it to the average, unsuspecting social scientist, hoping to become the new Karl Jöreskog or Bengt Muthén.

This Crisis of Replicability sometimes makes me think we’re gonna end up worse off than where we started, because more and more people are coming out of the woodwork with their own version of what *the* new data analysis paradigm should be.
