I am looking into various methods of combining non-independent cognitive outcome measures on different scales in a clinical-trials setting, and was hoping to be pointed to relevant literature/resources. The context is:

- A parallel study of n individuals

- I have 2 treatment groups and a placebo group

- Assessments are taken at baseline and then 2 subsequent time points

- 5 outcome measures, each of which I use as the dependent variable in a separate mixed-model analysis.

I analyse each of the outcome measures separately to investigate the influence of treatment on change in each outcome variable, using Cohen's d to quantify the magnitude of the change.
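For concreteness, a minimal sketch of the effect size I mean, i.e. the standardised mean difference between two independent groups using the pooled standard deviation (in the trial this would typically be applied to change-from-baseline scores; `cohens_d` is an illustrative name, not from any particular package):

```python
import numpy as np

def cohens_d(x, y):
    """Cohen's d for two independent samples,
    using the pooled standard deviation."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    nx, ny = len(x), len(y)
    # pooled variance weights each group's variance by its df
    pooled_var = ((nx - 1) * x.var(ddof=1)
                  + (ny - 1) * y.var(ddof=1)) / (nx + ny - 2)
    return (x.mean() - y.mean()) / np.sqrt(pooled_var)
```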

It is also common practice in clinical trials to calculate a composite score. This is done by standardising each outcome measure in some way (usually by calculating a z-score), averaging the standardised scores to form the composite, and then fitting another mixed model with the composite score as the dependent variable.
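A minimal sketch of that composite construction, parameterised by which subjects define the standardisation (pass the placebo rows for placebo-anchored z-scores, or all rows for whole-sample standardisation); `composite_score` and its arguments are illustrative names, and the optional `flip` argument reflects the common step of reversing outcomes where a lower raw score means better performance:

```python
import numpy as np

def composite_score(scores, ref_mask, flip=None):
    """Average of per-outcome z-scores.

    scores:   (n_subjects, n_outcomes) array of raw outcome values
    ref_mask: boolean mask selecting the reference subjects whose
              mean/SD define the standardisation (e.g. placebo only,
              or everyone at baseline)
    flip:     optional boolean array marking outcomes where lower raw
              scores are better, so their z-scores are sign-reversed
    """
    scores = np.asarray(scores, dtype=float)
    mu = scores[ref_mask].mean(axis=0)
    sd = scores[ref_mask].std(axis=0, ddof=1)
    z = (scores - mu) / sd
    if flip is not None:
        z[:, np.asarray(flip)] *= -1.0
    return z.mean(axis=1)
```

With `ref_mask` set to the placebo group at baseline you get the placebo-anchored version; with `ref_mask` all `True` you get the whole-sample version, and the two differ only by an affine rescaling of each column.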

My confusion stems from a number of issues:

- In the standardisation process, should all the scores be standardised using the means and standard deviations of only the placebo group, or of all the score data at a given time point? That is, can I assume that all the scores come from the same distribution when calculating a standardised score?

Is this the best way to combine the data? Other alternatives could be:

- do a principal components analysis after the data are received and perform the linear mixed-model analysis on the first PC (however, the weights are not known a priori, which makes writing analysis plans difficult);

- look at multivariate analyses, i.e. model all the outcome variables jointly. Would the results of such an analysis give rise to interpretable statements about effect-size differences between the treatment groups?
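For the PCA alternative above, a minimal sketch of extracting first-PC scores to use as the dependent variable (PCA on the correlation matrix, i.e. on standardised outcomes, since the measures are on different scales; `first_pc_scores` is an illustrative name):

```python
import numpy as np

def first_pc_scores(scores):
    """Project standardised outcomes onto the first principal
    component. Returns (pc1_scores, loadings, variance_explained)."""
    scores = np.asarray(scores, dtype=float)
    z = (scores - scores.mean(axis=0)) / scores.std(axis=0, ddof=1)
    # eigendecomposition of the correlation matrix
    corr = np.corrcoef(z, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(corr)
    # eigh returns eigenvalues in ascending order; take the largest
    w = eigvecs[:, -1]
    # fix the arbitrary sign so loadings are predominantly positive
    if w.sum() < 0:
        w = -w
    pc1 = z @ w
    var_explained = eigvals[-1] / eigvals.sum()
    return pc1, w, var_explained
```

This also illustrates the a-priori-weights problem: `w` is estimated from the observed data, so it cannot be fixed in the analysis plan before the trial.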

Could anyone point me in the direction of papers/book chapters that look at these issues in detail?

Thanks in advance for any help