Complex comparison of dependent effect sizes

Hi everyone,

I am new to this forum and am still figuring out how to navigate the site, so I apologize if this question has already been asked.

In short, I am attempting to compare two dependent effect sizes.

To be more detailed, I have two sets of repeated measures. At Time 1, all respondents were instructed to be as honest as possible and were asked to answer a set of questions on two different scales (i.e., forced distribution ranking and Likert-type). At Time 2, all respondents were instructed to fake good and were asked to answer the same set of questions on the same two scales (i.e., forced distribution ranking and Likert-type).

I then ran paired-samples t-tests to examine whether there were mean differences between the two instruction conditions within each measurement type. These t's were then converted to r's using the formula r = √(t² / (t² + df)) (Rosenthal, R. (1991). Meta-analytic procedures for social research (rev. ed.). Newbury Park, CA: Sage, p. 19). The t's can also easily be converted to d's using d = t / √n if necessary.
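
For concreteness, here is a minimal Python sketch of those two conversions (the function names and example numbers are mine, not from the cited source):

```python
import math

def t_to_r(t, df):
    """Convert a t statistic to r: r = sqrt(t^2 / (t^2 + df)) (Rosenthal, 1991, p. 19)."""
    return math.sqrt(t**2 / (t**2 + df))

def t_to_d(t, n):
    """Convert a paired-samples t statistic to d: d = t / sqrt(n)."""
    return t / math.sqrt(n)

# e.g., a paired t of 3.2 from n = 50 pairs (df = 49)
print(t_to_r(3.2, 49))  # effect size as r
print(t_to_d(3.2, 50))  # effect size as d
```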

Now, I want to compare my two effect sizes to see if they are statistically different. That is, I want to see if the difference between results given under the two instruction conditions changes as a function of measurement type.
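
To make the intended comparison concrete, here is a sketch with simulated stand-in data (the variable names and numbers are placeholders for my actual dataset, not real results): each respondent contributes an honest and a fake-good score on each scale, and I compute one r per scale from the corresponding paired t-test.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 50  # respondents, each measured under both instructions on both scales

# stand-in paired scores (my real data would go here)
forced_honest = rng.normal(3.0, 1.0, n)
forced_fake = forced_honest + rng.normal(0.8, 1.0, n)
likert_honest = rng.normal(3.0, 1.0, n)
likert_fake = likert_honest + rng.normal(0.4, 1.0, n)

def paired_r(x, y):
    """Effect size r from a paired-samples t-test, via r = sqrt(t^2 / (t^2 + df))."""
    t, _ = stats.ttest_rel(y, x)
    df = len(x) - 1
    return np.sqrt(t**2 / (t**2 + df))

r_forced = paired_r(forced_honest, forced_fake)
r_likert = paired_r(likert_honest, likert_fake)
print(r_forced, r_likert)  # the two dependent effect sizes I want to compare
```

Both r's come from the same respondents, which is what makes them dependent.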

Even though I want to compare two dependent correlations, the method laid out by Meng, Rosenthal, and Rubin (1992; http://psycnet.apa.org/journals/bul/111/1/172/), and therefore the SPSS syntax given by IBM (http://www-01.ibm.com/support/docview.wss?uid=swg21477321) or by Hayes (http://psyphz.psych.wisc.edu/~shackman/meng.sps), does not necessarily apply here.

Therefore, my questions are: “Is there a test of statistical significance for the difference between two dependent effect sizes other than Meng et al.’s, and if so, what is it? Or, alternatively, is there a way for me to get my data into a form such that the IBM or Hayes syntax will run?”

Thank you all so much for any help you can provide,
-Alan