- Thread starter jill

Are you selecting x number of patients from N patients who underwent surgery? If the selection is random, I think it's a random sample.

The scans before and after surgery for each patient are dependent. You can use a paired t-test to analyze the data (or a Wilcoxon signed-rank test if the differences are not normally distributed).
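As a rough sketch of what that paired analysis looks like in code (pure Python; the before/after values below are made-up illustrative numbers, not study data):

```python
import math
from statistics import mean, stdev

# Made-up before/after measurements for 8 patients (illustration only).
before = [12.1, 14.3, 11.8, 13.5, 12.9, 15.0, 13.2, 12.4]
after = [11.4, 13.1, 11.9, 12.2, 12.0, 13.8, 12.5, 11.7]

# The paired t-test works on the per-patient differences.
diffs = [b - a for b, a in zip(before, after)]
n = len(diffs)

# t statistic = mean difference / standard error of the differences,
# with n - 1 degrees of freedom; compare against a t table (or use
# scipy.stats.ttest_rel / scipy.stats.wilcoxon if scipy is available).
t = mean(diffs) / (stdev(diffs) / math.sqrt(n))
print(f"t = {t:.2f} with {n - 1} degrees of freedom")  # t = 5.19 with 7 degrees of freedom
```

A large |t| relative to the t table for n - 1 degrees of freedom means the mean before-after difference is unlikely to be zero.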

Let me know if you have further questions.

How much statistical power and how large an effect size are you looking for? Is the sample of n=40 representative of your study population? How are the samples collected, and are you doing any randomization? Sorry for all these questions, but I need to know this information in order to justify any sample size. I actually have many more questions.

Jill,

Just so I get your study straight in my head:

40 patients who have had knee surgery

20 will be assigned at random to 1 of 2 different scan methods

you want to do a before-after comparison for each of the two groups of 20

Well, yes, it is a convenience sample, but so are lots of biomedical studies - I challenge anyone to find me a single biomedical study that is a purely random sample of the population...

Anyway, a few questions:

- could you get a sample (20 - 40) of similarly aged people who have not had surgery? if so, why? I guess I need to understand your study more.

- what's the problem with n=40? especially with a paired design, any statistical test with n=20 and matched pairs should be a pretty powerful test....

- with the scans - exactly what are you "measuring" or evaluating - i.e., what is your dependent variable?

From the n=40, all patients will have both types of scan, following which they will have a 'gold standard' op with a camera to determine the definitive result.

Each scan gives a yes-tear or no-tear answer, and those answers will be the variables.

I hypothesize MR arthrography is significantly more accurate at detecting specific meniscal pathology than plain MR alone in patients who have had previous meniscal surgery.

If you were a statistician, you wouldn't be here, and a lot of other people wouldn't be here, and Jin would never get this site to where he wants it to be

Anyway-

Now I see where the sample size issue comes into play. It looks like you're basically comparing the proportion (percentage) of accurate diagnoses between scan methods, and you really need a large sample size for proportions.

If you think that one method will be definitively better than the other, then maybe you will still be able to detect a statistical difference. However, if you think that the two methods will be somewhat close, then there may be a problem with statistical power.

With n=40, and worst-case accuracy around 50% (i.e., no better than flipping a coin), the standard error of a proportion will be:

SEp = sqrt(p*q/n)

where p=rate of correct diagnoses

q = rate of incorrect diagnoses = 1-p

n = sample size

so, in this example, SEp = sqrt(.5*.5/40) = .079 or approx 8%

Now for a 95% confidence level, you'll need to multiply this % by 1.96, so that brings it up to .155 or 15.5%

- in order to detect a difference between methods, the difference would need to be at least 15-16%, but this is a "worst case" estimate

If the actual diagnosis accuracy rates are higher, then the 15-16% would drop off a bit.....
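The arithmetic above is easy to check with a couple of lines of Python (standard library only; 1.96 is the usual z multiplier for 95% confidence):

```python
import math

def margin_95(p, n, z=1.96):
    """Half-width of a 95% confidence interval for an estimated proportion p."""
    return z * math.sqrt(p * (1 - p) / n)

# Worst case: accuracy no better than a coin flip (p = 0.5), n = 40.
se = math.sqrt(0.5 * 0.5 / 40)
print(round(se, 3))                   # 0.079, i.e. roughly 8%
print(round(margin_95(0.5, 40), 3))   # 0.155, i.e. roughly 15.5%
```

Plugging in a higher accuracy rate (say p = 0.8) shrinks p*q and therefore the margin, which is the "drops off a bit" point above.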

how many participants were in that study?

you may not have enough participants to analyze sens, spec, PPV, NPV for each method, because those %'s are pretty close, although the method w/dye appears to be slightly better in all 4 respects.

however, if you want to just make a basic statement about diagnosis accuracy, then you should be OK.
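If you do end up computing sensitivity, specificity, PPV, and NPV for each method, the formulas are just ratios from a 2x2 table against the gold standard. A minimal sketch, with invented counts purely to show the formulas (not from any study):

```python
# Hypothetical 2x2 counts for one scan method judged against the
# camera "gold standard" -- invented numbers for illustration only.
tp, fp, fn, tn = 14, 3, 4, 19   # true/false positives and negatives

sensitivity = tp / (tp + fn)    # proportion of real tears the scan flags
specificity = tn / (tn + fp)    # proportion of intact menisci correctly cleared
ppv = tp / (tp + fp)            # P(tear | scan says tear)
npv = tn / (tn + fn)            # P(no tear | scan says no tear)
accuracy = (tp + tn) / (tp + fp + fn + tn)

print(f"sens={sensitivity:.2f} spec={specificity:.2f} "
      f"PPV={ppv:.2f} NPV={npv:.2f} accuracy={accuracy:.2f}")
```

With only ~40 patients per method, each of those four denominators is small, which is why comparing the two methods on all four statistics at once is asking a lot of the data.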

JohnM

Jill

So if I want to use the past average of around 65% without dye to estimate the actual percentage to within +/- 3 percentage points, at a 95% confidence level:

n >= ((p*q)/d^2) * Z^2

n >= ((0.65* 0.35)/(0.03^2)) * 1.96^2

n >= (.2275/.0009) * 3.8416

n >= 972

Therefore I would need 972 patients. Is this what you meant for me to do?
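That formula is easy to check in a couple of lines of Python (standard library only), plugging in p = 0.65, d = 0.03, z = 1.96:

```python
import math

def sample_size(p, d, z=1.96):
    """Minimum n to estimate a proportion p to within +/- d (z = 1.96 for 95%)."""
    return math.ceil((p * (1 - p) / d**2) * z**2)

# p*q = 0.65 * 0.35 = 0.2275; d^2 = 0.0009; z^2 = 3.8416
# 0.2275 / 0.0009 * 3.8416 = 971.07..., which rounds up to 972.
print(sample_size(0.65, 0.03))  # 972
```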

Jill