Power Analysis help

#1
Hey All,
First, I want to thank you all up front, as statistics is something I know very little about, so any help would be greatly appreciated. I am in the process of submitting a journal article for publication, and the editor wants a power analysis to determine what number of patients with a BMI greater than 30 was needed for the outcomes to be significant.

Group 1 BMI < 30 (n=36)
Group 2 BMI >/= 30 (n=30)

An independent t-test was used to compare the pre- and post-operative subjective scores for the surgical procedure between the two groups, and there was no statistically significant difference between the scores.

I am truly lost on how to perform a power analysis. What other information would be needed to perform this calculation? Again, any and all information is most appreciated. Hope everyone is staying safe and healthy.

CS
 

obh

Well-Known Member
#2
Hi Csisovsk,

So you ran two independent t-tests? Pooled-variance or Welch?
What are the pre and post?
Do you compare separately "pre score" and "post score" between the two groups?

I assume you didn't calculate the required sample size before doing the experiment?

The test power depends on the difference you want to detect.
For example, you need a larger sample size to detect a difference of 0.1 (like 4 and 4.1) than to detect a difference of 3 (like 4 and 7).
Or, with the same sample size, the test power will be higher for detecting a larger difference.
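If it helps to see that relationship in numbers, here is a minimal Python sketch (statsmodels, not anything from this thread) that solves for the per-group sample size at a large and a small standardized effect size; the alpha = 0.05 and power = 0.8 inputs are conventional assumptions, not values from this study.

```python
# Sketch: required sample size per group grows as the standardized
# difference (Cohen's d) you want to detect gets smaller.
# alpha = 0.05 and power = 0.8 are assumed conventional values.
import math
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for d in (0.8, 0.2):  # a "large" vs a "small" standardized difference
    n_per_group = analysis.solve_power(effect_size=d, alpha=0.05, power=0.8,
                                       ratio=1.0, alternative='two-sided')
    print(f"d = {d}: about {math.ceil(n_per_group)} patients per group")
# Roughly: d = 0.8 needs ~26 per group, while d = 0.2 needs ~394 per group.
```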

Did you define, before the experiment, what difference you wanted to detect?

P.S. How do you calculate the score?
 
#3
So you ran two independent t-tests? Pooled-variance or Welch?
Neither; it was a Student's t-test.

What are the pre and post?
Are you asking what the pre and post scores are, as in their values? Or are you asking for a description of them, so you understand what I mean by pre and post?

Do you compare separately "pre score" and "post score" between the two groups?
Yes, we did compare them separately.

I assume you didn't calculate the required sample size before doing the experiment?
We did not calculate the required sample size beforehand. We took what we already had in our patient population (n=66) and divided them into groups with BMI < or >/= 30. Once we did that, we determined average scores preoperatively and at 5 years.

The test power depends on the difference you want to detect.
For example, you need a larger sample size to detect a difference of 0.1 (like 4 and 4.1) than to detect a difference of 3 (like 4 and 7).
Or, with the same sample size, the test power will be higher for detecting a larger difference.

Did you define, before the experiment, what difference you wanted to detect?
No, we did not.

P.S. How do you calculate the score?
When I say scores, these are subjective score forms the patients fill out preoperatively and at 5 years, so there is no calculating involved other than adding the numbers up.

I really appreciate your insight and I hope this helps.
 

obh

Well-Known Member
#4
Thanks, CS, for the clear answers :)

Neither; it was a Student's t-test.
When you compare two samples using the Student's t-distribution, there are two possible tests: one assuming equal standard deviations (pooled-variance) and one assuming unequal standard deviations (Welch).

Yes, we did compare them separately.
So it is two different tests.
When you do several tests, you may want to consider a significance-level correction.

When I say scores, these are subjective score forms the patients fill out preoperatively and at 5 years, so there is no calculating involved other than adding the numbers up.
What is the scale of the score? One question? Several questions? What are the possible answers?

No, we did not.
Can you define now what "difference" you want to detect?
If you don't have a clue, you can always just take a medium standardized effect size (0.5).

You may use the following to calculate the a priori test power (which you should generally do before the experiment):
https://www.statskingdom.com/30test_power_all.html or the G*Power application.
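If a script is easier than the web calculator or G*Power, here is a small a priori sketch in Python (statsmodels); Cohen's d is the difference in means divided by the pooled standard deviation, and 0.5 is the conventional "medium" value. The alpha = 0.05 and power = 0.8 inputs are assumptions, not values from this study.

```python
# A priori sample size for a two-sample t-test, assuming a medium
# standardized effect size (Cohen's d = 0.5), two-sided alpha = 0.05,
# and the conventional target power of 0.8.
import math
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05,
                                          power=0.8, ratio=1.0,
                                          alternative='two-sided')
print(math.ceil(n_per_group))  # about 64 patients per group
```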
 
#5
Figured I'd switch it up this time! Thanks for your help thus far.

When you compare two samples using the Student's t-distribution, there are two possible tests: one assuming equal standard deviations (pooled-variance) and one assuming unequal standard deviations (Welch).
I did these calculations using formulas in Excel, so whatever TTEST variant is used there is what I used.

So it is two different tests.
When you do several tests, you may want to consider a significance-level correction.
I guess it is two different tests, but I am unsure what this means for our data.

What is the scale of the score? One question? Several questions? What are the possible answers?
Several questions, but most ask for a number from 1 to 10. Below are the links to the score forms if you're curious. The AOFAS score form has specific questions pertaining to pain and function, plus some subjective information that was also filled out, but not by the patients.

https://orthotoolkit.com/aofas-ankle-hindfoot/
https://orthotoolkit.com/ffi/
Visual Analog Scale (VAS)


Can you define now what "difference" you want to detect?
If you don't have a clue, you can always just take a medium standardized effect size (0.5).
I am unsure what you're asking based on the wording of your question, sorry.

You may use the following to calculate the a priori test power (which you should generally do before the experiment):
https://www.statskingdom.com/30test_power_all.html or the G*Power application.
I did click on the link, and I am not sure which calculator to use, as none of them says "a priori".

Again, thank you again for your time and help.
 

obh

Well-Known Member
#6
Hi CS,

I did these calculations using formulas in Excel, so whatever TTEST variant is used there is what I used.
Even Excel has the test type:
TTEST(array1, array2, tails, type)
1 – Paired
2 – Two-sample equal variance (homoscedastic)
3 – Two-sample unequal variance (heteroscedastic)
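For reference, here is a rough Python (scipy) equivalent of those three Excel options, using made-up placeholder scores just to show which call corresponds to which type:

```python
# Rough scipy equivalents of Excel's TTEST "type" argument.
# pre_g1 / pre_g2 are hypothetical pre-operative scores, not real data.
from scipy import stats

pre_g1 = [62, 55, 70, 48, 66, 59]   # placeholder values only
pre_g2 = [58, 61, 49, 72, 53, 64]

stats.ttest_rel(pre_g1, pre_g2)                   # type 1: paired
stats.ttest_ind(pre_g1, pre_g2, equal_var=True)   # type 2: pooled variance (Student)
stats.ttest_ind(pre_g1, pre_g2, equal_var=False)  # type 3: unequal variance (Welch)
```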

I guess it is 2 different tests but unsure what this means for our data.
Generally, when you are doing several tests, each test has some probability of incorrectly giving a significant result (a type I error).
If you do several tests, the probability of getting a false positive is higher, especially if the tests are independent.
Some people correct the significance level to reduce this problem (for example the Bonferroni correction, which tends to overcorrect).
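As a concrete example of such a correction (a sketch with made-up p-values, not results from this study): with two tests, Bonferroni effectively compares each p-value against alpha / 2 = 0.025.

```python
# Bonferroni correction for two tests (e.g. the pre-score and post-score
# comparisons), using hypothetical p-values.
from statsmodels.stats.multitest import multipletests

p_values = [0.03, 0.20]  # placeholder p-values, one per test
reject, p_adjusted, _, _ = multipletests(p_values, alpha=0.05,
                                         method='bonferroni')
print(reject)      # both False: 0.03 is no longer significant after correction
print(p_adjusted)  # adjusted p-values: 0.06 and 0.40
```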

Several questions, but most ask for a number from 1 to 10. Below are the links to the score forms if you're curious. The AOFAS score form has specific questions pertaining to pain and function, plus some subjective information that was also filled out, but not by the patients.

https://orthotoolkit.com/aofas-ankle-hindfoot/
https://orthotoolkit.com/ffi/
Visual Analog Scale (VAS)


Can you define now what "difference" you want to detect?
If you don't have a clue, you can always just take a medium standardized effect size (0.5).
I am unsure what you're asking based on the wording of your question, sorry.

If you want to be able to detect a "difference" of 2, for example Group 1: 7.5 and Group 2: 9.5, you may need only a small sample size.
But if the actual difference turns out to be smaller than 2, you may not get a significant result, and this will not prove that there is no difference between the groups ...
If you want to detect a small difference like 0.2, for example 7.5 and 7.7, you may need a larger sample size.
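To put rough numbers on that with the sample sizes from this thread (n1 = 36, n2 = 30), the sketch below assumes a purely illustrative score standard deviation of 2 points in order to convert those raw differences into standardized effect sizes.

```python
# Power at the study's group sizes for a 2-point vs a 0.2-point difference,
# assuming a hypothetical standard deviation of 2 points (illustrative only).
from statsmodels.stats.power import TTestIndPower

sd = 2.0  # assumed, purely for illustration
for diff in (2.0, 0.2):
    d = diff / sd  # convert the raw difference to Cohen's d
    power = TTestIndPower().power(effect_size=d, nobs1=36, alpha=0.05,
                                  ratio=30 / 36, alternative='two-sided')
    print(f"difference = {diff}: power is roughly {power:.2f}")
# With these assumptions, a 2-point difference is detected with high power
# (roughly 0.97), while a 0.2-point difference has very little power (well under 0.1).
```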

You may use the following to calculate the a priori test power (which you should generally do before the experiment):
https://www.statskingdom.com/30test_power_all.html or the G*Power application.
I did click on the link, and I am not sure which calculator to use, as none of them says "a priori".

"A priori" only means that you do the calculation before the experiment, based on the expected difference rather than the observed difference (post hoc).
I believe we should use only the a priori test power.
You can't do it before the experiment anymore, but you can still do it based on the expected difference.
So it is the same calculation; the question is what your input data are.
https://www.statskingdom.com/32test_power_t_z.html (choose the "T" distribution, "two samples")
 
#9
(obh's post #6 quoted in full)
Sorry, I don't know how to use quote boxes :oops:

obh,
I input my numbers into the calculator you sent me (n1 = 30; n2 = 36) and kept everything else the same. The "test power" number I got was 0.51. Does that mean our sample size is adequate to determine significance?

Thanks again

CS
 

obh

Well-Known Member
#10
Sorry, I don't know how to use quote boxes :oops:

obh,
I input my numbers into the calculator you sent me (n1 = 30; n2 = 36) and kept everything else the same. The "test power" number I got was 0.51. Does that mean our sample size is adequate to determine significance?

Thanks again

CS
Hi CS,

Just press the "post reply" button or the "quote" button.

You need to know what test you are using...

Anyway, the difference in power between the two tests is not that big.

Usually people use a power of 0.8. A power of 0.51 is too weak; it means the test may not have enough power to reject an incorrect H0. And even if it does reject the null hypothesis, the estimated effect size may be inaccurate.
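For what it is worth, the 0.51 figure can be reproduced in Python (statsmodels); the sketch below assumes the calculator's defaults were a medium effect size d = 0.5 and a two-sided alpha = 0.05.

```python
# Power check with the study's group sizes (n1 = 36, n2 = 30), assuming a
# medium standardized effect size (d = 0.5) and two-sided alpha = 0.05.
from statsmodels.stats.power import TTestIndPower

power = TTestIndPower().power(effect_size=0.5, nobs1=36, alpha=0.05,
                              ratio=30 / 36, alternative='two-sided')
print(round(power, 2))  # roughly 0.5, in line with the 0.51 from the calculator
```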