Hello everyone, I have a question about Absolute Risk Reduction (ARR) versus Relative Risk Reduction (RRR) as it relates to what a study is powered to detect. The article in question is the ACCORD trial. Here is the link:
https://www.nejm.org/doi/full/10.1056/NEJMoa0802743
“The study was designed to have a power of 89% to detect a 15% reduction in the rate of the primary outcome for patients in the intensive-therapy group, as compared with the standard-therapy group, assuming a two-sided alpha level of 0.05, a primary-outcome rate of 2.9% per year in the standard-therapy group, and a planned average follow-up of approximately 5.6 years.”
In particular, the phrase "a 15% reduction in the rate of the primary outcome" has me wondering: is the 15% reduction they powered the study to detect absolute or relative?
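For concreteness, here is a quick sketch of my own arithmetic (not from the paper) showing what each reading would imply, given the quoted 2.9%-per-year baseline rate in the standard-therapy group:

```python
# Two possible readings of "a 15% reduction", using the 2.9%/year
# primary-outcome rate quoted for the standard-therapy group.
baseline = 0.029  # events per year, standard-therapy group

# Reading 1: 15% RELATIVE risk reduction (RRR)
rate_if_rrr = baseline * (1 - 0.15)   # ~0.0247, i.e. ~2.47% per year
implied_arr = baseline - rate_if_rrr  # ~0.0044, i.e. ~0.44 percentage points/year

# Reading 2: 15% ABSOLUTE risk reduction (ARR)
rate_if_arr = baseline - 0.15         # -0.121, a negative event rate

print(rate_if_rrr, implied_arr, rate_if_arr)
```

If I have this right, the absolute reading would require the intensive-therapy rate to be negative, which is impossible, so I suspect the 15% must be relative, but I would appreciate confirmation.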