Power of a study to detect risk

Hello everyone, I have a question regarding Absolute Risk Reduction (ARR) vs Relative Risk Reduction (RRR) in the context of what a study is powered to detect. The article in question is the ACCORD trial. Here is the link:

“The study was designed to have a power of 89% to detect a 15% reduction in the rate of the primary outcome for patients in the intensive-therapy group, as compared with the standard-therapy group, assuming a two-sided alpha level of 0.05, a primary-outcome rate of 2.9% per year in the standard-therapy group, and a planned average follow-up of approximately 5.6 years.”

In particular, the part I have bolded has me wondering: is the 15% reduction they are powering the study to detect an absolute or a relative risk reduction?


TS Contributor
The assumed rate is 2.9%, so reading the 15% as an absolute reduction would give 2.9% - 15% = -12.1%. A negative risk?

They explicitly state "reduction in the rate". Not being a native speaker, I'd read that as relative: 2.9% - 0.15 * 2.9% = 2.465% per year in the intensive-therapy group.
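
That relative reading can be sanity-checked in a few lines. This is just a sketch using the two numbers from the quoted power statement (the 2.9% yearly rate and the 15% reduction); the variable names are my own:

```python
# Figures taken from the quoted ACCORD power statement.
standard_rate = 0.029        # 2.9% primary-outcome rate per year, standard therapy
relative_reduction = 0.15    # the 15% reduction the study is powered to detect

# Read as a *relative* reduction: the intensive-therapy rate is 15% lower.
intensive_rate = standard_rate * (1 - relative_reduction)

# The corresponding *absolute* risk reduction per year.
arr = standard_rate - intensive_rate

# Reading "15% reduction" as absolute would instead give a negative rate.
absolute_reading = standard_rate - relative_reduction

print(f"intensive-therapy rate: {intensive_rate:.3%} per year")   # 2.465%
print(f"absolute risk reduction: {arr:.3%} per year")             # 0.435%
print(f"'absolute' reading:      {absolute_reading:+.1%}")        # -12.1%, impossible
```

So the relative interpretation yields a plausible 2.465% per year, while the absolute interpretation yields an impossible negative rate, which supports reading the 15% as relative.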

With kind regards