Hi, thank you for your response, it is nice to find someone who knows not only the statistics, but also the methodology and jargon of clinical trials!

Hlsmith is pretty darn good

I do not have a paper talking about the low rates; while searching on Google I found a few websites which stated that when the rates are low, a ratio is better than a difference. I do not know why; they did not specify.

I have actually heard the opposite of those websites, but for a more ethical reason. Assume drug x reduced disease recurrence from 0.5% (0.005) to 0.25% (0.0025). The absolute risk reduction is 0.25%, but people want this to look more substantial, so they choose a relative measure such as the relative risk reduction to get (0.0025/0.0050) - 1 = -0.5, i.e. -50%, a 50% decrease in risk! WOW!!!... or so they would like it to appear. It's obviously contextual whether the 0.25% absolute risk reduction is clinically meaningful, but some people will focus on the relative number as a dishonest way to portray results of minimal clinical significance.
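
To make that arithmetic concrete (using the hypothetical 0.5% vs. 0.25% recurrence rates above), a quick sketch in Python:

```python
# Hypothetical recurrence rates from the example above
p_control = 0.0050   # 0.50% recurrence on standard of care
p_treated = 0.0025   # 0.25% recurrence with drug x

arr = p_control - p_treated          # absolute risk reduction
rrr = 1 - p_treated / p_control      # relative risk reduction

print(f"ARR = {arr:.4%}")   # 0.2500% -- sounds tiny
print(f"RRR = {rrr:.0%}")   # 50% -- sounds impressive
```

Same data, two very different headlines, which is exactly why reporting both is the honest move.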

Long story short, it depends on the context, and it isn't necessarily unethical to use a relative measure (I'm assuming this is what was meant by ratio). However, one must clearly report both the absolute and the relative numbers in order to fully inform the reader. It's the same as people stretching the axes on a graph to make a tiny trend look huge, for example. If there is a mathematical reason for it, such as an estimation/modeling procedure that requires it, that's obviously more justified. A bit off topic, but somewhat related to that issue.

I ran a SAS simulation in which I drew 1000 samples from the binomial distribution according to my assumed proportions. I did this for several sample sizes. For each sample I ran both Fisher's exact test and a logistic regression, from which I took the CI. Both approaches gave almost identical results regarding power! The CI is a Wald CI. I think this answers the Fisher vs. OR question.
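
For anyone reading along, the shape of that simulation in Python (scipy) would be roughly the following. The proportions, sample size, and replicate count here are placeholders, not the actual design values, and the Wald CI is computed directly from the 2x2 table, which matches what a logistic regression with a single binary covariate reports:

```python
import numpy as np
from scipy.stats import fisher_exact

rng = np.random.default_rng(42)

# Placeholder design values (not the actual trial assumptions)
p_control, p_treated = 0.10, 0.03
n_per_arm = 200
n_sim = 500
alpha = 0.05
z = 1.959963984540054  # 97.5th percentile of the standard normal

reject_fisher = 0
reject_wald = 0

for _ in range(n_sim):
    x_c = rng.binomial(n_per_arm, p_control)
    x_t = rng.binomial(n_per_arm, p_treated)
    table = [[x_t, n_per_arm - x_t], [x_c, n_per_arm - x_c]]

    # Fisher's exact test on the 2x2 table
    if fisher_exact(table)[1] < alpha:
        reject_fisher += 1

    # Wald CI for the log odds ratio (what a logistic regression with one
    # binary covariate reports); the 0.5 correction guards against zero cells
    a = x_t + 0.5
    b = n_per_arm - x_t + 0.5
    c = x_c + 0.5
    d = n_per_arm - x_c + 0.5
    log_or = np.log(a * d / (b * c))
    se = np.sqrt(1/a + 1/b + 1/c + 1/d)
    if log_or + z * se < 0 or log_or - z * se > 0:
        reject_wald += 1

print(f"Fisher power  ~ {reject_fisher / n_sim:.2f}")
print(f"Wald CI power ~ {reject_wald / n_sim:.2f}")
```

Plugging in the actual assumed proportions and candidate sample sizes would reproduce the power comparison described above.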

Maybe hlsmith can clarify, but I believe you may want to ditch the Wald CI stuff in smaller samples in favor of profile likelihood CIs (or another option from an exact logistic regression, if I recall). So, this could depend on how many participants you end up recruiting.
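
For intuition about what the profile likelihood CI is doing: it inverts the likelihood-ratio test rather than relying on the normal approximation behind Wald. In practice you would use the software's built-in option (if I recall, SAS PROC LOGISTIC has CLODDS=PL, and R's confint() on a glm profiles by default), but here is a toy Python sketch for a single binomial log-odds parameter to show the mechanics:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2

def profile_ci_logodds(x, n, level=0.95):
    """Likelihood-ratio (profile) CI for the log-odds of a binomial(n, p):
    the set of beta with 2*[loglik(mle) - loglik(beta)] <= chi2(1) cutoff."""
    def loglik(beta):
        p = 1.0 / (1.0 + np.exp(-beta))
        return x * np.log(p) + (n - x) * np.log(1.0 - p)

    beta_hat = np.log(x / (n - x))          # MLE of the log-odds
    cutoff = chi2.ppf(level, df=1) / 2.0    # ~1.92 for a 95% interval

    def g(beta):
        return loglik(beta_hat) - loglik(beta) - cutoff

    lo = brentq(g, beta_hat - 10, beta_hat)  # root below the MLE
    hi = brentq(g, beta_hat, beta_hat + 10)  # root above the MLE
    return lo, hi

# e.g. 3 events observed out of 200 patients (made-up numbers)
lo, hi = profile_ci_logodds(3, 200)
print(f"95% profile CI for the log-odds: ({lo:.2f}, {hi:.2f})")
```

With rare events like this, the profile interval is asymmetric around the estimate, which is exactly where the symmetric Wald interval tends to misbehave in small samples.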

The control group is the standard of care. This is a procedure in which complications are rare but, when they occur, very severe. The new treatment comes in addition to the standard of care and is supposed to reduce the rate of complications. ...

I think with this then perhaps it would be important to calculate things like NNT/NNH. What do you think?
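
For what it's worth, the NNT is just the reciprocal of the absolute risk reduction (and the NNH the reciprocal of the absolute risk increase for a harm). Using the hypothetical 0.5% vs. 0.25% rates from the earlier example:

```python
p_control = 0.0050   # hypothetical complication rate, standard of care
p_treated = 0.0025   # hypothetical rate with the add-on treatment

arr = p_control - p_treated   # absolute risk reduction
nnt = 1 / arr                 # number needed to treat

print(f"ARR = {arr:.2%}, NNT = {nnt:.0f}")  # treat 400 patients to prevent one event
```

An NNT in the hundreds puts a "50% relative reduction" headline in a very different light, which is the point of reporting it.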

The trial will be randomized, however not double-blinded, only single-blinded, due to restrictions of the treatment (it's not a drug, so the physician will know what he is using).

I've heard the trend is to simply state who was and was not blinded. I'm used to the old terminology, where single-blind means the patient and double-blind means the patient plus everyone who isn't a patient, but maybe my interpretation is incorrect. Several research faculty at my school harp on this as a source of confusion: when you're evaluating the quality of evidence, you can't tell who was actually blinded, since people attach different meanings to the terms. Long story short, it may be helpful to state precisely who was blinded: was it only the patient, or were data analysts, nurses, and others blinded as well, with the exception of the physician? Just something to consider.

I am dealing with humans, and of course an ethics committee will be involved. Regarding your question about a one-sided test: I don't think it will make a difference. As far as I know, when you approach the FDA with a proposal for a one-sided superiority test, they ask you to use a significance level of 2.5% instead of 5%.

I've never dealt with the FDA. Do they have a preset guideline that if you are doing x, you must do y?

I have a question regarding your advice to use the risk ratio. If I decide to do just that, what is the preferred method? Should I calculate the risk ratio and report the 95% CI? Is there a hypothesis testing procedure for the RR? Any idea how SAS/R does it? Should I write it manually?

Thanks again for your input, it's very helpful !

The odds ratio will approximate the risk ratio as the prevalence of the outcome declines (think rare events; just write out the formulas for the odds and the risk to see how this works). If I'm not mistaken, some people use logistic regression for this purpose -- hlsmith, can you comment? Either way, there will certainly be methods for you to use.
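
On the "how do I actually compute it" question: if I recall, SAS's PROC FREQ reports the risk ratio and its CI with the RELRISK option, and R packages such as epitools do the same. The standard large-sample approach is the Katz log method, and the same arithmetic also shows why the OR approaches the RR when the outcome is rare. A minimal sketch in Python, with made-up counts:

```python
import numpy as np
from scipy.stats import norm

def risk_ratio_ci(x1, n1, x0, n0, level=0.95):
    """Risk ratio with the Katz log-based CI and a Wald z-test of RR = 1.
    SE of log(RR) = sqrt(1/x1 - 1/n1 + 1/x0 - 1/n0)."""
    rr = (x1 / n1) / (x0 / n0)
    se = np.sqrt(1/x1 - 1/n1 + 1/x0 - 1/n0)
    z = norm.ppf(0.5 + level / 2)
    lo, hi = rr * np.exp(-z * se), rr * np.exp(z * se)
    p_value = 2 * norm.sf(abs(np.log(rr)) / se)   # test of H0: RR = 1
    return rr, lo, hi, p_value

# Made-up counts: 4/300 complications on treatment vs 12/300 on control
rr, lo, hi, p = risk_ratio_ci(4, 300, 12, 300)
print(f"RR = {rr:.2f}, 95% CI ({lo:.2f}, {hi:.2f}), p = {p:.3f}")

# Why OR ~ RR for rare outcomes: the odds p/(1-p) ~ p when p is small
for p1, p0 in [(0.40, 0.20), (0.004, 0.002)]:
    odds = lambda q: q / (1 - q)
    print(f"RR = {p1 / p0:.3f}, OR = {odds(p1) / odds(p0):.3f}")
```

At 40% vs. 20% the OR noticeably overstates the RR, while at 0.4% vs. 0.2% the two are nearly identical, which is why logistic regression's odds ratios get read as risk ratios in rare-event settings.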