How to argue for non-significant results

#1
Dear all,
I am dealing with the results of a study of 20 patients and 20 controls who were evaluated for the expression of five genes using PCR; the data are numerical variables representing gene expression levels. No a priori power analysis was done to define the sample size (the number of participants was based on expert opinion, prior studies in the field, and the high cost of PCR).
Since PCR is a very sensitive method, it was expected to detect a difference in gene expression if the genes were indeed dysregulated in this disease. However, the differences in expression were very small: expression levels were nearly the same for all five genes, and accordingly the P values were convincingly non-significant. A post-hoc power analysis I also ran suggests that more than 2000 patients would need to be included for the observed differences to reach statistical significance, further supporting my view that real, clinically important differences do not exist. All five genes represent the same biological process, and since none of them behaved differently, this is also consistent with the conclusion that this process is not an active player in this disease.

I would appreciate help and suggestions on how to further argue for the non-existence of an effect; to the best of my knowledge, I cannot statistically prove that a difference does not exist. Also, is the argument that 20 patients per group is enough strong enough for publication, based on an expert-in-the-field opinion that PCR is a very sensitive method and would detect differential expression at this sample size if a difference were really present (most studies in the field use a similar number of patients)? Any suggestions and criticism are welcome.
Thanks
 
#2
I can also argue that a biologically important difference in any of the measured relative gene expressions would be at least 5 delta-Ct units. The largest standard deviation among the measurements was 2.1, for one of the gene expressions. Given these assumptions, I can calculate that my study had essentially 100% power to detect a clinically important difference of 5 delta-Ct units, given the observed variability (SD of 2.1), a sample size of 20 per group, and an alpha level of 0.05. Is this a valid power analysis supporting the expert opinion for this sample size?
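To make that calculation explicit, here is a minimal sketch using only the Python standard library. It uses a normal approximation to the two-sample t-test (adequate at 20 per group); the function name and the approximation are my own choices, not from any particular package:

```python
from math import sqrt
from statistics import NormalDist

_nd = NormalDist()

def two_sample_power(delta, sd, n_per_group, alpha=0.05):
    """Approximate power of a two-sided two-sample test for a true
    mean difference `delta`, common SD `sd`, equal group sizes.
    Normal approximation to the t-test (fine for n around 20)."""
    d = delta / sd                          # standardized effect size
    ncp = d * sqrt(n_per_group / 2)         # noncentrality parameter
    z_crit = _nd.inv_cdf(1 - alpha / 2)     # two-sided critical value
    return _nd.cdf(ncp - z_crit) + _nd.cdf(-ncp - z_crit)

# Values from the post: smallest important difference of 5 delta-Ct
# units, largest observed SD of 2.1, 20 patients per group
print(two_sample_power(5, 2.1, 20))
```

With these inputs the standardized effect size is about 2.4, so the computed power is effectively 1.0, which matches the "essentially 100%" claim.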
 

ondansetron

TS Contributor
#3
In general, post-hoc power/sample-size calculations are only valuable for helping plan a future study. For the current study, they add no new information, since a post-hoc power calculation is just a transformation of the p-value.
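To see concretely that observed ("post-hoc") power is just a transformation of the p-value, here is a small stdlib-Python illustration for a two-sided z-test (a simplification of the t-test case; the function name is mine):

```python
from statistics import NormalDist

_nd = NormalDist()

def observed_power(p_two_sided, alpha=0.05):
    """Observed power of a two-sided z-test, computed from the
    p-value alone -- no data needed, so it carries no extra info."""
    z = _nd.inv_cdf(1 - p_two_sided / 2)    # |z| implied by the p-value
    z_crit = _nd.inv_cdf(1 - alpha / 2)     # two-sided critical value
    return (1 - _nd.cdf(z_crit - z)) + _nd.cdf(-z_crit - z)

# A result exactly at p = alpha always maps to roughly 50% observed
# power, whatever the data were
print(observed_power(0.05))
```

Because the mapping from p to observed power is one-to-one (for a given alpha), reporting observed power tells the reader nothing the p-value did not already say.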

You are also correct that you can't use p-values or CIs to argue there is no difference. You can look into Bayesian inference to generate a probability that there is no difference. Maybe someone else can chime in here.
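As a starting point for the Bayesian route, one common shortcut is the BIC approximation to the Bayes factor (Wagenmakers, 2007). Here is a stdlib-only sketch for two independent groups; treat it as one illustrative option under simplifying assumptions (equal variances, the BIC approximation itself), not the definitive analysis:

```python
import math
from statistics import fmean

def bic_bayes_factor_01(group_a, group_b):
    """BIC-approximation Bayes factor BF01 for two independent groups:
    evidence for H0 (one common mean) over H1 (two separate means).
    Assumes non-constant data so the residual sums of squares are > 0."""
    pooled = list(group_a) + list(group_b)
    n = len(pooled)
    grand = fmean(pooled)
    sse0 = sum((x - grand) ** 2 for x in pooled)        # H0: one mean
    ma, mb = fmean(group_a), fmean(group_b)
    sse1 = (sum((x - ma) ** 2 for x in group_a)
            + sum((x - mb) ** 2 for x in group_b))      # H1: two means
    # BIC difference; H1 pays a penalty for one extra parameter
    delta_bic = n * math.log(sse1 / sse0) + math.log(n)
    return math.exp(delta_bic / 2)                      # BF01 > 1 favours H0
```

A BF01 above 1 means the data favour the null; values above about 3 are conventionally read as positive evidence for no difference, which is the kind of statement a p-value cannot give you.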
 

hlsmith

Not a robit
#4
@Markica85

I see other issues: if you were looking for multiple possible differences, you have a greater risk of false discovery, which needs to be corrected for. Post-hoc power analyses are biased as well. Does knowing the gene information change treatment or prognostic approaches? Look into the GWAS literature for guidance.
 