Observational Study

noetsi

Fortran must die
#1
"In an observational study of more than 96,000 COVID-19 patients at 671 hospitals on six continents, the malaria drug hydroxychloroquine was tied to an increased risk of death, researchers reported on Friday. It was not clear whether taking the drug provided patients with any benefit, according to their paper in The Lancet. Overall, 14,888 patients received hydroxychloroquine or chloroquine, with or without an antibiotic, and 81,144 did not receive those drugs."

So how valid are these kinds of observational findings?
 

hlsmith

Less is more. Stay pure. Stay poor.
#3
I have seen this basic synopsis as well. I will perhaps skim the paper at some point - but I think the bigger thing is that the totality of research on the drug so far has not shown much benefit, and the drug comes with cardiac risks. There are a lot of procedures for observational data these days, where the data are analyzed as emulated target trials. It is true that you cannot be sure there isn't a lurking confounder, though you can conduct sensitivity analyses to quantify how big an effect it would need to be to negate the results. However, in this scenario the confounder would only have to move the bound of the confidence interval (not too far) to the other side of the null value to make the statement that you failed to show a difference for the cardiac risks.
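For what it's worth, the usual back-of-the-envelope version of this is the E-value of VanderWeele & Ding (2017): the minimum strength of association, on the risk-ratio scale, that an unmeasured confounder would need with both treatment and outcome to fully explain away an observed association. Strictly it is defined for risk ratios, so applying it to a hazard ratio is itself an approximation (reasonable when the outcome is not too common). A minimal sketch - the numbers below are made up for illustration, not taken from the paper:

```python
import math

def e_value(rr):
    """E-value (VanderWeele & Ding, 2017): minimum strength of association,
    on the risk-ratio scale, that an unmeasured confounder would need with
    BOTH treatment and outcome to fully explain away an observed ratio rr."""
    if rr < 1:
        rr = 1.0 / rr  # protective estimates are flipped first
    return rr + math.sqrt(rr * (rr - 1))

# Hypothetical numbers for illustration -- not the Lancet paper's estimates.
hr_point, hr_lower = 1.34, 1.22           # point estimate and lower CI bound
print(round(e_value(hr_point), 2))        # ~2.02: strength needed to nullify the estimate
print(round(e_value(hr_lower), 2))        # ~1.74: strength needed just to move the CI to the null
```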
 

hlsmith

Less is more. Stay pure. Stay poor.
#4
@noetsi

Well, I am skimming the article. It was based on a registry of data reported from participating centers. One may want to think about which institutions would participate in such a process. There were several treatments examined. They controlled for lots of demographics and comorbidities. However, this doesn't mean there could not have been another confounding variable (e.g., confounding by indication).

As presumed, a sensitivity analysis was conducted to see how large a confounder it would take to nullify the significant risks. It would take a confounder with a hazard ratio around 1.5-2.0. A general rule is to ask whether any of the controlled-for confounders are that large or larger. If not, it would take an unknown confounder stronger than any confounder we know about to nullify the results. The only variables with this large an influence were the treatments being examined. However, I will note that a confounder or a set of confounders with HRs between 1.5-2.0 could exist or be manifested through systematic errors. But big picture, that would just nullify the results, not make the investigated drug groups look superior.
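To connect those numbers back to the estimate: using the same E-value formula as in the earlier sketch (again illustrative, with made-up lower bounds rather than the paper's), a nullifying confounder strength in the 1.5-2.0 range corresponds roughly to lower confidence bounds in the 1.1-1.3 range:

```python
import math

# Illustrative only: which lower CI bounds map to E-values near 1.5-2.0?
for ll in (1.1, 1.2, 1.3):
    e = ll + math.sqrt(ll * (ll - 1))
    print(f"lower bound {ll:.1f} -> E-value {e:.2f}")
# lower bound 1.1 -> E-value 1.43
# lower bound 1.2 -> E-value 1.69
# lower bound 1.3 -> E-value 1.92
```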
 
#5
a sensitivity analysis was conducted to see
That's interesting, I have not seen 'tipping point' analysis before. According to Table 2, treatment was almost as harmful as being black, but not quite. I can't tell whether those are adjusted estimates?

I think the main challenge to this type of evaluation is that the decision to treat may be caused by poor health, and not the reverse. They tried to control for this. I could have sworn there was talk on TV about randomized trials of this? Well, if it agrees with the randomized findings, it would be back to the drawing board for these drugs.
 

hlsmith

Less is more. Stay pure. Stay poor.
#6
So their outcome was in-hospital mortality (which could include other things - though probably not many unrelated to the disease course). They said that to shift the lower bound of the hazard ratio and nullify the result, it would require not accounting for an unknown/undocumented confounder about the size of congestive heart failure - which they controlled for. Well, such a thing could exist as a constellation of misclassification, selection bias, and confounding. However, that would just get you to nullification. It would take a very large issue to completely flip the results.

Yeah, without reading every line of the paper I would have to say the lower bound was for an adjusted estimate. They controlled for the basic confounders of interest (including disease severity), but yeah, there definitely could have been some confounding by indication. Yes, there are some trials on this - I believe. Given this and other preliminary data - I would say the outlook is not optimistic at this point.

The 1959 Cornfield et al. paper on the association of smoking with lung cancer was the first documented use of external sensitivity analysis. I have not seen it labeled as 'tipping point' though. I could see an issue with that phrase, given such things are probabilistic and not deterministic.
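For the historical flavor, Cornfield's original argument can be written as a simple inequality. A minimal sketch - my paraphrase of the classic condition, with hypothetical numbers:

```python
# Cornfield's condition (sketch): for a binary confounder U to fully
# explain an observed relative risk RR when the true effect is null,
# U must be at least RR times more prevalent among the exposed,
#     P(U | exposed) / P(U | unexposed) >= RR,
# and U's own association with the outcome must also be at least RR.
def could_explain(prev_exposed, prev_unexposed, observed_rr):
    """Is the confounder's prevalence ratio large enough to explain observed_rr?"""
    return prev_exposed / prev_unexposed >= observed_rr

# Hypothetical numbers: a confounder 3x more common among the exposed
# clears the bar for an observed RR of 2, but not for an RR of 9
# (roughly the smoking/lung-cancer magnitude Cornfield was defending).
print(could_explain(0.30, 0.10, 2.0))  # True
print(could_explain(0.30, 0.10, 9.0))  # False
```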
 

noetsi

Fortran must die
#7
Can we make your question more vague
I was asking about the validity of these kinds of studies in general. How much confidence can we put in observational studies? And hlsmith addressed that.

No statistical test, including ones with random assignment, can get rid of the possibility of confounds. Some are better than others.
 

hlsmith

Less is more. Stay pure. Stay poor.
#8
Observational studies aren't that bad. The ideal setting is when you can intervene on a variable and have temporal collection of data. Of note, even small randomized studies can have residual confounding. Though when it is impractical or unethical to randomize some treatments, observational studies provide some information to fill the void. Smoking, sun exposure, and asbestos cause cancer; alcohol causes fetal alcohol syndrome/effects. These topics have to be studied observationally, and if the results are consistent, have a temporal relationship, show some semblance of dose response, and are biologically plausible, evidence gets built.
 
#9
Of note, even small randomized studies can have residual confounding
I've sort of come to feel that the superiority of RCTs stems as much from the C as from the R. With big data, there can be wackiness of all sorts, and it is pretty easy to manage the data into saying what they are supposed to without some independent auditing. Same for pre-specified analysis plans, or the lack thereof.
 

hlsmith

Less is more. Stay pure. Stay poor.
#11
Since events occur after random assignment, and individuals drop out of studies, you are always going to have confounds.
Well, you are going to have bias. Epidemiologists usually break issues into confounding, information bias (measurement error/misclassification), and selection bias. Dropout may result in selection bias if it is systematic.
 
#12
Since events occur after random assignment, and individuals drop out of studies, you are always going to have confounds.
Check and mate, I guess. Well, I'm ready to wrap this thread up. The question was too vague, I think....