Is this study clinical trial or retrospective cohort?

#1
Hi

We have a set of patients with dental implants in their mouths. We want to evaluate whether the implants in each patient's mouth are as sensitive to tactile stimuli as that same patient's natural teeth.

So it could be a retrospective cohort, because we have split the sample based on the exposure (implant or natural tooth), not the outcome (sensitivity to tactile stimuli).

But I think it could also be a clinical trial, because we are evaluating the effect of an intervention (implantation) on sensitivity to tactile stimuli. I mean, the exposure here is an intervention.

Please note that the implants were not inserted during the study; they had been placed beforehand (though I personally think this doesn't change the view that implantation is a kind of intervention rather than a mere exposure).

Please be so kind as to give me a short answer:

Is this study a cohort or a clinical trial?

Your kind help is highly appreciated.
 

Link

Ninja say what!?!
#2
It doesn't matter if it's an intervention or not. To be a clinical trial, it has to be planned beforehand as such and carried out that way.
 

Link

Ninja say what!?!
#4
It's neither - it's a cross-sectional study. A cohort study means you've followed people up over time.
I have to respectfully disagree. In a cross-sectional study, you wouldn't know whether the exposure came before the outcome. Here, you do. To be more technical, I'd call it a point treatment study.
 

bukharin

RoboStataRaptor
#5
Thanks Link, but I also have to respectfully disagree. Unless you tested sensitivity before the implants were placed, you do not know that the exposure preceded the outcome. Depending on the indication for implantation, it's entirely plausible that sensation was abnormal before the implantation.
 

Link

Ninja say what!?!
#6
Hi bukharin. Thanks for clarifying.

Taking a closer look at it, I'd still have to say point treatment.

The exposure here is "dental implants in their mouth", and we're told in the initial post that they already have them. We're comparing an outcome of "sensitivity to tactile stimuli" between them and people who don't have implants.

I do, however, agree it's plausible that sensation was abnormal prior to the implantation. Unfortunately, that's one of the criticisms of both retrospective cohort and point treatment studies (among other limitations). The only way to overcome this is a prospective study in which you can measure the outcome before the intervention is implemented, or a randomized controlled trial (both of which are very expensive).

In a cross-sectional study, a researcher gathers data and analyzes the exposure's relationship to the outcome without knowing which came first. Confounding is a big issue here. As an example, take the "healthy worker effect": in a cross-sectional analysis of coal factory workers and respiratory problems, the healthy workers are at the factory, exposed to the air pollution, while the sick workers are at home resting, unexposed. But precisely because the workers at home are sick, they are more likely to have respiratory problems. This type of study gives a much weaker causal connection between the air pollution and the respiratory problems.
 
#7
Unless you tested the sensitivity before the implants, you do not know that the exposure preceded the outcome. Depending on the indication for implantation it's entirely plausible that the sensation was abnormal prior to the implantation.
Thanks a lot, dear Bukharin. Yes, this is correct and can be considered a confounding factor. However, there are two strategies to control for it and rule it out. First, a significant difference between the average results from several patients clearly indicates that such a change in sensitivity has happened in most of them, implying that it is due to implantation rather than prior abnormal sensitivity (which could not occur in most of them by chance alone). Second, we are going to assess patients who have implants on one side of their mouths and natural teeth on the other side, so it is very unlikely to include [many] patients with normal sensitivity on one side and abnormal sensitivity on the other. Even if, by chance, some patients are like this, their small number cannot affect the mean results.

The exposure here is "dental implants in their mouth", and we're told in the initial post that they already have them. We're comparing an outcome of "sensitivity to tactile stimuli" between them and people who don't have implants.
Thanks, Link. But we compared implants with natural teeth within each subject's mouth.

I do, however, agree it's plausible that sensation was abnormal prior to the implantation. Unfortunately, that's one of the criticisms of both retrospective cohort and point treatment studies (among other limitations). The only way to overcome this is to have a prospective study where you can measure the outcome before the intervention is implemented, or in a randomized controlled trial (both of which are very expensive).
But I think another way exists too. IMHO, a power calculation and statistical significance can indicate whether a difference observed in a retrospective or prospective design is real or not.
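For what it's worth, the power calculation mentioned here can be sketched for a paired design, where the paired differences reduce to a one-sample t test. This is only an illustration of the commenter's idea, using `statsmodels`; the effect size of 0.8 is an assumed value, not one taken from any of the studies discussed:

```python
# Sketch of a power calculation for a paired (split-mouth) design.
# The paired differences reduce to a one-sample t test, so TTestPower
# applies. Cohen's d = 0.8 is an assumed effect size for illustration.
from statsmodels.stats.power import TTestPower

n_needed = TTestPower().solve_power(effect_size=0.8, alpha=0.05,
                                    power=0.9, alternative='two-sided')
print(f"patients needed: {n_needed:.1f}")
```

Note that a power calculation addresses sample size and sensitivity to detect a difference; as the replies below discuss, it does not by itself rule out confounding.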

Besides, even if we measure sensitivity before implantation (in a prospective design), there is no guarantee that a change in sensitivity observed after implantation is due only to the intervention. Abnormal sensitivity may develop at any time. If we worry that it could exist in a retrospective design, we should also worry that it might develop later for the same unknown reasons; confirming its absence before the intervention does not ensure it will not appear afterwards.

Thank you both. I learned a lot here.

--------------------

When reviewing the literature I found studies similar to my design. The authors had called their studies clinical trials (not retrospective cohorts). Can anybody explain more?

-----------------

Hi again

I would be grateful if I could have your valuable opinion.

I gathered a long list of studies similar to the one mentioned in the opening post. Out of 6 studies, 3 had called their design a clinical trial, and the rest had not named their design at all. I know it might fail to meet every requirement of a clinical trial, or, as Link pointed out, it might be a point treatment study, but apparently dental journals don't know or don't care, or perhaps it actually is a clinical trial (probably because those studies assessed the effect of an intervention [implantation] on a response [tactile sensitivity], which falls, at least partly, within the definition of a clinical trial).

But I have another question now.

First, let me briefly repeat the design:
We pooled patients with single implants in their mouths, measured the tactile sensitivity of each implant and its counterpart natural tooth in each subject, and then compared the average values of the implants and the teeth with a paired t test. The similar studies had done the same, and they called their design a split-mouth clinical trial.
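Since the comparison is within-subject, the analysis described here can be sketched with a paired t test. A minimal example in Python using `scipy`, with invented detection-threshold values (in microns) that are illustrative only, not from any of the studies:

```python
# Hypothetical data: minimum detectable foil thickness (microns) for
# the implant side and the counterpart natural-tooth side in each of
# 8 made-up patients. Values are for illustration only.
from scipy import stats

implant_um = [60, 55, 72, 48, 66, 59, 70, 52]  # implant side
tooth_um = [22, 18, 30, 15, 25, 20, 28, 17]    # natural-tooth side

# Paired t test: each patient serves as their own control (split-mouth)
t_stat, p_value = stats.ttest_rel(implant_um, tooth_um)
print(f"t = {t_stat:.2f}, p = {p_value:.4g}")
```

The pairing is what makes the split-mouth design attractive: between-patient variability cancels out in the within-patient differences.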

Now, let's assume that those studies (and mine) really are clinical trials (despite the valuable clarifications you made above, which showed that they are not 100% clinical trials because baseline values are missing - but please assume it is a clinical trial so we can move to the next question).

My question concerns the fact that they had also called their study a split-mouth randomized clinical trial.

I know that a randomized trial needs random assignment of treatment to subjects; for example, we can pool 20 subjects, administer some drug to 10 randomly chosen ones, and give the other 10 a placebo.

However, in those "randomized split-mouth clinical trials", the reason they called their study "randomized" was not the routine one presented above (or in the examples available on the net). It was not "we split the jaw into right/left sides and randomly assign a treatment (implant) to one of them". There was no randomization in the assignment of treatment (implantation) to each half-jaw [= quadrant]. Instead, the randomization was in the measurement procedure. For details: measurements were made by placing foils between the teeth (or between an implant and a tooth) and asking the patient to bite on the foils and declare whether he/she sensed a foreign body. The foils ranged in thickness from extremely thin to easily detectable. The patients' responses allowed the researchers to estimate the average tactile sensitivity of implants and teeth in microns.

They had randomized the order of foil thicknesses to prevent the patient from using guessing strategies and so exclude that bias, and based on this particular randomization they called their study a randomized one. It was not stated whether the randomized order was the same for the implant and the tooth in each patient (and that was outside their [or my] interest), but all foil thicknesses had been used on both implants and teeth (though in random order). I am not sure whether that kind of randomization matches the one in the definition of a randomized controlled trial, as I didn't find any similar study among the RCT examples available on the net. As far as I understand, an RCT needs random assignment of treatment, not a random order of the measuring tools.
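The distinction being drawn here - randomizing the treatment assignment versus randomizing the measurement order - can be sketched as follows. The subject IDs and foil thicknesses below are hypothetical, not taken from the studies:

```python
import random

rng = random.Random(42)  # fixed seed so the sketch is reproducible

# (a) Treatment randomization, as in a classic RCT: 20 hypothetical
# subjects, 10 randomly assigned to the drug and 10 to the placebo.
subjects = [f"S{i}" for i in range(1, 21)]
shuffled = subjects[:]
rng.shuffle(shuffled)
drug_group, placebo_group = shuffled[:10], shuffled[10:]

# (b) Measurement-order randomization, as in the split-mouth studies:
# the same set of foil thicknesses (microns) is presented in a random
# order so the patient cannot anticipate the next thickness.
foil_um = [8, 16, 24, 32, 40, 48, 56, 64]
presentation_order = foil_um[:]
rng.shuffle(presentation_order)

print("drug:", drug_group)
print("placebo:", placebo_group)
print("foil order:", presentation_order)
```

Only (a) randomizes the exposure itself; (b) is good measurement practice, but the treatment assignment remains non-random.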

I am a little confused, especially since accredited journals have published those articles, so they are less likely to be incorrect. I would appreciate any help.

-----------------------------------


A second question: if those were correctly randomized studies, could the fact that "the patients/observers did not know the order of foil thicknesses" make them "double-blind randomized"? A double-blind design usually refers to a setup where neither the observer nor the patient knows whether a drug or a placebo is being used. In my case, however, the treatment and control are obvious, and the only thing unknown to the observer and patient is the order of the foils.
 

Link

Ninja say what!?!
#8
It bothers me a little that they managed to get away with that. Randomizing measurements is a good practice, but it should not define the study as a randomized clinical trial. Their effort at up-selling it can mislead readers into assuming it actually was one.

A more common and technical term (which may explain how they got away with it) is randomized controlled trial (RCT). You are right that in an RCT the treatment is randomized across the study population, and there is also a control group for comparison with the treatment. I'm guessing that by calling it a randomized clinical trial, they successfully argued that it was merely a clinical trial with some form of randomization present. I should keep this in mind for future reference.

Your second question is also a good one. Typically, it is the treatment that is blinded to the patient and assessor. The purpose of blinding is to prevent measurement and reporting bias. Here, the measuring tools are blinded to the patient and assessor (assuming that neither can visibly discern the thickness). Though it's presumed that both the patient and assessor know which quadrant has the treatment, this method would still (at the very least) attempt to prevent measurement and reporting bias. Hence, I can see a reasonable argument for calling it double-blind.

Personally, though, I would not label it at all and would just state directly what was done.
 
#9
I know this is late and probably settled, but it is a clinical trial because you are "testing" something on the subjects/patients (in this case, tactile sensitivity), unless you are doing it with an interview or questionnaire. A retrospective study involves only medical chart examination, with possible (but unlikely) patient interviews or surveys; it looks at what has already happened. Large retrospective studies are performed regularly by the FDA and by companies such as United Health Group, Quintiles, and United Bio Source, generally involving claims-related medical information, healthcare group medical charts, and the like.

A retrospective cohort is not exactly possible here, since a cohort study involves following a certain group of people over a period of time, generally compared with a different group followed over the same period (e.g., one group exposed to nuclear waste, the other not - an extreme but relevant example).

It could, in a sense, be called a cross-sectional study - if an interview is used and no actual "testing" of the patient is done.