Is this a type of randomisation-allocation method for prospective trials?

I'm a student currently running an interventional, comparative trial with human subjects. The plan was for the study to be randomised, using stratified randomisation. However, there were two issues:

1) One of the two interventions is only available for two weeks in May.
2) My project hand-in is in May, and we only started recruiting 8 weeks ago. Sign-up has been slow.

If I waited until all patients were recruited before doing stratified randomisation, I would not get any data in time. So we decided to assign some patients to one group based on when they signed up (bearing in mind they were all invited at the same time and replied in an order we had no control over), with the rest assigned to the intervention in May. There is clearly a random element to this, but how would I describe it, and is there any literature to back it up?

I think I follow, but try describing this again, perhaps with an illustration.
I know it is quite hard to explain. I'll do my best to describe the original plan, give context on why it didn't work, and then set out the new allocation method.

OK, so this is a small pilot study of a new interventional (surgical) procedure vs a control procedure. The original plan was to recruit all patients first and then randomise them to each arm of the study (new intervention or control). This was stratified randomisation, so all the patients needed to be recruited first to see which variables might need controlling. However, for various reasons, we only started recruiting in February. Sign-up was slow: 4 weeks in, we had only 30% of our target sample size, and that target was already small. Because my final write-up for this project is due at the end of May, if we waited for the full sample of patients to join, it would be too late for me to get any data.

Further context: due to the nature of the procedure and some of our inclusion criteria, we have a relatively homogeneous group of patients with the same disease and a similar age range, and being male or female should not affect the procedure.

A final constraint is that we cannot randomise patients as they come in using traditional simple randomisation, because only the control group can have their procedure done at the moment. The new intervention can only be delivered in May, as this is when my senior colleagues are able to bring the equipment needed for it into the unit.

As a result, we decided to allocate those who happened to sign up first to the control arm. This allows me to get data before my project hand-in. There is a random element to this, since the invitations were all sent out at the same time and we have no control over who replies first. However, even after searching around, I am struggling to find a way to describe it. I would appreciate a formal name for this type of allocation, and any literature I can use to back it up (in my head it is like 'uncontrolled' randomisation, as the researchers have no control over the chance of any given patient happening to reply first).

I hope this explains it better. I'd appreciate any help or advice.


Well, first, if you write this up you have to disclose the issues and the fact that the protocol was heavily deviated from. Second, you no longer have randomization, so you have to stop using that term.

The biggest issue is whether subjects who were laggards in scheduling differ on any meaningful characteristic from the timely subjects. You can measure subject characteristics, but you will never know whether there was a lurking variable (an undocumented confounder). The best you can do is compare the subjects on known potential confounders by looking at standardized mean differences. If there is a difference on a characteristic, you can control for it in the analysis, but you will have to consider whether your sample size allows additional variables in the model without it becoming over-saturated. You can also state post hoc how big an unknown factor would have to be in order to negate your results.

Lastly, you could run the standard-of-care intervention on some of the May patients to examine for differences from the April subjects' outcomes, but statistical power comes into play again, since this was not a planned sub-comparison. You need to weigh the benefits to the subjects against whether you can adequately power the study. It is not worth the risk to the subjects if, at the end, you cannot test your hypothesis.
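To make the standardized-mean-difference check concrete, here is a minimal sketch in plain Python. It uses the common pooled-standard-deviation (Cohen's d-style) formula; the baseline variable (age) and all the numbers are made up purely for illustration:

```python
import math

def standardized_mean_difference(group_a, group_b):
    """Difference in means divided by the pooled standard deviation."""
    n_a, n_b = len(group_a), len(group_b)
    mean_a = sum(group_a) / n_a
    mean_b = sum(group_b) / n_b
    # Sample variances (n - 1 denominator)
    var_a = sum((x - mean_a) ** 2 for x in group_a) / (n_a - 1)
    var_b = sum((x - mean_b) ** 2 for x in group_b) / (n_b - 1)
    pooled_sd = math.sqrt(((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2))
    return (mean_a - mean_b) / pooled_sd

# Hypothetical baseline ages for the early-responder (control) arm and the May arm
control_ages = [54, 61, 58, 65, 59, 62]
may_ages = [57, 63, 60, 66, 58, 64]
smd = standardized_mean_difference(control_ages, may_ages)
```

An absolute SMD around 0.1 or more is a common rule of thumb for imbalance worth noting; with a sample this small, though, the estimate itself will be very noisy, which feeds back into the point about how many covariates the model can support.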

The study design is that you provided intervention A in April and intervention B in May - it is that simple.