Calculate the number of trials to distinguish between 2 binomial distribution models

Hello,

This sounds like a problem that has to be well known, but I can't find a good answer...

I have two competing models for the fraction of special objects ('n') in a sample ('N'). One model predicts that a fraction f = n/N = 0.15 of the objects are special; the other predicts f = 0.3. I.e., both models are binomial distributions in which there are N - n normal objects and n special objects. For now let's assume these fractions carry no uncertainty of their own, but if it is easy to include, it would be useful to allow each to be quoted as f +/- some amount.
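(To be explicit, what I mean is that under either model the number of special objects found in a sample of N should follow the binomial distribution P(n | N, f) = [N! / (n! (N-n)!)] f^n (1-f)^(N-n), with f = 0.15 for one model and f = 0.3 for the other.)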

Suppose I run an experiment in which I look at a sample of 'M' objects and find 'm' special ones, with some associated uncertainty in my measured fraction of special objects. How would I decide the minimum number of objects 'M' to look at in order to distinguish between the models at some given confidence level (say 3 sigma)?
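To make the question concrete: ignoring measurement error for a moment, I think the textbook normal-approximation answer for testing f = 0.15 against f = 0.30 looks something like the Python sketch below (scipy is only used for the normal quantiles). I am not at all sure this is the right framework, which is partly why I am asking.

Code:
from math import sqrt
from scipy.stats import norm

def min_samples(f0, f1, alpha, power):
    """Normal-approximation sample size for a one-sided test of a
    binomial fraction f0 against the alternative f1, at significance
    alpha and with the requested power."""
    z_a = norm.ppf(1 - alpha)   # critical z under the f0 model
    z_b = norm.ppf(power)       # z needed for the stated power under f1
    num = z_a * sqrt(f0 * (1 - f0)) + z_b * sqrt(f1 * (1 - f1))
    return (num / abs(f1 - f0)) ** 2

alpha = norm.sf(3)                                 # one-sided 3 sigma, ~1.35e-3
print(min_samples(0.15, 0.30, alpha, 1 - alpha))   # ~266 objects for these numbers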

In other words, I want to determine at some confidence level which of the models is correct, and I want to do so as efficiently as possible while allowing for the uncertainty in my experiment (say, for example, that I misidentify some fraction of the objects, but that I can estimate the size of that error).
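The only concrete way I can picture folding that in is a toy Monte Carlo along the lines of the sketch below, where 'eps' is just a made-up stand-in for my real misidentification probability (assumed, for simplicity, to act symmetrically on special and normal objects). Again, I don't know whether this is the proper way to treat it.

Code:
import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(0)

def simulate_power(f_true, f_null, M, eps, alpha=1.35e-3, n_trials=20000):
    """Fraction of simulated size-M experiments that reject the f_null
    model at significance alpha when f_true is the correct model, with
    each object independently misidentified with probability eps."""
    # Misidentification shifts the fraction actually observed: true specials
    # are missed with prob eps, normals are falsely flagged with prob eps.
    f_obs = f_true * (1 - eps) + (1 - f_true) * eps
    f0_obs = f_null * (1 - eps) + (1 - f_null) * eps
    m = rng.binomial(M, f_obs, size=n_trials)   # simulated counts of special objects
    p_vals = binom.sf(m - 1, M, f0_obs)         # one-sided P(X >= m) under the f_null model
    return np.mean(p_vals < alpha)

# Scan M until, say, 99.9% of the simulated experiments reject the wrong model.
for M in range(50, 1001, 50):
    if simulate_power(f_true=0.30, f_null=0.15, M=M, eps=0.05) > 0.999:
        print("smallest scanned M that works:", M)
        break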

Ideally, this would give me something as general as possible, where I can plug in the uncertainty in my measurements, the uncertainty in the models, and the degree of confidence I require to distinguish between them, and see the effect of changing each of these parameters.

Thanks in advance

James
PS: Efficiency matters here because these are astrophysical observations and telescope time is expensive, so finding the minimum sample size needed to reach a given confidence in distinguishing the models really is the key point.