How many events before statistical significance?

Let's say I am testing a machine that can produce several different events, and I know the average success rate of each event when it triggers for a typical machine of this type. I want to test whether each event is succeeding at that average rate or above it.

Event A has on average a 5% chance to succeed when it triggers.

Event B has on average a 15% chance to succeed when it triggers.

Event C has on average a 0.2% chance to succeed when it triggers.

How many times does each event have to trigger before I can conclude, at a given confidence level, that the machine is performing at or above expectations? Assume a significance level of 0.05.
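One standard way to answer this is a power analysis for a one-sided, one-sample proportion test. A minimal sketch using the normal approximation, with two quantities the question leaves open and that I am assuming here for illustration: the above-average rate you want to be able to detect (taken hypothetically as 1.5× the baseline) and the desired power (taken as 80%):

```python
from math import sqrt, ceil
from statistics import NormalDist

def sample_size(p0, p1, alpha=0.05, power=0.80):
    """Normal-approximation sample size for a one-sided one-sample
    proportion test of H0: p = p0 against H1: p = p1 (p1 > p0)."""
    z_a = NormalDist().inv_cdf(1 - alpha)   # critical value for alpha
    z_b = NormalDist().inv_cdf(power)       # quantile for desired power
    n = ((z_a * sqrt(p0 * (1 - p0)) + z_b * sqrt(p1 * (1 - p1)))
         / (p1 - p0)) ** 2
    return ceil(n)

# Hypothetical alternative: detect a 50% relative improvement over baseline.
for name, p0 in [("A", 0.05), ("B", 0.15), ("C", 0.002)]:
    print(f"Event {name}: about {sample_size(p0, 1.5 * p0)} triggers")
```

Note how strongly the answer depends on the baseline rate and the effect size you care about: the rare Event C needs vastly more triggers than A or B to detect the same relative improvement, and a smaller improvement than 1.5× would push all of the numbers up further.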


@Miner - this is all you.

Seems like a run chart thing. The precision (i.e., the width of the confidence interval) is dictated by the sample size: more observations mean more precision. So it may all depend on what you are doing. You could have 2 events and report a confidence interval with them, or you could have 100 observations. Sometimes people choose the number of observations based on the desired precision, i.e. the width of the confidence interval relative to the estimate.
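To make that last point concrete, here is a sketch, under the usual normal-approximation assumption, of how many observations it takes before a 95% confidence interval for a proportion near p narrows to a chosen half-width:

```python
from math import ceil
from statistics import NormalDist

def n_for_halfwidth(p, halfwidth, conf=0.95):
    """Observations needed so a normal-approximation confidence interval
    for a proportion near p has roughly the requested half-width."""
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)  # about 1.96 for 95%
    return ceil((z / halfwidth) ** 2 * p * (1 - p))

# Example: pin down Event A's ~5% rate to within +/- 1 percentage point.
print(n_for_halfwidth(0.05, 0.01))
```

Working backwards from a target interval width like this is exactly the "dictate the number of observations based on precision" approach: decide how tight an estimate you need, then solve for the sample size.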