Uncertainty & confidence for pass/fail.

Say I am testing a single-cylinder gasoline engine. Count1
is the number of times a spark command is sent to the
cylinder; Count2 is the number of times the fuel-air
mixture actually ignites and does its work. Then (Count1 -
Count2) / Count1 is the probability of a missed command;
call this Pmc. The engine is very reliable, so it must run
for a long time to accumulate a significant number of
missed commands. I only want to run it long enough to
achieve a given confidence or uncertainty level for Pmc.
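To make the setup concrete, here is a minimal sketch with made-up counter values (the numbers are assumptions for illustration, not real data):

```python
# Hypothetical counter values, for illustration only.
count1 = 10_000_000  # spark commands sent
count2 = 9_999_950   # successful ignitions

missed = count1 - count2  # missed commands observed
pmc = missed / count1     # probability of a missed command
print(f"missed = {missed}, Pmc = {pmc:.2e}")
```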

An industry-standard publication gives the following
definitions (without further explanation or references)
for determining how many missed commands are required:

1) Percent of uncertainty = 100*2/SQRT(Count1 - Count2).

2) Probability of getting the event observed, with a
confidence level of 95% = Pmc*(1 +- 2/SQRT(Count1 - Count2)).
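For what it's worth, both definitions look to me like they might be the two-sigma (~95%) rule for a Poisson-distributed count: the standard deviation of a count N is SQRT(N), so 2/SQRT(N) is a two-sigma relative uncertainty, with N = Count1 - Count2 the number of missed commands. A sketch of the two formulas under that reading (the function names are mine):

```python
import math

def percent_uncertainty(count1, count2):
    # Definition 1): relative uncertainty in percent,
    # with N = Count1 - Count2 missed commands observed.
    return 100 * 2 / math.sqrt(count1 - count2)

def pmc_interval_95(count1, count2):
    # Definition 2): approximate 95% interval for Pmc,
    # Pmc * (1 +/- 2/SQRT(Count1 - Count2)).
    missed = count1 - count2
    pmc = missed / count1
    half = 2 / math.sqrt(missed)
    return pmc * (1 - half), pmc * (1 + half)
```

For example, with 100 observed missed commands, percent_uncertainty gives 20, i.e. Pmc would be known to within roughly +/-20% of itself.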

For 2), I want to be able to set the confidence level
lower than 95%.

I have looked through numerous statistics books, books on
experimental design, and quality-control books, and did an
internet search, but I can't find a close match for 1) or
2). The word "uncertainty" is used a lot, but I found no
mathematical definition of it. Someone said that it is
1 - confidence. Is that true?

I did find definitions of confidence limits with the SQRT
of the number of samples in the denominator, but they
required the standard deviation of the sample, and
everything I saw involved continuous (analog) data rather
than binary pass/fail data.
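Here is how I would code those continuous-data limits, adapted to pass/fail by substituting the binomial standard deviation SQRT(p*(1-p)/n) for the sample standard deviation (that substitution is my guess, and the z parameter is my way of setting a lower confidence level):

```python
import math

def wald_interval(count1, count2, z=2.0):
    # Normal-approximation limits for a pass/fail proportion:
    # p +/- z * sqrt(p * (1 - p) / n), with n trials and
    # p = failures / n. z = 2 is roughly 95% confidence;
    # a smaller z gives a narrower interval at lower confidence.
    n = count1
    p = (count1 - count2) / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p - half, p + half
```

When p is small, the relative half-width half/p is approximately z/SQRT(n*p) = z/SQRT(Count1 - Count2), which matches definition 1) at z = 2, but I would like a reference that confirms this is what the publication means.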

So, where would I find a practical, applications-oriented
definition of confidence and uncertainty for this type of
problem?