I am stuck on a probability problem regarding rare events:

Let's say I want to automatically detect cars passing by with a camera. Suppose this already works pretty well, meaning that an error is a rare event (say, 1/100).

I want to find out how much I have to drive around (or better: how many cars I have to detect) to estimate my error rate. I would also like to state the confidence of this estimate. Simply speaking: how many samples (cars) do I need to estimate the error rate at a given level of significance?

When I look at the literature, I always come across hypothesis testing such as t-tests and the like. However, I think I need another approach, since I am not estimating a mean. Even if I observe no errors in 100 events, my true error rate surely does not have to be 0.
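To make that last point concrete, here is a small sketch (my own illustration, not from any reference) of one standard way to bound the error rate when zero errors are observed: for r = 0 errors in N independent detections, the exact one-sided upper confidence bound on the error rate p at level 1 − alpha is 1 − alpha^(1/N), which for 95% confidence is roughly 3/N (the "rule of three").

```python
def upper_bound_zero_errors(n, alpha=0.05):
    """Exact one-sided upper confidence bound on the error rate p
    when 0 errors are observed in n independent trials.

    Derivation: P(0 errors in n trials) = (1 - p)**n. Setting this
    equal to alpha and solving for p gives the bound below.
    """
    return 1.0 - alpha ** (1.0 / n)

# With zero observed errors, the bound shrinks roughly like 3/n:
for n in (100, 300, 1000):
    print(n, round(upper_bound_zero_errors(n), 4))
```

So even a clean run of 100 detections only lets me claim (at 95% confidence) that the error rate is below about 3%, which is exactly why I feel the naive point estimate is not enough.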

I came across this paper (https://www.ling.upenn.edu/courses/cogs502/GoodTuring1953.pdf), which clearly states that the naive estimate r/N (r = observed error events, N = total samples) doesn't make much sense if the event is very rare.

I hope someone can help me out with ideas on how to tackle this problem.

Thank you.