Confidence Level of a Sorting Method?

We recently had an issue in production. I came up with a test method that appears capable of sorting the bad product out from the good product: when tested, bad product is destroyed and good product is unaffected. First, we took 4 samples that we knew should fail the test, and all 4 did fail (if a part fails, the test is destructive). We then pulled 37 random samples (just because that's what they gave me) from the lot (of, let's say, 500 devices) and put them through the test. All 37 passed. We then took those 37 samples to failure to verify they were in fact good samples, and they were. So it appears the test method is acceptable.
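For what it's worth, here is the kind of calculation I'm imagining for the lot itself: given 37 random draws from the 500 with zero failures, how many bad units could still plausibly be hiding in the lot? This is only a sketch assuming a hypergeometric model (sampling without replacement) and Python with SciPy; I don't know if it's the right framing.

```python
# Sketch only: 95% upper confidence bound on the number of bad units
# remaining in the lot, given 37 random draws with zero failures.
# Assumes a hypergeometric model (sampling without replacement).
from scipy.stats import hypergeom

LOT, SAMPLE, ALPHA = 500, 37, 0.05   # ALPHA = 1 - 0.95 confidence

for bad in range(LOT - SAMPLE + 1):
    # Chance of seeing 0 bad units in the sample if `bad` exist in the lot
    p_all_clean = hypergeom(LOT, bad, SAMPLE).pmf(0)
    if p_all_clean < ALPHA:
        print(f"95% upper bound: ~{bad - 1} bad units "
              f"({(bad - 1) / LOT:.1%} of the lot)")
        break
```

If I've set that up right, zero failures in 37 of 500 only rules out lots with more than roughly 7-8% defective at 95% confidence, which is less comforting than "all 37 passed" sounds.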

I have been asked to provide a statistical confidence in the test method's ability to find bad samples. I came across Sensitivity & Specificity (see the link below), but that basically just told me that the test, based on the data so far, has a sensitivity and specificity of 100% (yay!). I could pull more samples if needed.
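One thing I noticed while reading about sensitivity and specificity: with small samples, a 100% observed rate is not the same as 100% confidence. A one-sided exact (Clopper-Pearson) lower confidence limit seems to be the standard way to express that. A sketch (again Python/SciPy, and I'm open to correction):

```python
# Sketch: one-sided 95% Clopper-Pearson lower confidence limits on
# sensitivity (4/4 bad samples caught) and specificity (37/37 good passed).
from scipy.stats import beta

def lower_limit(successes, trials, conf=0.95):
    """Exact one-sided lower confidence limit for a binomial proportion."""
    if successes == 0:
        return 0.0
    return beta.ppf(1 - conf, successes, trials - successes + 1)

print(f"Sensitivity lower limit (4/4):   {lower_limit(4, 4):.1%}")
print(f"Specificity lower limit (37/37): {lower_limit(37, 37):.1%}")
```

If that's right, 4/4 only demonstrates sensitivity above about 47% at 95% confidence (0.05 ** (1/4) ≈ 0.473), which makes me suspect the 4 known-bad samples are the weak point, not the 37 good ones.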

Is there any way to calculate my confidence in the test, either based on the sample size I selected or on the acceptability of the results? I do have quantitative data (the force at which a part fails) from the destructive tests that could be used in any required calculations.
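And in case the answer is "pull more samples": the zero-failure "success run" relationship C = 1 - R^n seems to give the sample size needed to claim reliability R at confidence C, so n = ln(1 - C) / ln(R). A quick sketch of what that would require (my assumption that this formula applies here, happy to be corrected):

```python
# Sketch: zero-failure ("success run") sample sizes from C = 1 - R**n,
# i.e. n = ln(1 - C) / ln(R).  All n samples must pass the test.
import math

def samples_needed(reliability, confidence=0.95):
    return math.ceil(math.log(1 - confidence) / math.log(reliability))

for r in (0.90, 0.95, 0.99):
    print(f"{r:.0%} reliability at 95% confidence: "
          f"{samples_needed(r)} consecutive passes")
```

By the same formula, my existing 37 passes would demonstrate about 92% reliability at 95% confidence (0.05 ** (1/37) ≈ 0.922), if I've applied it correctly.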

http://www.poems.msu.edu/EBM/Diagnosis/SensSpec.htm