Relationship between probability of failure and standard deviation


I have run into the following problem. I am running a test on a sample of size N. The test only detects whether each sample passes or fails. If X samples fail the test, then the failure rate is (X/N)*100 percent. What I want to know is: what standard deviation, or sigma level, does this failure rate correspond to?
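In case it helps frame the question: one common convention maps a failure rate to the z-score at which the upper tail of a standard normal distribution has that much area. Here is a minimal sketch of that mapping using Python's standard library; note that "Six Sigma" tables often add an extra 1.5-sigma shift, which this sketch deliberately omits.

```python
from statistics import NormalDist

def sigma_level(failures: int, n: int) -> float:
    """z-score whose upper-tail area under the standard normal
    equals the observed failure proportion (no 1.5-sigma shift)."""
    p = failures / n  # failure rate as a proportion, not a percent
    return NormalDist().inv_cdf(1 - p)

# Example: 7 failures out of 1000 samples (0.7% failure rate)
print(sigma_level(7, 1000))
```

For 7 failures in 1000 samples this gives roughly 2.46 sigma; a 50% failure rate gives 0 sigma, as expected.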

I'd appreciate any help I can get.