Hi all,
I haven't done any stats for a long time, but I'm looking to incorporate some statistics into my Ops Risk role to get a little edge. I think I'm on the right track with this first attempt, but I just wanted to run it by the forum for any feedback.
At work, we are starting a new program in which we review all critical risks for each division and then sample-test the controls for those risks to ensure they are working as expected. We currently sample-test a flat 10%, but I thought it would be a good idea to use a statistically justified sample size instead. The test results are a simple pass/fail.
Obviously, to get complete accuracy we would have to test 100% of the population. To get comfort that we are testing an appropriate sample size, the bank needs to agree on a margin of error and a confidence level, which together determine the sample size. Do you agree with this approach?
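To make this concrete, here is a rough sketch of what I had in mind (Python), using the standard normal-approximation sample-size formula for a proportion with a finite population correction. The numbers (a population of 500 controls, 95% confidence, 5% margin of error, worst-case p = 0.5) are just placeholders, not our actual figures:

```python
import math
from statistics import NormalDist

def sample_size(population, confidence=0.95, margin_of_error=0.05, p=0.5):
    """Sample size for estimating a pass/fail proportion.

    Standard normal-approximation formula with a finite population
    correction. p = 0.5 is the conservative (worst-case) assumption.
    """
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)    # two-sided z-score
    n0 = (z ** 2) * p * (1 - p) / margin_of_error ** 2    # infinite-population size
    n = n0 / (1 + (n0 - 1) / population)                   # finite population correction
    return math.ceil(n)

# e.g. 500 control instances, 95% confidence, +/-5% margin of error
print(sample_size(500))   # about 218, versus 50 under a flat 10% rule
```

Is that the right kind of calculation for this situation, or am I missing something (e.g. should I be thinking in terms of acceptance sampling instead)?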
Also, any tips or suggestions for choosing a margin of error and confidence level when the results are a simple pass/fail?
Help much appreciated.
Jono