I have written a computer program that runs a Monte Carlo simulation of certain game-playing situations to determine how likely various outcomes are for a given decision. A typical result from 5,000 tests may look like this:
Outcome:    0    1      2      3    4    5   6   7   | Average
Results:   45  558  1,518  1,421  977  396  81   4   | 2.85
The outcome is a measure of success in the game, so the average outcome for a given decision is the key number to look at.
Calculating the standard deviation of the individual results is easy, but what I really want to know is how reliable the 2.85 average is. I.e. I want to calculate a standard deviation for the average itself, so I can say there is a 95% chance of the correct answer being 2.85 +/- 2*SD. The user could then adjust how many tests they run in order to get an average that is as accurate as they need.
How could I do that calculation?
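For concreteness, the quantity described here (the standard error of the mean, i.e. SD / sqrt(n)) can be computed directly from the tabulated counts. A minimal Python sketch, using the counts from the example table above:

```python
import math

# Counts of each outcome 0..7, taken from the example table (n = 5,000 trials)
counts = [45, 558, 1518, 1421, 977, 396, 81, 4]

n = sum(counts)                                    # total number of trials
mean = sum(k * c for k, c in enumerate(counts)) / n

# Sample standard deviation from the tallies (Bessel's correction, n - 1)
sq_dev = sum(c * (k - mean) ** 2 for k, c in enumerate(counts))
sd = math.sqrt(sq_dev / (n - 1))

# Standard error of the mean: the SD of the average itself
sem = sd / math.sqrt(n)

print(f"mean = {mean:.4f}, sd = {sd:.4f}, SEM = {sem:.4f}")
print(f"95% interval ~ {mean:.4f} +/- {2 * sem:.4f}")
```

For this data the average is about 2.852 with a standard error of roughly 0.017, so quadrupling the number of tests would halve the width of the interval.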