Dice roll results vs. probability

Hi all!

I'm trying to understand what the acceptable deviation of real results from the theoretical probability of 16.7% (1/6) is when rolling one six-sided die n times, so that I can still state the die is not biased. My question is:

For example, I roll the die 10, 100, 1000, and 10000 times and get the following results

How can I determine an acceptable delta (or at least some approximation) for each number of rolls so I can state the die is fair? Or what is the right way to think about it? Please keep it as simple as possible, as I'm taking my first steps into probability.

Thank you in advance


Well, given the law of large numbers, the percentages should asymptotically converge to the truth. I will point out that your rolls, if physical, may not hit the 16.67% value exactly, given environmental and physical factors and the shape of the die.
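To see the convergence in action, here's a quick sketch (my own simulation, not anything from the thread) that rolls a simulated fair die the same counts you mentioned and compares the observed frequency of one face against 1/6:

```python
import random

# Roll a simulated fair six-sided die n times and track how often
# one face (here, a 1) comes up, for increasing n.
random.seed(42)  # fixed seed so the run is reproducible

for n in (10, 100, 1000, 10000):
    rolls = [random.randint(1, 6) for _ in range(n)]
    observed = rolls.count(1) / n
    print(f"n={n:>6}: observed {observed:.4f}  (theoretical {1/6:.4f})")
```

You should see the observed frequency wander quite a bit at n=10 but settle close to 0.1667 by n=10000, which is the law of large numbers doing its work.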

If you think of the binomial as a bunch of Bernoulli trials (yes/no), then for any side you could use the binomial distribution, assume the true probability is 16.67%, and run a test to see how likely results like yours would be under that assumption. If that probability falls below a threshold you set for "too low," you can conclude the die is probably not fair; otherwise you cannot rule out that it is fair.
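That test can be done by hand with the binomial formula. Here's a minimal sketch (the counts 30-out-of-100 are made-up example numbers, not your data) of an exact two-sided binomial test using only the standard library:

```python
from math import comb

def binom_pmf(k, n, p):
    """Probability of exactly k successes in n Bernoulli(p) trials."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def two_sided_pvalue(k, n, p):
    """Sum the probabilities of all outcomes at least as unlikely
    as the observed count k (exact two-sided binomial test)."""
    pk = binom_pmf(k, n, p)
    return sum(binom_pmf(i, n, p) for i in range(n + 1)
               if binom_pmf(i, n, p) <= pk + 1e-12)

# Hypothetical example: a face came up 30 times in 100 rolls.
pval = two_sided_pvalue(30, 100, 1/6)
print(f"p-value = {pval:.4f}")
# A small p-value (below whatever threshold you pick, commonly 0.05)
# means results this extreme would be rare if the die were fair.
```

The threshold (often called the significance level) is exactly the "too low" cutoff described above: it's your choice, and 0.05 is just a common convention.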