Ideal level of confidence/certainty

Staging the scenario
A casino offers you a gamble with a 1% chance of winning on each try.

How many tries will it take to win at least once? The answer involves two variables: the chance of success on each try, and an allowance to be wrong in exchange for predictive accuracy. For this example I choose 95% confidence, a willingness to be wrong once in twenty:

tries = log(chanceToBeWrong) / log(chanceFailureEachTry) = log(1/20) / log(99%) ≅ 300 tries
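As a sketch of that calculation (the function name and the rounding-up to a whole number of tries are my own choices, not from the thread):

```python
import math

def tries_needed(p_success, confidence=0.95):
    """Smallest number of tries so that P(win at least once) >= confidence."""
    chance_to_be_wrong = 1 - confidence        # e.g. 0.05, wrong once in twenty
    chance_failure_each_try = 1 - p_success    # e.g. 0.99
    # Solve chance_failure_each_try ** tries <= chance_to_be_wrong for tries
    return math.ceil(math.log(chance_to_be_wrong) / math.log(chance_failure_each_try))

print(tries_needed(0.01))  # 299 tries, i.e. roughly the 300 quoted above
```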

The question posed
But surely there must be a better way than arbitrarily selecting a level of confidence. It looks to me like an optimization problem, where tries are an expense to be minimized and confidence spans a range of 0..1.

My first stab at it is to just rearrange the variables to solve for 1-confidence:

chanceToBeWrong = chanceFailureEachTry ^ numTries

But this is a surface of answers; it feels like there's still some missing threshold, some cost/benefit crossover point (plane?).
Can anyone shed some light on how I might calculate the most appropriate level of confidence?
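One slice of that surface can be tabulated directly; here's a minimal sketch (the variable names and the sampled try counts are mine), holding the 1% success chance fixed and sweeping the number of tries:

```python
# For a fixed 1% chance of success per try, each number of tries implies
# a confidence level; sweeping tries traces out one slice of the surface.
chance_failure_each_try = 0.99

for num_tries in (50, 100, 200, 300, 500):
    chance_to_be_wrong = chance_failure_each_try ** num_tries
    confidence = 1 - chance_to_be_wrong
    print(f"{num_tries:4d} tries -> confidence {confidence:.3f}")
```

Note there's no bend or knee in this curve by itself: it rises smoothly toward 1, which is why some external cost/benefit input is needed to pick a point on it.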


Ambassador to the humans
If you're looking to do a cost/benefit analysis then at the very least you would need to know how much you would win if you do in fact win, and how much it costs to play. You haven't indicated knowing either of those, so the problem seems a bit undefined at the moment.
Here's another version of the problem, for clarity.

I can tell you with absolute certainty what the outcome of a roll of two six-sided dice will be — barring meta-events such as the cat batting one under the couch, or the scientist forgetting to record the results: it will be a number in the range 2-12. This is what 100% certainty offers: information on the full spread of outcomes, without reference to their likelihood. It's a timeless and impartial image of the universe in all its possible configurations.

But certainty can be spent on predictive insight, a focusing of our gaze. The cost is an allowance to be wrong: the more confidence spent, the more specific the view of the future, but also the greater the chance it's just a fairy tale.


There seem to be two conflicting forces at play:
  • Correctness, where a higher proportion of possible outcomes is better, reducing the chance of being wrong
  • Accuracy, where a lower proportion of possible outcomes is better, painting a more specific prediction
And so it's the relative importance of Accuracy vs Correctness that would determine the choice of confidence level; in other words, a form of cost/benefit analysis between the two.
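To make the two forces concrete, here's a sketch for the 2d6 example (the candidate ranges are my own picks, not from the thread), listing how much of the outcome space each prediction covers:

```python
from collections import Counter
from itertools import product

# Distribution of the sum of two six-sided dice
counts = Counter(a + b for a, b in product(range(1, 7), repeat=2))
total = sum(counts.values())  # 36 equally likely rolls

# Wider ranges are more Correct (cover more outcomes) but less Accurate
for lo, hi in [(7, 7), (6, 8), (5, 9), (2, 12)]:
    proportion = sum(counts[s] for s in range(lo, hi + 1)) / total
    print(f"predict {lo}-{hi}: right {proportion:.0%} of the time")
```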

How would you approach deciding which level of confidence to select?
add two columns to your table: one called 'cost', the net cost of being wrong given the prediction, and one called 'benefit', the net earnings for being right on a given prediction. Note that net earnings/losses may include some costs associated with the number of outcomes allowed for in the prediction, or anything else you like.

calculate the expected earnings, for each prediction, as proportion*benefit - cost*(1 - proportion).

choose the prediction with the greatest expected earnings.

repeat until millionaire, the focus of our gaze.
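A minimal sketch of that rule for the 2d6 example; the benefit/cost figures here are pure invention for illustration (a vaguer prediction paying less when right), not anything from the thread:

```python
# Each row: (prediction, proportion of 2d6 outcomes covered, benefit, cost).
# Benefit and cost values are made up for the sake of the example.
predictions = [
    ("total is 2-12", 36/36,  1.0, 5.0),
    ("total is 6-8",  16/36,  8.0, 5.0),
    ("total is 7",     6/36, 40.0, 5.0),
]

def expected_earnings(proportion, benefit, cost):
    # proportion*benefit - cost*(1 - proportion), as described above
    return proportion * benefit - cost * (1 - proportion)

best = max(predictions, key=lambda row: expected_earnings(*row[1:]))
print("best prediction:", best[0])
```

With these payoffs the narrow prediction wins; shrink its benefit and the vaguer predictions take over, which is exactly the crossover point the original question was circling.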


Active Member
I think another thing to consider is that the probabilities might change.
No, I'm afraid that is incorrect. According to anglewyrm, they are a "timeless and impartial image of the universe in all its possible configurations".