However, I'm finding some odd results at low data volumes. I thought it was a bug in my code, but I see the same behaviour here: http://developers.lyst.com/bayesian-calculator/

This leads me to believe I don't understand the maths properly.

If we take an AB test with the following parameters:

A trials: 100

A successes: 0

B trials: 10

B successes: 0

According to the analysis, there is a 90% chance that B is better. However, I don't understand how this can be the case when no successes have been recorded yet. Surely the true success rate could be as low as 0.00001%, and the analysis should still be inconclusive at this point?
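For reference, here is how I believe the calculator arrives at ~90% (I'm guessing it uses a uniform Beta(1, 1) prior; that's an assumption on my part, not something the page states):

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed uniform Beta(1, 1) prior on each success rate.
# Posterior for A: Beta(1 + 0 successes, 1 + 100 failures)
# Posterior for B: Beta(1 + 0 successes, 1 + 10 failures)
n = 1_000_000
a_rates = rng.beta(1 + 0, 1 + 100, size=n)
b_rates = rng.beta(1 + 0, 1 + 10, size=n)

# Monte Carlo estimate of P(rate_B > rate_A)
prob_b_better = (b_rates > a_rates).mean()
print(prob_b_better)  # ~0.90 (the exact value is 101/112)
```

With zero successes on both sides, B's posterior is much wider than A's (10 trials vs 100), so most of B's posterior mass sits above A's, which is where the 90% comes from.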

How can I adjust the parameters so that no assumption is made about the success rate (or at least so that I can control this assumption)?
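In case it helps clarify what I mean by "parameters": I assume the prior is a Beta(alpha, beta), and these hyperparameters are what encode the assumption about the success rate. A minimal sketch of my current understanding (`prob_b_beats_a` is my own hypothetical helper, not anything from the calculator):

```python
import numpy as np

def prob_b_beats_a(a_succ, a_trials, b_succ, b_trials,
                   prior_alpha=1.0, prior_beta=1.0,
                   n=1_000_000, seed=0):
    """Monte Carlo estimate of P(rate_B > rate_A) under Beta priors.

    prior_alpha and prior_beta encode the prior assumption about the
    success rate: Beta(1, 1) is uniform, Beta(0.5, 0.5) is the Jeffreys
    prior, and a large prior_beta pulls both rates toward zero.
    """
    rng = np.random.default_rng(seed)
    a = rng.beta(prior_alpha + a_succ, prior_beta + a_trials - a_succ, size=n)
    b = rng.beta(prior_alpha + b_succ, prior_beta + b_trials - b_succ, size=n)
    return (b > a).mean()

# The example above, under the uniform and the Jeffreys priors:
print(prob_b_beats_a(0, 100, 0, 10))
print(prob_b_beats_a(0, 100, 0, 10, prior_alpha=0.5, prior_beta=0.5))
```

Is tuning `prior_alpha` and `prior_beta` the right lever here, or is there a genuinely assumption-free way to set this up?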