I wonder if someone can kindly help. I have a dataset which I've uploaded, and I'm trying to work out a sensible distribution for it. It represents the number of throws a darts player needs before he can aim for a double. The minimum possible is 8, and the maximum is theoretically unbounded, although good players would very rarely go beyond 30 or so.

I thought a lognormal distribution might fit best, but would be very grateful for a second opinion. You will see that the data peaks on certain numbers of darts, presumably because certain scores (e.g. 180) are more common than others: darts players have particular habits, so scoring is not random.
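In case it helps frame the question, here is a minimal sketch of how a shifted (three-parameter) lognormal could be fitted with SciPy. The `throws` array here is synthetic stand-in data, not my actual dataset; fixing the location just below the theoretical minimum of 8 keeps every observation strictly above the shift.

```python
import numpy as np
from scipy import stats

# Synthetic stand-in for the real data: darts thrown before reaching a double.
rng = np.random.default_rng(0)
throws = np.round(7.5 + rng.lognormal(mean=2.3, sigma=0.4, size=500)).astype(int)

# Fit a shifted lognormal, fixing loc just below the minimum possible value (8)
# so that (x - loc) is strictly positive for all observations.
shape, loc, scale = stats.lognorm.fit(throws, floc=7.5)

# A rough goodness-of-fit check (Kolmogorov-Smirnov), bearing in mind the
# data are discrete counts while the lognormal is continuous.
ks = stats.kstest(throws, 'lognorm', args=(shape, loc, scale))
print(shape, loc, scale, ks.pvalue)
```

Of course the KS test is only indicative here, since the peaks at popular scores (like 180) mean the data won't be smooth the way a continuous model assumes.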

Any thoughts/advice would be most welcome and appreciated.

Thanks in advance.