What type of statistical analysis should I use for this problem?

Let's say you are a video game developer and you own a few thousand game servers for a game that has 4 levels in it.

Every day a certain number of people join a server and start the game at level 1, and some percentage of them make it to level 2. A certain percentage of those who make it to level 2 make it to level 3, and so on until people make it to level 4. Also, people who beat the game typically get bored and don't play another round.

Each of these levels varies in difficulty. For example, people on level 1 might have on average a 5% chance to win, people on level 2 might have a 20% chance, and people on level 3 a 5% chance again.

You start to notice that some servers are more popular than others, and that the players on some servers are better at the game than the players on others.

The goal is to figure out which servers have the best players on them, and the metric that determines this is the completion rate: (number of players who made it to level 4) / (number of players).


The problem is that in any given day/week/month period, the number of people playing on the low-population servers will not be high enough for anyone to win the game, but you still need a way to compare the performance of these servers to the other servers.

One way you could potentially do this: instead of judging based on (reached level 4 that day / number of players that day), you could judge by (reached level 3 that day / number of players that day), and then assume that a certain percentage of the people who make it to level 3 will go on to make it to level 4. You would do this because at low numbers of games played the count of people who make it to level 4 will be either 0 or 1, and that can be very misleading with small samples.


You can push this back all the way to the level 1 to level 2 transition: if 30% of players go from level 1 to level 2 on a particular server, and the average conversion from level 2 to level 4 across all servers is 10%, you would estimate that server's (made it to level 4 / number of players) for that day as 30% × 10% = 3% (a quick sketch of this calculation is below).
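A minimal sketch of that proxy calculation in Python, with made-up player counts; `avg_conv_2_to_4` is a placeholder for whatever fleet-wide level 2 to level 4 conversion you decide to assume:

```python
# Proxy completion rate: observed level 1 -> level 2 rate on each server,
# multiplied by an assumed fleet-wide conversion from level 2 to level 4.
# All counts below are made up for illustration.

servers = {
    # server name: (players who started, players who reached level 2)
    "server_A": (1000, 300),  # 30% reach level 2
    "server_B": (40, 10),     # small server, nobody has reached level 4 yet
}

avg_conv_2_to_4 = 0.10  # assumed average level 2 -> level 4 conversion

for name, (players, reached_2) in servers.items():
    rate_1_to_2 = reached_2 / players
    est_completion_rate = rate_1_to_2 * avg_conv_2_to_4
    print(f"{name}: estimated (made it to 4 / players) = {est_completion_rate:.1%}")
```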

So the further back in the chain you push the proxy, the more data you have to work with, because more players reach the earlier levels, but also the more assumptions you are making about the data.

From a statistics point of view, what I am trying to figure out is at what point the number of players on a server is high enough to accurately score that server with the given metric. Obviously if only 5 people have played on a server, that is not going to be an accurate measure of how many people can make it to level 4, because on average it takes anywhere from hundreds to a few thousand players for someone to get there. But at what point does the sample become large enough to be meaningful?
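One standard tool for this (not the only option) is a binomial confidence interval for the completion rate, e.g. the Wilson score interval: it quantifies how uncertain the observed rate is for a given number of players, and the sample is arguably "big enough" once the interval is narrow compared to the differences between servers you care about. A sketch with placeholder counts:

```python
import math

def wilson_interval(successes: int, n: int, z: float = 1.96):
    """Approximate 95% Wilson score interval for a binomial proportion
    (here: players who made it to level 4 / players who played)."""
    if n == 0:
        return (0.0, 1.0)
    p_hat = successes / n
    denom = 1 + z**2 / n
    center = (p_hat + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2))
    return (max(0.0, center - half), min(1.0, center + half))

# Placeholder counts: watch how the interval tightens as player counts grow.
for players, completions in [(5, 0), (200, 3), (2000, 30)]:
    lo, hi = wilson_interval(completions, players)
    print(f"{players:5d} players, {completions:2d} completions: "
          f"95% CI = [{lo:.2%}, {hi:.2%}]")
```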


Things to note:
Generally speaking, regardless of the server, the difficulty of the levels is consistent when looked at with large enough numbers. So the percentage of players that go from level 1 to level 2 might be 5% on one server and 7% on another, or 20% if the numbers are very low, but it would never be consistently 50% if the numbers were large and the overall average was 5%. In the dataset I am looking at, the average is 1.5% and the standard deviation is 0.5%.
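Taking the 1.5% average and the roughly 0.5 percentage point spread at face value, a back-of-the-envelope normal approximation gives a sense of scale: the standard error of one server's observed rate is sqrt(p(1-p)/n), so you can ask how many players it takes before that noise is no bigger than the between-server spread.

```python
import math

p = 0.015            # fleet-average completion rate
between_sd = 0.005   # roughly 0.5 percentage points of spread between servers

# Players needed so that the standard error sqrt(p*(1-p)/n) of one server's
# observed completion rate is at most the target (normal approximation).
for target_se in (between_sd, between_sd / 2):
    n_needed = math.ceil(p * (1 - p) / target_se**2)
    print(f"standard error <= {target_se:.2%} needs about {n_needed} players")
```

That lands in the same "hundreds to a few thousand players" range mentioned above.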

I was also considering using survival analysis, with the event being reaching level 4, but I am not entirely sure whether that would work in this situation (a rough sketch of one possible framing is below).
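For reference, here is a minimal sketch of one possible framing using the lifelines package, with invented data: "time" is the number of levels a player has cleared, dropping out is the event, and completers or players still mid-run are censored. With complete funnel data this collapses to the same level-by-level percentages as above, but it would handle players whose runs are still in progress.

```python
# One possible survival-analysis framing (lifelines package, invented data):
# "duration" = number of levels a player has cleared, the event is dropping
# out, and completers / players still mid-run are treated as censored.
import pandas as pd
from lifelines import KaplanMeierFitter

players = pd.DataFrame({
    "levels_cleared": [0, 0, 1, 1, 2, 3, 4, 1, 0, 2],
    "dropped_out":    [1, 1, 1, 0, 1, 1, 0, 1, 1, 0],  # 0 = censored
})

kmf = KaplanMeierFitter()
kmf.fit(durations=players["levels_cleared"],
        event_observed=players["dropped_out"])

# Survival function S(k) ~ probability of clearing more than k levels;
# the value at k = 3 estimates the chance of finishing the game.
print(kmf.survival_function_)
```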

Questions:
At what point can I determine that a server has enough data to make decisions with a certain level of confidence, and how would I use math to show this level of confidence?
If I am going to use the methodology where I take a metric like the level 1 to level 2 rate and then assume the same performance after that point for all servers, what would be the best way to determine what I should assume? Should it just be the average performance? (Two ways of computing that average are sketched below.)
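To make the "just use the average" option concrete, here are two ways that average could be computed, with invented counts: the pooled version weights each server by how many players actually reached level 2, while the unweighted version lets tiny, noisy servers count as much as big ones.

```python
# Two candidate definitions of the "average" level 2 -> level 4 conversion
# (invented counts for illustration).
servers = {
    # server name: (players who reached level 2, players who reached level 4)
    "server_A": (5000, 500),
    "server_B": (800, 96),
    "server_C": (20, 0),  # tiny server: a noisy 0% observed rate
}

pooled = (sum(l4 for _, l4 in servers.values())
          / sum(l2 for l2, _ in servers.values()))
unweighted = sum(l4 / l2 for l2, l4 in servers.values()) / len(servers)

print(f"pooled average conversion:   {pooled:.1%}")
print(f"unweighted average of rates: {unweighted:.1%}")
```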
 