How to rank test scores when number of questions answered is different

#1
I want to rank people based on their test scores, but the number of questions each individual has answered is different. People with "n" answers or fewer will be excluded from being ranked.

Everyone gets one question daily. A running test score is calculated based on the number of correct answers vs. the number of questions asked. Questions are randomly pulled from a pool. Eventually, everyone will get the same questions, just in a different order. When all questions in the pool are answered, the process starts over from the beginning.

An individual may feel that some questions are harder than others, but they are all of equal weight. Each response is tracked, so we know who answered which question, the result, and how long it took them to answer.

Their “progress”, the percentage of questions answered vs. the total number of questions available, is also known. Everyone is at a different progress percentage since people start, complete, and restart at different times. So, at any point in time, some people will have answered only a few questions while others may have answered a hundred.

I’d like to compare and rank participants based on both their score and their progress, but I don't know how. I've seen similar questions asked, but no responses. Any direction is greatly appreciated.
 
#2
Do you know that the questions are of equal weight?

If it were me, I would use item response theory to calculate the actual difficulty of each question, then weight each question by its difficulty. From there you have a few choices for how to rank people.
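For concreteness, here is a minimal sketch of that weighting idea in Python. It is not a fitted IRT model; each question's observed miss rate stands in as a crude difficulty proxy, and the `responses` records are hypothetical:

```python
from collections import defaultdict

# Hypothetical input: (person, question, answered_correctly) records.
responses = [
    ("alice", "q1", True), ("alice", "q2", False),
    ("bob",   "q1", True), ("bob",   "q2", True),
]

# Crude difficulty proxy: the fraction of people who missed each question.
# A fitted Rasch/IRT model would replace this with estimated item
# difficulty parameters; the proxy keeps the sketch short.
asked = defaultdict(int)
missed = defaultdict(int)
for person, question, correct in responses:
    asked[question] += 1
    if not correct:
        missed[question] += 1
difficulty = {q: missed[q] / asked[q] for q in asked}

# Difficulty-weighted score: harder questions count for more.
earned = defaultdict(float)
possible = defaultdict(float)
for person, question, correct in responses:
    w = 1.0 + difficulty[question]   # weight in [1, 2]
    possible[person] += w
    if correct:
        earned[person] += w

weighted_pct = {p: earned[p] / possible[p] for p in possible}
print(sorted(weighted_pct.items(), key=lambda kv: -kv[1]))
```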

Let me know how you are actually determining the difficulty of the questions.
 
#3
We don't weight them; they are all considered equal. Individuals may think some are harder than others, but you wouldn't get two people to agree on which ones. The point of the questions is to make sure everyone knows the answers to ALL of them.
 
#4
If they are truly equal (though I would say this is something a reviewer might dock you on), then your easiest approach is to look at the "mean time to miss" an item, use that as your threshold for when you start "counting" scores, and then just use the percentage correct.

For example, if the mean time to first error is the 4th question, then a person needs to have answered four questions before they are ranked.
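A rough sketch of computing that threshold, assuming each person's answers are stored in the order they were asked (the `answer_history` data here is made up):

```python
# Hypothetical per-person answer sequences, in the order asked.
answer_history = {
    "alice": [True, True, False, True],
    "bob":   [True, False, True],
    "carol": [True, True, True, True, False],
}

def first_miss_position(answers):
    """1-based position of the first wrong answer, or None if no miss yet."""
    for i, correct in enumerate(answers, start=1):
        if not correct:
            return i
    return None

# Average the position of the first error over everyone who has missed one.
misses = [m for m in map(first_miss_position, answer_history.values())
          if m is not None]
threshold = sum(misses) / len(misses)
print(f"Rank people only once they have answered about {threshold:.0f} questions")
```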
 
#5
The questions are all on federal regulations, so it doesn't matter whether they are easy or hard; people need to know them all. So, let's say we go with the "mean time to miss"... it does sound like a good point to start ranking. But what do we do once we start? We'll still have folks who have just reached the mean time to miss and others who are at 4× the mean time to miss. Do we rank them the same? Should we do something like test score × progress, or something else?
 
#6
How about something simple: rank the scores, rank the progress, and then add the two ranks together and rank the sums? Or at least use the progress rank in some way.
 
#7
I have resolved my issue. Ranking progress was the clue. First, I rank all the test scores from high to low and store the "test score rank" in each person's test results. I then rank the number of test questions answered from most to least and store that as the "progress rank". Finally, I add the two ranks together and rank their sum to get the overall rank. The resulting list is exactly what I was looking for: it intuitively seems fair, is simple to explain, and is easy to implement.
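For anyone landing here later, a minimal sketch of that rank-sum approach in Python. The field names and the tie handling (dense ranking, where tied values share a rank) are my assumptions; the post doesn't say how ties were resolved:

```python
# Hypothetical per-person results: running score and questions answered.
people = {
    "alice": {"score": 0.90, "answered": 40},
    "bob":   {"score": 0.85, "answered": 120},
    "carol": {"score": 0.90, "answered": 75},
}

def rank(values):
    """Map each key to a 1-based rank; highest value gets rank 1,
    ties share a rank (dense ranking)."""
    ordered = sorted(set(values.values()), reverse=True)
    position = {v: i + 1 for i, v in enumerate(ordered)}
    return {k: position[v] for k, v in values.items()}

score_rank = rank({p: d["score"] for p, d in people.items()})
progress_rank = rank({p: d["answered"] for p, d in people.items()})

# Overall rank: sum the two ranks, then order by the sum (lower is better).
rank_sum = {p: score_rank[p] + progress_rank[p] for p in people}
for p in sorted(people, key=lambda p: rank_sum[p]):
    print(p, "score rank:", score_rank[p],
          "progress rank:", progress_rank[p], "sum:", rank_sum[p])
```

One design note: summing the two ranks implicitly weights score and progress equally, since a one-place gap in either rank moves the sum by the same amount. If one factor should matter more, the sum could be replaced with a weighted combination.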