Bias type / sorting

Hi all. I'm not a math person so apologies if any part of this question is airheaded, just hoping to get a few pointers on how to think about this.

I work for a small games studio in Asia. Small enough that it's just me and the devs, so no mathematicians or data scientists on staff to deal with this kind of thing.

We're a B2B company, so we don't control most of the sites our games appear on; instead we provide customers with suggested ordering lists, which are essentially descending-revenue lists for their markets.

We have a general issue here that by FAR the biggest predictor of revenue is the order in which games appear, i.e. the top one makes the most. We've at times run a few tests putting games we know are terrible up top briefly, and that always holds true.

So at the moment we're suggesting ordering based on revenue, but that ordering is a self-fulfilling prophecy.

We have a bunch of games at the top of that list that just aren't good: really old titles that, for some legacy reason, showed up on a banner somewhere 10 years ago and have stayed popular ever since because their placement drives their popularity rather than the other way around. Or at least that's what I think, but I can't prove it.

So my questions I guess are:

1) I think this is some kind of bias? Is there a term for it? I've googled various bias types and nothing I've come across seems to describe this exactly. It'd help me start discussing this internally if I could put a name to it.

2) How would we go about determining the underlying popularity of games without causing massive disruption, i.e. discounting the artificial boost caused by the existing sorting?

On 2) just to note, we can play around with our sorting and run a few tests on some minor sites we control on behalf of customers, but we can't really comprehensively A/B test this.

Mainly that's because it's just not possible given we don't control most end sites. But even for the few we do, it's too revenue-sensitive to show completely random lists for long enough to generate a useful sample. I think ML solutions are probably out of the question for the same reason.
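To make question 2 concrete, here's the kind of thing I've been sketching. Big assumption: revenue factors into a per-game quality times a per-position multiplier, so log revenue is linear in game and position dummy variables, and you can fit it by least squares from whatever position variation already exists (our short swap tests, plus the same game sitting at different positions on different sites). All figures below are made up:

```python
# Sketch under the assumption revenue ≈ quality[game] * exposure[position].
# Taking logs makes this a linear model we can fit with ordinary least squares.
import numpy as np

# (game, position, observed revenue) from a handful of placements / swap tests.
# Made-up illustrative numbers, not real figures.
observations = [
    ("old_hit", 0, 1000.0), ("old_hit", 1, 600.0), ("old_hit", 2, 300.0),
    ("new_game", 0, 950.0), ("new_game", 2, 380.0),
    ("filler", 1, 200.0), ("filler", 2, 120.0),
]

games = sorted({g for g, _, _ in observations})
positions = sorted({p for _, p, _ in observations})

# Design matrix: one dummy column per game, one per position. Drop the last
# position's column to fix the scale (multipliers are relative to that slot).
rows, y = [], []
for g, p, r in observations:
    row = [0.0] * (len(games) + len(positions) - 1)
    row[games.index(g)] = 1.0
    if p != positions[-1]:
        row[len(games) + positions.index(p)] = 1.0
    rows.append(row)
    y.append(np.log(r))

coef, *_ = np.linalg.lstsq(np.array(rows), np.array(y), rcond=None)
quality = {g: float(np.exp(coef[i])) for i, g in enumerate(games)}
exposure = {p: float(np.exp(coef[len(games) + i]))
            for i, p in enumerate(positions[:-1])}
exposure[positions[-1]] = 1.0  # reference slot

# Rank games by estimated quality instead of raw revenue. With this toy data,
# new_game outranks old_hit once the position boost is discounted, even though
# old_hit earns more raw revenue at the top slot.
print(sorted(quality, key=quality.get, reverse=True))
```

No idea if this is statistically sound with as little data as we have, but it at least doesn't require randomizing anything new.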

Any advice here greatly appreciated.


Less is more. Stay pure. Stay poor.
I'll start by saying this isn't my content area - but yeah, there has to be a name for the bias (name recognition). Kind of like companies calling themselves "AAA Games" to get listed first.

It seems like you could randomly order them and then use A/B testing. There are some new books on A/B testing; if I come across them I will provide a link.
Thanks for the reply. The problem is that we generally can't A/B test much. We can run a few limited tests, but we can't just randomize the lobby order and iterate it a bunch of times to get to the bottom of the question. That's why I was hoping there was a deductive / analytic way to approach this - but maybe there just isn't.


TS Contributor
I am not certain that this applies, but there are two types of bias that impact the use of multiple choice questions in surveys. They are: