Prior concentration in Bayesian contingency tables

#1
Hi all!

I'm working on a simple project for the uni where I work. I'm trying to find out if teachers grade students differently. I'm looking at an assignment that can only be passed or failed.

I thought this might make a nice project to delve into Bayesian statistics. So I found this very helpful article and am doing my analysis in JASP.

Everything is going well: I'm using an independent multinomial sampling plan with the rows fixed (the number of assignments per teacher). The Bayes factor is very low (BF10 = 0.771), which I interpret as little evidence of a difference in grading standards. That surprised me a little, given the table:

Code:
Teacher        Pass        Fail
A            2            7
B            1            7
C            6            8
D            0            4
E            4            2
F            1            7
G            2            4
But I guess that can be put down to human expectations of randomness (which are usually too uniform).
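
For what it's worth, I also ran a quick classical sanity check on the same table outside JASP (my own sketch; note several expected counts are below 5 here, so the chi-square approximation is a bit shaky):

Code:
import numpy as np
from scipy.stats import chi2_contingency

# Rows = teachers A-G, columns = (pass, fail), copied from the table above
table = np.array([
    [2, 7],  # A
    [1, 7],  # B
    [6, 8],  # C
    [0, 4],  # D
    [4, 2],  # E
    [1, 7],  # F
    [2, 4],  # G
])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p:.3f}")

That also came out unimpressive for me, which at least fits the low Bayes factor.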

However, I need to specify a prior concentration. The default is 1, and the article says: "Note the uniform choice assumes that differences between marginal probabilities are expected to be large. If smaller effects are expected, the a parameter may be increased." But what does that mean? What counts as a "large" difference? By how much should I increase the parameter? I understand that the parameter sets the Dirichlet distribution of the prior, but I have no clue how to translate that to the real world. My real-world expectation would be that the underlying pass rate per teacher falls somewhere in the range of 10%-50% (this is a hard exam). How do I translate that into this prior concentration parameter, even as a ballpark figure?
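
To try to build some intuition I put together a little simulation (my own sketch, assuming that with two columns a symmetric Dirichlet(a, a) prior on a row reduces to a Beta(a, a) prior on that teacher's pass rate; I'm not sure this matches JASP's parameterization exactly):

Code:
import numpy as np

rng = np.random.default_rng(0)

# For each concentration a, look at what range of pass rates
# a Beta(a, a) prior considers plausible before seeing data.
for a in [1, 2, 5, 10, 50]:
    draws = rng.beta(a, a, size=100_000)
    lo, hi = np.percentile(draws, [10, 90])
    print(f"a = {a:>2}: central 80% of prior pass rates in [{lo:.2f}, {hi:.2f}]")

If I read this right, increasing a just pulls the prior pass rates towards 50% and shrinks the expected differences between teachers, and a symmetric Beta can't really encode my lopsided 10%-50% expectation, which may be part of my confusion.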

Not that it seems to matter much: I can set the prior concentration to 10000 and it only increases the BF to 1. But I'm still interested in the answer. Thanks a lot!
 
#2
Another question: am I correct in fixing the rows? The number of assignments each teacher was expected to grade was fixed, but the outcomes were not. That makes sense, right?