Power rank/Composite score help needed

Hey everyone! I've been assigned to create a power rank for 160 call centre market research employees (called interviewers). Getting into it, I realise I need statistics. For some reason, when I do study statistics (I last studied it in Psychology Honors in 2016), I can eventually grasp it and do quite well, but as soon as I stop studying, I swiftly forget it (which is a tad worrying since I'm doing my Masters in Clinical Psychology next year!). So what I'm asking is probably basic; I just need some help.

At the moment, they have asked me to focus on the following:

1. Achievement Rate (AR): The number of surveys completed per hour. There are multiple projects (about 15), each with its own expected rate.

The manager currently thinks it would suffice to work out a 3-month average for every interviewer on each job: if you're below -10% of the expected rate, you're coded 1 (below expectations); if you're between -10% and +10%, you're coded 2 (meets expectations); and if you're above +10%, you're coded 3 (above expectations). Those 1-3 codes are then averaged to give your overall project average across all projects. Say interviewer01 has done 6 projects in 3 months with the following codes: 2 3 3 1 2 2. Their overall project average is 2 (13/6 = 2.1667, rounded down). Interviewer02 has done 3 projects in 3 months with the following codes: 3 3 2. Their overall project average is 3 (8/3 = 2.6667, rounded up). I really don't like this.
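To make sure I understand the manager's rule, here's a quick Python sketch of it (the cut-offs and the two example interviewers are straight from above; everything else is made up):

```python
def code_ar(pct_diff):
    """Manager's rule: % difference from the project's expected rate -> 1/2/3."""
    if pct_diff < -10:
        return 1   # below expectations
    elif pct_diff <= 10:
        return 2   # meets expectations
    else:
        return 3   # above expectations

# The two examples from above: per-project codes, averaged and rounded
codes = {
    "interviewer01": [2, 3, 3, 1, 2, 2],   # mean 2.17 -> 2
    "interviewer02": [3, 3, 2],            # mean 2.67 -> 3
}
for name, c in codes.items():
    print(name, round(sum(c) / len(c)))
```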

2. Values: A couple of weeks ago, HR had the idea of assigning each interviewer a 1-3 (below, meets, above expectations) for meeting core company values. This was fairly subjective (no voting), and a lot of people aren't happy with it, but for now I'll work with it while trying to emphasise why it isn't a good idea. I feel the best way of doing so is showing alternatives.

3. Tenure: How long you've been employed, in years.

HR wants to combine the Achievement Rate average (1-3) and the Values score (1-3) into Priorities. Within each Priority, interviewers are then ranked by tenure:

Priority 1

AR: 3
Values: 3

1. Interviewer01 (8 years)
2. Interviewer02 (4 years)
3. Interviewer03 (1 year)

Priority 2

AR: 3
Values: 2

4. Interviewer04 (28 years)
5. Interviewer05 (19 years)
6. Interviewer06 (11 years)
7. Interviewer07 (4 years)
8. Interviewer08 (1 year)

Priority 3

AR: 2
Values: 3

And so on: Priority 4 (AR: 2; Values: 2), Priority 5 (AR: 3; Values: 1), ... Priority 9 (AR: 1; Values: 1).
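Here's a little sketch of how I read HR's scheme, just to check I've got it right. The (AR, Values) -> Priority lookup only contains the combinations spelled out above, and the interviewer rows are made up:

```python
# (AR code, Values code) -> Priority, for the combinations given above
priority = {(3, 3): 1, (3, 2): 2, (2, 3): 3, (2, 2): 4, (3, 1): 5, (1, 1): 9}

people = [
    # (name, AR, Values, tenure in years) -- invented for illustration
    ("interviewer01", 3, 3, 8),
    ("interviewer02", 3, 3, 4),
    ("interviewer03", 3, 3, 1),
    ("interviewer04", 3, 2, 28),
    ("interviewer05", 3, 2, 19),
]

# Rank by Priority first, then by tenure (longest first) within a Priority
ranked = sorted(people, key=lambda p: (priority[(p[1], p[2])], -p[3]))
for pos, (name, ar, val, tenure) in enumerate(ranked, start=1):
    print(pos, name, f"Priority {priority[(ar, val)]}", f"{tenure} years")
```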

I want to use standardised z-scores. I've never really done this for a corporation before, but I think my idea has merit, though there are some issues. I also want to use as much objective data as possible; at the moment, these two:

1. Dial Rate (DR): The number of calls you dial per hour (more = working harder)

2. Cancellations: How many days per month you have cancelled.

But there are other things to take into account: coming in late, and ideally a Timesheet login vs System login ratio (for Productivity), but I'll leave these alone for now.

As you can see here, I've done a dummy version. In the Projects sheet, I have five interviewers and three projects, each with their 3-month average AR and DR, converted into z-scores. The z-scores are then added up for an overall project average -- both AR and DR.

In the 'Overall' sheet, the z-scores are added up again to include every other facet.
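Roughly, the dummy workbook does something like this (a pandas sketch with invented numbers; the real sheets have five interviewers and three projects):

```python
import pandas as pd

df = pd.DataFrame({
    "interviewer": ["i01", "i02", "i03", "i01", "i02", "i03"],
    "project":     ["m8",  "m8",  "m8",  "Brah", "Brah", "Brah"],
    "AR": [4.2, 3.1, 5.0, 2.0, 2.4, 1.8],   # 3-month average surveys/hour
    "DR": [30,  25,  40,  22,  28,  20],    # 3-month average dials/hour
})

# z-score AR and DR within each project, so an interviewer is compared
# against the other people who worked that same project
for col in ["AR", "DR"]:
    df[col + "_z"] = df.groupby("project")[col].transform(
        lambda x: (x - x.mean()) / x.std(ddof=0)
    )

# the 'Overall' step: add up each interviewer's z-scores
print(df.groupby("interviewer")[["AR_z", "DR_z"]].sum())
```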

I believe I've got a good idea, but I haven't taken some other things into consideration. My mind is getting flashbacks of ANOVA, t-testing, etc. Here are my current doubts/concerns:

1. Is it okay that the standard deviation of the Achievement Rate and Dial Rate z-scores, when added together, doesn't equal 1? Why is it not 1?
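Here's a tiny numpy example of what I mean (simulated data): each column is z-scored, so each has an SD of 1 on its own, but their sum doesn't.

```python
import numpy as np

rng = np.random.default_rng(0)
ar = rng.normal(size=1000)
dr = 0.5 * ar + rng.normal(size=1000)   # simulated, deliberately correlated

def z(x):
    return (x - x.mean()) / x.std()

print(z(ar).std(), z(dr).std())   # each is 1 by construction
print((z(ar) + z(dr)).std())      # noticeably bigger than 1
```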

2. Realistically, only certain interviewers will work on certain jobs and on certain dates. I need to take unequal sample sizes into account, surely?

Specifically, say there are n = 10 interviewers employed (interviewer01 to interviewer10). All 10 worked on Project m8 in August 2018, 8 in September 2018 (interviewer01 and interviewer10 did not), and 7 in October 2018 (interviewer01, interviewer02, and interviewer03 did not). There are some unequal sample size issues here, right? What to do here eludes me, haha, but it feels like it's on the tip of my tongue! Independent samples t-test? ANOVA!? MANOVA!? Because for the 10 interviewers you have:

interviewer01: 1 out of 3 months (Did not interview September and October 2018)
interviewer02: 2 out of 3 months (Did not interview October 2018)
interviewer03: 2 out of 3 months (Did not interview October 2018)
interviewer04: 3 out of 3 months
interviewer05: 3 out of 3 months
interviewer06: 3 out of 3 months
interviewer07: 3 out of 3 months
interviewer08: 3 out of 3 months
interviewer09: 3 out of 3 months
interviewer10: 2 out of 3 months (Did not interview September 2018)

Then these things come into play:

6 out of 10 interviewers worked on all 3 months.
Only August 2018 had n=10
8 out of 10 interviewers worked on both August 2018 and September 2018
7 out of 10 interviewers worked on both August 2018 and October 2018
9 out of 10 interviewers worked on at least 2 out of 3 months

Surely the current 1-3 method is unfair given all this? But a z-score for an interviewer's 3-month project average may also be unfair when the sample differs, right?
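One option I've been toying with (not sure whether it's right) is to z-score within each month, using only the interviewers who actually worked that month, then average whatever z-scores each interviewer has. A sketch with the participation pattern from the example above (AR numbers invented, only a few interviewers shown):

```python
import pandas as pd

rows = [
    ("interviewer01", "2018-08", 4.0),
    ("interviewer02", "2018-08", 3.5), ("interviewer02", "2018-09", 3.8),
    ("interviewer03", "2018-08", 5.1), ("interviewer03", "2018-09", 4.9),
    ("interviewer04", "2018-08", 4.4), ("interviewer04", "2018-09", 4.2),
    ("interviewer04", "2018-10", 4.6),
    ("interviewer10", "2018-08", 3.9), ("interviewer10", "2018-10", 4.1),
]
df = pd.DataFrame(rows, columns=["interviewer", "month", "AR"])

# z-score within each month, using only the interviewers present that month
df["AR_z"] = df.groupby("month")["AR"].transform(
    lambda x: (x - x.mean()) / x.std(ddof=0)
)

# then average whatever z-scores each interviewer has
print(df.groupby("interviewer")["AR_z"].mean())
```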

3. And there's the issue between projects, too, right? All 10 out of 10 interviewers worked on Project m8 within the past 3 months (though only 6 out of 10 in every month). But only 3 out of 10 interviewers (interviewer04, interviewer05, interviewer06) worked on Project Dawg within the past 3 months (it's an executive job, which I'll get into). Only 1 of those 3 (interviewer04) did all 3 months. Then with Project Brah, only 2 interviewers worked on it over the 3 months (interviewer04 and one other). Out of all 10 interviewers, only one (interviewer04) did all 3 jobs. They get additional z-scores in my method. This doesn't seem right, but I feel the answer is obvious!?

4. If Project Dawg is a job priority, am I on the right track if I do weighted averages between the jobs? Or something like an index? E.g.: Project Brah = z-score * 0.3; Project m8 = z-score * 0.3; Project Dawg = z-score * 0.4. Likely on the wrong track, but yeah?
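Something like this is what I had in mind (the weights are the ones above; the per-project z-scores are made up, and the renormalisation over missing projects is just an assumption on my part):

```python
# Question 4 sketch: weight each project's z-score before combining
weights = {"Brah": 0.3, "m8": 0.3, "Dawg": 0.4}

def weighted_project_score(project_z, weights):
    """Weighted average over the projects this interviewer actually worked;
    weights are renormalised so they still sum to 1 (my assumption for
    handling missing projects)."""
    total_w = sum(weights[p] for p in project_z)
    return sum(weights[p] * z for p, z in project_z.items()) / total_w

# e.g. an interviewer who worked all three projects vs one who skipped Dawg
print(weighted_project_score({"Brah": 0.8, "m8": -0.2, "Dawg": 1.5}, weights))
print(weighted_project_score({"Brah": 0.8, "m8": -0.2}, weights))
```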

5. Similarly, I'd like to use weighted averages on things like the Values. Nobody is happy with the Values at the moment, and in the interim I don't think I can convince HR to get rid of them. The next best thing would be a weighted average where Values don't count for much. Something like the following (there's a quick sketch of the calculation after this list):

Achievement Rate = Z-Score * 0.35

Dial Rate = Z-Score * 0.35

Cancellations = Z-Score * 0.15

Tenure = Z-Score * 0.1

Values = Z-Score * 0.05
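Concretely, the composite I'm picturing is just a weighted sum of z-scores, something like this (the z-scores are invented, and I'm assuming Cancellations would get its sign flipped first so that higher = better for every facet):

```python
weights = {
    "achievement_rate": 0.35,
    "dial_rate":        0.35,
    "cancellations":    0.15,   # z-score of cancellations, sign-flipped
    "tenure":           0.10,
    "values":           0.05,
}

# one interviewer's z-scores (made up)
interviewer_z = {
    "achievement_rate": 1.2,
    "dial_rate":        0.4,
    "cancellations":    -0.3,
    "tenure":           0.9,
    "values":           0.0,
}

composite = sum(weights[k] * interviewer_z[k] for k in weights)
print(round(composite, 3))   # the interviewer's overall power-rank score
```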

I'm interested in other stats too; they'd be fun to use. I'm also wondering whether a power distribution would work better than a normal distribution.

Ultimately, I'd like to incorporate Power Query/Pivot/BI for automatic importing, etc. I haven't played around with them before, but it seems fun. Cheers!