Can this problem be modeled?

I have a question about whether I can build a model that solves this problem:

Suppose a swimming coach has 100 athletes and only cares about the distance each can swim in 5 minutes. For each swimmer he sets a baseline expectation of this "5-minute distance", estimated with some error (i.e., it's an educated guess):

-- Distance(baseline_expected) = Distance(true baseline) + error

To improve each swimmer's speed, he proposes two adjustments (A & B), relating to their technique or their equipment. He estimates the extra distance each adjustment will let each swimmer cover in 5 minutes, so that the new distance each swimmer is anticipated to swim is:

-- Distance(new_expected) = Distance(true baseline) + error + f(A) + f(B)

However, he may over- or under-estimate the impact of adjustments A & B, causing his Distance(new_expected) to be wrong. So, when the swimmers are measured again post-adjustment, their new distances are found to be:

-- Distance(new_observed) = Distance(true baseline) + w1*f(A) + w2*f(B)

...where w1 and w2 reflect the realized impact of adjustments f(A) and f(B). For example, if f(A) was only 50% effective, then w1 = 0.5; if f(A) was estimated perfectly and was 100% effective, w1 = 1.

QUESTION: If all I can observe is baseline_expected, new_expected, and new_observed, how can I figure out which factor (error in the baseline estimate, f(A), or f(B)) contributes the most to the difference between new_expected and new_observed? Note that, under the model above, that difference for each swimmer is:

-- Distance(new_expected) - Distance(new_observed) = error + (1-w1)*f(A) + (1-w2)*f(B)

...so the question is how to apportion the observed discrepancy among these three terms.
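One way to get traction, assuming the coach's per-swimmer estimates of f(A) and f(B) vary across the 100 swimmers and that w1 and w2 are common to all swimmers (both are assumptions beyond what the problem states), is to regress the per-swimmer discrepancy new_expected - new_observed on the estimated f(A) and f(B). A minimal sketch in Python, with all numbers hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100  # swimmers

# --- Hypothetical ground truth (unknown to the coach) ---
true_baseline = rng.normal(400.0, 50.0, n)  # true 5-minute distance (m)
error = rng.normal(0.0, 10.0, n)            # baseline estimation error
fA = rng.uniform(0.0, 50.0, n)              # estimated gain from adjustment A
fB = rng.uniform(0.0, 50.0, n)              # estimated gain from adjustment B
w1, w2 = 0.5, 1.2                           # realized effectiveness (unknown)

# --- What is actually observed ---
baseline_expected = true_baseline + error
new_observed = true_baseline + w1 * fA + w2 * fB
new_expected = baseline_expected + fA + fB

# Per-swimmer discrepancy:
#   new_expected - new_observed = error + (1 - w1)*fA + (1 - w2)*fB
d = new_expected - new_observed

# Regress the discrepancy on fA and fB across swimmers.  The slopes
# estimate (1 - w1) and (1 - w2); the intercept and residuals absorb
# the baseline error (only its average and spread are identified, not
# each swimmer's individual error).
X = np.column_stack([np.ones(n), fA, fB])
coef, *_ = np.linalg.lstsq(X, d, rcond=None)
w1_hat, w2_hat = 1.0 - coef[1], 1.0 - coef[2]

# Rough average contribution of each factor to the discrepancy
print(f"w1 estimate: {w1_hat:.2f}, w2 estimate: {w2_hat:.2f}")
print(f"mean f(A) shortfall: {abs(1 - w1_hat) * fA.mean():.1f} m")
print(f"mean f(B) shortfall: {abs(1 - w2_hat) * fB.mean():.1f} m")
```

Note the identification limit this sketch illustrates: with a single post-adjustment measurement per swimmer, the individual baseline errors cannot be separated from the regression residuals; only their distribution (mean, variance) can be characterized, while w1 and w2 are estimable as long as f(A) and f(B) vary across swimmers and are not perfectly correlated with each other.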