inputs: v_1, v_2, ... v_6

outputs: f_1, f_2, ... f_6

I can apply precise forces and torques to the unit under calibration using high-precision reference load cells and actuators.

Based on calibration standards and on the nature of the load cells, the relationship is a linear combination of polynomial terms. Consequently, I want to fit:

f_1 = a_0 + a_1*v_1 + a_2*v_2 + ... + a_6*v_6 + a_7*v_1^2 + a_8*v_2^2 + ... + a_12*v_6^2

and similarly for f_2 through f_6, each with its own set of coefficients.
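To make the fitting step concrete, here is a minimal sketch of what I have in mind, using numpy and synthetic data (the uniform +/-1 load range, the noise level, and the random loads are illustrative assumptions, not my real setup):

```python
import numpy as np

rng = np.random.default_rng(0)

def design_matrix(V):
    # Columns: 1, v_1..v_6, v_1^2..v_6^2  (13 terms, matching a_0..a_12)
    return np.hstack([np.ones((len(V), 1)), V, V**2])

# Synthetic stand-in for one output channel f_1; repeat per channel f_2..f_6.
a_true = rng.normal(size=13)
V_cal = rng.uniform(-1.0, 1.0, size=(100, 6))        # 100 random applied loads
f_cal = design_matrix(V_cal) @ a_true + rng.normal(scale=1e-4, size=100)

# Least-squares fit of the 13 coefficients.
a_hat, *_ = np.linalg.lstsq(design_matrix(V_cal), f_cal, rcond=None)

# Verification: 100 fresh random points, residual statistics.
V_ver = rng.uniform(-1.0, 1.0, size=(100, 6))
resid = design_matrix(V_ver) @ (a_true - a_hat)
rms_error = np.sqrt(np.mean(resid**2))
```

This is exactly the "100 random calibration points + 100 random verification points" baseline; my question is whether the calibration and verification sets can be chosen more deliberately than this.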

For the level of calibration required, the single-axis standards advise a second-order polynomial. I could apply 100 random forces and solve for the coefficients by least squares, then apply another 100 random points and analyze the error. However, applying the forces takes time, so I need to be efficient. How should I select the calibration points to minimize the calibration error? Which points should I use for verification? It seems obvious that zero and the maximal forces should be part of the calibration set, but what about in between? Is there a method for selecting those points? Note that there is a small amount of hysteresis in the measurement.

With 6 axes, the load space is vast, so there can be large gaps between calibration points. I imagine that the selection method should spread the points out so that no region of the load space is far from a calibration point.
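For example, I could pick points with a simple greedy maximin (farthest-point) heuristic rather than purely at random. This is only a space-filling sketch, not a proper optimal-design method, and the candidate pool and load range are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def greedy_maximin(candidates, k):
    """Greedily pick k points maximizing the minimum pairwise distance.

    Starts from the point nearest the origin so the zero load is included.
    """
    chosen = [int(np.argmin(np.linalg.norm(candidates, axis=1)))]
    d = np.linalg.norm(candidates - candidates[chosen[0]], axis=1)
    for _ in range(k - 1):
        nxt = int(np.argmax(d))            # farthest from everything chosen
        chosen.append(nxt)
        d = np.minimum(d, np.linalg.norm(candidates - candidates[nxt], axis=1))
    return candidates[chosen]

def min_pairwise(X):
    """Smallest distance between any two points in the set."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    return D[np.triu_indices(len(X), 1)].min()

pool = rng.uniform(-1.0, 1.0, size=(5000, 6))          # candidate load vectors
random_pts = pool[rng.choice(len(pool), 100, replace=False)]
spread_pts = greedy_maximin(pool, 100)
```

Comparing min_pairwise(spread_pts) against min_pairwise(random_pts) shows the heuristic spreads the 100 points much more evenly, but I don't know whether spreading points this way is actually the right criterion for minimizing the fitted model's error, which is the heart of my question.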

There are ISO and ASTM standards for linear elastic load cell calibration, but they do not cover multi-axis load cells.

Could you please point me toward resources for this kind of analysis? I have not found papers with an in-depth error analysis of the calibration method. There are papers using first-order polynomials (1), but there is little discussion of how to select the calibration and verification points. For context, I have to calibrate many units, so the number of calibration and verification points matters, and I need a good understanding of the resulting error.

(1) https://www.researchgate.net/public...e-torque_sensor_for_microrobotic_applications