Engineering research tests: post hoc comparisons and pairing

Hello - I have several tests to do but my stats are a bit rusty... They are all based on the deflection of different beam designs:
Test 1
Beam design A (n=4) is loaded and deflection measured
Beam design A (n=4) is loaded, but with a changed contact, and deflection measured

Test 2
Beam design B (n=4) is loaded and deflection measured
Beam design B (n=4) is loaded, but with a changed contact, and deflection measured

So what I think I need to do is a paired t-test on the Test 1 data and the same on the Test 2 data. Q) Is it appropriate to call it paired because the *design* of the beam is the same, or is it only paired if I tested the *same physical beam* in both conditions? Or is pairing only really relevant for biological or human experiments?
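For concreteness, here's roughly what I think the two versions look like in Python/SciPy - the deflection numbers are completely made up, just to show the two calls side by side:

```python
import numpy as np
from scipy import stats

# Hypothetical deflections (mm) for beam design A, n=4, before and
# after the contact change. If the SAME four physical beams were
# measured in both conditions, the data are paired:
before = np.array([2.10, 2.25, 1.98, 2.15])
after  = np.array([2.30, 2.41, 2.12, 2.33])

t_paired, p_paired = stats.ttest_rel(before, after)

# If DIFFERENT beams (same design) were used in the two conditions,
# an independent-samples test applies instead. Welch's version
# (equal_var=False) is the safer default with only n=4 per group:
t_indep, p_indep = stats.ttest_ind(before, after, equal_var=False)

print(f"paired:      t={t_paired:.3f}, p={p_paired:.4f}")
print(f"independent: t={t_indep:.3f}, p={p_indep:.4f}")
```

With these invented numbers the paired test is far more sensitive, because pairing removes the beam-to-beam variation - which I guess is exactly why the "same beam or just same design?" question matters.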

Also, I would sometimes like to compare the Test 1 and Test 2 results against each other. In that case I am guessing (please tell me I'm right/wrong) that I test for homogeneity of variance, run a one-way ANOVA, then run a Tukey post-hoc test? (Why not Fisher's LSD or Bonferroni?) Or run a Tamhane T2 if the variances are not homogeneous?
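Is this the kind of pipeline I mean? A sketch in SciPy (needs SciPy >= 1.8 for `tukey_hsd`; all four data sets are invented placeholders for the A/A-changed/B/B-changed conditions):

```python
import numpy as np
from scipy import stats

# Hypothetical deflections (mm) for the four conditions:
a_orig   = np.array([2.10, 2.25, 1.98, 2.15])
a_change = np.array([2.30, 2.41, 2.12, 2.33])
b_orig   = np.array([1.80, 1.95, 1.72, 1.88])
b_change = np.array([2.05, 2.18, 1.90, 2.09])
groups = [a_orig, a_change, b_orig, b_change]

# 1) Homogeneity of variance (Levene's test is fairly robust to
#    non-normality, which matters with n=4 per group):
lev_stat, lev_p = stats.levene(*groups)

# 2) One-way ANOVA across all four conditions:
f_stat, anova_p = stats.f_oneway(*groups)

# 3) Tukey HSD for all pairwise comparisons, which controls the
#    family-wise error rate across the six pairs:
tukey = stats.tukey_hsd(*groups)

print(f"Levene p={lev_p:.3f}, ANOVA F={f_stat:.2f}, p={anova_p:.4f}")
print(tukey)
```

(If Levene's test flagged unequal variances, I assume this is where Tamhane T2 or a Games-Howell test would replace Tukey - SciPy doesn't ship those, so that part would need another package.)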

The things I don't get about using post hoc tests:
a) Which one should I use?
b) Sometimes I only want to compare 2 means and sometimes more (like in different chapters of a report), so how do I decide between ANOVA and a t-test?
c) When should I use a paired test?
d) Can I just assume equal variances because it would be illogical for them to be unequal?
e) If I add to these experiments at a later date, do I need to redo ALL the stats? For example, if I just designed Beam C and wanted to compare its results to Beam A, would that be a t-test?
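For (e), is the idea that I can just run a fresh two-sample test for the new comparison and correct for however many comparisons I end up reporting together? My guess at what that looks like (Beam C data invented, and the Bonferroni factor m=3 is just an example count):

```python
import numpy as np
from scipy import stats

# Existing Beam A data plus hypothetical later-addition Beam C data (mm):
beam_a = np.array([2.10, 2.25, 1.98, 2.15])
beam_c = np.array([2.55, 2.70, 2.48, 2.62])

# Welch's t-test: doesn't assume equal variances, a sensible default
# when each group only has n=4:
t_stat, p_raw = stats.ttest_ind(beam_a, beam_c, equal_var=False)

# If this is one of, say, m=3 comparisons reported in the same chapter,
# a simple Bonferroni correction caps the family-wise error rate:
m = 3
p_bonf = min(p_raw * m, 1.0)

print(f"t={t_stat:.3f}, raw p={p_raw:.4f}, Bonferroni p={p_bonf:.4f}")
```

i.e. no need to rerun the old ANOVAs, just adjust the new p-value for the family of comparisons it belongs to? Or have I got that backwards?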

Any snippets of wisdom would be cool :wave::confused: