Difference of Means

I am looking at measures of the same trees grouped to different sizes and shapes of plots. Can someone please check my effort at statistical significance of the difference in means? It's the same metric taken on two different collections of plots.

[Attachment: Screen Shot 2019-02-28 at 11.09.51 AM.png]

(And yeah, oops, one "h" in threshold!)
You are conducting a two-sample t-test with unequal variances (Welch's t-test); is that what you want to do?

This is only valid if there is one control and one treatment group. If you have multiple treatment groups, then an ANOVA would be more appropriate.
The treatment is the level of spatial aggregation, and my interest is pairwise difference from a particular control size. I really just wanted to check that I have the formula and its use right, and that 1.28 for very large N's seems plausible.
Are you conducting multiple pairwise tests? If so, you are exposing yourself to more alpha risk than just 0.10. If you are comfortable with 0.10 alpha risk overall, then you will need to adjust each individual test's alpha so that the entire project's risk stays at 0.10.

Let's say you have 1 control and 5 treatment groups. If you run 5 comparison tests between the control and the treatments, each at an alpha of 0.10, then the total alpha risk is 1 - (1 - 0.10)^5 = 0.40951. If you instead used an alpha of about 0.021 for each of the 5 tests, then your total alpha risk would be 0.10.
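The arithmetic above is easy to check with a short script; a minimal sketch of the family-wise risk calculation and the matching per-test adjustment (this is the Šidák correction, which is what the 0.021 figure corresponds to):

```python
# Family-wise alpha risk from running k independent tests, and the
# Sidak-adjusted per-test alpha that holds the family-wise risk at a target.

def familywise_alpha(alpha_per_test, k):
    """Probability of at least one false positive across k independent tests."""
    return 1 - (1 - alpha_per_test) ** k

def sidak_alpha(alpha_family, k):
    """Per-test alpha so that k independent tests have family-wise risk alpha_family."""
    return 1 - (1 - alpha_family) ** (1 / k)

print(familywise_alpha(0.10, 5))  # ~0.40951, matching the calculation above
print(sidak_alpha(0.10, 5))       # ~0.0209, i.e. the ~0.021 mentioned above
```

The simpler Bonferroni version (alpha / k = 0.02) gives nearly the same per-test threshold here.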

Also, I would not make the degrees-of-freedom assumption unless the Satterthwaite degrees of freedom are more than 100. How many trees in each group? If over 100 each, then it's an okay assumption.

Sorry to add more items to think about, but I guess that's the nature of this board.
No, that's good, thanks. In some instances I have 163 df; in another context, approximately 40,000 to 50,000 df.

As for the alpha risk, you'd have to see the context. I truly only care about the pairwise comparison. I only want to know whether, say, the mean at 900 m^2 is significantly different from the mean at 400 m^2, in isolation, without caring about the other cases at the moment.

My advisor now says I have t wrong for 2-tails, and it should be 1.64. Should be an easy fix.
That 1.64 makes sense given that you are conducting a two-tailed t-test. Essentially, by using it you are splitting your alpha risk between m1 > m2 and m1 < m2; the 1.64 accounts for both directions, so in essence you are checking for both.
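The 1.28 vs 1.64 distinction is just the one-tailed vs two-tailed critical value at alpha = 0.10, using the normal approximation (reasonable given the large df mentioned above); a quick check with the Python standard library:

```python
from statistics import NormalDist

z = NormalDist()  # standard normal distribution

alpha = 0.10
one_tailed = z.inv_cdf(1 - alpha)      # all of alpha in one tail
two_tailed = z.inv_cdf(1 - alpha / 2)  # alpha split between the two tails

print(round(one_tailed, 2))   # 1.28
print(round(two_tailed, 2))   # 1.64
```

For the 163-df case the exact t critical values would be slightly larger, but at 40,000+ df they match the normal values to several decimal places.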