When sigma actually isn't known, using the t distribution is the mathematically appropriate thing to do, because the t-score that you calculate, (x-bar - mu_0)/(s/sqrt(n)), will have a t distribution if the null hypothesis is true. If you pretend that s is actually sigma and use a Z distribution, then you're doing it wrong. I'm not sure what you're referring to when you say "I think mine answers that t is not better".
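In Python the two calculations look like this (a sketch: the sample values, seed, and mu_0 are mine for illustration, not from the thread):

```python
import numpy as np
from scipy import stats

# Hypothetical sample: n = 30 draws with sigma = 50 (illustrative values only)
rng = np.random.default_rng(0)
x = rng.normal(loc=55.0, scale=50.0, size=30)
mu0 = 50.0  # null-hypothesis mean

n = len(x)
s = x.std(ddof=1)  # sample SD with Bessel's correction
t_stat = (x.mean() - mu0) / (s / np.sqrt(n))

# Right: compare against t with n - 1 degrees of freedom.
p_t = stats.t.sf(t_stat, df=n - 1)
# Wrong when sigma is unknown: treat s as sigma and use Z.
p_z = stats.norm.sf(t_stat)
print(t_stat, p_t, p_z)
```

For a positive t-score the t-based p-value comes out larger, since the t distribution's tails are heavier than the normal's.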

If you'll look above, you will see my contribution: a table.

I calculated t and Z using the same x-bar, mu_0, and s, with n = 30.

Both s and sigma started at 50.

At t = 2.19089, P(t < 2.19089) = .9817

At Z = 2.19089, P(Z < 2.19089) = .986

The standard error of s at n = 30 is about .129 (as a fraction of sigma).

I then increased s to 1.129 * 50, one standard error north, simulating sampling error in s.

At t = 1.940558, P(t < 1.940558) = .969

At Z = 1.940558, P(Z < 1.940558) = .974

.9817/.969 = 1.013

.986/.974 = 1.012

Then I increased s to +2 standard errors, then to +3.

As s varied, P(t < x) and P(Z < x) varied little.

Both have about the same sensitivity to an error of +/- delta in s.
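The numbers above can be reproduced with scipy (assuming df = n - 1 = 29, which the post implies but doesn't state):

```python
from scipy import stats

df = 29  # n - 1, assuming n = 30

# t-scores before and after inflating s by one standard error (x 1.129)
t0 = 2.19089
t1 = t0 / 1.129  # = 1.940558...

p_t0, p_t1 = stats.t.cdf(t0, df), stats.t.cdf(t1, df)
p_z0, p_z1 = stats.norm.cdf(t0), stats.norm.cdf(t1)

print(p_t0, p_t1, p_t0 / p_t1)  # ~ .9817, .969, 1.013
print(p_z0, p_z1, p_z0 / p_z1)  # ~ .986, .974, 1.012
```

Inflating s shrinks the t-score, so both tail probabilities drop by roughly the same factor, which is the point being made.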

What's going on is: when t = Z, P(t < t_crit) ~ P(Z < Z_crit).

Close, but not equal, at least at n = 30.

t matters for n < 30, or so, because s goes wonky below n ~ 30, even with Bessel's correction.

Monte Carlo said: