Z test or T test?


New Member
Looking at a spider diagram, I assumed I should use a Z-test for this hypothesis test, but the solutions say to use a t-test, with an F-test first to check whether the variances are the same. Can someone please help me understand? :shakehead:

A survey was conducted on attitudes towards speed cameras. Random samples were
selected from two groups of people. People in Group A all had a valid driving licence
at the time of the study, whereas people from Group B had never held a driving licence.
Attitudes towards speed cameras were measured on a scale from 0 to 10, with 10 being
the most positive attitude. The results of the survey were as follows:
```
      Group A   Group B
x̄      4.111     6.778
s²     2.361     7.944
```

(a) Using an α-value of 0.05, test the hypothesis that people who have a driving licence have a different attitude towards speed cameras than people who do not have a driving licence. You may assume that the population data are normally distributed.

(b) Test the hypothesis that people who have a driving licence have a more negative
attitude towards speed cameras than people who do not have a driving licence (use
α = 0.05). Is your procedure different from the procedure used in part (a)? If so,
explain the difference.


TS Contributor
What you want to focus on is:
1) The two samples contain different people; in other words, you have independent samples. (You would need a different test for dependent samples, which often arise with before/after measurements on the same group of people, e.g. before and after treatment in order to measure the treatment effect.)
2) The samples are small, so you cannot call upon the central limit theorem to justify asymptotic normality of the estimator and then use a z-test.
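To see why the small-sample point matters in practice, compare the cutoff values: at small degrees of freedom the t critical value is noticeably larger than the z critical value for the same α. A quick check in Python (the 16 degrees of freedom here is just an illustrative value):

```python
from scipy import stats

alpha = 0.05
# Two-sided critical value if we (wrongly) used the z-test
z_crit = stats.norm.ppf(1 - alpha / 2)
# Two-sided critical value for a t-test with 16 df (illustrative choice)
t_crit = stats.t.ppf(1 - alpha / 2, df=16)

print(f"z critical: {z_crit:.3f}")  # ≈ 1.960
print(f"t critical: {t_crit:.3f}")  # ≈ 2.120
```

So with small samples the z-test would reject too easily; the t distribution's heavier tails account for the extra uncertainty from estimating the variance.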

If this were a one-sample problem, rather than a comparison between two samples, you could assume normality of the population, so the sample mean would be normally distributed. Dividing by the estimated standard deviation amounts to dividing by the square root of an independent chi-squared variable divided by its degrees of freedom, and a standard normal divided by such a quantity is t-distributed. Hence you would use a t-test.
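That construction of the t distribution can be checked by simulation; a minimal sketch (df = 8 is an arbitrary choice):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
df, n = 8, 200_000

z = rng.standard_normal(n)        # standard normal numerator
v = rng.chisquare(df, n)          # independent chi-squared with df degrees of freedom
t_samples = z / np.sqrt(v / df)   # Z / sqrt(V/df) should follow a t distribution

# Compare an empirical tail quantile with the theoretical t quantile
emp_q = np.quantile(t_samples, 0.975)
theo_q = stats.t.ppf(0.975, df)
print(f"empirical 97.5% quantile: {emp_q:.3f}, theoretical: {theo_q:.3f}")
```

The two quantiles should agree closely, which is exactly why the t-test uses the t distribution instead of the normal when the variance is estimated.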

Since we are now comparing two groups, we apply the same logic, again assuming normality of the populations, but this logic additionally requires the two populations to have equal variance. Therefore we assume equal variances and test that assumption with an F-test (the ratio of the two sample variances). If the null of equal variances is not rejected, we go on and conduct the independent samples t-test with a pooled variance estimate, which is the appropriate procedure for small samples. If you google "independent samples t-test" you can probably find any further information you need.
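For this particular problem, that procedure might look like the sketch below. Note one loud assumption: the thread doesn't show the sample sizes, so n = 9 per group is a guess here (chosen because the reported means are consistent with ninths); swap in the actual n from the question.

```python
from math import sqrt
from scipy import stats

# Summary statistics from the question
xbar_a, var_a = 4.111, 2.361
xbar_b, var_b = 6.778, 7.944
# ASSUMPTION: sample sizes are not shown in the thread; n = 9 per group
n_a = n_b = 9

# F-test for equal variances: larger sample variance in the numerator
f_stat = max(var_a, var_b) / min(var_a, var_b)
p_f = 2 * stats.f.sf(f_stat, n_b - 1, n_a - 1)  # two-sided p-value

# Pooled (equal-variance) two-sample t-test, part (a): two-sided
t_stat, p_t = stats.ttest_ind_from_stats(
    xbar_a, sqrt(var_a), n_a,
    xbar_b, sqrt(var_b), n_b,
    equal_var=True,
)

print(f"F = {f_stat:.3f}, p = {p_f:.3f}")  # F ≈ 3.365, not significant at 0.05
print(f"t = {t_stat:.3f}, p = {p_t:.3f}")  # t ≈ -2.492, significant at 0.05
# For part (b), the one-sided test: since the alternative is A more negative
# than B (i.e. mean_A < mean_B) and t < 0, the one-sided p-value is p_t / 2.
```

Under this n = 9 assumption the F-test does not reject equal variances, so the pooled t-test is justified; the two-sided test in part (a) rejects at α = 0.05, and part (b) differs only in being one-sided.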