# n = 29, p values for three tests: .061, .09 and .13, but all trending in predicted direction: How to report? (SPSS images included)

#### Midori

##### New Member
So I ran three paired-samples t-tests to see if people did a certain action more after hearing a target stimulus. My prediction is that they would do the action more. All tests were one-tailed, as I was only looking at one direction: did it increase after x? n = 29 for each test.
The p values were: first test, p = .061 (one-tailed); second test, p = .098 (one-tailed); third test, p = .13 (one-tailed). But in every result the mean was higher in the group I predicted would be higher after the stimulus.
So, when I report this, what can I say? Should I say that while not 'significant' the results were in the 'expected direction', and that repeating the test with a larger sample is needed? In other words, that there is an interesting trend that deserves more investigation?
Thanks so much!

#### Attachments

(Three SPSS output screenshots attached.)
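For anyone wanting to reproduce this kind of analysis outside SPSS, a one-tailed paired-samples t-test can be sketched with SciPy. The data below are simulated for illustration only (the OP's actual data are in the attached SPSS output); `alternative='greater'` is the SciPy option for a one-tailed test in the predicted direction:

```python
# One-tailed paired-samples t-test with SciPy (illustrative,
# simulated data, not the OP's actual measurements).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
before = rng.normal(10, 2, size=29)           # n = 29, as in the OP's study
after = before + rng.normal(0.5, 2, size=29)  # simulate a small increase

# alternative='greater' tests H1: mean(after) > mean(before)
t_stat, p_one_tailed = stats.ttest_rel(after, before, alternative='greater')
print(f"t = {t_stat:.3f}, one-tailed p = {p_one_tailed:.3f}")
```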

#### fed2

##### Active Member
If all of these tests are giving evidence for/against the same hypothesis, it seems like a better test would pool the data from all three 'tests'.

#### Midori

##### New Member
So the test that says "All" is just that. The ones that say "beat" and "rep" are where I break down the participants' movements into smaller parts from the "All" one. So the p value of .061 is the overall picture; the other two are the parts. But all three have larger means where I'd expect them to be.


#### Karabiner

##### TS Contributor
As a reviewer, I would not accept the one-tailed results, but would look at the two-tailed figures instead.
With p > 0.1 there is not enough empirical evidence to support your research hypothesis.

With kind regards

Karabiner
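For a symmetric statistic like the paired t, the two-tailed figures Karabiner refers to can be obtained from the reported one-tailed p values simply by doubling them (valid when the observed effect is in the predicted direction):

```python
# Convert one-tailed p values to two-tailed ones by doubling
# (valid for symmetric statistics such as t, when the observed
# effect is in the predicted direction).
one_tailed = [0.061, 0.098, 0.13]  # p values reported in the thread
two_tailed = [min(1.0, 2 * p) for p in one_tailed]
print(two_tailed)  # [0.122, 0.196, 0.26]
```

All three two-tailed values exceed 0.1, which is why the reviewer's point stands.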

#### Midori

##### New Member
> As a reviewer, I would not accept the one-tailed results, but would look at the two-tailed figures instead.
> With p > 0.1 there is not enough empirical evidence to support your research hypothesis.
>
> With kind regards
>
> Karabiner
Oh, so one-tailed tests are not looked upon favorably in research? That's a shame. Why are one-tailed tests not as accepted? Thanks for your thoughts.

#### Karabiner

##### TS Contributor
It is good science to make it possible to see if an effect is in the opposite direction.

#### Midori

##### New Member
> It is good science to make it possible to see if an effect is in the opposite direction.
If my question, though, is only in one direction, is that OK? My research question only looks at whether the participants do a certain action more after the stimulus. Based on the research, I know they will not do it less. It was only in one direction. Thanks!

#### katxt

##### Active Member
I agree with Karabiner. Usually one-tailed tests are used for situations like monitoring or compliance, where you may have to prove, for instance, that the mean DDT level in soil is less than 0.2 mg/kg.
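katxt's compliance scenario can be sketched as a one-sample, one-sided t-test. The soil measurements below are made up for illustration; the 0.2 mg/kg limit is from the post above:

```python
# One-sided one-sample t-test for a compliance limit:
# H1: mean DDT level < 0.2 mg/kg (illustrative data).
import numpy as np
from scipy import stats

ddt = np.array([0.12, 0.15, 0.11, 0.18, 0.14, 0.16, 0.13, 0.17])  # mg/kg
t_stat, p = stats.ttest_1samp(ddt, popmean=0.2, alternative='less')
print(f"t = {t_stat:.3f}, one-sided p = {p:.4f}")  # small p => mean below limit
```

Here the direction is fixed in advance by the regulation, which is what makes the one-sided framing natural, unlike an exploratory research question.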

#### fed2

##### Active Member
One- vs. two-sided testing is a long-running debate. Fortunately there's a rule for that: the International Conference on Harmonization (ICH) E9 guideline recommends using a significance level of α/2 for one-sided tests in regulatory settings.
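The practical effect of the ICH E9 convention is that a one-sided test at α/2 = 0.025 rejects exactly when a two-sided test at α = 0.05 would reject with the effect in the favored direction (assuming a symmetric statistic; the p values below are hypothetical):

```python
# ICH E9 convention: one-sided test at alpha/2 is equivalent to a
# two-sided test at alpha (symmetric statistic, effect in the
# favored direction). The p values are hypothetical examples.
alpha = 0.05
for p_one in [0.061, 0.024, 0.0049]:
    sig_one_sided = p_one < alpha / 2   # ICH E9 one-sided criterion
    sig_two_sided = 2 * p_one < alpha   # equivalent two-sided criterion
    assert sig_one_sided == sig_two_sided
    print(p_one, sig_one_sided)
```

Under this convention, the OP's one-tailed p = .061 would not come close to the .025 threshold.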

#### katxt

##### Active Member
That's interesting. The standard statistical workhorses like ANOVA, regression, and chi-square are naturally two-sided in their effects.