PERMANOVA in R adonis function

bugman

Super Moderator
#3
A pleasure.

Note that the overall test uses a pseudo-F statistic, but your pairwise tests use pseudo-t (the square root of the F-ratio).

:cool:
 

bugman

Super Moderator
#4
Briefly (I'll get back again soon):

you can actually run those pairwise tests as a priori contrasts - even if your overall test is non-significant - no need to run two PERMANOVAs.

Is time a fixed or random effect?

I'll need to get back to you on the DF a little later...
 

bugman

Super Moderator
#5
I spoke with my professor today, and although he agrees with my assumption that the plots were "chosen" in a way, he still insists that there was some random component in choosing the sites out of the entire 390 ha park boundary. So he wants me to go with the idea that sites (treatment areas and controls) are random, and years are fixed effects.
Yep, I'd agree with that!

As for your DFs: can you post the output? That might give me a better idea of what's going on.
 

bugman

Super Moderator
#6
Hi

It looks like the problem is that adonis is not recognising year as a factor but is treating it as a continuous variable.

Either code these as factors (using as.factor) or try recoding year as "before", "after" and "recovery", for example. This is also likely to be upsetting your interaction df.
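A minimal sketch of the fix, assuming the vegan package; the object names (comm, env) and the balanced made-up design are placeholders, not from this thread:

```r
library(vegan)

## Simulated stand-in data: 24 samples x 10 "species", 4 sites x 3 years,
## 2 replicates per cell (all invented for illustration).
set.seed(42)
comm <- matrix(rpois(24 * 10, lambda = 5), nrow = 24)
env  <- data.frame(site = gl(4, 6), year = rep(c(2004, 2005, 2006), 8))

env$year <- as.factor(env$year)  # stop adonis treating year as continuous
# str(env$year) should now report a Factor, not int/num

## With both terms as factors, the interaction df is (4-1)*(3-1) = 6
adonis(comm ~ site * year, data = env)$aov.tab
```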

Give this a go and let us know how you get on.

:tup:
 
#7
A pleasure.

Note that the overall test uses a pseudo-F statistic, but your pairwise tests use pseudo-t (the square root of the F-ratio).

:cool:
Hi Folks,

I have a question somewhat (perhaps) related to the quote above and thought it might be useful to post it in the same thread (Bugman also seems to be the Man! :)).

Does this somehow imply that when conducting multiple comparisons with adonis after a global test, the p-values of the pairwise comparisons should be used as they are (i.e., without correction)?

I recently ran a global test with adonis (all treatments) and found significant differences (p-value < 0.001) among the treatments. As would be expected, pairwise comparisons of the treatments showed that some treatments are significantly different from each other, while others are not. What puzzled me a bit (and still has me puzzled), however, is that none of the p-values of the pairwise comparisons was as small as the p-value obtained in the global test. Moreover, if I correct the p-values from the pairwise comparisons (e.g., Bonferroni), none of them are significant anymore, which does not make much sense to me. Hence my question of whether p-values from multiple comparisons using adonis should be used as they are, and whether this is somehow implied in Bugman's quote above.

Thank you very much in advance for your help!

Best regards,

A
 

bugman

Super Moderator
#8
Hi Folks,
Does this somehow imply that when conducting multiple comparisons with adonis after a global test, the p-values of the pairwise comparisons should be used as they are (i.e., without correction)?

The recommendation is to look at the uncorrected p-values and make an informed decision. You can use the triangular matrix of distance measures (distance to centroid) between the groups to help you decide whether the p-values make sense. Unfortunately, this is a limitation in adonis and indeed PERMANOVA+, but ultimately it comes down to you and your understanding of the data. If you have a lot of comparisons you might want to consider a correction, but for a smaller number of treatments I usually just use the uncorrected p-values (after looking at the distance measures).
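One hypothetical way to run those pairwise tests (the helper pairwise_adonis and the names comm and grp are my own, not an adonis feature): subset the data to each pair of group levels, rerun adonis on the subset, and collect the raw p-values:

```r
library(vegan)

## Sketch: pairwise adonis tests over all pairs of levels of a grouping
## factor. Returns the raw (uncorrected) p-values, named by pair.
pairwise_adonis <- function(comm, grp, nperm = 999) {
  prs <- combn(levels(grp), 2, simplify = FALSE)
  p <- vapply(prs, function(pr) {
    keep <- grp %in% pr
    fit  <- adonis(comm[keep, , drop = FALSE] ~ g,
                   data = data.frame(g = droplevels(grp[keep])),
                   permutations = nperm)
    fit$aov.tab$`Pr(>F)`[1]   # p-value of the group term
  }, numeric(1))
  setNames(p, vapply(prs, paste, character(1), collapse = " vs "))
}

## Usage (comm = community matrix, env$grp = treatment factor):
# p.raw <- pairwise_adonis(comm, env$grp)
# p.raw                          # inspect uncorrected, as recommended above
# p.adjust(p.raw, "bonferroni")  # optional correction for many comparisons
```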

There are discussions on the R mailing lists about this and its limitations:


http://permalink.gmane.org/gmane.comp.lang.r.ecology/714

and

http://r-forge.r-project.org/forum/forum.php?thread_id=1009&forum_id=194



What puzzled me a bit (and still has me puzzled), however, is that none of the p-values of the pairwise comparisons was as small as the p-value obtained in the global test.

A

Well spotted. The reason is that the permuted p-value is calculated as the proportion of permuted F-values that are greater than or equal to your observed F.


P = [(# of permuted F ≥ observed F) + 1] / [(total # of permutations) + 1]

So the smallest attainable p-value grows as the number of unique permutations shrinks.

With your overall test you may have 1000 unique permutations, giving you a p-value as small as 0.001 (1/1000). However, because the pairwise comparisons are essentially only looking at a subset of the total, the number of unique permutations decreases and the smallest possible p-value increases.
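A toy illustration of that formula in base R (the numbers are made up, not from any real test):

```r
## Permutation p-value: (count of permuted F >= observed F, plus 1)
## divided by (number of permutations, plus 1). The smallest attainable
## value is therefore 1 / (n.perm + 1).
perm_p <- function(F.obs, F.perm) {
  (sum(F.perm >= F.obs) + 1) / (length(F.perm) + 1)
}

set.seed(1)
F.perm <- rf(999, df1 = 2, df2 = 27)  # 999 simulated "permuted" F-ratios
perm_p(10, F.perm)    # extreme observed F: p bottoms out near 1/1000
perm_p(0.5, F.perm)   # unremarkable observed F: p is much larger
```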
 
#9
Thanks for your reply Bugman.

I've been doing a bit of research, as well as trying different p-value adjustments, and came across Benjamini and Hochberg's 1995 paper, whose method is implemented in R {p.adjust(x, method = "BH" or "fdr")}. Their adjustment method takes a different approach and controls the "false discovery rate" instead of the "family-wise error rate". The result is an increase in power vs. adjustments that control the latter.
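A minimal sketch with made-up p-values, using base R's p.adjust (these method names are the ones p.adjust actually accepts):

```r
p.raw <- c(0.001, 0.008, 0.020, 0.040, 0.300)  # hypothetical pairwise p-values

p.adjust(p.raw, method = "bonferroni")  # controls the family-wise error rate
p.adjust(p.raw, method = "BH")          # controls the FDR; "fdr" is an alias
```

The BH-adjusted values are typically much less conservative than the Bonferroni ones, which is the gain in power mentioned above.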

The paper is an easy read and gives excellent examples as to when it is desirable to correct for FDR vs FWER. The pdf file failed to upload, so I've copied the web address below (hope it works!).

http://www.math.tau.ac.il/~ybenja/MyPapers/benjamini_hochberg1995.pdf

All the best,

A
 
#10
Hi Alfredman, I have the same situation as you, and I want to use a correction for the p-values of the pairwise comparisons using adonis, but I don't know how I can include the option p.adjust(x, method = "BH" or "fdr") that you recommended. I really appreciate any help.

Thank you so much
 