I personally found the recent conversation so valuable that I can't stand seeing it flushed away. I hope future readers and Googlers can contribute further to the idea. Mods, please feel free to edit/move/remove this as you wish.
I might try to save some other talks here too, if no objections.
--------------------------------------
04/12 02:57 Jake: does anyone know off the top of their head in what cases bootstrapping and permutation do not give equivalent results for testing a parameter against 0? that is, you reject the null using the bootstrap, but don't using the perm test, or vice versa? i'm told that this can happen but don't really know how or when or why
04/12 03:00 bryangoodrich: When your bootstrapping is of a small sort? You don't resample the possible permutations enough? Otherwise, I don't see why it would ever be systematically different.
04/12 03:09 Dason: Well it is different - in one you ignore the group labels and in the other you don't when doing the resampling
04/12 03:10 Dason: but it's an interesting question...
04/12 03:37 Jake: yes. i've been wondering it for a long time and today a student asked me
04/12 03:39 Jake: obviously you implement them in different ways. and typically people report confidence intervals for parameter estimates based on bootstrapping, but get p-values based on permutation tests. but one wonders when, if ever, they would disagree. if they would never disagree, we could simply use bootstrapping for everything and not worry about the actual p-value
04/12 03:40 Jake: e.g., assume that if beta=0 is outside the bootstrapped 95% confidence interval, then the permutation test would inevitably show p < .05
04/12 03:40 Jake: but the fact that people make this clear distinction about when to do each suggests that there must be some pathological cases
04/12 07:35 CowboyBear: @Jake this short article talks about cases when confidence intervals and significance tests give different answers about whether two means are sig different.
04/12 07:35 CowboyBear: http://www.ecmaj.ca/content/166/1/65.short
04/12 07:35 CowboyBear: could be that the same thing applies for bootstrap CI's and permutation p values?
04/12 07:56 Jake: thanks CB i'll look at this
04/12 12:13 Jake: CB i looked over the paper. i have a good understanding of the problem the author mentions and unfortunately don't think it does too much to shed light on the perm. test vs. bootstrap issue... thanks for the lead though
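
To make the distinction Dason raised a bit more concrete for anyone reading this later, here's a rough Python sketch (my own illustration, not something from the chat) of the two resampling schemes applied to the same two-group data. The permutation test reshuffles the group labels, which imposes the null of no group difference; the bootstrap resamples observations with their labels kept, which estimates the sampling distribution of the observed difference. It also checks directly whether "zero outside the 95% bootstrap CI" and "permutation p < .05" agree for that one dataset.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two illustrative samples; the "parameter" is the difference in group means.
x = rng.normal(loc=0.5, scale=1.0, size=20)   # group A
y = rng.normal(loc=0.0, scale=1.0, size=20)   # group B
observed = x.mean() - y.mean()

n_resamples = 10_000

# Permutation test: pool the data and reshuffle the group labels,
# building the distribution of the statistic under the null.
pooled = np.concatenate([x, y])
perm_stats = np.empty(n_resamples)
for i in range(n_resamples):
    shuffled = rng.permutation(pooled)
    perm_stats[i] = shuffled[:len(x)].mean() - shuffled[len(x):].mean()
p_perm = np.mean(np.abs(perm_stats) >= abs(observed))  # two-sided p-value

# Bootstrap: resample within each group (labels stay attached),
# approximating the sampling distribution of the observed difference.
boot_stats = np.empty(n_resamples)
for i in range(n_resamples):
    bx = rng.choice(x, size=len(x), replace=True)
    by = rng.choice(y, size=len(y), replace=True)
    boot_stats[i] = bx.mean() - by.mean()
ci_low, ci_high = np.percentile(boot_stats, [2.5, 97.5])  # percentile 95% CI

print(f"observed difference: {observed:.3f}")
print(f"permutation p-value: {p_perm:.4f}")
print(f"bootstrap 95% CI: ({ci_low:.3f}, {ci_high:.3f})")
print("reject via permutation:", p_perm < 0.05)
print("zero outside bootstrap CI:", not (ci_low <= 0 <= ci_high))
```

Wrapping this in a loop over many simulated datasets would show how often the two criteria disagree. Intuitively there is room for divergence because the permutation distribution is constructed under the null (labels treated as exchangeable) while the bootstrap distribution is centred on the observed estimate, so I'd guess skewed data, unequal group variances, or very small samples are where they come apart, but that's speculation rather than anything settled in the chat above.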