Recent content by ondansetron

  1. High statistical significance with low R squared coefficient

    Sure, if you want a real experiment versus a simulation, either can be done. Fail to randomize people into groups (in a way that introduces bias away from the null), or introduce some other form of bias that typically occurs in studies, and you can get a p-value less than alpha, but the null is...
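
    A minimal simulation sketch of that point (all names and numbers below are invented): group assignment depends on a prognostic covariate, the treatment itself does nothing, yet the two-sample test rejects far more often than alpha.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    n_sims, n = 1000, 100
    rejections = 0
    for _ in range(n_sims):
        health = rng.normal(0, 1, n)                 # prognostic covariate
        # Non-random assignment: healthier people are more likely to be "treated".
        treated = (health + rng.normal(0, 1, n)) > 0
        outcome = health + rng.normal(0, 1, n)       # the treatment itself does NOTHING
        p = stats.ttest_ind(outcome[treated], outcome[~treated]).pvalue
        rejections += p < 0.05

    print(f"Rejection rate with a true null but biased assignment: {rejections / n_sims:.2f}")
    # Far above the nominal 0.05 even though there is no treatment effect.
    ```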
  2. High statistical significance with low R squared coefficient

    This is what I was saying is incorrect. The p-value doesn't tell you anything regarding the probability the null hypothesis is true or false. I can design an experiment where the null hypothesis is true but where the p-value is very low. This illustrates why it's incorrect to say the p-value...
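
    A quick sketch of why a low p-value doesn't translate into P(null is false) (simulated data, arbitrary settings): with the null true by construction, p-values are roughly uniform, so small ones still turn up by chance.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)

    # Both groups come from the SAME distribution: the null is true by construction.
    pvals = np.array([
        stats.ttest_ind(rng.normal(0, 1, 30), rng.normal(0, 1, 30)).pvalue
        for _ in range(5000)
    ])

    print(f"Share of p < 0.05 under a true null: {np.mean(pvals < 0.05):.3f}")  # close to 0.05
    print(f"Smallest p-value seen:               {pvals.min():.4f}")            # can be tiny
    # A small p-value can simply be one of those chance draws; it is not P(null is false).
    ```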
  3. High statistical significance with low R squared coefficient

    The bold part isn't accurate. A small p-value for the case you provided would indicate that it's improbable to see a coefficient at least as contradictory to the null hypothesis as the one observed, IF what we saw is entirely due to chance (null is true). In other words, p-values don't tell you any sort of...
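
    A rough illustration of that conditional reading (the regression numbers below are hypothetical): the p-value is computed assuming the null coefficient is zero, as the chance of a t-statistic at least as extreme as the one observed.

    ```python
    from scipy import stats

    # Hypothetical regression output: estimated slope, its standard error, residual df.
    beta_hat, se, df_resid = 0.40, 0.15, 48

    t_obs = beta_hat / se                          # test statistic under H0: beta = 0
    p_two_sided = 2 * stats.t.sf(abs(t_obs), df_resid)

    # Read as: IF beta were truly 0, a |t| at least this large would occur with
    # probability p_two_sided -- it is not the probability that H0 is true.
    print(f"t = {t_obs:.2f}, two-sided p = {p_two_sided:.4f}")
    ```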
  4. Significant 2x2 interaction, but non-significant simple main effects - how to interpret?

    As a general rule, test higher-order terms first, before testing the nested terms, and don't test nested terms if the higher-order term is significant. For example, A*B should be tested before either A or B. If A*B is significant, then, by definition, A and B are statistically useful variables...
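
    A small sketch of that testing order in a 2x2 setting (simulated data; statsmodels' formula interface is assumed to be available):

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(2)
    n = 200
    d = pd.DataFrame({"A": rng.integers(0, 2, n), "B": rng.integers(0, 2, n)})
    # Simulated truth: the response moves only through the A:B combination.
    d["y"] = 1.0 * d["A"] * d["B"] + rng.normal(0, 1, n)

    # Fit the full factorial model and look at the highest-order term (A:B) first.
    fit = smf.ols("y ~ A * B", data=d).fit()
    print(fit.summary().tables[1])
    # If A:B is significant, keep A and B in the model regardless of their own p-values.
    ```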
  5. Confidence Interval in Statistics test

    So, in the traditional sense of frequentist statistics, I think you can say "95% confident" because they didn't mean it as a probability statement on the interval. They meant it to refer to the methodology and its long-run success rate if used properly. Almost a shorthand way of saying "this...
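
    A short simulation of that long-run reading (numbers are arbitrary): build the 95% interval repeatedly from fresh samples and check how often the procedure captures the true mean.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    true_mu, n, n_sims = 5.0, 25, 10_000

    covered = 0
    for _ in range(n_sims):
        x = rng.normal(true_mu, 2.0, n)
        half = stats.t.ppf(0.975, n - 1) * x.std(ddof=1) / np.sqrt(n)
        covered += (x.mean() - half) <= true_mu <= (x.mean() + half)

    print(f"Long-run coverage of the 95% CI procedure: {covered / n_sims:.3f}")
    # "95% confident" describes this success rate of the method, not any one interval.
    ```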
  6. How serious are violations of regression assumptions

    One of the big problems that I've seen is the misunderstanding that these things are "just calculations" and boil down to a black-and-white matter (not saying this is you, noetsi, but rather the people pushing the program). The violation may affect one conclusion in a material way and another in an...
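
    One concrete sketch of how a violation can hit one conclusion and barely touch another (simulated heteroskedastic data; the robust-SE comparison is my choice of illustration, not from the thread):

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(4)
    n = 500
    x = rng.uniform(0, 10, n)
    # Error spread grows with x: the constant-variance assumption is violated.
    y = 2.0 + 0.5 * x + rng.normal(0, 0.2 + 0.5 * x, n)

    fit = sm.OLS(y, sm.add_constant(x)).fit()
    robust = fit.get_robustcov_results(cov_type="HC3")

    print(f"Slope estimate:          {fit.params[1]:.3f}  (still near the true 0.5)")
    print(f"Naive SE vs. robust SE:  {fit.bse[1]:.4f} vs. {robust.bse[1]:.4f}")
    # The point estimate barely moves; the standard errors (and any p-values) can.
    ```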
  7. How serious are violations of regression assumptions

    Great example of how "big data" and "analytics" are watered-down statistics... I think you can see how violated assumptions screw with estimates and conclusions when you've worked on something that changed dramatically when the assumption violations were remedied or a more appropriate method was...
  8. Ratio of sizes data

    This was my thought after reading your first post. I think it would be reasonable to use ANCOVA to model Golgi size (volume, area, or however you intended) as a function of genotype after accounting for the covariate cell area/volume (again, whichever size measurement you planned to use). Is...
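
    A rough ANCOVA sketch along those lines (the data and column names golgi_size, genotype, and cell_size are placeholders, not the poster's actual measurements):

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(5)
    n = 90
    d = pd.DataFrame({
        "genotype": rng.choice(["wt", "mutant"], n),
        "cell_size": rng.normal(100, 15, n),          # the covariate (area or volume)
    })
    # Simulated Golgi size: scales with cell size plus a genotype shift.
    d["golgi_size"] = (0.05 * d["cell_size"]
                       + np.where(d["genotype"] == "mutant", 1.0, 0.0)
                       + rng.normal(0, 0.5, n))

    # ANCOVA: genotype effect on Golgi size after adjusting for cell size.
    fit = smf.ols("golgi_size ~ C(genotype) + cell_size", data=d).fit()
    print(fit.summary().tables[1])
    ```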
  9. Logistic Regression Models Without Main Effects?

    Another note is that you will have to work with your data to determine which method of relieving collinearity is best. Centering may work in some cases and not in others, depending on the variable, whereas a ridge regression may be worthwhile in other cases.
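
    A small sketch of the centering case (made-up predictors): centering before forming the product term usually cuts the correlation between a variable and its interaction, though, as noted, it won't be enough in every situation.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    x = rng.normal(50, 5, 1000)     # predictors centered far from zero
    z = rng.normal(40, 4, 1000)

    raw_corr = np.corrcoef(x, x * z)[0, 1]

    xc, zc = x - x.mean(), z - z.mean()
    centered_corr = np.corrcoef(xc, xc * zc)[0, 1]

    print(f"corr(x, x*z) before centering: {raw_corr:.2f}")      # substantial
    print(f"corr(x, x*z) after centering:  {centered_corr:.2f}")  # near zero
    # When centering alone doesn't tame the collinearity, ridge or other fixes may.
    ```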
  10. Logistic Regression Models Without Main Effects?

    Also, there are ways to handle collinearity if you need to make inferences on the beta estimates (ridge regression, possibly centering, partialling out a variable you don't care about, dimension reduction). It is also not advisable, for estimation purposes, to exclude variables that are...
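
    For the ridge option in a logistic setting, a minimal sketch (scikit-learn assumed; the penalty strength C is arbitrary):

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(7)
    n = 300
    x1 = rng.normal(0, 1, n)
    x2 = x1 + rng.normal(0, 0.1, n)                  # nearly collinear with x1
    X = np.column_stack([x1, x2])
    y = (rng.random(n) < 1 / (1 + np.exp(-(x1 + x2)))).astype(int)

    # An L2 (ridge-type) penalty shrinks and stabilizes the collinear coefficients.
    ridge = LogisticRegression(penalty="l2", C=1.0, max_iter=1000).fit(X, y)
    loose = LogisticRegression(penalty="l2", C=1e6, max_iter=1000).fit(X, y)  # ~unpenalized

    print("near-unpenalized coefficients:", loose.coef_.round(2))
    print("ridge coefficients:           ", ridge.coef_.round(2))
    # With strong collinearity the near-unpenalized fit can split the effect erratically;
    # the penalty pulls the estimates toward a steadier, shrunken split.
    ```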
  11. Logistic Regression Models Without Main Effects?

    I don't think it's very reasonable to exclude main effects. By definition, if the interaction is important, you've specified that the variables are important for illustrating the relationship accurately. As a general principle, it's not good to test main effects or lower order terms after a...
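
    A sketch of the hierarchical specification being described (simulated data; statsmodels' formula interface assumed): writing the interaction as x1 * x2 keeps both main effects in the logistic model rather than dropping them.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(8)
    n = 400
    d = pd.DataFrame({"x1": rng.normal(size=n), "x2": rng.normal(size=n)})
    logit_p = -0.5 + 0.8 * d["x1"] * d["x2"]                       # simulated truth
    d["y"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

    # "x1 * x2" expands to x1 + x2 + x1:x2, so the main effects stay in the model.
    fit = smf.logit("y ~ x1 * x2", data=d).fit(disp=0)
    print(fit.summary().tables[1])
    ```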
  12. Immortality & Bayesian Statistics

    This is different from improbable. Now, you're saying H => ~X (if the hypothesis is true, then we won't see X). The contrapositive is true: X => ~H (if we see X, then H is not true). However, the converse isn't necessarily true. That is, you can't say ~X => H (if we don't see X, H is true)...
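
    A tiny truth-table check of that logic (plain propositional bookkeeping, nothing beyond what's stated above): H => ~X agrees with its contrapositive X => ~H on every row, but not with ~X => H.

    ```python
    from itertools import product

    def implies(p, q):
        return (not p) or q

    for H, X in product([True, False], repeat=2):
        original       = implies(H, not X)    # H => ~X
        contrapositive = implies(X, not H)    # X => ~H
        converse_like  = implies(not X, H)    # ~X => H
        print(H, X, original, contrapositive, converse_like)
    # original == contrapositive on every row, but converse_like differs
    # (e.g., when H and X are both False), so ~X => H does not follow.
    ```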
  13. Immortality & Bayesian Statistics

    For someone who says he isn't much of a typist, you've done an awful lot of typing... Also, I believe I saw a prior argument: "17. For dummies: a. The likelihood of a "red state" to elect Candidate X is 10%. b. State A elects Candidate X. c. State A is probably not a "red...
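
    A back-of-the-envelope Bayes sketch of why the quoted "red state" argument needs a base rate (every number below is invented for illustration): even with P(elects X | red) = 0.10, the posterior P(red | elects X) can stay high.

    ```python
    # Every number here is invented, purely to show the role of the base rate.
    p_red = 0.8                    # prior: suppose most states are "red"
    p_elect_given_red = 0.10       # the 10% from the quoted argument
    p_elect_given_blue = 0.02      # assumed: "blue" states elect X even more rarely

    p_elect = p_elect_given_red * p_red + p_elect_given_blue * (1 - p_red)
    p_red_given_elect = p_elect_given_red * p_red / p_elect

    print(f"P(red | elected X) = {p_red_given_elect:.2f}")  # about 0.95, not "probably not red"
    ```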
  14. Dependent t-test with extra information provided

    It's pretty odd that many of them don't! I have noticed that it is more commonly overlooked when the book is not written by a statistician, but even when it is, it isn't necessarily spelled out for the reader in words. Try running the test with the different hypothesis setup and let's see...
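
    For actually running it, a minimal paired-test sketch (made-up before/after scores; the one-sided alternative is my guess at the "different hypothesis setup" being suggested):

    ```python
    import numpy as np
    from scipy import stats

    # Made-up paired measurements (same subjects before and after).
    before = np.array([12.1, 14.3, 11.8, 13.5, 12.9, 15.0, 13.2, 12.4])
    after  = np.array([13.0, 14.9, 12.5, 13.9, 13.4, 15.8, 13.6, 13.1])

    two_sided = stats.ttest_rel(after, before)                         # H1: mean difference != 0
    one_sided = stats.ttest_rel(after, before, alternative="greater")  # H1: mean difference > 0

    print(f"two-sided p = {two_sided.pvalue:.4f}")
    print(f"one-sided p = {one_sided.pvalue:.4f}")
    ```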
  15. Dependent t-test with extra information provided

    The problem is that this is a common misunderstanding, including in biomedical research. A few things: 1. Failing to reject Ho does not allow someone to conclude Ho. The methodology doesn't allow for that. This kind of hypothesis testing cannot provide evidence FOR a null hypothesis, only...
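
    One way to see point 1 (a simulation sketch with arbitrary numbers): an underpowered test routinely fails to reject even when the null is false, so a non-rejection can't be read as support for the null.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(9)

    # The null (no difference) is FALSE here: the true mean difference is 0.3.
    n_sims, n = 2000, 10            # tiny samples -> low power
    fails_to_reject = 0
    for _ in range(n_sims):
        a = rng.normal(0.0, 1.0, n)
        b = rng.normal(0.3, 1.0, n)
        fails_to_reject += stats.ttest_ind(a, b).pvalue >= 0.05

    print(f"Share of tests failing to reject a FALSE null: {fails_to_reject / n_sims:.2f}")
    # "Not significant" most of the time despite a real effect -- failing to reject
    # is not evidence for the null.
    ```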