really depends on what you're doing. a lot of the routine statistical analysis done in psychology is automated in SPSS, so depending on which area you're in you may not need much more than that. linear mixed models (also known as multilevel models or hierarchical linear models) are the hot stuff these days, and it wouldn't hurt to know some structural equation modelling just because it's fun. if you're into educational psychology like i am, knowing stuff like Item Response Theory and psychometrics in general helps out a lot. but it really depends on which area of psychology you'd like to do research in.
one of my math profs (i come from a background in math/stats and then transitioned to grad school in psychometrics) used to say that one never learns enough calculus and linear algebra. so i would advise you to take it, although if i compare the class i took in multivariate statistics with the math dept as an undergrad against the class i took on multivariate statistics in the psych dept, it was night and day. only basic knowledge of matrix algebra was required for psychology, whereas the one in math did require me to be very proficient in it.
now, with that being said, knowing linear algebra helps you understand everything in a class far beyond knowing the basic tricks and rote-memory stuff of how to put things into the software and generate output. why is this important? well, if one of these days you end up with a particularly nasty dataset on which you cannot use the usual tricks people use to analyze data, then you're out of luck. but if you understand how the methods work, you'll always be able to modify them to accommodate what you need. so it never hurts to learn something new.
To add on to what spunky said, I would say that the group of procedures that you absolutely must be familiar with, more than anything else, is those procedures which fall under the blanket of the General Linear Model. This includes all the varieties of t-tests, ANOVAs, simple and multiple regression, ANCOVA, moderated regression, etc. All of these different-sounding techniques are special cases of the general linear model, because they can all be expressed in the same mathematical form (a linear combination of the parameters) and they draw on basically the same set of assumptions. If you browse the top empirical journals in psychology today and look at the statistical procedures being used and reported, >90% of them are cases of the general linear model. Knowledge of other statistical techniques is also great, and spunky named the big ones, but knowledge of the general linear model is pretty much the minimum requirement to be considered a competent researcher in psychology.
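To make the "special cases" claim concrete, here is a minimal sketch (my own illustration, with made-up numbers, not from anyone's post above) showing that an independent-samples t-test is literally a regression with one dummy-coded predictor: the slope is the mean difference and its t statistic equals the t-test statistic.

```python
# A two-sample t-test as a special case of the general linear model:
# code group membership 0/1 and regress the DV on it.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group_a = rng.normal(10.0, 2.0, size=30)   # e.g. control condition
group_b = rng.normal(11.5, 2.0, size=30)   # e.g. treatment condition

# Classic independent-samples t-test (equal variances assumed)
t_test, p_test = stats.ttest_ind(group_a, group_b)

# Same analysis as regression: y = b0 + b1*x, with x = 0 for A, 1 for B
y = np.concatenate([group_a, group_b])
x = np.concatenate([np.zeros(30), np.ones(30)])
X = np.column_stack([np.ones_like(x), x])        # design matrix
beta, *_ = np.linalg.lstsq(X, y, rcond=None)     # OLS estimates
resid = y - X @ beta
df = len(y) - 2
mse = resid @ resid / df                         # residual variance
se_b1 = np.sqrt(mse * np.linalg.inv(X.T @ X)[1, 1])
t_reg = beta[1] / se_b1                          # t for the slope

print(abs(t_test), abs(t_reg))                   # identical up to sign
```

The same trick extends to ANOVA and ANCOVA by adding more columns to the design matrix, which is exactly why all of these procedures share one set of assumptions.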
.... and maybe i'd add logistic regression to the general linear model, cuz people kinda like to score stuff as "YES/NO" from time to time. so yeah, Jake (who is one of our psychologists here) got it there: hands down the General Linear Model (all the more reason for you to learn some linear algebra)
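for the curious, here's a little sketch (my own toy example, everything in it is made up) of what logistic regression is doing under the hood for a "YES/NO" outcome: fitting P(yes) through the logistic link, here with a plain Newton-Raphson loop in NumPy rather than a stats package.

```python
# Logistic regression for a binary outcome, fit by Newton-Raphson.
import numpy as np

def fit_logistic(X, y, n_iter=25):
    """Return coefficients for P(y=1) = 1 / (1 + exp(-X @ beta))."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))        # predicted probabilities
        w = p * (1.0 - p)                           # observation weights
        grad = X.T @ (y - p)                        # score vector
        hess = X.T @ (X * w[:, None])               # observed information
        beta = beta + np.linalg.solve(hess, grad)   # Newton step
    return beta

# Toy data: pass/fail ("YES/NO") as a function of hours studied
rng = np.random.default_rng(1)
hours = rng.uniform(0, 10, size=200)
true_logit = -3.0 + 0.8 * hours
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-true_logit)))
X = np.column_stack([np.ones_like(hours), hours])   # intercept + predictor

beta = fit_logistic(X, y)
print(beta)   # estimates should land near the true values (-3, 0.8)
```

in practice you'd let SPSS or R do this for you, but seeing the matrix algebra is (again) why linear algebra pays off.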
Yes, good point, also logistic regression. In the graduate-level statistics course that all incoming students in our department take, we spend almost the entire year going over the general linear model in great detail, and then we reserve the last month for covering logistic regression, resampling techniques (e.g., bootstrapping and permutation tests), and signal detection theory. After that, students split off and take different courses which depend on their particular subfield of psychology. So the above topics are a pretty good representation of what we consider to be the need-to-know skills for all experimental psychologists.
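Since resampling came up: a minimal sketch of one such technique, a percentile bootstrap for a mean (my own illustration with invented numbers, not course material from the thread).

```python
# Percentile bootstrap: resample the data with replacement many times,
# compute the statistic each time, and read the CI off the percentiles.
import numpy as np

rng = np.random.default_rng(3)
data = rng.normal(100.0, 15.0, size=50)     # e.g. 50 test scores

boot_means = np.array([
    rng.choice(data, size=data.size, replace=True).mean()
    for _ in range(5000)
])
ci_low, ci_high = np.percentile(boot_means, [2.5, 97.5])
print(ci_low, ci_high)                      # ~95% CI for the mean
```

The appeal for psychologists is that the same recipe works for statistics (medians, correlations, indirect effects) whose sampling distributions are awkward to derive analytically.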
Thanks everyone... Just another quick question. I was reading a stats book and the author basically said that ANOVA and regression are the same thing. Spunky, it would be great if you could elaborate on that... could you offer me a very simple explanation of why? As far as I know, ANOVA compares means and regression determines the contribution of various IVs to a DV.
The main thing to understand is that, really, it's all regression. Fisher himself referred to ANOVA as simply "a convenient method of arranging the arithmetic." Basically what we call ANOVA is just regression where you take your categorical IVs (e.g., the conditions of an experiment) and convert them into a set of k-1 contrast vectors (where k is the number of levels of the categorical IV), and then you regress the DV on these contrasts. This is internally what all ANOVA programs are doing. A full description of the process and theory could easily be the subject of an entire course, and it often is, but you can find a brief and useful discussion of contrast coding HERE.
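The equivalence is easy to verify numerically. Here is a small sketch (my own example with made-up data; I use simple dummy coding, though any full-rank set of k-1 contrast vectors gives the same F) showing that the one-way ANOVA F statistic falls out of an ordinary regression.

```python
# One-way ANOVA recovered as regression on k-1 coded vectors.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
k, n = 3, 20                                    # 3 conditions, 20 subjects each
groups = [rng.normal(mu, 1.0, size=n) for mu in (5.0, 5.5, 7.0)]
y = np.concatenate(groups)

# Classic one-way ANOVA
f_anova, p_anova = stats.f_oneway(*groups)

# Same test as regression on k-1 = 2 dummy-coded vectors
labels = np.repeat(np.arange(k), n)
X = np.column_stack(
    [np.ones_like(y)] + [(labels == j).astype(float) for j in range(1, k)]
)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
ss_res = resid @ resid                          # residual sum of squares
ss_tot = ((y - y.mean()) ** 2).sum()            # total sum of squares
df_model, df_resid = k - 1, len(y) - k
f_reg = ((ss_tot - ss_res) / df_model) / (ss_res / df_resid)

print(f_anova, f_reg)                           # identical up to rounding
```

The ANOVA table's "between" and "within" sums of squares are exactly the regression's model and residual sums of squares; the partition Fisher introduced is just a hand-computation shortcut for this regression.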
when i was reading "The Lady Tasting Tea" (<-- a must-read, in my opinion, for a fun and easy-to-follow introduction to the history of statistics and how it came to influence the process of doing science) i remember the author mentioning that Fisher, in his greatness, realised that the conceptualisation of the linear model he was after (the multiple regression of today) was going to be so cumbersome from a computational perspective that he decided to create a set of shortcuts through the partition of sums of squares, which ended up becoming what we now know as ANOVA. that was the first time i asked myself just how much more advanced we would be now if great people like him had had access to something like the laptop i'm using to type this. (which i hate, btw, because by my computer standards it's already very old and outdated!)