This is a great question, but the answer is somewhat involved. You have described three different estimators for the variance of a random variable. The 1/n version is the maximum likelihood estimator of the variance, but it is biased: it introduces a systematic error (on average it underestimates the true variance). The 1/(n-1) version is unbiased, which is why we typically use it as the estimator of variance. In simple linear regression, we use the 1/(n-2) version because that is the unbiased estimator in that setting; fitting the slope and intercept uses up two degrees of freedom. The 1/(n-2) version is what is used to calculate the MSE (mean square error) in simple linear regression. Interestingly, all three estimators are consistent: as the sample size increases, each of them converges to the same value, the true variance.
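A quick simulation can make the bias concrete. This sketch (the sample size, number of trials, and true variance are arbitrary choices for illustration) averages the 1/n and 1/(n-1) estimators over many repeated samples; the 1/n average comes out systematically below the true variance, while the 1/(n-1) average lands close to it:

```python
import numpy as np

rng = np.random.default_rng(0)
true_var = 4.0       # variance of N(0, 2^2)
n = 10               # small sample, so the bias is visible
n_trials = 100_000   # many trials, so the averages are stable

# Draw all samples at once: one row per trial.
samples = rng.normal(0.0, 2.0, size=(n_trials, n))

# Sum of squared deviations from each trial's sample mean.
ss = np.sum((samples - samples.mean(axis=1, keepdims=True)) ** 2, axis=1)

mle_avg = np.mean(ss / n)            # biased (maximum likelihood) version
unbiased_avg = np.mean(ss / (n - 1)) # unbiased version

print(mle_avg)       # systematically below 4.0 (near 4.0 * (n-1)/n = 3.6)
print(unbiased_avg)  # close to 4.0
```

The 1/n average sits near true_var * (n-1)/n, which is exactly the bias factor that dividing by n-1 corrects.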

My advice to you: we almost never use the 1/n version, and almost always use the 1/(n-1) version outside the context of regression. Look carefully to see what is used when you encounter new types of statistics.

~Matt