This is the most promising start I've had all day, thank you so much. Your explanation of the interaction effect was really clear.

Glad it was helpful. Feel free to hit the "thanks" button if you'd like!

Also, just to clarify, was the global F-test significant? (I presume it was, but we should only go further if it is.)

I think my confusion stems from my understanding that, in the simplest terms: correlation tells me whether a relationship exists, and regression tells me how strong the relationship is, if there is one.

The Pearson correlation tells you whether a *linear* relationship exists and how strong it is. You could have a very strong non-linear relationship with a Pearson correlation coefficient close to zero (think of a parabola). Regression lets you examine the direction and magnitude of change in Y for a one-unit increase in a particular X variable, after accounting for the other X variables in the model. In other words, it allows you to say, "With temperature in the model, how do passes change with a one-hour increase in night length (assuming you measure night length in hours), i.e. what is left over?" For this example, assume there is no interaction, because that would complicate things more than needed right now.
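To make the parabola point concrete, here's a quick Python sketch (the data are invented): a perfect y = x² relationship on a symmetric range has a Pearson correlation of essentially zero, even though the relationship is exact.

```python
import numpy as np

# A perfectly deterministic but non-linear relationship: y = x^2
# on a range symmetric about zero. Pearson r only measures
# *linear* association, so it comes out essentially zero here.
x = np.linspace(-3, 3, 101)
y = x ** 2

r = np.corrcoef(x, y)[0, 1]
print(r)  # essentially 0 (up to floating-point noise)
```

So a near-zero Pearson r is not evidence of "no relationship," only of no *linear* relationship.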

How therefore, can I have a correlation between temperature and passes of .535, with a p-value of .002, but the coefficient for the same relationship is not significant? In other words, how can I have a significant relationship in which the factors have no significant effect on each other?

See my explanation above. Once you put it into a regression with additional independent variables (X variables), you're examining the relationship of Passes and Temperature *after accounting for the other terms in the model*.
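Here's a small simulation of exactly this situation (the variable names mirror yours, but the data and coefficients are made up): temp is a strong proxy for night length, and passes truly depends only on night length. The marginal correlation of temp with passes is then highly significant, while temp's partial t-statistic in the multiple regression is typically small, because night length already explains passes.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 30

# Hypothetical stand-ins for your variables: temp tracks night
# length closely, and passes is driven by night length only.
night = rng.normal(10.0, 1.0, n)
temp = 2.0 * night + rng.normal(0.0, 0.5, n)
passes = 5.0 * night + rng.normal(0.0, 1.0, n)

# Marginal Pearson correlation of temp with passes, and its t-statistic.
r = np.corrcoef(temp, passes)[0, 1]
t_marginal = r * np.sqrt(n - 2) / np.sqrt(1 - r**2)

# Multiple regression: passes ~ intercept + temp + night.
X = np.column_stack([np.ones(n), temp, night])
beta, *_ = np.linalg.lstsq(X, passes, rcond=None)
resid = passes - X @ beta
sigma2 = resid @ resid / (n - X.shape[1])
se = np.sqrt(sigma2 * np.diag(np.linalg.inv(X.T @ X)))
t_temp = beta[1] / se[1]  # partial t-statistic for temp

# The marginal t is large (temp is a proxy for night length), while
# the partial t for temp is typically small: after accounting for
# night length, temp has little left to explain.
print(t_marginal, t_temp)
```

That gap between the marginal and partial statistics is the whole answer to your question: the two tests are asking different things.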

I feel like I'm missing something really important and likely quite obvious, but I just can't make sense of it.

Looking specifically at your regression output, it shows that, after accounting for the main effects, the interaction is not significant. (Similarly, if you looked just at temperature, it would say that after accounting for night length and the temp-by-night-length interaction, temperature is not significant; however, as I mentioned in an earlier post, it doesn't make sense to interpret that test.) If you have an interaction (or any higher-order term, such as a square) in the model, you do not test the lower-order term (in this case, temp) while the higher-order term is in the model. The VIFs for temp and the interaction are very high, which is expected, but that collinearity is definitely deflating the t-statistics for those coefficients. If you do believe the interaction is reasonable, I would try centering: refit the model with temp and night length plus the interaction of the *centered* variables. This can reduce the multicollinearity and might give you different results.
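To illustrate why centering helps (simulated data, numbers invented): when both variables sit far from zero, the raw product term is nearly collinear with its parent variables, while the product of the *centered* variables is roughly uncorrelated with them. Pairwise correlation is used here as a simple stand-in for the collinearity the VIF measures.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200

# Hypothetical measurements, both positive and far from zero, as
# temperature and night length would be.
temp = rng.normal(20.0, 3.0, n)
night = rng.normal(10.0, 1.0, n)

raw_inter = temp * night
centered_inter = (temp - temp.mean()) * (night - night.mean())

# The raw product is strongly correlated with temp; the centered
# product is not, which is what tames the VIFs.
r_raw = np.corrcoef(temp, raw_inter)[0, 1]
r_centered = np.corrcoef(temp, centered_inter)[0, 1]
print(r_raw, r_centered)
```

Note that centering changes the collinearity (and hence the standard errors of the main effects), not the interaction test itself.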

If you don't have a strong reason to suspect interaction, I would recommend dropping it and refitting the model with only temp and night length. My guess is that if you rerun the model without the interaction, you will see temp become more significant.
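A quick simulated check of that guess (again, made-up data with no true interaction): dropping the raw interaction term removes its collinearity with temp, which shrinks temp's standard error and grows its t-statistic.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200

# Hypothetical data with no true interaction: passes depends on
# temp and night length additively.
temp = rng.normal(20.0, 3.0, n)
night = rng.normal(10.0, 1.0, n)
passes = 0.5 * temp + 3.0 * night + rng.normal(0.0, 2.0, n)

def t_stat_for(X, y, j):
    """OLS t-statistic for column j of the design matrix X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (len(y) - X.shape[1])
    se = np.sqrt(sigma2 * np.diag(np.linalg.inv(X.T @ X)))
    return beta[j] / se[j]

ones = np.ones(n)
X_full = np.column_stack([ones, temp, night, temp * night])
X_reduced = np.column_stack([ones, temp, night])

t_full = t_stat_for(X_full, passes, 1)        # temp, interaction in model
t_reduced = t_stat_for(X_reduced, passes, 1)  # temp, interaction dropped
print(t_full, t_reduced)
```

In this setup temp's t-statistic in the reduced model is much larger than in the full model, which is the pattern I'd expect to see in your data too.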

At this point, the choice is yours for the next step.