Running time-series regressions on discontinuous datasets: is it legitimate?

#1
I would like to run a simple time-series regression, to estimate the sensitivity of my dependent variable to a set of explanatory variables.

However, the dataset could be subdivided into subsets that fall under specific categories. For example, a time-series of stock returns could be subdivided into a bull market subset and a bear market subset.

The issue is that these market conditions (regimes) are discontinuous: a bull market period is followed by a bear market period, then another bull, another bear, and so on. The gap between two bull periods, or between two bear periods, can be several years.

Is it legitimate to stack all the subsets of the same regime, e.g. bull, and run the regression? The purpose is to get a regime-specific sensitivity to the explanatory variables.
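For concreteness, here is roughly what I mean by "stacking" (a minimal sketch with made-up data; the column names, the two-regressor setup, and the HAC errors are just illustrative):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical illustration: returns y, two regressors x1/x2, and a regime
# label with non-contiguous bull and bear stretches.
rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "x1": rng.normal(size=n),
    "x2": rng.normal(size=n),
    "regime": (["bull"] * 250 + ["bear"] * 250) * 2,  # bull, bear, bull, bear
})
df["y"] = 0.5 * df["x1"] - 0.2 * df["x2"] + rng.normal(scale=0.1, size=n)

def regime_sensitivity(data: pd.DataFrame, regime: str):
    """Stack every (discontinuous) period of one regime and run a single OLS."""
    sub = data[data["regime"] == regime]
    X = sm.add_constant(sub[["x1", "x2"]])
    # HAC (Newey-West) errors, since stacking the periods does not remove the
    # serial correlation that exists within each contiguous period.
    return sm.OLS(sub["y"], X).fit(cov_type="HAC", cov_kwds={"maxlags": 5})

print(regime_sensitivity(df, "bull").params)
print(regime_sensitivity(df, "bear").params)
```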

Thank you,
 

noetsi

Fortran must die
#2
I would say only if you can reasonably assume the sensitivity did not change over time, or if you document how the sensitivity changed (or did not change) across the stacked periods.
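One rough way to check, reusing the hypothetical df from post #1 and labelling each contiguous stretch of a regime as a separate episode, is a Chow-style comparison of episode-specific slopes against the common stacked slopes:

```python
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Label each contiguous stretch of a regime as a separate episode (1, 2, ...).
df["episode"] = (df["regime"] != df["regime"].shift()).cumsum()
bull = df[df["regime"] == "bull"]

# Unrestricted: slopes allowed to differ across bull episodes.
unrestricted = smf.ols("y ~ (x1 + x2) * C(episode)", data=bull).fit()
# Restricted: one common slope per regressor, i.e. the stacked regression.
restricted = smf.ols("y ~ x1 + x2", data=bull).fit()

# F-test: do episode-specific slopes add explanatory power?
print(anova_lm(restricted, unrestricted))
```

If the episode-specific terms matter, a single stacked sensitivity per regime would be misleading.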