Calculating a smooth 90% limit for differences in a time series

I have 50 data sets. Each set has three related time series: fast, medium, slow. My end purpose is simple: I want to generate a number that indicates the relative degree of change of each time series at each point. That relative degree of change should range between 0 and 1 for all the time series and all the data sets. The scales of the data sets range from 0.0001 to 100. Attached is a spreadsheet with one data set.

To accomplish this, I calculate the differences in a time series, delta(ts) = ts(t) - ts(t-1). Now I am trying to calculate an upper limit that covers 90% of those deltas. In other words, I want to draw a smooth line over those differences such that only about 10% of the differences exceed that line. I will then use that 90% limit as the maximum to normalize the differences between 0 and 1. Is this the best way to do this?
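For what it's worth, here is a minimal sketch of one way to do this step: take the absolute first differences and use a trailing rolling 90th-percentile quantile as the "smooth line," then divide and clip to get values in 0-1. The function name `normalize_deltas` and the `window` size are my own assumptions, not anything from the original post; the window would need tuning per data set.

```python
import numpy as np

def normalize_deltas(ts, q=0.90, window=30):
    """Scale first differences of `ts` to roughly 0-1 using a trailing
    rolling q-quantile of |delta| as the upper limit.

    `window` (a hypothetical smoothing span) controls how quickly the
    limit adapts; larger windows give a smoother line.
    """
    deltas = np.abs(np.diff(ts))              # delta(t) = ts(t) - ts(t-1)
    limit = np.empty_like(deltas)
    for i in range(len(deltas)):
        lo = max(0, i - window + 1)
        # quantile of the deltas seen in the trailing window
        limit[i] = np.quantile(deltas[lo:i + 1], q)
    # divide by the limit; anything above the 90% line clips to 1
    return np.clip(deltas / np.maximum(limit, 1e-12), 0.0, 1.0)
```

Because the limit is a quantile rather than the maximum, it is robust to a single large spike, and the 0-1 scale carries over across data sets with very different magnitudes.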

I’ve been working on this for months, mostly with linear programmatic methods, with no success. And trying to get it to work across 50 data sets is killing me. I’m sure there has to be an elegant mathematical way to do this; I can’t be the first guy in town trying to normalize the relative degree of change of a time series.

Any help or directions for research are greatly appreciated! Obviously my math skills are weak, so examples would be most helpful. Thank you everyone for your time and brain power!


New Member
From your graph it looks like the differences are pretty steady except for the spike about 3/4 of the way through; is that right? In your research on this, have you looked at some of the basic time series analysis methods to see if they fit in with your goals? I'm curious why you need to normalize these. The reason I ask is that there may be methods to do what you want (which I'm still not clear on) without normalizing. Are you familiar with software other than Excel, like maybe R? As for normalizing, can you standardize using the mean, similarly to the way in my attachment?
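To illustrate the standardization idea (the attachment isn't available here, so this is just a guess at what was meant): subtract the mean and divide by the standard deviation. Note that unlike the 90%-limit approach, z-scores are not bounded to 0-1, so a clipping or squashing step would still be needed for the original goal.

```python
import numpy as np

def standardize(deltas):
    """Classic z-score: (x - mean) / std.

    Centers the deltas at 0 with unit variance, which puts different
    data sets on a comparable scale -- but the result is unbounded.
    """
    deltas = np.asarray(deltas, dtype=float)
    return (deltas - deltas.mean()) / deltas.std()
```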