Correct approach to assessing correlation between events in the time domain?


New Member
Hi Folks
Firstly, apologies if I have posted this in the wrong section (please feel free to advise or move as appropriate), and also apologies if I'm using the wrong terminology here and there. Hope you can grasp what I'm talking about anyway!

I am doing some research comparing low frequency oscillations in heart rate with palpated/observed phenomena. I am using a heart rate monitor to collect data and simultaneously asking an observer/palpator to note the occurrence of (possibly related?) phenomena and recording the timings of these events.

The following graph represents the product of my number crunching so far (it's taken a while to get there!)

In this graph the blue line represents low-frequency oscillations in heart rate; the coloured vertical lines represent the event markers for the palpatory events reported by the observers. The x axis is just elapsed time in seconds; I'm not sure yet exactly what the y axis represents, but it is the x axis I am interested in! I am trying to assess whether there is a statistically significant correspondence in time between the maxima and minima (peaks and troughs) of the blue waveform and the coloured vertical markers. Do the palpatory markers coincide (as they appear to) with the peaks and troughs of the wave to a greater extent than we would expect by chance alone?

(For those who are interested: the low-frequency waveform was extracted from beat-to-beat heart rate data. This was fast Fourier transformed to yield a power spectral density plot showing the relative power of the constituent oscillatory frequencies. The desired frequency range was then isolated by replacing all other frequency components with zeros, and this was inverse Fourier transformed and plotted against time to give the blue waveform. This is why I am not sure what the unit on the y axis represents: I understand how to make Excel do an FFT, but the maths behind it still hurts my brain!)
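For anyone wanting to reproduce the filtering step outside Excel, here is a minimal sketch in Python/NumPy (not the code actually used in the study; the band edges and sampling rate below are made-up numbers for illustration). Incidentally, because the inverse transform returns to the time domain, the y axis should carry the same units as the input series (e.g. bpm), centred on zero once the 0 Hz component is filtered out.

```python
import numpy as np

def fft_bandpass(signal, fs, f_lo, f_hi):
    """Zero all frequency components outside [f_lo, f_hi] Hz,
    then inverse-transform back to the time domain."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    keep = (freqs >= f_lo) & (freqs <= f_hi)
    spectrum[~keep] = 0.0
    return np.fft.irfft(spectrum, n=len(signal))

# toy example: a 0.1 Hz oscillation buried under a 1 Hz one
fs = 4.0                        # samples per second (assumed)
t = np.arange(0, 300, 1 / fs)   # 5 minutes of data
x = np.sin(2 * np.pi * 0.1 * t) + 0.5 * np.sin(2 * np.pi * 1.0 * t)
lf = fft_bandpass(x, fs, 0.05, 0.15)  # keep only the low-frequency band
```

One caveat with this "boxcar" approach: abruptly zeroing bins can ring at the edges of real, noisy data, which is worth keeping in mind when reading peaks and troughs off the filtered trace.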

I am following a piece of published research (Nelson et al. 2001, ref below) which made similar measurements and followed a similar approach, but their statistical approach was simply to pair the palpatory markers with peaks/troughs of the waves and then to do a paired t test on the values on the x axis to assess for significance. This seems wrong to me on many fronts: the points on the x axis are not normally distributed, and pairing palpatory markers with peaks or troughs does not take account of how many peaks and troughs do not correspond with any marker, etc.

My approach so far has been to work out the average wavelength/frequency of the blue waveform, i.e. the average interval between adjacent maxima and minima, and to see whether a marker falls within 10% of this interval of a peak or a trough. This at least enables me to designate each marker a 'hit' or a 'miss'. It also enables me to see what percentage of markers 'hit' a peak/trough, and what percentage of peaks/troughs hit a marker. This is descriptively interesting, but as far as I can see it doesn't take into account how many of them might hit due to chance alone.
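That hit/miss step can be sketched like this (a toy example with made-up timings; `tolerance` here stands in for 10% of an assumed ~5 s mean peak-to-trough interval):

```python
import numpy as np

def classify_hits(marker_times, extremum_times, tolerance):
    """For each palpation marker, report whether it falls within
    `tolerance` seconds of the nearest peak/trough time."""
    extrema = np.sort(np.asarray(extremum_times, dtype=float))
    hits = []
    for m in marker_times:
        nearest = extrema[np.argmin(np.abs(extrema - m))]
        hits.append(abs(nearest - m) <= tolerance)
    return np.array(hits)

# hypothetical numbers: extrema roughly every 5 s, tolerance = 0.5 s
extrema = [0.0, 5.1, 9.8, 15.2, 20.0]
markers = [5.0, 12.0, 19.7]
hits = classify_hits(markers, extrema, tolerance=0.5)
# hits → [True, False, True]
```

The same function run the other way round (extrema as "markers") gives the percentage of peaks/troughs that hit a palpation marker.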

For this reason a colleague suggested that I derive a 'mean waveform': taking a maximum or minimum as a start point and repeatedly adding the mean interval between peaks/troughs to derive notional maxima and minima. I could then see how many of the time markers 'hit' or 'missed' these notional mean peaks and troughs. From this I could make a contingency table comparing expected hits and misses with observed hits and misses and come up with a test statistic. I am unclear whether this would constitute an odds ratio or a chi-square test, but I think they are fairly similar?
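For what it's worth, the contingency-table idea would usually be run as a chi-square goodness-of-fit test (the chi-square gives a significance test for whether observed counts differ from expected; an odds ratio is instead a measure of the size of an association in a 2x2 table, so they answer different questions). A sketch with entirely made-up counts, using SciPy:

```python
from scipy.stats import chisquare

# hypothetical counts: of 40 markers, 28 'hit' the real waveform;
# against the notional mean waveform only 16 of 40 would have hit
observed = [28, 12]   # hits, misses against the real waveform
expected = [16, 24]   # hits, misses against the notional waveform
stat, p = chisquare(f_obs=observed, f_exp=expected)
```

One caveat (my own, not from the thread): for this to be valid the 'expected' counts need to come from a genuine chance model, e.g. the fraction of the recording covered by the tolerance windows, rather than from the notional waveform alone.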

I have been reading up on Bland-Altman plots, as I was looking into the reliability of different methods of measuring the same phenomenon and hoping this would yield something, but I understand that this relies on having paired data. I think to do this anyway I would need to be looking at the intervals between events (peaks/troughs or palpation markers), and I can't help thinking that this is the direction I should be going in, but I have just not wrapped my head around it yet. I guess if I take the time intervals between events I can at least assess their distribution, variance, etc., and I have the feeling that this might enable me to use Pearson's product-moment correlation coefficient, but I confess I have not wrapped my little mind around this yet either. I will update if I get any further with this.
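One further option, offered as a suggestion rather than anything from the thread or from Nelson et al.: a permutation (surrogate) test addresses "how many hits by chance alone" directly, by randomly time-shifting the markers many times and recomputing the hit rate each time to build a null distribution. A sketch with made-up timings:

```python
import numpy as np

rng = np.random.default_rng(0)

def hit_rate(markers, extrema, tol):
    """Fraction of markers within `tol` seconds of the nearest extremum."""
    d = np.abs(markers[:, None] - np.sort(extrema)[None, :]).min(axis=1)
    return np.mean(d <= tol)

def permutation_pvalue(markers, extrema, tol, duration, n_perm=2000):
    """Circularly shift the markers by random offsets and count how
    often a chance configuration matches or beats the observed hit rate."""
    observed = hit_rate(markers, extrema, tol)
    null = np.array([
        hit_rate((markers + s) % duration, extrema, tol)
        for s in rng.uniform(0, duration, size=n_perm)
    ])
    return observed, np.mean(null >= observed)

# made-up data: extrema every 5 s over 100 s, markers landing right on them
extrema = np.arange(0.0, 100.0, 5.0)
markers = extrema.copy()
obs, p = permutation_pvalue(markers, extrema, tol=0.5, duration=100.0)
```

The appeal is that it makes no distributional assumptions about the x-axis values, which was the original objection to the paired t test.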

Ok well that's probably enough of a description of my problem for now. Any input would be greatly appreciated and obviously if you need any more info to make an intelligent comment then just ask!

Best wishes

Here's the paper I am copying
Nelson, K. E., Sergueef, N., Lipinski, C. M., Chapman, A. R. & Glonek, T. (2001) Cranial rhythmic impulse related to the Traube-Hering-Mayer oscillation: comparing laser-Doppler flowmetry and palpation. The Journal of the American Osteopathic Association, 101 (3): pp. 163–173.


No cake for spunky
The types of time series I know are entirely different from this. My only suggestion is that spectral analysis (used primarily in physics) deals with waves in the context of time series. It is, even by the standards of time series, extremely complicated, but you might look there to see if it is of any value to you. Other than regression I have not seen correlation applied to time series, and I have never seen it applied to waves.


New Member
If I follow you right, I think I may have already been through the spectral analysis: my understanding is that spectral analysis refers to transforming my original waveform (not shown) to produce a plot of the relative power of the spectrum of frequencies. Certainly in heart rate variability studies this is what spectral analysis usually refers to. I have then done a further step to produce the above graph by isolating one portion of the spectrum, one frequency range, and reversing the process to give the blue waveform above. I may have misunderstood you, but the spectral analysis I have come across refers to this transformation of time-domain data into frequency-domain data. Are you suggesting that if I were to convert back to the frequency domain I might be better able to perform inferential tests?

Thanks for the reply, and sorry if I've misunderstood; I'm not brilliant at this stuff...!