Quote:
Originally Posted by ArtPE
re: sampling rate
let's assume we want to capture at least 5 ms info/events...~200 hz
Nyquist would require >400hz...so 500 should suffice...
This is also a matter of the level of precision we want to use when reporting findings. If our goal is to capture events in the 5 ms range with 500 Hz, then we can't report measurements with 1 millisecond precision. So it would be inappropriate to report shift times of 33, 34, 35, etc. ms, but 40 ms would be fine, which is roughly what Swamp has done in his initial post by providing estimates in 10 ms ranges.
This again brings up the issue of what we think the "event" we would like to measure is exactly, and whether the broader "shift event" can be broken down into shorter sub-events that should be characterized. For instance, the first graph suggests some deceleration, which probably corresponds to the disengagement of the first clutch, and then acceleration, which probably corresponds to the engagement of the second clutch. If that is the case, clutch engagement/disengagement constitute events. Then we can think about the frequency of that "event" and use the Nyquist–Shannon sampling theorem to come up with a required sampling rate.
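To make the arithmetic from ArtPE's quote and the precision point above concrete, here is a minimal sketch, assuming the 5 ms event duration and 500 Hz rate discussed in the thread (the numbers are from this discussion, not from any measurement):

```python
# Nyquist reasoning for resolving a 5 ms sub-event (numbers from the thread).
event_duration_s = 0.005                      # shortest event of interest: 5 ms
event_freq_hz = 1.0 / event_duration_s        # ~200 Hz of signal content
nyquist_min_hz = 2.0 * event_freq_hz          # Nyquist: must sample above 2x
chosen_rate_hz = 500.0                        # rounded up with some margin

# Reporting precision: the sample interval bounds how finely we can
# timestamp an event, so 1 ms precision claims would be unjustified.
sample_interval_ms = 1000.0 / chosen_rate_hz  # 2 ms between samples

print(event_freq_hz, nyquist_min_hz, sample_interval_ms)  # → 200.0 400.0 2.0
```

At 500 Hz the raw timing resolution is 2 ms, which supports the practice of quoting shift times in coarse (e.g. 10 ms) bins rather than to the millisecond.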
Another example is defining clutch slippage during engagement as an event, although that is over the top and not relevant to the question being posed by this test. Does slippage occur as linearly as the first graph suggests? I'd be interested to know, but even with a higher sampling rate, noise would get in the way.