Accurate time differences

ttmetro
Posts: 104
Joined: Mon Jul 31, 2017 12:44 am

Accurate time differences

Post by ttmetro » Sun Oct 28, 2018 7:01 pm

There is often a need to measure time differences accurately. MicroPython offers ticks_us() for this, plus ticks_diff(...) to deal with wraparound. C(ircuit)Python offers time.monotonic(), which returns time as a float to avoid the wraparound but loses precision over time.
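For reference, the usual MicroPython pattern looks something like this (it handles a single wraparound, but the usable window is limited by the ticks period):

Code:

    import time

    t0 = time.ticks_us()                        # start timestamp (wraps around)
    time.sleep_ms(100)                          # stand-in for the code being timed
    dt = time.ticks_diff(time.ticks_us(), t0)   # elapsed us, correct across one wrap
    print(dt)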

I'd like to use a "chronometer" object instead: one that captures a high-precision timestamp (e.g. a 64-bit int) upon creation and returns the time elapsed since creation or reset as a single-precision float (hence avoiding memory allocation after object creation).

Before cooking up something random, I'd appreciate comments:
  • Does such a thing exist already in MicroPython? (There is 'Chronometer' for CPython on PyPI).
  • If not, what would be a good API?
Many thanks for suggestions!
Bernhard Boser

jickster
Posts: 629
Joined: Thu Sep 07, 2017 8:57 pm

Re: Accurate time differences

Post by jickster » Sun Oct 28, 2018 8:20 pm

The simplest way to do this is to implement a 64-bit free-running counter in C.

If your hardware only has a 32-bit free-running counter, you’d have to set up an interrupt so that when the counter overflows, another 32-bit number is incremented.

If you can set this up using MicroPython’s current API, you don’t need to write C code.

Specifically:
* configure a free-running counter with the time resolution you want
* if necessary, set up another 32-bit variable to be incremented when the “lower” counter overflows (see the sketch below)
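For example, on a Pyboard this could look roughly like the following (the timer number, prescaler and counter sizes are assumptions; adapt them to your port):

Code:

    # Rough sketch (Pyboard assumed): TIM2 free-runs counting microseconds and an
    # interrupt counts its overflows, giving an effectively 64-bit value in Python.
    import pyb, micropython

    micropython.alloc_emergency_exception_buf(100)   # aid debugging in hard IRQs

    overflows = 0

    def _on_overflow(timer):          # hard IRQ: must not allocate
        global overflows
        overflows += 1

    # 84 MHz timer clock / (83 + 1) = 1 MHz, i.e. 1 us per count; wraps every 2**30 us.
    tim = pyb.Timer(2, prescaler=83, period=0x3FFFFFFF, callback=_on_overflow)

    def micros64():
        # Read overflow count and counter, re-reading if an overflow slips in between.
        hi = overflows
        lo = tim.counter()
        if hi != overflows:
            hi = overflows
            lo = tim.counter()
        return hi * 0x40000000 + lo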


Sent from my iPhone using Tapatalk Pro

pythoncoder
Posts: 5956
Joined: Fri Jul 18, 2014 8:01 am
Location: UK

Re: Accurate time differences

Post by pythoncoder » Mon Oct 29, 2018 7:12 am

The Pyboard has two 32 bit hardware counters (timer 2 and timer 5) which can be cascaded. That would provide a simple Python solution, but it would be hardware specific.
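A sketch of that approach, with the register values taken from the STM32F4 reference manual (treat them as assumptions and check them before relying on this):

Code:

    # Sketch only: cascade TIM2 -> TIM5 on a Pyboard (STM32F405) so that TIM5
    # counts TIM2 overflows, giving a long-range microsecond count without interrupts.
    import pyb, stm

    # TIM2: 84 MHz / (83 + 1) = 1 MHz, i.e. 1 us per count, wrapping every 2**30 us.
    tim2 = pyb.Timer(2, prescaler=83, period=0x3FFFFFFF)
    # TIM5: counts TIM2 update (overflow) events.
    tim5 = pyb.Timer(5, prescaler=0, period=0x3FFFFFFF)

    # TIM2 master mode: update event as trigger output (CR2.MMS = 0b010).
    stm.mem32[stm.TIM2 + stm.TIM_CR2] |= 0b010 << 4
    # TIM5 slave mode: external clock mode 1, trigger ITR0 = TIM2 (SMCR.SMS = 0b111, TS = 0b000).
    stm.mem32[stm.TIM5 + stm.TIM_SMCR] = 0b111
    tim2.counter(0)
    tim5.counter(0)

    def micros64():
        # Read high/low/high to guard against a TIM2 wrap between the two reads.
        hi = tim5.counter()
        lo = tim2.counter()
        if hi != tim5.counter():
            hi = tim5.counter()
            lo = tim2.counter()
        return hi * 0x40000000 + lo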

It's perhaps worth bearing in mind that measuring μs to 64-bit precision allows for time intervals of up to half a million years. Further, even when calibrated the Pyboard's crystal oscillator is "only" accurate to a few parts per million (about ±2 minutes per year). Your precision would grossly exceed your accuracy. Other platforms don't have the STM calibration feature and can be expected to have poorer performance.
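A quick sanity check of those figures:

Code:

    # 64-bit count of microseconds expressed in years, and a few-ppm drift per year.
    years = 2**64 / 1e6 / (365 * 24 * 3600)   # ~585,000 years
    mins_per_year = 4e-6 * 365 * 24 * 60      # ~2.1 minutes per year at 4 ppm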

I think it's worth being more precise about what you seek to achieve in terms of resolution and absolute accuracy. In some applications guaranteed monotonicity is sufficient, but if you require better absolute accuracy you will need a better reference.

The cheapest quality reference is the PPS signal from a GPS receiver. Using that to synchronise a μs counter while preserving accuracy and monotonicity would be an interesting problem to tackle. I think you would need to engineer a software phase locked loop. But it would offer the possibility of accurate, drift-free long term timing.
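As a first step (measurement only, not yet the PLL), something along these lines would show how far out the local clock is; the pin name and PPS wiring are assumptions:

Code:

    # Rough sketch (Pyboard assumed, PPS on pin X1): timestamp each PPS edge with
    # ticks_us() and compare successive edges against the nominal 1,000,000 us.
    # This only measures the crystal error; disciplining the clock would come next.
    import time, pyb, micropython

    micropython.alloc_emergency_exception_buf(100)

    last_edge = 0
    interval = 0                     # local us between the last two PPS edges
    edges = 0

    def _pps(line):                  # hard IRQ: no allocation
        global last_edge, interval, edges
        now = time.ticks_us()
        interval = time.ticks_diff(now, last_edge)
        last_edge = now
        edges += 1

    ext = pyb.ExtInt('X1', pyb.ExtInt.IRQ_RISING, pyb.Pin.PULL_DOWN, _pps)

    while True:
        pyb.delay(10000)
        if edges >= 2:               # need two edges for a valid interval
            # Deviation in us over one second equals the error in parts per million.
            print('local clock error: %d ppm' % (interval - 1000000))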

To avoid allocation you need to avoid all floating point operations - pre-allocating a float doesn't work because the code required to populate it will allocate. I would stick to 32 bit integers.
Peter Hinch
Index to my micropython libraries.

ttmetro
Posts: 104
Joined: Mon Jul 31, 2017 12:44 am

Re: Accurate time differences

Post by ttmetro » Mon Oct 29, 2018 5:45 pm

Thank you for the responses. As suggested, there are many ways to implement this.

@pythoncoder The problem is not accuracy. Cheap crystals are ~100 ppm, not even 16 bits! The issue is dynamic range. A 32-bit counter counting ms overflows in 50 days, or in just over an hour when counting µs.

CircuitPython implements the CPython standard time.monotonic(). It internally keeps a 64-bit counter but returns a value rounded to a 32-bit float with only 22-bit precision. That means ms resolution is maintained for only about an hour. After 2 days the resolution drops to 50 ms, not even sufficient for debouncing!

The solution is to keep the offset (the time when the chronometer was started) as 64-bit (less would suffice, but 32-bit does not) and return the elapsed time as something that does not require memory allocation, so the function can be used e.g. in an interrupt handler. A float is reasonable on architectures that support it (e.g. ARM Cortex-M4, which is getting common). Then the resolution is 22 bits (more than the accuracy of the clock, as @pythoncoder points out) and it does not decrease over time. And there are no issues with wraparound.
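In pure Python the idea looks roughly like this (Chronometer and the micros argument are just illustrative names, and as @pythoncoder notes, building the float still allocates in Python, so an allocation-free version would have to live in C):

Code:

    import time

    class Chronometer:
        # Sketch only: keep a wide start offset, return elapsed seconds as a float.
        def __init__(self, micros=time.ticks_us):
            # 'micros' should be a monotonically increasing (ideally 64-bit)
            # microsecond counter; time.ticks_us is only a wrapping placeholder
            # so that the sketch runs anywhere.
            self._micros = micros
            self._start = micros()

        def reset(self):
            self._start = self._micros()

        def elapsed(self):
            # Elapsed time stays small, so float resolution does not degrade the
            # way an ever-growing time.monotonic() value does.
            return (self._micros() - self._start) * 1e-6

    chrono = Chronometer()
    time.sleep_ms(50)
    print(chrono.elapsed())          # ~0.05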

The initial allocation of the offset, done in the initializer, requires the heap. An alternative would be a static, but then the number of available chronometers would have to be set at compile time.

I think I'll add this to the timer module; it fits there and suits my purpose. Whether MicroPython wants to adopt it is a separate question.
Bernhard Boser
