Synchronous sampling rate

thejoker
Posts: 11
Joined: Sat Aug 04, 2018 2:14 pm

Synchronous sampling rate

Post by thejoker » Wed Aug 22, 2018 8:54 pm

Dear reader, I am trying to create a MicroPython application which controls an electromechanical device. To achieve this I want to sample my sensor at a regular interval of 100 milliseconds.
Now my question: How can I force my pyboard to run the same piece of code every 100 milliseconds?
Here is my code thus far:

Code: Select all

import micropython
import pyb

timelist = []

def record_time(ms):
    # Runs via the scheduler, outside the interrupt, so allocation is allowed here
    timelist.append(round(ms / 1000, 4))

def callbackfunction(timer):
    # The timer callback receives the timer object and runs as a hard IRQ,
    # which must not allocate memory, so hand the work off to the scheduler
    micropython.schedule(record_time, pyb.millis())

timer_object = pyb.Timer(4, freq=10)     # Internal timer 4 at 10 Hz, i.e. every 100 ms
timer_object.callback(callbackfunction)  # Pass the function itself, not its return value

pythoncoder
Posts: 5956
Joined: Fri Jul 18, 2014 8:01 am
Location: UK

Re: Synchronous sampling rate

Post by pythoncoder » Thu Aug 23, 2018 6:33 am

The crude way is to have a loop which runs forever, pausing for 100 ms and then calling a function. Or there is your approach of using a timer callback. These work, but are not scalable: if you want to respond to user input or update a display at the same time as running the loop, it soon becomes messy.
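A minimal sketch of that crude loop (read_sensor() here is just a placeholder for the actual sensor code):

Code: Select all

import time

while True:
    read_sensor()       # placeholder for the actual sensor read
    time.sleep_ms(100)  # the read itself takes time, so the period drifts slightly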

The best way to do this sort of thing is to use uasyncio, which enables you to run multiple tasks concurrently. If you're unfamiliar with asynchronous programming there is something of a learning curve, but a tutorial may be found in this repository.
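As a rough sketch of the same 100 ms sampling written as a uasyncio task (again read_sensor() is a placeholder, and this uses the asyncio-style API of current uasyncio):

Code: Select all

import uasyncio as asyncio

async def sample(interval_ms=100):
    while True:
        value = read_sensor()                # placeholder for the actual sensor read
        print(value)
        await asyncio.sleep_ms(interval_ms)  # yields so other tasks can run

async def main():
    asyncio.create_task(sample())
    # other tasks (display updates, handling user input, ...) run concurrently here
    await asyncio.sleep(60)                  # let everything run for a minute

asyncio.run(main())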
Peter Hinch
Index to my micropython libraries.

thejoker
Posts: 11
Joined: Sat Aug 04, 2018 2:14 pm

Re: Synchronous sampling rate

Post by thejoker » Sat Sep 01, 2018 7:09 am

Thanks for your suggestion Peter, but I decided to solve my sampling problem in a slightly dirtier way. It still achieves a synchronous sampling rate, so I'm posting my code here in case anyone runs into the same problem in the future:

Code: Select all

import time

# Define the sampling period (in seconds), the number of samples,
# and create a buffer to store the sensor data
sample_time = 1
sample_amount = 100
data_buffer = [[] for i in range(sample_amount)]

for loop_value in range(sample_amount):
    t_start = time.ticks_ms()
    ############ Sensor sampling code starts here #############
    # Some code ..
    # sensor_measurement =
    # Some code ..
    ############ Sensor sampling code ends here   #############
    data_buffer[loop_value] = sensor_measurement
    t_end = time.ticks_ms()
    # Busy-wait until a full sampling period has elapsed since t_start
    while time.ticks_diff(t_end, t_start) < sample_time * 1000:
        t_end = time.ticks_ms()

# Write the data to a .txt file, one value per line:
data_file = open('measurements.txt', 'w')
for each_data_value in data_buffer:
    data_file.write(str(each_data_value) + '\n')
data_file.close()

jickster
Posts: 629
Joined: Thu Sep 07, 2017 8:57 pm

Re: Synchronous sampling rate

Post by jickster » Sun Sep 02, 2018 4:51 am

thejoker wrote: Thanks for your suggestion Peter, but I decided to solve my sampling problem in a slightly dirtier way. ...

Code: Select all

data_buffer = [[] for i in range(sample_amount)]
...
That is a bad way to initialize the list.
If you know the size beforehand, as you do:

data_buf = [None] * sample_amount

Building the list by appending one item at a time uses up extra memory in the background.


Sent from my iPhone using Tapatalk Pro

thejoker
Posts: 11
Joined: Sat Aug 04, 2018 2:14 pm

Re: Synchronous sampling rate

Post by thejoker » Tue Sep 18, 2018 8:04 pm

jickster wrote:
Sun Sep 02, 2018 4:51 am
...
That is a bad way to initialize the list.
If you know the size beforehand, as you do:

data_buf = [None] * sample_amount

Building the list by appending one item at a time uses up extra memory in the background.
Are you sure it's a bad way? I thought Python list comprehensions were really fast. I will read more about it. Thanks though!

jickster
Posts: 629
Joined: Thu Sep 07, 2017 8:57 pm

Re: Synchronous sampling rate

Post by jickster » Tue Sep 18, 2018 8:58 pm

thejoker wrote:
Tue Sep 18, 2018 8:04 pm
...
Are you sure it's a bad way? I thought Python list comprehensions were really fast. I will read more about it. Thanks though!
Nothing to do with list comprehensions as such; it's the way the list gets built by appending one item at a time (which is what a comprehension does internally).
If you KNOW the size upfront, create it with one statement.

thejoker
Posts: 11
Joined: Sat Aug 04, 2018 2:14 pm

Re: Synchronous sampling rate

Post by thejoker » Wed Sep 19, 2018 4:05 pm

jickster wrote:
Tue Sep 18, 2018 8:58 pm
...
Nothing to do with list comprehensions as such; it's the way the list gets built by appending one item at a time (which is what a comprehension does internally).
If you KNOW the size upfront, create it with one statement.
You are absolutely right, my friend. I tested it, and your suggestion is far superior in terms of speed.
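For reference, a rough way to compare the two on the board (the names and exact numbers here are just illustrative) is with time.ticks_us():

Code: Select all

import time

n = 1000

t0 = time.ticks_us()
buf1 = [None] * n                 # preallocate in one statement
t1 = time.ticks_us()
buf2 = [None for _ in range(n)]   # grow the list element by element
t2 = time.ticks_us()

print('preallocated: ', time.ticks_diff(t1, t0), 'us')
print('comprehension:', time.ticks_diff(t2, t1), 'us')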
Thanks for teaching me this lesson!
Best regards,
Thejoker

jickster
Posts: 629
Joined: Thu Sep 07, 2017 8:57 pm

Re: Synchronous sampling rate

Post by jickster » Wed Sep 19, 2018 4:08 pm

thejoker wrote:
Wed Sep 19, 2018 4:05 pm
...
You are absolutely right, my friend. I tested it, and your suggestion is far superior in terms of speed.
It's also better for memory usage.
Currently, when you append to a list whose underlying array is full, the array doubles in size.
So if your LAST append happens on a list that's full, the C array doubles but you end up using only just over half of that newly doubled array.
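A rough way to see the difference on a board, assuming your port provides gc.mem_free() (numbers will vary by port and firmware):

Code: Select all

import gc

def mem_used(build):
    gc.collect()
    before = gc.mem_free()
    lst = build()           # keep a reference so the list is not collected
    after = gc.mem_free()
    return before - after

n = 100
print('preallocated:', mem_used(lambda: [None] * n), 'bytes')
print('appended:    ', mem_used(lambda: [None for _ in range(n)]), 'bytes')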
