
Synchronous sampling rate

Posted: Wed Aug 22, 2018 8:54 pm
by thejoker
Dear reader, I am trying to create a MicroPython application which controls an electromechanical device. To achieve this I want to sample my sensor at a regular interval of 100 milliseconds.
Now my question: how can I force my pyboard to run the same piece of code every 100 milliseconds?
Here is my code thus far:

Code: Select all

import pyb

timelist = []


def callbackfunction(timer):
    # The callback is passed the Timer instance that fired.
    # Keep it short: timer callbacks run as interrupts, and heap
    # allocation inside them is restricted in MicroPython.
    timelist.append(pyb.millis())


timer_object = pyb.Timer(4, freq=10)  # internal timer 4 at 10 Hz (every 100 ms)
# Pass the function itself; calling it (callbackfunction()) would register
# its return value instead of the function.
timer_object.callback(callbackfunction)

Re: Synchronous sampling rate

Posted: Thu Aug 23, 2018 6:33 am
by pythoncoder
The crude way is a loop that runs forever, pausing for 100 ms and then calling a function. Your approach of using a timer callback also works. But neither is scalable: if you want to respond to user input or update a display at the same time as running the loop, it soon becomes messy.

The best way to do this sort of thing is to use uasyncio, which enables you to run multiple tasks concurrently. If you're unfamiliar with asynchronous programming there is something of a learning curve, but a tutorial may be found in this repository.
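To illustrate the idea, here is a minimal sketch of a periodic sampling task. It is written with CPython's asyncio so it runs anywhere; on the pyboard you would use `import uasyncio as asyncio` instead, and `sample`, `read_sensor`, and the dummy sensor are illustrative names, not part of any library:

```python
import asyncio  # on the pyboard: import uasyncio as asyncio


async def sample(period_s, n, read_sensor, out):
    # Call read_sensor() n times, pausing period_s between samples.
    # While this task sleeps, the scheduler is free to run other tasks
    # (display updates, user input, ...) concurrently.
    for _ in range(n):
        out.append(read_sensor())
        await asyncio.sleep(period_s)  # uasyncio also offers sleep_ms()


async def main():
    data = []
    await sample(0.01, 5, lambda: 42, data)  # dummy sensor returning 42
    print(len(data))  # 5


asyncio.run(main())
```

Other coroutines can then be started alongside `sample` with `asyncio.create_task`, which is the scalability pythoncoder is describing.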

Re: Synchronous sampling rate

Posted: Sat Sep 01, 2018 7:09 am
by thejoker
Thanks for your suggestion Peter, but I decided to solve my sampling problem in a somewhat dirtier way. It still achieves a synchronous sampling rate, so I'm posting my code here in case anyone runs into the same problem in the future:

Code: Select all

import time

# Define the sampling period (s), number of samples, and a buffer for the data
sample_time = 1
sample_amount = 100
data_buffer = [[] for i in range(sample_amount)]

for loop_value in range(sample_amount):
    t_start = time.ticks_ms()
    ############ Sensor sampling code starts here ############
    # Some code ..
    # sensor_measurement = 
    # Some code ..
    ############ Sensor sampling code ends here   ############
    data_buffer[loop_value] = sensor_measurement  # assigned by your sensor code above
    t_end = time.ticks_ms()
    # Busy-wait until sample_time has elapsed since t_start.
    # Note ticks_diff takes (newer, older), so the argument order matters.
    while time.ticks_diff(t_end, t_start) < sample_time * 1000:
        t_end = time.ticks_ms()

# Write the data to a .txt file, one value per line:
data_file = open('measurements.txt', 'w')
for each_data_value in data_buffer:
    data_file.write(str(each_data_value) + '\n')
data_file.close()
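One caveat with the loop above: it busy-waits, burning CPU for the rest of each period. A sketch of the same fixed-rate idea using absolute deadlines and sleeping is below; it is written with CPython's `time.monotonic()` so it runs anywhere (on the pyboard you would use `time.ticks_ms()`, `time.ticks_add()` and `time.ticks_diff()` instead), and `run_at_rate` and the dummy task are illustrative names:

```python
import time


def run_at_rate(period_s, n, task):
    # Run task() n times at a fixed rate by scheduling against absolute
    # deadlines, so per-iteration timing error does not accumulate.
    results = []
    deadline = time.monotonic()
    for _ in range(n):
        results.append(task())
        deadline += period_s
        remaining = deadline - time.monotonic()
        if remaining > 0:
            time.sleep(remaining)  # sleep instead of busy-waiting
    return results


print(len(run_at_rate(0.01, 5, lambda: 1)))  # 5
```

Because each iteration sleeps until the next absolute deadline rather than for a fixed delay, a slow sample does not push every later sample back.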

Re: Synchronous sampling rate

Posted: Sun Sep 02, 2018 4:51 am
by jickster
thejoker wrote:
Sat Sep 01, 2018 7:09 am
...
That is a bad way to initialize a list.
If you know the size beforehand, as you do here:

data_buf = [None] * sample_amount

Building the list up by appending one item at a time uses extra memory behind the scenes.


Sent from my iPhone using Tapatalk Pro

Re: Synchronous sampling rate

Posted: Tue Sep 18, 2018 8:04 pm
by thejoker
jickster wrote:
Sun Sep 02, 2018 4:51 am
...
That is a bad way to initialize a list.
If you know the size beforehand, as you do here:

data_buf = [None] * sample_amount

Building the list up by appending one item at a time uses extra memory behind the scenes.
Are you sure that it's a bad way? I thought Python list comprehensions were really fast. I will read more about it. Thanks though!

Re: Synchronous sampling rate

Posted: Tue Sep 18, 2018 8:58 pm
by jickster
thejoker wrote:
Tue Sep 18, 2018 8:04 pm
...
Are you sure that it's a bad way? I thought Python list comprehensions were really fast. I will read more about it. Thanks though!
It has nothing to do with list comprehensions; it's the way you're creating the list by iteratively appending items.
If you KNOW the size up front, create it with one statement.

Re: Synchronous sampling rate

Posted: Wed Sep 19, 2018 4:05 pm
by thejoker
jickster wrote:
Tue Sep 18, 2018 8:58 pm
...
Nothing to do with list comprehensions but the way you're creating a list by iteratively appending an item.
If you KNOW the size upfront, create it with one statement.
You are absolutely right, my friend. I tested it, and your suggestion is far superior in terms of speed.
Thanks for teaching me this lesson!
Best regards,
Thejoker
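For anyone curious what that speed test looks like, a minimal CPython timing sketch follows (the function names are illustrative, and absolute timings will vary by machine and by MicroPython port):

```python
import timeit

n = 10000


def by_append():
    # Grow the list one element at a time
    buf = []
    for _ in range(n):
        buf.append(None)
    return buf


def preallocated():
    # One allocation of the final size
    return [None] * n


assert by_append() == preallocated()

t_append = timeit.timeit(by_append, number=100)
t_prealloc = timeit.timeit(preallocated, number=100)
print(t_prealloc < t_append)  # True: preallocation wins
```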

Re: Synchronous sampling rate

Posted: Wed Sep 19, 2018 4:08 pm
by jickster
thejoker wrote:
Wed Sep 19, 2018 4:05 pm
...
You are absolutely right, my friend. I tested it, and your suggestion is far superior in terms of speed.
It's also better for memory usage.
Currently, when you append to a list whose underlying array is full, the array is reallocated with spare capacity (roughly doubling, depending on the implementation).
So if your LAST append happens on a full list, the C array grows, but you end up using only a little over half of the newly enlarged array.
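The exact growth factor is implementation-dependent (CPython, for instance, over-allocates by roughly 12.5% per resize rather than strictly doubling, and MicroPython has its own policy), but the over-allocation itself is easy to see in CPython with `sys.getsizeof`:

```python
import sys

buf = []
sizes = []
for _ in range(20):
    buf.append(None)
    sizes.append(sys.getsizeof(buf))

# The reported size jumps only at a few growth points, not on every append
print(len(set(sizes)) < len(sizes))  # True

# A list preallocated to its final length is sized exactly once, so it is
# never larger than the append-grown list of the same length
pre = [None] * 20
print(sys.getsizeof(pre) <= sizes[-1])  # True
```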