logging data on SD at constant frequency

rcarmi
Posts: 5
Joined: Mon Mar 04, 2019 10:22 am

logging data on SD at constant frequency

Post by rcarmi » Mon Mar 04, 2019 11:10 am

Hello,

I am fairly new to the pyboard, so the answer may already be available somewhere, but I haven't found it. Here is my problem: I am using the pyboard to sample voltage from a sensor. I need the acquisition to run at a high and fairly constant frequency. The issue is that as soon as I start writing to the SD card, the code slows down... I can't figure out how to prevent this while continuously saving the data. Here is the code I use; it is largely inspired by this topic:
viewtopic.php?t=601

[code]
# main.py -- put your code here!
import pyb
import micropython
import os
import sys

micropython.alloc_emergency_exception_buf(200)
# LED 1 is red
# LED 2 is green
# LED 3 is orange
# LED 4 is blue

class T_4(object):
    def __init__(self):
        # Initialise timer 4 and ADC
        self.status = 'Sample'
        self.file = open('log.txt', 'w')
        self.tim = pyb.Timer(4)
        self.tim.init(freq=1000)
        self.set_start()
        self.adc = pyb.ADC('X1')
        self.buf_index = 0
        self.meanval = 3500
        self.stdval = 100
        self.tempval = 0
        self.buf_flag = 0
        self.buf_0 = [[0, 0] for i in range(1000)]
        self.buf_1 = [[0, 0] for i in range(1000)]
        self.tim.callback(self.t_4_callback)

    def set_start(self):
        # Remember time of start
        self.t_start = pyb.micros()

    def store_data_0(self):
        self.status = 'log'
        # Store buffer data to log file
        for i in range(len(self.buf_0)):
            t = self.buf_0[i][0]
            d = self.buf_0[i][1]
            value_string = '%i \t %i\n' % (t, d)
            self.file.write(value_string)

    def store_data_1(self):
        # Store buffer data to log file
        self.status = 'Log'
        for i in range(len(self.buf_1)):
            t = self.buf_1[i][0]
            d = self.buf_1[i][1]
            value_string = '%i \t %i\n' % (t, d)
            self.file.write(value_string)

    def t_4_callback(self, tim):
        # Read ADC value and current time
        value = self.adc.read() - self.meanval
        t = pyb.micros() - self.t_start
        # Add value and time to buffer
        if self.buf_flag == 0:
            self.buf_0[self.buf_index][0] = t
            self.buf_0[self.buf_index][1] = value
        else:
            self.buf_1[self.buf_index][0] = t
            self.buf_1[self.buf_index][1] = value
        # Increment buffer index until buffer is filled,
        # then disable interrupt and change status
        self.buf_index += 1
        if self.buf_index >= len(self.buf_0):
            self.buf_index = 0
            if self.buf_flag == 0:
                self.status = 'Store_0'
            else:
                self.status = 'Store_1'
            self.buf_flag += 1
            if self.buf_flag == 2:
                self.buf_flag = 0
        self.tempval = value

ledred = pyb.LED(1)
ledgreen = pyb.LED(2)
ledorange = pyb.LED(3)
ledblue = pyb.LED(4)
ledred.on()
info = open('info.txt', 'a')
info.write('\nStart the code\n')
info.close()
t = T_4()
pyb.delay(1000)

# Check if logging is finished, store data if so
# Was unable to call store data or call method storing data from callback
go_on = True
while go_on:
    if t.status == 'Store_0':
        ledgreen.on()
        t.store_data_0()
        go_on = True
        ledgreen.off()
    elif t.status == 'Store_1':
        ledgreen.on()
        t.store_data_1()
        t.status = 'Done'
        go_on = False
        ledgreen.off()
    else:
        pyb.delay(100)
t.tim.callback(None)
t.file.close()
ledgreen.off()
ledred.off()
[/code]

(Sorry, my BBCode is not activated...)

This code is a bit silly, but I am debugging right now. The first 1000 points are sampled correctly, but once the SD card is being written to, the performance drops... In this example it does only one pass. The goal is to keep cycling through the two buffers in a loop.
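
Something like the following is the continuous main loop I am aiming for (a sketch only, untested; the USR switch as a stop condition is made up):

[code]
# Sketch (untested): drain one buffer while the timer callback fills
# the other, until the USR switch is pressed.
sw = pyb.Switch()
while not sw():
    if t.status == 'Store_0':
        t.store_data_0()   # buffer 1 keeps filling meanwhile
    elif t.status == 'Store_1':
        t.store_data_1()   # buffer 0 keeps filling meanwhile
    else:
        pyb.delay(1)
t.tim.callback(None)
t.file.close()
[/code]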

I tried to avoid repeatedly opening the file (this is why I added the t.file attribute).

Any idea how to improve the performance?

I would like to reach the highest possible sampling rate.

Thank you!

rcarmi
Posts: 5
Joined: Mon Mar 04, 2019 10:22 am

Re: logging data on SD at constant frequency

Post by rcarmi » Mon Mar 04, 2019 4:30 pm

Hello again,

Here is where I stand now. If someone finds the solution to my previous post I am still highly interested, but since that does not work right now, here is another approach.

I use the pyboard as an acquisition card, so I want to maximise the amount of data I can feed into it. From the tests I have done, I found that I can store up to 8,000 points (max) at a frequency of up to 4 kHz. Above this the board cannot keep up and dt is no longer constant (which halves the number of points you can get, since you then have to store the time as well). This is definitely not the best, but it works.

An improvement would be to increase the buffer sizes (or the number of buffers). I played with the micropython.alloc_emergency_exception_buf() call, but I am not sure it has any effect; I set it to 500, though the behaviour is the same as with 200.

I have tried to increase the number of self.buf_i buffers, but 2 is the max I could get (whatever their size... weird...).
I then increased the number of columns and reached 4!
I also tried to increase the range (the number of rows), but nothing would go higher. So I think that is the max! I would be super excited if someone were able to double these figures; 16,000 points starts to be a decent size ;)

So right now the max I can do is 4 kHz and 8,000 points, i.e. 2 s of acquisition;
at 1 kHz that is 8 s, and so on!

Here is the code (again, sorry, it is in the text directly; the little BBCode icon is OFF right now and I have no clue how to change this...).


[code]
# main.py -- put your code here!
import pyb
import micropython
import os
import sys

micropython.alloc_emergency_exception_buf(500)
# LED 1 is red
# LED 2 is green
# LED 3 is orange
# LED 4 is blue

class T_4(object):
    def __init__(self, timerId, freqVal, fileName):
        # Initialise timer and ADC
        self.status = 'Sample'
        self.file = open(fileName, 'w')
        self.tim = pyb.Timer(timerId)
        self.tim.init(freq=freqVal)
        self.set_start()
        self.adc = pyb.ADC('X1')
        self.buf_index = 0
        self.buf_flag = 0
        self.buf_0 = [[0, 0, 0, 0] for i in range(1000)]
        self.buf_1 = [[0, 0, 0, 0] for i in range(1000)]
        self.tim.callback(self.t_4_callback)

    def set_start(self):
        # Remember time of start
        self.t_start = pyb.micros()

    def store_data(self):
        self.status = 'log'
        # Store buffer data to log file, one column at a time
        for i in range(len(self.buf_0)):
            self.file.write('%i\n' % self.buf_0[i][0])
        for i in range(len(self.buf_0)):
            self.file.write('%i\n' % self.buf_1[i][0])
        for i in range(len(self.buf_0)):
            self.file.write('%i\n' % self.buf_0[i][1])
        for i in range(len(self.buf_0)):
            self.file.write('%i\n' % self.buf_1[i][1])
        for i in range(len(self.buf_0)):
            self.file.write('%i\n' % self.buf_0[i][2])
        for i in range(len(self.buf_0)):
            self.file.write('%i\n' % self.buf_1[i][2])
        for i in range(len(self.buf_0)):
            self.file.write('%i\n' % self.buf_0[i][3])
        for i in range(len(self.buf_0)):
            self.file.write('%i\n' % self.buf_1[i][3])

    def t_4_callback(self, tim):
        # Read ADC value and current time
        value = self.adc.read()
        t = pyb.micros()
        # Add value and time to buffer
        if self.buf_flag == 0:
            self.buf_0[self.buf_index][0] = t
        elif self.buf_flag == 1:
            self.buf_1[self.buf_index][0] = t
        elif self.buf_flag == 2:
            self.buf_0[self.buf_index][1] = t
        elif self.buf_flag == 3:
            self.buf_1[self.buf_index][1] = t
        elif self.buf_flag == 4:
            self.buf_0[self.buf_index][2] = t
        elif self.buf_flag == 5:
            self.buf_1[self.buf_index][2] = t
        elif self.buf_flag == 6:
            self.buf_0[self.buf_index][3] = t
        elif self.buf_flag == 7:
            self.buf_1[self.buf_index][3] = t
        # Increment buffer index until buffer is filled,
        # then disable interrupt and change status
        self.buf_index += 1
        if self.buf_index >= len(self.buf_0):
            self.buf_index = 0
            if self.buf_flag == 7:
                self.status = 'AcquisitionDone'
                self.tim.callback(None)
            self.buf_flag += 1

ledred = pyb.LED(1)
ledgreen = pyb.LED(2)
ledorange = pyb.LED(3)
ledblue = pyb.LED(4)
ledred.on()
info = open('info.txt', 'a')
info.write('\nStart the code\n')
info.close()
f0 = 4000
t1 = T_4(3, f0, 'log_1.txt')
pyb.delay(1000)
# Check if logging is finished, store data if so
# Was unable to call store data or call method storing data from callback
go_on = True
while go_on:
    if t1.status == 'AcquisitionDone':
        ledgreen.on()
        t1.store_data()
        go_on = False
        ledgreen.off()
    else:
        pyb.delay(50)
t1.tim.callback(None)
t1.file.close()
ledgreen.off()
ledred.off()
[/code]

From there one could write code that logs the data in bursts of 8,000 points, takes whatever time it needs to save, then restarts... Not optimal, but the best I could do in one afternoon...
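
Roughly like this (a sketch only, untested; it reuses the T_4 class above, and the file names are made up):

[code]
# Sketch (untested): acquire a burst, block while it is written to SD,
# then restart with a fresh acquisition.
for burst in range(10):                  # e.g. ten bursts
    t1 = T_4(3, f0, 'log_%i.txt' % burst)
    while t1.status != 'AcquisitionDone':
        pyb.delay(50)
    t1.store_data()                      # saving takes as long as it takes
    t1.file.close()
[/code]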

Still happy to get help!!

Disclaimer:
I have also tried this version:
[code]
# main.py -- put your code here!
import pyb
import micropython
import os
import sys

micropython.alloc_emergency_exception_buf(200)
# LED 1 is red
# LED 2 is green
# LED 3 is orange
# LED 4 is blue
ledred = pyb.LED(1)
ledgreen = pyb.LED(2)
ledorange = pyb.LED(3)
ledblue = pyb.LED(4)
adc = pyb.ADC('X1')
switch = pyb.Switch()
ledred.on()
info = open('info.txt', 'a')
info.write('\nStart the code\n')
info.close()
pyb.delay(1000)
f = open('log_2.txt', 'w')
# Sample and write directly from the main loop until the USR switch is pressed
t0 = pyb.micros()
while not switch():
    t = pyb.micros() - t0
    val = adc.read()
    value_string = '%i \t %i\n' % (t, val)
    f.write(value_string)
    pyb.delay(1)

f.close()
ledgreen.off()
ledred.off()
[/code]
and this is terrible... The frequency is not at all constant and the errors are sometimes huge. It is not able to maintain a high frequency.


Thanks for any help!

Rem

rhubarbdog
Posts: 168
Joined: Tue Nov 07, 2017 11:45 pm

Re: logging data on SD at constant frequency

Post by rhubarbdog » Tue Mar 05, 2019 2:11 am

Code: Select all

micropython.alloc_emergency_exception_buf(200)
creates space in memory to store the traceback text from an exception which occurs in a callback. Increasing it to 500 will only consume RAM and do nothing of interest to your program.

rhubarbdog
Posts: 168
Joined: Tue Nov 07, 2017 11:45 pm

Re: logging data on SD at constant frequency

Post by rhubarbdog » Tue Mar 05, 2019 2:31 am

You make multiple calls to len(self.buf_0) in the store_data() method. It's faster to call len() once at the start of the method and use that variable in your range statements. Because of how you set your buffers up in the __init__ method, you could call len(self.buf_0) there and store it in an attribute self.max_buf.

When you write your data you make many iterations; can't you re-order your logic to have only one loop? (See the sketch after the timing example below.) Remember that

Code: Select all

for i in range(1000):
    pass
    
takes a few microseconds to run.
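
For illustration, store_data() could be collapsed around a single write statement along these lines (a sketch, untested; the output order is unchanged from the original eight loops):

Code: Select all

# Sketch (untested): same output order as the eight loops above, but
# len() is only called once (self.max_buf, cached in __init__).
def store_data(self):
    self.status = 'log'
    for col in range(4):
        for buf in (self.buf_0, self.buf_1):
            for i in range(self.max_buf):
                self.file.write('%i\n' % buf[i][col])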

rhubarbdog
Posts: 168
Joined: Tue Nov 07, 2017 11:45 pm

Re: logging data on SD at constant frequency

Post by rhubarbdog » Thu Mar 07, 2019 2:23 am

I've implemented what you require with a ring buffer.
I can't get speeds significantly higher than 8 kHz: the write takes too long and the indexes cross.
If you want me to post my code, just ask in this thread.

saulo
Posts: 16
Joined: Thu May 26, 2016 9:05 am
Location: Brasil

Re: logging data on SD at constant frequency

Post by saulo » Fri Mar 08, 2019 1:27 pm

rhubarbdog wrote:
Thu Mar 07, 2019 2:23 am
I've implemented what you require with a ring buffer.
I can't get speeds significantly higher than 8 kHz: the write takes too long and the indexes cross.
If you want me to post my code, just ask in this thread.
Use the fastest microSD card you can get your hands on and try to write in binary; parsing the data can be done afterwards if you want fast DAQ performance, right?
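
For instance (a sketch of the idea only; the file name is made up):

Code: Select all

# Sketch: collect raw integers in an array.array and write the whole
# buffer to SD in one bulk binary write; decode on the PC afterwards.
import array
buf = array.array('i', range(1024))   # placeholder data
with open('/sd/raw.bin', 'wb') as f:
    f.write(buf)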

rhubarbdog
Posts: 168
Joined: Tue Nov 07, 2017 11:45 pm

Re: logging data on SD at constant frequency

Post by rhubarbdog » Fri Mar 08, 2019 1:35 pm

I'm writing 1-4 kB of array.array integers in one statement. I interleave the data in the single buffer, then when the test is done I convert the binary to CSV.

rcarmi
Posts: 5
Joined: Mon Mar 04, 2019 10:22 am

Re: logging data on SD at constant frequency

Post by rcarmi » Fri Mar 08, 2019 2:38 pm

Hello all,

Thanks, rhubarbdog, for your comment. Please share the code with us: I think many people could find this option interesting, and it will greatly help me in my projects.

Best

Rémi

rhubarbdog
Posts: 168
Joined: Tue Nov 07, 2017 11:45 pm

Re: logging data on SD at constant frequency

Post by rhubarbdog » Sat Mar 09, 2019 4:31 am

Here's my code. I think it's probably as good as it gets. When the write actually happens there is a delay of about 1 ms, which causes glitches in the data; these are quite easy to identify (see the sketch after the CSV converter below) and you could massage the data as required.
Running at less than 8 kHz seems to give a more accurate number of data points.

Here's the data logger

Code: Select all

import pyb
import time
import micropython
micropython.alloc_emergency_exception_buf(200)
import os
import array

class logger():
    def __init__(self, timer_, freq_, file_name):
        self.block = 1024 # * sizeof(int) = 4096
        # the buffer size should be n * self.block, and a factor of the
        # total number of data points
        self.buffer_ = array.array('i', [0 for i in range(8192)])

        self.write_data_ = self.write_data  # bound method, cached for micropython.schedule

        self.into_buf_i = 0
        self.write_i = 0
        self.max_i = len(self.buffer_)
        self.looped = False
        self.writing = False
        self.error = False

        self.timer = pyb.Timer(timer_, freq = freq_)
        self.sensor = pyb.ADC('X1')
        self.file_ = open(file_name, 'wb')

        self.begin = time.ticks_us()
        self.timer.callback(self.buffer_data)

    def terminate(self):
        if not self.error:
            self.timer.callback(None)
            time.sleep_ms(500)
            while self.writing:
                pass
            if self.looped:
                self.file_.write(self.buffer_[self.write_i:self.max_i])
                self.write_i = 0
            if self.into_buf_i != 0:
                self.file_.write(self.buffer_[self.write_i:self.into_buf_i])

            self.file_.close()

    def buffer_data(self, timer_):
        # Timer callback: store one (value, timestamp) pair in the ring buffer
        self.buffer_[self.into_buf_i] = self.sensor.read()
        self.buffer_[self.into_buf_i + 1]\
            = time.ticks_diff(time.ticks_us(), self.begin)
        self.into_buf_i += 2

        # Schedule a write once at least one block of data is pending
        if self.write_i + self.block < self.into_buf_i or self.looped:
            if not self.writing:
                self.writing = True
                micropython.schedule(self.write_data_, 0)

        # Wrap the fill index at the end of the ring
        if self.into_buf_i >= self.max_i:
            self.into_buf_i = 0
            self.looped = True

        # Overrun: the fill index has caught up with the write index
        if self.looped and self.into_buf_i >= self.write_i:
            self.timer.callback(None)
            self.error = True
            self.file_.close()

    def write_data(self, _):
        # Runs via micropython.schedule, outside the IRQ:
        # flush complete blocks to the SD card
        while self.write_i + self.block < self.into_buf_i or\
              (self.looped and self.write_i < self.max_i):
            tmp = self.write_i + self.block
            self.file_.write(self.buffer_[self.write_i:tmp])
            self.write_i = tmp
            if self.write_i >= self.max_i:
                self.write_i = 0
                self.looped = False

        self.writing = False
    

yellow = pyb.LED(3)
blue = pyb.LED(4)
os.chdir('/sd')

blue.off()
yellow.on()
data_set = logger(4, 7000, 'data-set.bin')
time.sleep(1)
yellow.off()
if data_set.error:
    blue.on()
data_set.terminate()
And here's a Python 3 script to convert this data into CSV so it can be viewed in a spreadsheet:

Code: Select all

import os
import struct

os.chdir('pyb_sd')
    
input_ = 'data-set.bin'
output = 'data-set.csv'
FORMAT  = '=ii'

ok = True
with open(output, 'w') as out_f:
    with open(input_, 'rb') as in_f:
        while ok:
            try:
                buffer_ = bytearray(in_f.read(struct.calcsize(FORMAT)))
                out_f.write("%d\t%d\n" % struct.unpack(FORMAT, buffer_))
            except (OSError, struct.error):
                ok = False


os.chdir('..')
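To spot the write glitches mentioned above, something along these lines could scan the converted CSV for abnormal timestamp gaps (a PC-side sketch; the threshold is an arbitrary choice):

Code: Select all

# Sketch: flag samples whose timestamp gap deviates strongly from the
# nominal period (here the 7 kHz used in the logger above).
NOMINAL_US = 1000000 // 7000   # ~143 us per sample
prev_t = None
with open('data-set.csv') as f:
    for line in f:
        value, t = (int(x) for x in line.split('\t'))
        if prev_t is not None and abs(t - prev_t - NOMINAL_US) > NOMINAL_US // 2:
            print('glitch near t = %d us, dt = %d us' % (t, t - prev_t))
        prev_t = t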
I think if you need every data point accurately at a frequency of, say, 8 kHz, you need two pyboards: one sampling the data and one writing it to disk. Connect them via UART and set the receive buffer to, say, 512 bytes.
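
The receiving side might look something like this (a sketch only; the bus number and baud rate are assumptions):

Code: Select all

# Sketch of the logging board: the other pyboard streams raw binary
# samples over UART. read_buf_len enlarges the interrupt-driven receive
# buffer so bytes are not dropped during slow SD writes.
import pyb
uart = pyb.UART(4, 921600, read_buf_len=512)
with open('/sd/uart-log.bin', 'wb') as f:
    while True:   # stop condition omitted
        n = uart.any()
        if n:
            f.write(uart.read(n))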

pythoncoder
Posts: 5956
Joined: Fri Jul 18, 2014 8:01 am
Location: UK
Contact:

Fast logging ideas

Post by pythoncoder » Sat Mar 09, 2019 10:21 am

I would suggest creating a binary file whose size matches the total amount of data to be collected. When a sample is acquired, convert to binary, seek to the required offset, and write the data.

This could offer several benefits:
  • The FAT index would not need to be updated.
  • If the file is initially filled with 0xff, needless block erase cycles should be eliminated.
  • Much of the non-deterministic behaviour of Flash filesystems should be avoided (e.g. wear levelling).
I'd be interested to hear the outcome of any testing, including whether preloading the file with 0 makes any difference. The underlying NAND Flash technology can change 1's to 0's without an erase cycle, but not 0's to 1's. However, it is conceivable that there might be an inversion somewhere between the API and the hardware. [EDIT] See below.

A more radical approach to fast logging would be to buffer the data in RAM. This is not feasible on a Pyboard 1.x but should be possible on the Pyboard D SF6W (when it emerges). It might be possible on an ESP32 with SPIRAM. Or there may be a way to use an SPIRAM chip over the Pyboard SPI interface.

[EDIT] I had to try this ;)
tl;dr: with the Flash card I used, preloading with 0xff was faster: 909 ms vs 1264 ms to write 8192 samples of two integers. Tested on a Pyboard V1.0.

Code: Select all

import utime
from micropython import const

NSAMPLES = const(8192)
ISIZE = const(4) # Bytes per integer
SSIZE = const(2)  # Integers per sample
SAMPLE_BYTES = const(ISIZE * SSIZE)

data = [1234, 5678]

def test(init=b'\xff'):
    with open('/sd/log.bin', 'wb') as f:
        for _ in range(NSAMPLES * ISIZE * SSIZE):
            f.write(init)  # Create file
        t = utime.ticks_ms()  # Start time
        for n in range(NSAMPLES):  # Write the data
            f.seek(n * SAMPLE_BYTES)
            for d in data:
                f.write(d.to_bytes(ISIZE, 'little'))
        dt = utime.ticks_diff(utime.ticks_ms(), t)
        print('Init: {}, time {:6d}(ms)'.format(init, dt))

test()
test(b'\0')
To verify the file contents I ran

Code: Select all

with open('/sd/log.bin', 'rb') as f:
    for _ in range(5):
        d0 = int.from_bytes(f.read(4), 'little')
        d1 = int.from_bytes(f.read(4), 'little')
        print(d0, d1)
Peter Hinch
Index to my micropython libraries.
