Hello,
I have a data logger based on the PyBoard 1.1 that writes CSV files to a FAT32 SD card at intervals ranging from 0.1 s up to 24 hours.
Here is what I have noticed when trying different ways to manage the log files:
1. If you create a new file, say, every minute, hour or day, then as the number of files on the SD card grows, file creation and open times start to increase, which becomes a problem if you are writing at 10Hz: you start to see data missing between closing the old file and creating the new one. I gave up on this and moved to '2' below, using just a single file;
2. If you use just a single CSV file, which works up to 4GB on FAT32, then as the file grows in size (i.e. at around 500MB) you start to see the file access time increase, which again becomes a problem if you are writing at 10Hz.
The file system info for one of the Samsung 64GB EVO SD cards is as follows:
Bytes per sector: 512
Sectors per cluster: 64
Sectors of root directory: unlimited
Reserved sectors: 32
Sectors per FAT: 15259
First FAT sector: 32
First data sector: 30550
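As a sanity check on those numbers (assuming the standard FAT32 layout with two copies of the FAT), the first data sector should equal the reserved sectors plus both FAT copies:

```python
# FAT32 layout values reported for the 64GB Samsung EVO card
bytes_per_sector = 512
sectors_per_cluster = 64
reserved_sectors = 32
sectors_per_fat = 15259
num_fats = 2  # FAT32 normally keeps two copies of the FAT

cluster_size = bytes_per_sector * sectors_per_cluster
first_data_sector = reserved_sectors + num_fats * sectors_per_fat

print(cluster_size)       # 32768 -> 32 KB clusters
print(first_data_sector)  # 30550, matching the reported value
```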
What I am interested in finding out is the best way to manage these problems and data-logger files with FAT32.
Any comments on this would be appreciated.
FAT32 SD card writes get slower as the file gets bigger
Re: FAT32 SD card writes get slower as the file gets bigger
After spending some more time on this, here is the solution to the problem, which might help others.
I found that if I keep the file open at all times, the time to write to a 4MB file is pretty much the same as to a 4GB file (measured at up to approximately 4ms).
I have now changed the data logger to keep the file open and to create a new file at a fixed period (i.e. day/week/month), which works well. The data logger also has a pause button that triggers a file close, so it can be powered down after a pause without losing data. Ideally I would also like to detect when power is removed, but this will require a bigger capacitor and a better detection circuit, so I might look at that for another version.
It was found that the operation whose time increases with file size is the file open, as shown below:
Code:
# Opening in append mode on every sample: this open() slows down as the file grows
self.file = open(self.filePathLog, 'a')
self.bytesWritten = self.file.write(data)
self.file.close()
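A minimal sketch of the keep-the-file-open approach (class and file names are hypothetical; on the PyBoard you would call `log()` from the sampling loop):

```python
import time

class KeepOpenLogger:
    """Open the log file once and keep it open between writes,
    rotating to a new file only at a fixed period."""

    def __init__(self, directory, rotate_seconds=24 * 3600):
        self.directory = directory
        self.rotate_seconds = rotate_seconds
        self.file = None
        self.opened_at = None

    def _open_new_file(self):
        # File naming by timestamp is an assumption for illustration
        name = "log_%d.csv" % int(time.time())
        self.file = open(self.directory + "/" + name, "a")
        self.opened_at = time.time()

    def log(self, line):
        if self.file is None or time.time() - self.opened_at >= self.rotate_seconds:
            self.close()           # close the old file before rotating
            self._open_new_file()  # open happens once per period, not per sample
        self.file.write(line + "\n")
        self.file.flush()          # push data out without paying the open() cost

    def close(self):
        # Called on rotation and from the pause button, so power can be removed safely
        if self.file is not None:
            self.file.close()
            self.file = None
```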
Re: FAT32 SD card writes get slower as the file gets bigger
I had another type of board (not running MicroPython) that did data logging. What I did was monitor the voltage ahead of the voltage regulator (battery or raw voltage) through a voltage divider, and when it started to fall below a preset value, immediately close the SD file. That gave me a greater (and sufficient) amount of time to close the file, and it always worked reliably. Of course a larger cap, or even a supercap, on the 3.3V supply would also help.
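A rough sketch of that idea (the divider ratio, threshold and pin choice are assumptions for illustration; on the pyboard the raw voltage could be sampled with `pyb.ADC`):

```python
def supply_voltage(adc_reading, vref=3.3, adc_max=4095, divider_ratio=4.0):
    """Convert a 12-bit ADC reading taken after a voltage divider
    back to the raw supply voltage ahead of the regulator."""
    return adc_reading / adc_max * vref * divider_ratio

def power_failing(adc_reading, threshold=4.5):
    """True when the pre-regulator voltage has dropped below the
    preset value, meaning the SD file should be closed immediately."""
    return supply_voltage(adc_reading) < threshold

# On the pyboard this would be polled in the main loop, e.g.:
#   adc = pyb.ADC(pyb.Pin('X19'))   # pin choice is an assumption
#   if power_failing(adc.read()):
#       logger.close()               # close the SD file while power still holds
```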
Re: FAT32 SD card writes get slower as the file gets bigger
Are all the files in the same directory (i.e. the root directory)?
I seem to remember (would need to check, but fairly sure) that FAT file creation and opening is O(n).
You could probably alleviate this by creating directories (essentially turning it into O(log n)), e.g. /year/month/day/time.log
(That's easy if you know the exact datetime, which your program may or may not.)
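A small sketch of that directory layout, assuming the datetime is known (e.g. from the pyboard RTC; the function name is hypothetical):

```python
import os
from datetime import datetime

def log_path(root, dt):
    """Build root/year/month/day/time.log so each directory stays small,
    keeping FAT directory scans short."""
    directory = "%s/%04d/%02d/%02d" % (root, dt.year, dt.month, dt.day)
    try:
        os.makedirs(directory)
    except OSError:
        pass  # already exists (MicroPython's os has no exist_ok keyword)
    return "%s/%02d%02d%02d.log" % (directory, dt.hour, dt.minute, dt.second)
```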
You might also want to look at LittleFS (which is pretty much better than FAT in every way; hopefully it handles this better too!).