The file system, raw flash and wear levelling

All ESP8266 boards running MicroPython.
Official boards are the Adafruit Huzzah and Feather boards.
Target audience: MicroPython users with an ESP8266 board.
ok8266
Posts: 3
Joined: Thu Jan 16, 2020 10:11 am

The file system, raw flash and wear levelling

Post by ok8266 » Thu Jan 16, 2020 10:30 am

Hi,

On a Wemos D1 mini, I want to use the remaining free flash space to regularly store some data and then send it upstream when conditions permit. I have now collected a few questions regarding the management of the flash from MicroPython:

Do I understand correctly that MicroPython's file system functionality is essentially a form of FAT over the free flash space on the device and does no further wear levelling, meaning I risk premature wear-out if I create (and delete) lots of data files, which would rewrite the FS directory index structures very often?

Given that I essentially want to keep (self-contained) data chunks around until I have sent them away and then delete them, I think I can implement a simple and sufficient wear-levelling approach myself in Python. I want to store my chunks sequentially in a modular wrap-around fashion in the available flash space, keeping track of what I have sent by simply deleting chunks after the upstream host has acknowledged reception.
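The wrap-around idea described above can be sketched in a few lines of Python. This is a hypothetical illustration only: the `ChunkLog` class and its region size are my own invention, and an in-memory bytearray stands in for the reserved flash region (on the device, the same logic would sit on top of whatever raw read/write primitives end up being used):

```python
# Hypothetical sketch of a wrap-around chunk log over a fixed-size region.
# A bytearray stands in for the raw flash area (erased NOR flash reads 0xff).

REGION_SIZE = 1024  # illustrative size of the reserved region


class ChunkLog:
    def __init__(self, size=REGION_SIZE):
        self.buf = bytearray(b"\xff" * size)
        self.head = 0       # next write position, wraps modulo the region
        self.pending = []   # (offset, record_length) of unsent chunks

    def append(self, data):
        # 2-byte length header, then the payload, wrapping around the region
        record = len(data).to_bytes(2, "big") + data
        for i, b in enumerate(record):
            self.buf[(self.head + i) % len(self.buf)] = b
        self.pending.append((self.head, len(record)))
        self.head = (self.head + len(record)) % len(self.buf)

    def pop_oldest(self):
        # Called once the upstream host has acknowledged reception.
        off, length = self.pending.pop(0)
        return bytes(self.buf[(off + 2 + i) % len(self.buf)]
                     for i in range(length - 2))


log = ChunkLog()
log.append(b"reading-1")
log.append(b"reading-2")
print(log.pop_oldest())  # b'reading-1'
```

On real flash, "deleting" an acknowledged chunk would additionally need some tombstone or header marking, since the region can only be reclaimed sector-by-sector with an erase.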

However, this approach depends on how the exposed APIs actually manage the underlying flash:

Is there an easy way to use the available raw flash access functions and be sure not to poke into the wrong parts of the flash (e.g. the MicroPython installation itself or the filesystem data structures)? I see no "erase" function to call - is erasing the flash done by writing 0xff data to a particular location? Or is the logic inverted somehow and I should write 0x00? (This is not a particularly critical question, as getting it wrong would reduce flash life by only a factor of 2.)
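For what it's worth, the ESP8266 port's `esp` module does expose an explicit erase (`esp.flash_erase(sector_no)`, alongside `esp.flash_read`, `esp.flash_write` and `esp.flash_user_start`), and NOR flash erases to 0xff: a write can only clear bits from 1 to 0, never set them back. A pure-Python model of one sector (the class and sector size here are illustrative, not a real driver) shows why erase has to be a separate operation:

```python
# Pure-Python model of NOR-flash sector semantics: an erase sets every
# byte of the sector to 0xff, and a write can only clear bits
# (new_byte = old_byte & written_byte).

SECTOR_SIZE = 4096  # typical SPI-flash sector size


class NorSector:
    def __init__(self):
        self.data = bytearray(b"\xff" * SECTOR_SIZE)  # erased state

    def erase(self):
        for i in range(SECTOR_SIZE):
            self.data[i] = 0xFF

    def write(self, offset, payload):
        for i, b in enumerate(payload):
            # bits can only go 1 -> 0 without an erase
            self.data[offset + i] &= b


s = NorSector()
s.write(0, b"\x0f")
s.write(0, b"\xf0")
print(hex(s.data[0]))  # 0x0 - overlapping writes AND together
s.erase()
print(hex(s.data[0]))  # 0xff
```

So writing 0xff over a location is effectively a no-op, and writing 0x00 is not an erase either; only the sector-wide erase restores bytes to 0xff.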

If I allocate a big file (filling most of the available FS space) and then go and only rewrite sections of that file while not changing its size, does that mean that only the place in flash corresponding to where I rewrite is touched? Or is micropython also automatically updating anything in the file system index structures (e.g. like Linux would be updating atime)?

Cheers and thanks for any answers!


Roberthh
Posts: 3667
Joined: Sat May 09, 2015 4:13 pm
Location: Rhineland, Europe

Re: The file system, raw flash and wear levelling

Post by Roberthh » Thu Jan 16, 2020 12:44 pm

You may also buy the Wemos D1 mini SD card shield and use an SD card.


Re: The file system, raw flash and wear levelling

Post by ok8266 » Fri Jan 17, 2020 10:49 am

Thank you both for the suggestions. I was not aware of the littlefs option for ESP8266 and it seems to be in general a good solution to my problem.

For various reasons (deployment...) I still wonder about my idea to preallocate a big file on the current fatfs and then just rewrite parts of it - is that possible as an ad-hoc wear-levelling technique, or will unexpected behavior from the lower layers create a problem for me?


Re: The file system, raw flash and wear levelling

Post by Roberthh » Fri Jan 17, 2020 12:14 pm

Even when you pre-allocate a big file, you would still have two writes: one at the portion of the file where you write the data, and one to the directory, which keeps track of the file modification time. So in general that would not be much better than just appending to the file. Appending would in addition require writes to the FAT when another block is allocated.
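The two-writes point can be seen even on a desktop Python, where rewriting part of a pre-allocated file in place still bumps the modification time that FAT would store in the directory entry (the file name and sizes here are only illustrative):

```python
# Illustration: an in-place partial rewrite leaves the file size unchanged
# but still updates the file's metadata (mtime), which on FAT lives in the
# directory entry - hence the second write Roberthh describes.
import os
import tempfile
import time

path = os.path.join(tempfile.mkdtemp(), "chunks.bin")
with open(path, "wb") as f:
    f.write(b"\xff" * 4096)        # pre-allocate the file

before = os.stat(path).st_mtime
time.sleep(1.1)                    # make the timestamp difference visible

with open(path, "r+b") as f:       # r+b rewrites in place, no truncation
    f.seek(1024)
    f.write(b"new data")           # only this range of the file changes

after = os.stat(path).st_mtime
print(after > before)  # True - the directory metadata was updated too
```

Note that FAT timestamps have a 2-second resolution, so on the device the directory entry may not be rewritten on every single write, but it cannot be avoided in general.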
I should mention too that writing to the SD card is much faster.


Re: The file system, raw flash and wear levelling

Post by ok8266 » Fri Jan 17, 2020 5:05 pm

@Roberthh: Alright, thanks for the explanation. I'll use littlefs2 then; I've now got it built and installed on a test board and it seems to work.
