They maintain no state, but loading each of our text-adventure 'Jinja-style' templates slowly eats into memory, and with too many templates the free heap eventually runs too low. This places a limit on the size of our text adventures, and resetting periodically would resolve it outright.
Unfortunately, the routine I wrote to call machine.reset() when gc.mem_free() drops too low seems unreliable. If I manually CTRL+C the running _boot.py routine and test machine.reset() interactively on our nodes, it mostly works fine, but sometimes the serial console shows the following and then hangs completely...
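For reference, the routine is essentially the following sketch (the threshold and names here are illustrative, not our exact code), with the policy check split out so it can be exercised off-device:

```python
import gc

LOW_MEM_THRESHOLD = 4096  # bytes free; illustrative value, not our real tuning

def memory_low(free_bytes, threshold=LOW_MEM_THRESHOLD):
    # Pure policy check, so it can be tested off-device.
    return free_bytes < threshold

def check_and_reboot():
    gc.collect()  # collect first, so we only reboot when genuinely low
    if memory_low(gc.mem_free()):  # gc.mem_free() is MicroPython-specific
        print("LOW MEMORY, REBOOTING")
        import machine  # MicroPython-only module
        machine.reset()
```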
Code:

LOW MEMORY, REBOOTING
bcn 0 del if1
usl
Visiting box 3
 ets Jan  8 2013,rst cause:2, boot mode:(1,7)
 ets Jan  8 2013,rst cause:4, boot mode:(1,7)
wdt reset
Interestingly, the text "Visiting box 3" comes from a few lines after the machine.reset() request, suggesting that execution carries on past the reset call for some reason.
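One thing we could try (an assumption on my part, not something I have verified on 1.8.7): explicitly flush output and pause briefly before calling machine.reset(), so nothing is still in flight when the reset fires:

```python
import sys
import time

def delayed_reset(settle_s=1):
    # Assumption: giving the UART and any pending work a moment to
    # settle before resetting may avoid the race suggested by the log,
    # where later print output appears after the reset request.
    sys.stdout.write("LOW MEMORY, REBOOTING\n")
    time.sleep(settle_s)  # settle delay; tune as needed
    import machine        # MicroPython-only; imported late so the sketch parses off-device
    machine.reset()
```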
These nodes maintain a single hardware SPI and a single software SPI to a screen and RFID reader, but have no explicit interrupts or sleep management in place. The VFS and Wifi lines in _boot.py are commented out, and we are running exclusively from a frozen filesystem. However, even with this special configuration I would expect reboots to repeat the behaviour of the first boot.
Can anyone suggest a configuration which is more reliable, or a hardware/software workaround for the confused state which prevents the node from coming back up?
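One candidate hardware/software workaround, if GPIO16 is (or can be) wired to RST on these boards: use a very short deepsleep instead of machine.reset(), so the wake-up behaves like a power-on reset rather than a soft reset. A sketch using the documented ESP8266 RTC alarm API (untested by me on 1.8.7):

```python
def reset_via_deepsleep(ms=10):
    # Requires GPIO16 wired to RST: the RTC alarm wakes the chip from
    # deepsleep, which looks like a power-on reset to the SoC.
    import machine  # MicroPython-only; imported here so the sketch parses off-device
    rtc = machine.RTC()
    rtc.irq(trigger=rtc.ALARM0, wake=machine.DEEPSLEEP)
    rtc.alarm(rtc.ALARM0, ms)  # wake after `ms` milliseconds
    machine.deepsleep()
```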
We have been targeting 1.8.7, as this system has been in testing for a long time and I fear other compatibility issues if we migrate; but if there are known fixes in 1.9.x and no other workaround we can try, we will rebuild our toolchain against the latest release.