- It would seem best to instantiate large objects as globals at the start of the program, while the heap is still clean (see the sketch after this list).
- My impression is that assigning objects to globals reduces the issue. Is this valid? When is the RAM for globals actually allocated?
- Any other design pointers?
- Is heap fragmentation a potential reliability issue in continuously running programs?
- How can you assess the margin of safety for a given program?
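To make the first point concrete, this is the sort of thing I have in mind. Names and sizes are invented purely for illustration: allocate the big objects once at import time, while the heap is clean, and then mutate them in place rather than creating new objects at run time.

Code:
from array import array

# Allocated once at import time, while the heap is still unfragmented
samples = array('i', (0 for _ in range(1000)))  # fixed-size integer buffer
rxbuf = bytearray(64)                           # reused for every incoming packet

def process(packet):
    # Work in place on the pre-allocated buffer instead of allocating a new one
    n = len(packet)
    rxbuf[:n] = packet
    samples[0] = n  # placeholder for the real processing
    return n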
Code:
import gc, micropython
gc.collect()            # run a collection before taking the measurement
micropython.mem_info()  # report heap usage
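If it would help with diagnosis, I could extend that to log a rough picture of fragmentation each time. A sketch of what I mean (the print format is arbitrary; my understanding is that gc.mem_free()/gc.mem_alloc() are available on this port, and that the verbose argument to mem_info prints a block map of the heap):

Code:
import gc, micropython

def report():
    gc.collect()
    print('free: {} bytes, allocated: {} bytes'.format(gc.mem_free(), gc.mem_alloc()))
    micropython.mem_info(1)  # verbose form: prints a map of heap blocks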
Code:
def remdata(remote):
    # code omitted
    missed = len({p for p in range(min(seqvals), max(seqvals)+1)} - seqvals)  # Allocation fail despite 33K available
    return {'min_t1': min_t1, 'max_t1': max_t1, 'min_t2': min_t2, 'max_t2': max_t2,
            'max_nfail': max_nfail, 'max_seqfail': max_seqfail, 'missed': missed,
            'min_vdd': min_vdd}
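One thing I'm considering is reworking the failing line so that it doesn't build the large temporary set at all, something along these lines (sketch only; seqvals here is a made-up example set of received sequence numbers). Would this reduce the pressure on the heap?

Code:
seqvals = {0, 1, 2, 4, 7}  # example data only
lo, hi = min(seqvals), max(seqvals)
# Count the missing sequence numbers one at a time instead of materialising
# set(range(lo, hi + 1)), so no single large contiguous allocation is needed
missed = sum(1 for p in range(lo, hi + 1) if p not in seqvals)
print(missed)  # -> 3 (numbers 3, 5 and 6 were never received)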
Code:
def remdata(remote):
    global hist
    # code omitted
    missed = len({p for p in range(min(seqvals), max(seqvals)+1)} - seqvals)
    hist = {'min_t1': min_t1, 'max_t1': max_t1, 'min_t2': min_t2, 'max_t2': max_t2,
            'max_nfail': max_nfail, 'max_seqfail': max_seqfail, 'missed': missed,
            'min_vdd': min_vdd}
    return hist
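Following the logic of my first question, I could go a step further and create hist once at the start, then update it in place so the dict never has to be rebuilt on the heap. Roughly (field names as in my code, initial values arbitrary; mutating the dict in place doesn't even need the global declaration):

Code:
# Allocated once at import time, while the heap is clean
hist = {'min_t1': 0, 'max_t1': 0, 'min_t2': 0, 'max_t2': 0,
        'max_nfail': 0, 'max_seqfail': 0, 'missed': 0, 'min_vdd': 0}

def remdata(remote):
    # code omitted (same statistics gathering as above)
    # Update the pre-allocated dict in place instead of binding a new one each call
    hist['missed'] = missed
    hist['min_vdd'] = min_vdd
    # ...and likewise for the remaining fields
    return hist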
Does the team think heap fragmentation is likely to be the cause? And how can I get the code to a state where small, arbitrary changes don't result in a crash? Experience of numerous crashes tells me that the program is working with a very small margin.