Trying to understand how memory allocation is working here...
This is on a Pico with just MicroPython, no main.py. Just the REPL running.
I reset the board, connect with Thonny and import gc.
Then I use gc to look at how much memory is allocated. Every time I run gc.mem_alloc() another thousand or so bytes are used up:
Out of curiosity, I kept going:
Until it garbage collected...
All I am doing is up-arrow > Enter to re-run the command.
What's going on? Over 190K allocated just running gc.mem_alloc() repeatedly every 1 to 5 seconds. I would like to understand.
I even waited to see if garbage collection would fire an event and clear a bunch of memory. In the several minutes it took to create this post to this point, nothing changed. Up-arrow > Enter added another thousand bytes to the prior total.
I am trying to be very aware of memory allocation for a high-speed application I am working on. In fact, I am manually running the garbage collector at specific moments, driven by the state machine that runs the application. I came across this while investigating whether using del explicitly might be faster than relying on the garbage collector (not the same thing, but I am trying to save microseconds/milliseconds where I can).
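For what it's worth, the pattern I mean looks roughly like this. This is only a sketch with invented names and sizes, and a point worth noting: del only unbinds a name, and on MicroPython (which has no reference counting) the memory is not reclaimed until the next collection.

```python
import gc

gc.disable()  # suppress automatic collection during the time-critical phase
try:
    # ... time-critical state-machine work; allocations pile up here ...
    scratch = [bytes(256) for _ in range(100)]
    del scratch  # unbinds the name; on MicroPython the bytes stay on the heap
finally:
    gc.enable()

freed = gc.collect()  # run the collector at a moment the state machine chooses
```

On CPython the del frees the list immediately via reference counting, which is why timing comparisons made on a desktop don't carry over to the Pico.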
Re: Trying to understand how memory allocation is working here...
This one is even worse.
Reboot the board and then...
That's 6K added every time I run gc.mem_alloc()
BTW, gc.threshold() returns -1.
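For reference, gc.threshold(n) asks MicroPython to schedule a collection once n bytes have been allocated since the last one, and -1 means that trigger is disabled. A toy illustration of the rule (the numbers are invented and this is not a real allocator):

```python
class ToyHeap:
    """Illustration only: mimics MicroPython's gc.threshold() rule, where a
    collection is scheduled once `threshold` bytes have been allocated since
    the last collection; -1 disables the trigger."""

    def __init__(self, threshold=-1):
        self.threshold = threshold
        self.since_collect = 0
        self.collections = 0

    def alloc(self, nbytes):
        self.since_collect += nbytes
        if self.threshold != -1 and self.since_collect >= self.threshold:
            self.collect()

    def collect(self):
        self.collections += 1
        self.since_collect = 0

heap = ToyHeap(threshold=4096)
for _ in range(10):
    heap.alloc(1024)  # 10 KiB allocated -> the trigger fires twice
```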
- pythoncoder
Re: Trying to understand how memory allocation is working here...
Interesting. I tried this using the official mpremote, also with rshell. With these each call to mem_alloc() after the first adds 64 bytes. This seems consistent, even after allocating a large array.
The bulk of the leakage seems to be down to Thonny. In general MicroPython does not leak, and gc.collect() is effective.
Note that gc.mem_alloc() does not leak when called in code: it seems to be something to do with the REPL:
MicroPython v1.18-141-gff9c70850-dirty on 2022-02-20; Raspberry Pi Pico with RP2040
Type "help()" for more information.
>>>
paste mode; Ctrl-C to cancel, Ctrl-D to finish
=== import gc
=== while True:
=== print(gc.mem_alloc())
=== gc.collect()
===
5856
4384
4384
4384
4384 # Ad nauseam
Peter Hinch
Index to my micropython libraries.
Re: Trying to understand how memory allocation is working here...
Yes, that's it: displaying on the REPL allocates strings. Thonny may add some too. In any case, for measuring memory-related behavior in an application one cannot rely on what the REPL says.
Re: Trying to understand how memory allocation is working here...
pythoncoder wrote: ↑Wed Jun 08, 2022 8:41 am
Interesting. I tried this using the official mpremote, also with rshell. With these each call to mem_alloc() after the first adds 64 bytes.

Nice. Thanks. This was past 2:00 AM for me; after a 20-hour day my brain was mush. I was about to write exactly what you wrote above this morning when I saw your post.
I am only using Thonny for loading code and running it because the MicroPython plugin for PyCharm doesn't play nice. Every so often I might run a quick test on the Thonny REPL, but that's about it.
The issue with the PyCharm setup is that going between loading code and the REPL is painful. If you launch the REPL you have to kill it before you can load code again. And then I have to manually launch the REPL again. Do that a hundred times a day and you want to throw the computer through the window. I don't see a setting anywhere to alter how this works.
Re: Trying to understand how memory allocation is working here...
When you enter things in the REPL, they get added to a history buffer, so that you can use the up arrow to bring back the previous command. Each time you enter a command a new entry get allocated for the history buffer, this is what's consuming the memory. The history buffer has a fixed number of entries and old entries will just be dropped off the end, to be garbage collected later.
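The effect can be sketched with a bounded buffer. The depth here is invented (the real one is a build-time constant in MicroPython's line editor), but the mechanism is the same: each command allocates a new string, and old entries dropped off the end become garbage to be reclaimed by a later collect.

```python
from collections import deque

HISTORY_DEPTH = 8  # illustrative; the real depth is fixed at build time
history = deque(maxlen=HISTORY_DEPTH)

for i in range(20):
    # Each entered command allocates a fresh string for the history buffer.
    history.append(f"gc.mem_alloc()  # entry {i}")

# The buffer never grows past its fixed depth; evicted strings await collection.
```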
Re: Trying to understand how memory allocation is working here...
dhylands wrote: ↑Wed Jun 08, 2022 7:49 pm
When you enter things in the REPL, they get added to a history buffer, so that you can use the up arrow to bring back the previous command.

Although that might very well be part of it, I think there's more at play here. If you look at my screen grabs, in the first case memory allocation went up about a thousand bytes at a time, just for executing gc.mem_alloc(). In the second post it went up about 6,000 bytes at a time for the same command. If this were just command history, I would expect growth on the order of, say, 20 to 50 bytes per command, not much more than that.
I just tried it on PyCharm. Very different results:
Re: Trying to understand how memory allocation is working here...
I built the latest MicroPython for the RPi Pico, and I see these results when using a simple terminal (the REPL within rshell): only 144 bytes per iteration. Have you tried using just a simple terminal program?
I'm not sure how Thonny implements its REPL. I know the author is on the forum, so perhaps he can chime in.
If you want to get accurate results, you really need to run gc.mem_alloc() from within a function.
>>> gc.mem_alloc()
4496
>>> gc.mem_alloc()
4672
>>> gc.mem_alloc()
4816
>>> gc.mem_alloc()
4960
>>> gc.mem_alloc()
5104
>>>
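A sketch of that pattern: since gc.mem_alloc() only exists on MicroPython, this desktop version substitutes CPython's tracemalloc to make the same point, that measuring inside a function keeps REPL echo and history allocations out of the numbers. On the Pico you would read gc.mem_alloc() before and after instead.

```python
import gc
import tracemalloc

def measure(fn):
    # Collect first so the reading reflects only what fn itself allocates;
    # on MicroPython you would compare gc.mem_alloc() readings instead.
    gc.collect()
    tracemalloc.start()
    fn()
    gc.collect()
    current, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return current, peak

# Allocate ~8 KiB inside the measured function, then let it become garbage.
current, peak = measure(lambda: [bytearray(1024) for _ in range(8)])
```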
Re: Trying to understand how memory allocation is working here...
Yeah, this was just a curiosity I noted as I was running some tests on the REPL. No issues I can think of in an actual program.
Re: Trying to understand how memory allocation is working here...
Yes, this is entirely a Thonny feature. Thonny's shell has a number of extra features that enable neat things such as pop-up completion and time synchronisation. It's also line-buffered. When you execute something at the shell, it does a bunch of extra work to make this happen. Look at https://github.com/thonny/thonny/blob/m ... backend.py to see some of this, in particular the __thonny_helper code that gets loaded onto the target.
For example, running "gc.mem_alloc()" actually sends
__thonny_helper.print_repl_value(gc.mem_alloc())
__thonny_helper.builtins.print('__thonny_helper' in __thonny_helper.builtins.dir())
__thonny_helper.print_mgmt_value({name : (__thonny_helper.repr(value), __thonny_helper.builtins.id(value)) for (name, value) in __thonny_helper.builtins.globals().items() if not name.startswith('__')})