Getting the size of an object in MicroPython

General discussions and questions about development of code with MicroPython that is not hardware specific.
Target audience: MicroPython Users.
Post Reply
laukejas
Posts: 29
Joined: Thu May 02, 2019 5:17 pm

Getting the size of an object in MicroPython

Post by laukejas » Fri Apr 03, 2020 1:59 pm

Hi,

I was exploring various ways to store a large 2D list in Python when I discovered that there is no sys.getsizeof() call in MicroPython. Does anyone know what alternative I can use to determine the exact size of an object?

Right now, I am trying to estimate it by comparing what gc.mem_free() prints before and after creation of an object. And the results are quite puzzling. Consider a simple 2D list creation:

Code: Select all

list1 = [[0 for _ in range(10)] for _ in range(10)]
This would create a simple 10x10 integer array. If I do this in Python 3.7, and then call:

Code: Select all

>>>sys.getsizeof(list1)
100


As you see, this list takes up 100 bytes: 1 byte per member, which sounds logical. However, I don't know how to test this directly in MicroPython, so I am calling:

Code: Select all

>>>gc.mem_free()
31424
>>>list1 = [[0 for _ in range(10)] for _ in range(10)]
>>>gc.collect()
>>>gc.mem_free()
30528
As you can see, the difference in memory usage after creating this list is 31424 - 30528 = 896 bytes. Almost 9 times the size of the list! How can this be?

I tried creating the list in a more optimized way, converting its members to bytes:

Code: Select all

list2 = [[bytes([0]) for _ in range(10)] for _ in range(10)]
When I do it in Python 3.7, sys.getsizeof() reports that the size of this list is also 100 bytes. But when I create it in MicroPython...

Code: Select all

>>>gc.mem_free()
31408
>>>list2 = [[bytes([0]) for _ in range(10)] for _ in range(10)]
>>>gc.collect()
>>>gc.mem_free()
27344
31408 - 27344 = 4064 bytes!! What the hell?

Notice that if I delete the list using:

Code: Select all

>>>del list1  #or list2
>>>gc.collect()
>>>gc.mem_free()
31408
Free memory goes back right where it was, meaning that indeed it is the list that is taking so much space, not something else.

I also tried using the array module with some code that one of the members of this forum suggested:

Code: Select all

>>>import array
>>>def build_2D_array(N):
...    rng = range(N)
...    result = []
...    for i in rng:
...        arr = array.array('B')  # unsigned byte
...        for j in rng:
...            arr.append(0)
...        result.append(arr)
...    return result
Python 3.7 also says the size of this list/array is 100 bytes. But when I run it in MicroPython...

Code: Select all

>>>gc.mem_free()
30736
>>>list3 = build_2D_array(10)
>>>gc.collect()
>>>gc.mem_free()
30336
30736 - 30336 = 400 bytes... Not as bad as list2, but still bad, taking 4 times more memory than it should.

So, what the heck is going on? Does memory management work entirely differently in MicroPython than it does in Python 3.7? Why do such simple lists take up so much memory? And is there a better way to measure how much memory a specific object occupies, something similar to the sys.getsizeof() that Python has?

stijn
Posts: 735
Joined: Thu Apr 24, 2014 9:13 am

Re: Getting the size of an object in MicroPython

Post by stijn » Fri Apr 03, 2020 2:11 pm

I don't know a solution but have a question: what do you want to know *exactly*, what do you want to do? 'The size of an object' is open to interpretation.

Some comments:
As you see, this list takes up 100 bytes. 1 byte per member, which sounds logical.
Depends on how you look at it: I find it hard to believe that's exactly 100 bytes. Python isn't plain C. CPython's PyObject struct alone, which AFAIK it normally has to allocate for each object, could well be more than 100 bytes. As such, my CPython 3.7.1 does:

Code: Select all

>>> sys.getsizeof([[0 for _ in range(10)] for _ in range(10)])
192
which I personally find more logical already (just because it matches my expectation from having spent some time with the CPython source code a couple of years ago; I don't know how you get 100).

Wrt your gc tests: gc is not deterministic. The difference between two calls to mem_free() does not equal the size of the objects you created explicitly in between the two calls, because during those calls you have already allocated extra memory just by using the REPL, and it's possible memory was freed because there was a gc sweep.
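For comparison, on desktop CPython you can sidestep those problems with the tracemalloc module, which tracks allocations directly instead of diffing the free heap. MicroPython has no tracemalloc, so this is only useful as a cross-check on a PC, but it shows what a less noisy measurement of the same statement looks like:

```python
import tracemalloc

# CPython-only sketch: tracemalloc records every allocation made while
# tracing, so the result is not skewed by a gc sweep the way a
# mem_free() diff can be. (REPL compilation overhead still applies if
# you type this interactively, so run it as a script.)
tracemalloc.start()
before, _ = tracemalloc.get_traced_memory()
list1 = [[0 for _ in range(10)] for _ in range(10)]
after, _ = tracemalloc.get_traced_memory()
tracemalloc.stop()
print(after - before)  # bytes allocated for the outer list and the 10 inner lists
```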

Christian Walther
Posts: 169
Joined: Fri Aug 19, 2016 11:55 am

Re: Getting the size of an object in MicroPython

Post by Christian Walther » Fri Apr 03, 2020 2:49 pm

laukejas wrote:
Fri Apr 03, 2020 1:59 pm

Code: Select all

>>>sys.getsizeof(list1)
100
As you see, this list takes up 100 bytes. 1 byte per member, which sounds logical.
This is not 1 byte per member, it’s 10 bytes per member (although computing “size per member” probably makes little sense anyway). You’re only counting the memory used by the 10-element outer list here, not the 10 inner lists. At least that’s how I interpret
https://docs.python.org/3/library/sys.html#sys.getsizeof wrote:Only the memory consumption directly attributed to the object is accounted for, not the memory consumption of objects it refers to.
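On CPython you can make the difference visible with a small recursive helper. This is a rough sketch that only walks the common container types, not a general-purpose implementation:

```python
import sys

def total_size(obj, seen=None):
    # Recursively sum sys.getsizeof() over an object and everything it
    # references, counting each object only once (shared objects like the
    # small int 0 appear many times but occupy memory once).
    if seen is None:
        seen = set()
    if id(obj) in seen:
        return 0
    seen.add(id(obj))
    size = sys.getsizeof(obj)
    if isinstance(obj, dict):
        for k, v in obj.items():
            size += total_size(k, seen) + total_size(v, seen)
    elif isinstance(obj, (list, tuple, set, frozenset)):
        for item in obj:
            size += total_size(item, seen)
    return size

list1 = [[0 for _ in range(10)] for _ in range(10)]
print(sys.getsizeof(list1))  # shallow: the outer list only
print(total_size(list1))     # outer list + 10 inner lists + the shared int
```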

laukejas
Posts: 29
Joined: Thu May 02, 2019 5:17 pm

Re: Getting the size of an object in MicroPython

Post by laukejas » Fri Apr 03, 2020 3:01 pm

stijn wrote:
Fri Apr 03, 2020 2:11 pm
I don't know a solution but have a question: what do you want to know *exactly*, what do you want to do? 'The size of an object' is open to interpretation.

Some comments:
As you see, this list takes up 100 bytes. 1 byte per member, which sounds logical.
Depends on how you look at it: I find it hard to believe that's exactly 100 bytes. Python isn't plain C. CPython's PyObject struct alone, which AFAIK it normally has to allocate for each object, could well be more than 100 bytes. As such, my CPython 3.7.1 does:

Code: Select all

>>> sys.getsizeof([[0 for _ in range(10)] for _ in range(10)])
192
which I personally find more logical already (just because it matches my expectation from having spent some time with the CPython source code a couple of years ago; don't know how you get 100).

Wrt your gc tests: gc is not deterministic. The difference between two calls to mem_free() does not equal the size of the objects you created explicitly in between the two calls, because during those calls you have already allocated extra memory just by using the REPL, and it's possible memory was freed because there was a gc sweep.
Okay, I see. Well, my questions are these:
1) In MicroPython, how do I determine how much RAM, in bytes, is reserved for a list I created? This information is needed so I can plan memory management for big projects.
2) Why do the different methods of creating lists that I quoted seem to leave the microcontroller with so little memory? Why do list2 and list3, which are created with bytes as members (rather than integers), seem to take even more memory than a plain integer list?
Christian Walther wrote:
Fri Apr 03, 2020 2:49 pm
laukejas wrote:
Fri Apr 03, 2020 1:59 pm

Code: Select all

>>>sys.getsizeof(list1)
100
As you see, this list takes up 100 bytes. 1 byte per member, which sounds logical.
This is not 1 byte per member, it’s 10 bytes per member (although computing “size per member” probably makes little sense anyway). You’re only counting the memory used by the 10-element outer list here, not the 10 inner lists. At least that’s how I interpret
https://docs.python.org/3/library/sys.html#sys.getsizeof wrote:Only the memory consumption directly attributed to the object is accounted for, not the memory consumption of objects it refers to.
Thanks for the clarification, I didn't know this. So what is a better way to determine the exact amount of memory that an object takes up in Python/MicroPython?

pythoncoder
Posts: 5956
Joined: Fri Jul 18, 2014 8:01 am
Location: UK
Contact:

Re: Getting the size of an object in MicroPython

Post by pythoncoder » Fri Apr 03, 2020 5:24 pm

Arrays and bytearray objects store their data in contiguous locations. In the cases where it matters (large numbers of elements) you can calculate it. There is doubtless a small amount of overhead, but if you issue

Code: Select all

a = array('H', (0 for _ in range(1000)))
the RAM usage will be 2000 bytes plus a few bytes of overhead. Integers and floats occupy 4 bytes each.
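The per-element figure is easy to confirm from the array module itself. CPython's array exposes itemsize; I'm not sure every MicroPython build does, so treat this as a desktop cross-check:

```python
import array

# 'H' is unsigned short: 2 bytes per element, stored contiguously.
a = array.array('H', (0 for _ in range(1000)))
print(a.itemsize)           # bytes per element for typecode 'H'
print(a.itemsize * len(a))  # payload size, matching the 2000-byte estimate
```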

Unless you have very dynamic sizes it's usually best to allocate large arrays early, and use memoryview objects for zero RAM slicing. Allocating early avoids the situation where a large contiguous block can't be allocated owing to RAM fragmentation.
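A quick illustration of the zero-copy slicing, which works the same on CPython and MicroPython:

```python
buf = bytearray(1000)  # one contiguous allocation, done early
mv = memoryview(buf)

# Slicing a memoryview creates a small view object but never copies the
# underlying bytes - that is what makes it "zero RAM" slicing.
row = mv[100:110]
row[0] = 42
print(buf[100])  # writes go through the view into the original buffer
```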

If you still want to measure sizes the way I do it is:

Code: Select all

import gc
gc.collect()
start = gc.mem_free()
a = [0]*100  # This is what we want to measure
print(start - gc.mem_free())
On a Pyboard 1.1 this gives 464 bytes, implying an overhead of 64 bytes.
Peter Hinch
Index to my micropython libraries.

laukejas
Posts: 29
Joined: Thu May 02, 2019 5:17 pm

Re: Getting the size of an object in MicroPython

Post by laukejas » Fri Apr 03, 2020 6:42 pm

pythoncoder wrote:
Fri Apr 03, 2020 5:24 pm
Arrays and bytearray objects store their data in contiguous locations. In the cases where it matters (large numbers of elements) you can calculate it. There is doubtless a small amount of overhead, but if you issue

Code: Select all

a = array('H', (0 for _ in range(1000)))
the RAM usage will be 2000 bytes plus a few bytes. Integers and floats occupy 4 bytes.

Unless you have very dynamic sizes it's usually best to allocate large arrays early, and use memoryview objects for zero RAM slicing. Allocating early avoids the situation where a large contiguous block can't be allocated owing to RAM fragmentation.

If you still want to measure sizes the way I do it is:

Code: Select all

import gc
gc.collect()
start = gc.mem_free()
a = [0]*100  # This is what we want to measure
print(start - gc.mem_free())
On a Pyboard 1.1 this gives 464 bytes implying an overhead of 64 bytes.
Thank you for this explanation. So what you're saying is that this method to estimate memory usage via gc.mem_free() before and after object creation is reliable after all? Because stijn and Christian Walther said that it isn't.

Anyway, I re-ran the list/array creation in the way you wrote it, single dimension, 1000 members, and then compared gc.mem_free() before and after.

list4 = array.array('H', (0 for _ in range(1000))) - takes 2092 bytes according to Python 3.7, 2336 bytes according to MicroPython. This seems consistent with the fact that the short datatype takes 2 bytes per member. The overhead is quite large, though, especially in MicroPython.

Same with Bytes instead:

list5 = array.array('B', (0 for _ in range(1000))) - 1062 bytes in Python, 1440 bytes in MicroPython.

It seems that overhead is still larger in MicroPython than it is in Python. And when creating 2D lists/arrays in MicroPython, memory usage differences are much more extreme. Why is that?

As for the memory fragmentation - I do try to allocate large chunks of memory early on for these big variables. But is there a way to defragment the RAM in MicroPython? In my experience, gc.collect() doesn't do this, as I'm often unable to allocate large blocks of memory even when there's plenty of memory available after calling this function, implying that it is still fragmented.

stijn
Posts: 735
Joined: Thu Apr 24, 2014 9:13 am

Re: Getting the size of an object in MicroPython

Post by stijn » Fri Apr 03, 2020 7:05 pm

Don't have time to write a complete reply now to explain where those numbers come from, but for starters: whatever you test should not be on the REPL, because interpreting and compiling the code itself also requires RAM. And you probably need collect() calls before the allocation you want to measure. And yes, a list will cost more because it's not contiguous, so normally each object separately also has overhead.

Even then, using mem_free etc. might be skewed due to fragmentation of the heap (not exactly sure; I don't know by heart how mem_free is implemented). You'd also need to take that into account when considering a complete program's usage: it will be more than just the sum of the parts. I.e. it's not because in total there are x free bytes that you'll be able to allocate an object of size x. So even if you can get a proper estimate using mem_free in simple test code like used here, that only gives an indication of what is going to happen in an actual program, especially when it is long-running.

stijn
Posts: 735
Joined: Thu Apr 24, 2014 9:13 am

Re: Getting the size of an object in MicroPython

Post by stijn » Sat Apr 04, 2020 8:42 am

laukejas wrote:
Fri Apr 03, 2020 6:42 pm
list4 = array.array('H', (0 for _ in range(1000))) - takes 2092 bytes according to Python 3.7, 2336 bytes according to MicroPython. This seems consistent with the fact that Short datatype takes 2 bytes per member. Overhead is quite large, though, especially in MicroPython.
Let's take this particular example and find out why your conclusion is wrong. For this I built MicroPython with verbose debug printing in gc.c so we can see what gets allocated, and stepped through the code with a debugger. So this is on the Linux or Windows port.

To make things a bit less complicated I'm declaring the initial value first as a list, for 2 reasons: the statement you show also has to allocate the generator, and the length of a generator can only be known by iterating it, meaning the array cannot be allocated in one go simply because the constructor doesn't know yet how big it is. But we want to know exactly what memory the array takes up, so passing it a list with a known length is going to make reading that easier, vs having to look at a bunch of reallocations.

Code: Select all

>>> initializer = [0 for _ in range(1000)]
>>> gc.collect()  # To make sure no dangling things left
Note I'm still doing this on the REPL to show why that has problems; now let's start with the actual business, including the debug output from gc.c:

Code: Select all

>>> start = gc.mem_free()
gc_alloc(22 bytes -> 1 blocks)
gc_alloc(0000025EA457D260)
gc_alloc(32 bytes -> 1 blocks)
gc_alloc(0000025EA457D2C0)
gc_alloc(160 bytes -> 5 blocks)
gc_alloc(0000025EA457E460)
gc_alloc(20 bytes -> 1 blocks)
gc_alloc(0000025EA457D300)
gc_alloc(32 bytes -> 1 blocks)
gc_alloc(0000025EA457D340)
gc_alloc(1024 bytes -> 32 blocks)
gc_alloc(0000025EA4580720)
gc_alloc(256 bytes -> 8 blocks)
gc_alloc(0000025EA457E540)
gc_alloc(144 bytes -> 5 blocks)
gc_alloc(0000025EA457E640)
gc_free(0000000000000000)
gc_free(0000025EA4580720)
gc_free(0000025EA457E540)
gc_free(0000025EA457D2C0)
gc_free(0000025EA457D340)
gc_free(0000025EA457D300)
gc_free(0000025EA457E460)
gc_alloc(72 bytes -> 3 blocks)
gc_alloc(0000025EA457D460)
gc_alloc(24 bytes -> 1 blocks)
gc_alloc(0000025EA457D2C0)
gc_alloc(64 bytes -> 2 blocks)
gc_alloc(0000025EA457D340)
gc_alloc(144 bytes -> 5 blocks)
gc_alloc(0000025EA457E460)
gc_alloc(0 bytes -> 0 blocks)
gc_alloc(21 bytes -> 1 blocks)
gc_alloc(0000025EA457D300)
gc_alloc(0 bytes -> 0 blocks)
gc_free(0000000000000000)
gc_free(0000025EA457E460)
gc_free(0000025EA457E640)
gc_free(0000025EA457D340)
gc_free(0000025EA457D460)
gc_alloc(32 bytes -> 1 blocks)
gc_alloc(0000025EA457D340)
>>>
Wait, what?? Yes, that's about 2000 bytes which have been allocated under the hood, just to run that statement. Now most of that has been deallocated, but not all of it: storing the variable 'start' also requires memory. Since I'm running this on a 64-bit build that should be 2 times 8 bytes (one 'pointer' for the name, one for the integer value), but as you can see there's no allocation of just 16 bytes. The reason is that when MicroPython stores a new name in the globals() dictionary and needs to allocate more space, it allocates some extra so it doesn't constantly have to reallocate.
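The same grow-in-chunks behaviour is easy to observe in CPython's dict, where getsizeof() jumps in steps rather than once per entry (the exact resize points are an implementation detail, but the pattern is the same idea):

```python
import sys

# Insert 20 names and record every point where the dict's own storage grows.
d = {}
resizes = []
prev = sys.getsizeof(d)
for i in range(20):
    d["name%d" % i] = i
    size = sys.getsizeof(d)
    if size != prev:
        resizes.append((i, prev, size))
        prev = size
print(resizes)  # a handful of jumps, far fewer than 20
```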

This shows 2 possible problems with using mem_free() as a substitute for getsizeof():
- it takes the overhead of storing the name into account, which might or might not matter, but shows why your question 'what is the size of the object' is by itself flawed with respect to what you're trying to do. Sidenote: multiple names can point to the same object. I.e. if your array object is 64 bytes, but you then do x = array, y = array, z = array you are effectively using 48 bytes extra. That's almost as much as the size of the object itself. Not gonna matter for large arrays, but still, worth consideration.
- if you see that list of allocations, and know there is fragmentation, and know that garbage collection is not deterministic, meaning gc.collect() does not always succeed in collecting everything right away, it's extra clear that what mem_free() returns next time might not always reflect the exact size of the object
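To illustrate the sidenote about multiple names: binding extra names never copies the object, it only adds dictionary entries, so the per-name cost is fixed regardless of how big the array is. This runs on CPython and MicroPython alike:

```python
import array

arr = array.array('H', [0] * 1000)
x = arr
y = arr
# Three names, one object: the 2000-byte payload exists exactly once.
# Mutating through one name is visible through all of them.
x[0] = 7
print(x is y, y is arr)  # True True
print(arr[0])            # 7
```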

Next (now not showing the allocations for compilation etc, explanation shown inline):

Code: Select all

>>> arr = array.array('H',initializer)
# This is the allocation for the internal MicroPython array object, i.e. a struct
# with things like typecode, length, pointer to data
gc_alloc(32 bytes -> 1 blocks)
gc_alloc(0000025EA457D4C0)
# Now allocating the actual data
gc_alloc(2000 bytes -> 63 blocks)
gc_alloc(0000025EA4580720)
# The data needs to be initialized, for which an iterator needs to be allocated;
# on the next call to gc.collect() this should be gone though.
gc_alloc(32 bytes -> 1 blocks)
gc_alloc(0000025EA457D500)
# Storing the name 'arr' in globals() extra preallocation
gc_alloc(128 bytes -> 4 blocks)
gc_alloc(0000025EA457DB20) 
So in the strict sense, the actual number of bytes taken by just the array object here is 2032, or 2048 if you count the storage for the name. Better than CPython, and much better than what you thought it was.
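Based on those numbers you could write a tiny sizeof estimate for this one type. Note the 32-byte header comes from this particular 64-bit trace and will differ on other builds, so treat both defaults as assumptions rather than constants:

```python
def approx_array_size(n, itemsize=2, header=32):
    # header: bytes for the array object struct, as seen in the 64-bit gc
    # trace above; itemsize: 2 for typecode 'H'. Both are build-dependent.
    return header + itemsize * n

print(approx_array_size(1000))  # 2032, matching the trace above
```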

So what does mem_free() say?

Code: Select all

>>> gc.mem_free() - start
-2368
# Without collect() that's rather bogus though, try again.
>>> gc.collect()
>>> gc.mem_free() - start
-2080
It's getting close but not there yet. Not running on the REPL might shave off some more. I hope this gives some insight as to why mem_free(), used the way you're using it, has problems.

As far as solutions go: if you go over the code you could cook your own sys.getsizeof() implementation for types of which you have looked up what they actually need. But as said, that only answers the question 'how do I get the size of an object', which is not too interesting as it is not going to tell you how much RAM will get used by an application or how much you have left to allocate in an actual program of more than a couple of lines.

If I had to figure that out I'd probably make a prototype application which does roughly what the specs say it has to do, then run it while repeatedly calling collect()/mem_free() at points of interest. Then crank up the allocation size and try again. Then change the number of allocated objects and repeat. Get all the data, use interpolation to get to the amounts of memory and allocations the actual application would have, add some error margin, and that should give a rough estimate. And unless I'm mistaken, it's more accurate than just adding the results of a bunch of getsizeof() calls together, since it takes other allocations into account as well. It would however be pretty interesting to compare it with the sum of getsizeof() calls over all objects, to see if the relationship is linear, for instance.
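The interpolation step could be as simple as a least-squares line through a few measurements. The numbers below are invented for illustration; in a real run they would come from collect()/mem_free() pairs at each size:

```python
# Hypothetical (element count, measured bytes) pairs from prototype runs.
sizes = [100, 200, 400]
usage = [432, 632, 1032]

# Ordinary least-squares fit of usage = a*size + b.
n = len(sizes)
sx, sy = sum(sizes), sum(usage)
sxx = sum(s * s for s in sizes)
sxy = sum(s * u for s, u in zip(sizes, usage))
a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
b = (sy - a * sx) / n

print(a, b)          # bytes per element and fixed overhead: 2.0 232.0
print(a * 1000 + b)  # extrapolated usage for 1000 elements: 2232.0
```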

Post Reply