Thanks everyone - I really appreciate it
I have already studied byte arrays and they work great, but I do not understand how there can be so much overhead.
The following code reports 56 and 54 bytes respectively, and that is for storing just 5 bytes:
import sys
ar = bytearray([1,2,3,4,5])
ar2 = bytearray(b'\x01\x02\x03\x04\x05')
print(sys.getsizeof(ar))
print(sys.getsizeof(ar2))
Am I doing something wrong here?
Isn't there any way of storing data without that much overhead? And does sys.getsizeof() return the memory actually allocated?
Please note that this is in Python and not MicroPython (forgot my board at the university)!!!
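A quick way to see that the ~50 bytes is a fixed per-object header rather than a per-byte cost is to compare sizes for a small and a large payload. This is a CPython sketch; the exact numbers vary by Python version and platform:

```python
import sys

small = bytearray(b'\x01\x02\x03\x04\x05')   # 5 bytes of payload
large = bytearray(bytes(5000))               # 5000 bytes of payload

# Both carry the same fixed object header; only the internal buffer grows.
print(sys.getsizeof(small))
print(sys.getsizeof(large))

# The size difference is roughly the payload difference (~4995 bytes),
# so the overhead does not scale with the amount of data stored.
print(sys.getsizeof(large) - sys.getsizeof(small))
```

sys.getsizeof() reports the allocation for the object itself, including its header and any over-allocation in the internal buffer, but not memory referenced indirectly.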
C-like data types in µPy
Re: C-like data types in µPy
Here's my experimentation in MicroPython
MicroPython ESP32_LoBo_v3.2.4 - 2018-04-06 on Out of the BOTS with ESP32
Type "help()" for more information.
>>> import uctypes
>>> ar = bytearray([1,2,3,4,5])
>>> ar2 = bytearray(b'\x01\x02\x03\x04\x05')
>>> print(uctypes.sizeof(ar))
5
>>> print(uctypes.sizeof(ar2))
5
>>>
- pythoncoder
Re: C-like data types in µPy
The following is also worth noting:
Given that a small integer is a machine word of 4 bytes, the overhead on a list of 5000 integers is a mere 96 bytes. The reason for using array.array is not to save RAM. It is to produce an object having the buffer protocol (data elements in contiguous memory locations).
>>> import gc, micropython
>>> gc.collect()
>>> micropython.mem_info()
stack: 468 out of 15360
GC: total: 102400, used: 8336, free: 94064
No. of 1-blocks: 12, 2-blocks: 7, max blk sz: 440, max free sz: 5825
>>> a =[0]*5000
>>> gc.collect()
>>> micropython.mem_info()
stack: 468 out of 15360
GC: total: 102400, used: 28432, free: 73968
No. of 1-blocks: 15, 2-blocks: 8, max blk sz: 1250, max free sz: 4575
>>>
>>> 94064-73968
20096
>>>
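To illustrate the buffer-protocol point above: an array.array packs its elements back to back in one contiguous buffer, so other buffer consumers can view the same memory without copying. A small CPython sketch (the same idea applies to MicroPython's array module):

```python
from array import array

# Fixed-width signed integers stored contiguously in a single buffer
a = array('i', [1, 2, 3, 4, 5])
print(a.itemsize)        # bytes per element (4 on most platforms)

# Any buffer consumer can wrap the same memory without copying it
mv = memoryview(a)
print(mv.nbytes)         # 5 elements * itemsize

# Writing through the view mutates the underlying array in place
mv[0] = 99
print(a[0])
```

This zero-copy sharing is what a plain list cannot offer: its elements are separate objects scattered through memory, not a contiguous block.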
Peter Hinch
Index to my micropython libraries.