OSError: [Errno 5]
Re: OSError: [Errno 5]
So it must be memory related.
I have modules that are loaded in main, and after all the modules are imported I run the SDIO test and the test I pasted earlier.
It fails with a 2048-byte block size. If I disable the imports, then I start seeing what you are seeing.
Here is the output of pyb.info():
qstr:
n_pool=6
n_qstr=727
n_str_data_bytes=12000
n_total_bytes=17136
GC:
101760 total
73792 : 27968
1=887 2=597 m=161
LFS free: 92160 bytes
There are 27968 bytes of free memory, which should be sufficient for reading a file with a 2048-byte block size.
Re: OSError: [Errno 5]
Right - I've been nailed by this as well (switching from Python 2 to Python 3).
When I changed if a == '' to if a == b'' (or to if len(a) == 0), the loop exits as expected.
Files open in binary mode return binary strings (hence the b prefix) whereas files in text mode return unicode strings.
Code: Select all
>>> b'' == ''
False
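A minimal sketch of the difference (the file name here is illustrative): in binary mode, read() returns bytes, so the end-of-file sentinel is b'', never ''.

Code: Select all

```python
# Write a small binary file, then read it back in 2048-byte chunks.
with open('data.bin', 'wb') as f:
    f.write(b'\x00' * 5000)

total = 0
with open('data.bin', 'rb') as f:
    while True:
        chunk = f.read(2048)
        if chunk == b'':      # EOF sentinel for binary mode is b'', not ''
            break
        total += len(chunk)

print(total)                  # 5000
print(b'' == '')              # False: bytes and str never compare equal in Python 3
```

Comparing against '' in a binary-mode loop is never true in Python 3, so the loop keeps reading past EOF.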
Re: OSError: [Errno 5]
At least we are both seeing the same results.
Can you please tell me why, with 27K of memory left, reading with a 2048-byte block size fails?
Does the underlying read I/O use that much RAM?
Re: OSError: [Errno 5]
You could be running into heap fragmentation.
Here's a contrived case that demonstrates that:
Code: Select all
import gc

def read_file():
    l = 0
    print("About to open file")
    aFile = open('/sd/data.bin', 'rb')
    print("After opening file")
    while True:
        try:
            a = aFile.read(2048)
            if a == b'':
                print('done')
                break
        except OSError:
            print('OSError')
            break
        except:
            print('Some error')
            break
        l = l + len(a)
        print(l)
    aFile.close()

data = []

def consume_memory():
    global data
    read_file()
    try:
        for i in range(128):
            data.append(bytearray(1024))
    except MemoryError:
        for i in range(1, len(data), 2):
            data[i] = None
        print("mem_free = {}".format(gc.mem_free()))
        print("mem_alloc = {}".format(gc.mem_alloc()))
        gc.collect()
        print("mem_free = {}".format(gc.mem_free()))
        print("mem_alloc = {}".format(gc.mem_alloc()))
        read_file()

consume_memory()
which for me, produces:
Code: Select all
>>> import cup
About to open file
After opening file
2048
4096
6144
8192
10240
12288
14336
16384
18432
20480
22528
24576
25517
done
mem_free = 2048
mem_alloc = 100240
mem_free = 49904
mem_alloc = 52384
About to open file
After opening file
Some error
So even though there is 49K free, there isn't any chunk big enough to hold 2048 bytes.
Re: OSError: [Errno 5]
Oh, interesting. I thought gc.collect() performs some defragmentation.
If not, what would you recommend in this case?
Re: OSError: [Errno 5]
I tried to preallocate a buffer and use readinto, but readinto doesn't seem to be supported, so I'm going to open an issue on GitHub.
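For builds where stream readinto is available, a sketch of the pre-allocation approach (file name illustrative): the buffer is allocated once up front and reused for every read, so the loop does no per-chunk heap allocation and cannot be stopped by fragmentation.

Code: Select all

```python
# Write a small test file to read back.
with open('data.bin', 'wb') as f:
    f.write(b'\xab' * 5000)

buf = bytearray(2048)         # allocated once, before the heap fragments
total = 0
with open('data.bin', 'rb') as f:
    while True:
        n = f.readinto(buf)   # fills buf in place; returns bytes read
        if n == 0:            # 0 signals EOF
            break
        total += n

print(total)                  # 5000
```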
Re: OSError: [Errno 5]
Well, thank you for the useful feedback.
I think most users will face this problem. There are ways to reduce fragmentation by pre-allocating buffers, but eventually we will need this flexibility.
It would be great if there were a way to run a process periodically to compact the memory chunks, or if the memory allocator offered control over allocation sizes: for example, rounding small buffers up to a multiple of 64 bytes and larger buffers up to a multiple of 1024.
This would reduce fragmentation. It is just an idea, but some RTOSes do similar things.
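An illustrative sketch of the size-class rounding described above (the 1024-byte threshold and the 64/1024 granularities are just the numbers from the post, not any real allocator's policy):

Code: Select all

```python
def rounded_size(n):
    """Round a request up to an allocator size class: multiples of 64
    bytes for small buffers, multiples of 1024 for large ones."""
    if n <= 1024:
        return ((n + 63) // 64) * 64
    return ((n + 1023) // 1024) * 1024

print(rounded_size(10))    # 64
print(rounded_size(100))   # 128
print(rounded_size(2048))  # 2048
print(rounded_size(2500))  # 3072
```

Fewer distinct block sizes means freed blocks are more likely to be reusable by later requests, at the cost of some internal waste per allocation.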
Re: OSError: [Errno 5]
nelfata wrote: Oh, interesting. I thought gc.collect() performs some defragmentation.
If not, what would you recommend in this case?
The gc can't move objects, since it would need to know where all of the references to an object are and update them. All it can do is free objects which have no references.
The problem is that other in-memory objects, like an int, could contain a value that just looks like a pointer, and the gc doesn't know this. So it errs on the side of caution: it won't free a block when an int holds a value that looks like a pointer to it, but it can't go updating those values either, in case they aren't really pointers.
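A toy model of that conservative behaviour (the addresses and block names are made up): any root word whose value happens to equal a block's address pins that block, whether it is really a pointer or just an int, and the sweep never rewrites the word.

Code: Select all

```python
# Two heap blocks, keyed by their (fake) start addresses.
blocks = {0x20000100: 'buffer A', 0x20000200: 'buffer B'}

# Root words scanned conservatively: 0x20000100 might be a pointer
# to buffer A, or just an int that happens to hold that value.
roots = [42, 0x20000100, 7]

live = {addr for addr in blocks if addr in roots}   # pinned blocks
freed = set(blocks) - live                          # unreferenced blocks

print(sorted(hex(a) for a in live))    # buffer A is kept conservatively
print(sorted(hex(a) for a in freed))   # buffer B is freed: no word matches it
```

This is why the collector can free unreferenced blocks but can never relocate live ones to close the gaps between them.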
Re: OSError: [Errno 5]
So for the moment it is not possible to defragment memory.
Re: OSError: [Errno 5]
No, unfortunately there is no way to defragment memory. The heap is designed to be OK (i.e. not need defragmentation) for small objects like floats, short lists and tuples, etc. But for large (kB) buffers, you are best off pre-allocating them.
You can see the current state of the heap by running pyb.info(1). This will print out a representation of the heap and its used blocks.