how to simulate maximum garbage collection time

jickster
Posts: 629
Joined: Thu Sep 07, 2017 8:57 pm

how to simulate maximum garbage collection time

Post by jickster » Wed May 16, 2018 8:21 pm

I've timed my garbage collection in the case where there's nothing to collect, and now I want to time it for the worst case.

How would I do that?

Is it enough to create objects in a while True loop until I get an "out of memory" heap error and then call gc.collect()?
I know that will technically fill the heap, but will that give me the MAXIMUM time that gc.collect() can possibly take for a given heap size?
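Something like this is what I have in mind (a rough sketch; time.ticks_us()/ticks_diff() are MicroPython-specific, and the block size is arbitrary):

import gc
import time

def fill_then_collect():
    gc.collect()                   # start from a clean heap
    objs = []
    try:
        while True:
            objs.append([0] * 64)  # arbitrary block size
    except MemoryError:
        pass                       # heap is now about as full as it gets
    t0 = time.ticks_us()
    gc.collect()                   # everything in objs is still reachable,
                                   # so this times a full mark/sweep pass
    dt = time.ticks_diff(time.ticks_us(), t0)
    objs = None
    gc.collect()                   # release the filler so printing can allocate
    return dt

print('gc.collect() took', fill_then_collect(), 'us')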

pythoncoder
Posts: 5956
Joined: Fri Jul 18, 2014 8:01 am
Location: UK

Re: how to simulate maximum garbage collection time

Post by pythoncoder » Thu May 17, 2018 6:26 am

I don't know the answer to your question, but I design applications to minimise GC times. I periodically perform a GC when the application is otherwise idle, which ensures that GC times are short. Regular GC also helps reduce heap fragmentation, which can be a cause of application failure.
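In outline it looks something like this (a sketch; do_work() and idle() are placeholders for whatever the application does, and the gc.threshold() line is an optional safety net):

import gc

def main_loop():
    while True:
        do_work()        # application tasks (placeholder)
        if idle():       # no pending work right now (placeholder)
            gc.collect() # collect while a short pause is harmless
            # optional: trigger an automatic collect well before the
            # heap is exhausted, in case the idle collect isn't reached
            gc.threshold(gc.mem_free() // 4 + gc.mem_alloc())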
Peter Hinch
Index to my micropython libraries.

stijn
Posts: 735
Joined: Thu Apr 24, 2014 9:13 am

Re: how to simulate maximum garbage collection time

Post by stijn » Thu May 17, 2018 8:17 am

jickster wrote: "will that give me the MAXIMUM time that gc.collect() can possibly take for a given heap size"
Not likely; it still depends on the size of the objects (many small objects vs a couple of larger ones won't take the same time to collect) and their layout (the number of child blocks to mark). Even then, when finaliser support is enabled, __del__ might get invoked, which can in theory take any amount of time.
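You can see the layout effect for yourself with something like this (a sketch; the sizes would need tuning to your heap):

import gc
import time

def timed_collect():
    t0 = time.ticks_us()
    gc.collect()
    return time.ticks_diff(time.ticks_us(), t0)

objs = [[0] * 4 for _ in range(2000)]   # many small blocks
print('many small:', timed_collect(), 'us')
objs = [[0] * 4000 for _ in range(2)]   # similar total size, few blocks
print('few large:', timed_collect(), 'us')
objs = None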

Now even if you ignore all of the above and somehow arrive at what you consider a maximum, it is still completely unrealistic. If anything, you'd have to measure while running an actual real-life application. You can still tweak it to allocate more than usual, but without measuring in an actual application you simply cannot be sure. That's not just true for gc, BTW, but for anything, and the effect only gets larger for bigger applications. I can't even estimate the number of hours I've lost profiling pieces of code in isolation, only to find that either the effect is negligible in the actual application or the numbers there are completely different.

Anyway, for MicroPython I use pythoncoder's approach as well: deal with gc in idle time. For cases where there is not enough idle time, resort to preallocation and/or custom code.
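Preallocation here just means doing all allocation up front, so the steady-state loop never touches the heap, e.g. (dev is a placeholder for any stream object with readinto()):

buf = bytearray(256)           # allocated once, at startup

def poll(dev):
    n = dev.readinto(buf)      # fills the existing buffer, no new allocation
    if n:
        handle(buf, n)         # placeholder for application code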
