The advice was that "there is a good reason why methods should be used in device drivers rather than properties", but the reason wasn't mentioned. Perhaps it's obvious to everyone, but I found myself wondering: what is the reasoning here?
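For concreteness, here is a minimal sketch of the two styles being contrasted. The driver and bus names are hypothetical, not a real MicroPython API; the point is only that a property makes a potentially slow bus transaction look like a free attribute access:

```python
# Hypothetical sensor driver sketch (illustrative names, not a real API).
# Both classes return the same value, but the property hides the fact
# that each access may perform a slow, possibly failing bus transaction.

class FakeBus:
    def read_reg(self, reg):
        return 21.5            # stand-in for an I2C/SPI register read

class SensorMethod:
    def __init__(self, bus):
        self._bus = bus

    def temperature(self):     # explicit call: the reader sees "this does work"
        return self._bus.read_reg(0x05)

class SensorProperty:
    def __init__(self, bus):
        self._bus = bus

    @property
    def temperature(self):     # looks like plain attribute access, but isn't free
        return self._bus.read_reg(0x05)

print(SensorMethod(FakeBus()).temperature())   # prints 21.5
print(SensorProperty(FakeBus()).temperature)   # prints 21.5
```

One commonly cited reason for preferring the method form is exactly this visibility of cost and side effects, though whether that is the reason the original advice had in mind is the open question here.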
If it's an efficiency question, is there any reference material, or an approach I can take, to get a feel for which operations are more or less expensive given the constrained CPU, flash and RAM resources, and for the underlying strategy MicroPython uses for each Python operation?
I found http://docs.micropython.org/en/v1.9.3/e ... ained.html really informative, but it mostly covers the choice of ways to store data, strategies for putting code on the board, and ways to avoid fragmentation. It doesn't say what resources are consumed by any particular choice a developer might make when implementing a given behaviour in Python.
I wonder what the cost is of a new module, a new class, a new function, a module-level list, or a dict. Are there any language constructs which interfere with other optimisation strategies, slowing the whole thing down? Ideally, as I code, I would see hotspots only around operations I use when forced to, and have confidence that everything else is translated into really efficient, minimal bytecode.
Although some of these questions are fundamental computer-science ones, and hence answer themselves (e.g. searching through a list for equality vs using a dict lookup), there are a bunch which relate to strategies MicroPython has adopted that differ from CPython's.
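The "self-explanatory" case mentioned above can be sketched in a couple of lines; both expressions give the same answer, but with very different cost profiles:

```python
# Linear scan vs hashed lookup: same result, different asymptotic cost.
names = ["red", "green", "blue"]
codes = {"red": 0xFF0000, "green": 0x00FF00, "blue": 0x0000FF}

print("blue" in names)   # True -- O(n): compares each element until a match
print("blue" in codes)   # True -- O(1) on average: one hash plus a table probe
```

It's the MicroPython-specific cases, where the answer isn't obvious from first principles, that this question is really about.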
Presumably the tradeoffs go both ways: some things which are cheap in CPython are expensive in MicroPython. Beyond the inevitable tradeoff between memory usage and processor time (and MicroPython is by definition low-memory), I imagine there are features supported in MicroPython for Python 3.4 compatibility which have never been optimised, since experts would simply work around them.
When I have two complete alternative implementations to try out, I can do some simple profiling (e.g. repeat an operation 100,000 times), but there are usually lots of ways to skin the cat, so it would be pretty useful to know up front which operations are pricey; that would help narrow down the small set of candidate implementations worth testing.
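The repeat-N-times profiling I have in mind looks roughly like this. This sketch uses CPython's `time.perf_counter()` so it runs anywhere; on a board you would substitute MicroPython's `time.ticks_us()`/`time.ticks_diff()`, but the pattern is the same. The two implementations being compared are just illustrative:

```python
import time

def profile(fn, n=100_000):
    """Time n repetitions of fn; return average microseconds per call.

    On MicroPython, use time.ticks_us()/time.ticks_diff() instead of
    perf_counter(); the overall pattern is unchanged.
    """
    t0 = time.perf_counter()
    for _ in range(n):
        fn()
    return (time.perf_counter() - t0) * 1e6 / n

# Two alternative implementations of the same behaviour (illustrative only)
def concat():                  # repeated += allocates a new string each step
    s = ""
    for w in ("a", "b", "c", "d"):
        s += w
    return s

def joined():                  # str.join builds the result in one pass
    return "".join(("a", "b", "c", "d"))

print("concat: %.3f us/call" % profile(concat))
print("join:   %.3f us/call" % profile(joined))
```

This works fine once I already have two finished candidates; the point of the question is knowing the rough costs beforehand, so I don't have to fully implement every variant just to time it.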
What are the things that people in the know about MicroPython internals tend to avoid because of the overheads, and why?