Not so much. Take a look at how memory is allocated and you will see it assumes a single memory pool which isn't desirable for multiple instances in an embedded application.
Imagine you have a system instance and an applet instance: you wouldn't want the applet instance to be able to consume all memory and starve the system instance. You would also want the supervising task to be able to kill an instance without leaving memory heavily fragmented.
Even if you have a single instance in an embedded application, you may want a memory pool of a fixed size to give to uPy. So this means all of the malloc/free/etc. calls potentially have to be overridden and wrapped in MP_STATE_*.
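For reference, this is roughly what that single fixed pool looks like when embedding uPy today (a sketch based on the upstream embedding example; exact setup calls vary between versions). The point is that `gc_init()` takes exactly one pool and there is no per-instance handle to pass around:

```c
// Single fixed-size pool handed to uPy at startup; there is only one of these.
#include "py/gc.h"
#include "py/runtime.h"
#include "py/stackctrl.h"

static char mp_heap[16 * 1024];

void start_upy(void) {
    mp_stack_ctrl_init();                          // stack-overflow checking setup
    gc_init(mp_heap, mp_heap + sizeof(mp_heap));   // the one global GC pool
    mp_init();
    // ... compile and run scripts here ...
    mp_deinit();
}
```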
I haven't done a full scope of work, but I know that wasn't the only issue I saw. Not a simple refactor as I say.
I am going to argue this doesn't need to be done, or can be handled in a minimal way, in the same way most RTOSes don't worry about this.

@pythoncoder mentioned another issue: scripts that allow direct access to physical resources (peripherals, a memoryview of RAM) are going to have to be rewritten to use synchronization mechanisms to multiplex. Sounds very messy.
Even if code is refactored to pass in a CONTEXT object, I guarantee you peripheral modules will NOT be rewritten (anytime soon) to support multiplexing. An entire new abstraction layer would have to be written, and peripheral code updated, to support multiplexed access to peripherals.
At most you have an xx_in_use mutex and fail if SPI, I2C, etc. are in use at the time of the call. However, even this may not be needed for many/most of the use cases of multiple interpreters.
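To illustrate, a per-bus busy flag that fails fast would cover it. This is purely a sketch: `spi_bus_t` and the claim/release helpers are invented names, not existing MicroPython API.

```c
// Fail-fast "in use" flag per physical bus: the calling instance gets an
// error instead of blocking while another instance owns the bus.
#include <stdatomic.h>
#include <stdbool.h>

typedef struct {
    atomic_flag in_use;
} spi_bus_t;

static spi_bus_t spi0 = { ATOMIC_FLAG_INIT };

// Try to claim the bus; returns false if it is already in use.
static bool spi_bus_claim(spi_bus_t *bus) {
    return !atomic_flag_test_and_set(&bus->in_use);
}

static void spi_bus_release(spi_bus_t *bus) {
    atomic_flag_clear(&bus->in_use);
}
```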
In my own case the applet interpreters won't have access to any hardware IO. They are applets that interact with the hardware/outside world through a provided API. I am using uPy as an extension language, one that is lean enough to be used on a microcontroller.
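For example, the kind of host-provided API I have in mind is just an ordinary C module exposed to the applet. A sketch using uPy's usual C-module macros; the `host` module and its `send` function are hypothetical names, and older MicroPython versions take an extra enable argument in `MP_REGISTER_MODULE`:

```c
// Sketch of a host-provided API module; applets call host.send(value)
// instead of touching peripherals directly.
#include "py/runtime.h"

static mp_obj_t host_send(mp_obj_t value_obj) {
    mp_int_t value = mp_obj_get_int(value_obj);
    (void)value;   // hand the value off to the supervising C/C++ application here
    return mp_const_none;
}
static MP_DEFINE_CONST_FUN_OBJ_1(host_send_obj, host_send);

static const mp_rom_map_elem_t host_module_globals_table[] = {
    { MP_ROM_QSTR(MP_QSTR___name__), MP_ROM_QSTR(MP_QSTR_host) },
    { MP_ROM_QSTR(MP_QSTR_send), MP_ROM_PTR(&host_send_obj) },
};
static MP_DEFINE_CONST_DICT(host_module_globals, host_module_globals_table);

const mp_obj_module_t host_module = {
    .base = { &mp_type_module },
    .globals = (mp_obj_dict_t *)&host_module_globals,
};

// Older MicroPython versions require a third "enabled" argument here.
MP_REGISTER_MODULE(MP_QSTR_host, host_module);
```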
Can you run useful scripts without hw resource synchronization? Doubtful. Even networking requires synchronization: the IP stack cannot run in both apps because only one thing can control the physical interface at once.
Without synchronization, one app must be the master (the one running the networking) and the others the slaves. The master downloads the `.py` files, kicks off new isolated instances, then ???somehow??? detects when the slaves are done and copies the results.
This still doesn't synchronize other hw resources like SPI: if two or more scripts use the same bus simultaneously, you're still screwed.
I think you are thinking too much about how uPy is being used right now, and not how it could be used if you could run multiple interpreters.
Also, I don't anticipate one instance spawning another instance; rather, the master C/C++ application would be doing that.
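The API I'm imagining is something along these lines, driven entirely from the supervising application. This is completely hypothetical: `mp_instance_*` does not exist in uPy today, it just shows the shape of what I'm arguing for.

```c
// Hypothetical instance API sketch, called only by the supervising C/C++ app.
#include <stddef.h>

typedef struct mp_instance mp_instance_t;   // opaque, self-contained interpreter state

mp_instance_t *mp_instance_new(void *pool, size_t pool_size);              // own pool per instance
int            mp_instance_exec(mp_instance_t *inst, const char *script);  // run an applet script
void           mp_instance_kill(mp_instance_t *inst);                      // drop the whole pool, no fragmentation
```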