Some interesting ideas there. Perhaps a little history is required. When I posted my code I hoped that the changes (or at least the basic concept) would be incorporated in the official build, so I worked on the principle of keeping changes to the absolute minimum. My aim was to obsolete my old scheduler rather than to award myself the job of maintaining a new one.
It's now evident that the ideas aren't going to be accepted so I'll do my best with a fork, as I remain convinced that the official library is insufficiently flexible for some embedded applications. The lib is still under development so to some extent I'm aiming at a moving target; I still feel inclined to minimise changes.
The idea of a single queue handling priority as well as time would be nontrivial, and would involve modifying the firmware in addition to the Python library. I feel disinclined to go down that route. Multiple queues would be easy, but I'm not convinced that typical firmware applications really require this. My personal view is to keep it "micro" but I welcome other opinions.
In both my implementation and the official version, multiple standard coros that are ready for execution will be scheduled in "fair" round-robin fashion.
This means that if there are N coros, each polling an event in a tight loop with a zero delay yield, each event will be tested once every N iterations of the scheduler. In many, perhaps most, situations this is probably OK. But there may be events requiring a faster response. What I think is needed is a means of polling an event on every iteration of the scheduler. This effectively introduces a third tier of priority, one above the standard level. This would work by yielding a callback function rather than a time, the callback returning True when scheduling is required.
It would need to be used with care because of its potential impact on the task-switching overhead of the scheduler: polling functions need to be fast-executing and few in number. Typically they would be a lambda or a simple bound method which tests a flag set by a hardware interrupt. Such a mechanism would enable a small number of coros to respond to events in a time corresponding to the worst-case processing time of any one other coro.
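To make the idea concrete, here is a minimal sketch in plain Python of how such a scheduler might behave. This is not the actual library code: the `MiniSched` class, its method names, and the flag-based "interrupt" are all invented for illustration. The point is simply that a coro yielding a callable gets that callable polled on every scheduler iteration, while coros yielding a zero delay share the processor round-robin.

```python
from collections import deque

class MiniSched:
    """Toy scheduler (hypothetical): coros yield 0 for a fair round-robin
    pass, or yield a callable to be polled on every scheduler iteration."""
    def __init__(self):
        self.runq = deque()   # ready tasks, serviced round-robin
        self.pollers = []     # (coro, callback) high-priority waiters

    def add(self, coro):
        self.runq.append(coro)

    def run(self, iterations):
        for _ in range(iterations):
            # Poll every registered callback first, on each iteration.
            for coro, cb in list(self.pollers):
                if cb():
                    self.pollers.remove((coro, cb))
                    # Resume ahead of the round-robin tasks.
                    self.runq.appendleft(coro)
            if not self.runq:
                continue
            coro = self.runq.popleft()
            try:
                result = next(coro)
            except StopIteration:
                continue
            if callable(result):
                self.pollers.append((coro, result))  # wait on the callback
            else:
                self.runq.append(coro)  # zero-delay yield: back of the queue

# Usage: one coro waits on a flag (as if set by an interrupt); others spin.
flag = [False]
log = []

def urgent():
    yield lambda: flag[0]    # polled on every scheduler iteration
    log.append('urgent ran')

def worker(name):
    while True:
        log.append(name)
        yield 0              # zero-delay yield: fair round-robin

sched = MiniSched()
sched.add(urgent())
for n in ('a', 'b', 'c'):
    sched.add(worker(n))

sched.run(4)                 # workers spin; urgent parks on its callback
flag[0] = True               # simulate the hardware interrupt
sched.run(2)
print(log)                   # → ['a', 'b', 'c', 'urgent ran', 'a']
```

Note how `urgent` is resumed on the very next scheduler iteration after the flag is set, ahead of the three round-robin workers, whereas a standard zero-delay coro would have waited its turn in the queue.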