Avoiding "async def" creep

General discussions and questions about development of code with MicroPython that is not hardware specific.
Target audience: MicroPython Users.
jcw
Posts: 35
Joined: Sat Dec 21, 2019 12:08 pm
Location: CET/CEST

Avoiding "async def" creep

Post by jcw » Sun May 24, 2020 8:16 pm

This is a post about µPy's (and CPy's) async def and await functionality, which could also have been titled "Awaitables vs Callables". Warning up front: I'm not fully up to speed on asyncio yet, and know next to nothing about its history or discussions elsewhere, so please do point out when there is useful information out there that I really ought to go through.

As I understand it, asyncio makes it possible to interleave functionality and time. I.e. processing some code, which at times can't continue until "something else" happens. For example, sleeping for 1 second. Not only would it be nice to run some other code in the meantime, in embedded context this is where waiting for an external event happens, or putting the µC into low-power mode.

But there's a catch. With threads, we either get a very harsh "preemptive" environment (with race-conditions waiting at every corner), or a mass of hard-to-pre-allocate stacks if task switches only happen through specified yield calls. Enter coroutines, which are essentially non-preemptive threads, but with stack frames allocated on the heap (I don't know the implementation, I assume the compiler knows up front what size stack frame it will need), which is garbage-collected. So now, the call stack is no longer a linear memory vector, but a linked set of caller-callee segments. The key here is that they can "fork", i.e. segments can branch off in different directions, so that the completion of one call does not depend on everything it called itself. In other words: coroutines can be suspended, individually, and resumed later in a different order (note that I'm making all this up without knowing µPy's innards, the actual design may well be different).
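A rough sketch of that "fork-able call chain" idea, using plain CPython generators rather than µPy internals (the real implementation may well differ): two suspended frames live on the heap independently and can be resumed in any order the scheduler chooses.

```python
# Two "coroutines" as generators: each frame survives suspension on the
# heap, and the caller decides the resumption order.
def worker(name):
    yield name + ": step 1"   # suspend here; the frame is kept alive
    yield name + ": step 2"   # resume later, after other frames have run

a, b = worker("a"), worker("b")
# Interleave resumption: a, b, b again, then a again.
log = [next(a), next(b), next(b), next(a)]
print(log)
```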

That means there are lovely "awaitable" coroutines as lightweight tasks, and normal "callables" (functions, classes, objects) which can only run to completion once started. Unfortunately, never the twain shall - fully - meet ...

The problem: you can't use "await" in a callable function, i.e. one not defined as "async def", because the compiler either has to generate code for the usual (efficient) stack-based enter/exit program flow, or it generates code for heap-allocated coroutine functionality. And while a coroutine can call "normal" functions (eating up a bit of the linear stack again), it can only be suspended once those functions have returned, leaving the stack alone again.
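For the record, this split is enforced by the compiler itself; a small demonstration (plain CPython, and as far as I can tell µPy behaves the same way here):

```python
# `await` inside a plain "def" is rejected at compile time, before any
# code runs at all.
src = "def f():\n    await something()\n"
try:
    compile(src, "<demo>", "exec")
    ok = True
except SyntaxError as e:
    ok = False
    print("rejected:", e.msg)   # e.g. 'await' outside async function
print("compiles:", ok)
```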

Another way to look at this is: coroutines "live" on the garbage-collected heap. They can "borrow" the stack for performing as many calls as they want, but only once that stack use has stopped again (i.e. all those functions have returned), can a coroutine benefit from its key distinguishing feature: getting suspended, allowing another coroutine to resume.

The consequence is that once you're in normal "callable" code (i.e. on the stack), you have to keep going and return. The only way is run-to-completion. For libraries, any kind of modular code, this means that either the library is fully callable (and cannot be suspended), or it is made up of "async def" coroutines, all the way down to where an action occurs which wants to be suspended (say an I/O call). Which is no fun for a library designer - there is no way to hide "awaitability", it seems.

There is some relief ... asyncio also has tasks, which can be created in callable code. It sets up a coroutine to run soon, i.e. after the next suspend happens (which is by definition not now, since we're in callable context at this point). So yes, new work can be started at any time, but the code starting it has no way of stopping right there and waiting for a result: it's not a "rendezvous" point. In Unix terms: you can "fork", but you cannot "wait". Not before returning anyway.
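A minimal sketch of that fork-without-wait behaviour (CPython asyncio; all names here are made up): the synchronous code can start a task, but the task only actually runs after the next suspend.

```python
import asyncio

async def job(results):
    await asyncio.sleep(0)              # suspend point inside the task
    results.append("job done")

def callable_code(results):
    # Plain synchronous code running inside the loop: it can "fork" a
    # task, but it cannot stop here and wait for the result.
    asyncio.create_task(job(results))
    results.append("returned first")    # we return before the task runs

async def main():
    results = []
    callable_code(results)
    await asyncio.sleep(0.01)           # next suspend: the task runs now
    return results

print(asyncio.run(main()))
```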

And a nasty tradeoff it is. The entire call chain from start of the asyncio event loop at the "top" of the application down to the code which does an "await" because it wants to let time pass without running code (or busy-polling), has to consist of "async def" code. If I want to use a library which has an await deep inside it, then my code has to be written using "async def" as well. And unfortunately, an "async def" function is not a normal function. It's an "awaitable", and must be executed in a different way.
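A toy sketch of such a chain (hypothetical names, CPython asyncio): the single suspend point at the bottom forces "async def" on every layer above it.

```python
import asyncio

async def read_byte():
    await asyncio.sleep(0)              # stand-in for waiting on real I/O
    return 0xC0

async def read_packet():                # async only because read_byte is
    return bytes([await read_byte()])

async def app():                        # async only because read_packet is
    return await read_packet()

print(asyncio.run(app()))
```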

So somewhere near the top, there's the launch of an asyncio event loop. It schedules coroutines and tasks. At the bottom, there's traditional run-to-completion "callable" code, however deeply nested, calling into as much traditional library code as needed. And at some point these two worlds meet.

My concern is that programming in an async fashion could be "toxic" (for lack of a better word). Before you know it, everything becomes an async-this and an await-that. Even for functionality which really doesn't have to deal with scheduling and the flow of time.

Here's an example, from an async version of the MQTT client library:

Code: Select all

async def ping(self):
    async with self._lock:
        await self._as_write(b"\xc0\0")
I understand the need, I think. We need a lock, but we may have to reschedule to get it. And we need to send a network message, again with the need to wait until done. Finally, there's an unlock (hidden in the "with" statement), which might also need to suspend (I'm not sure).

But what is really happening here? If I may ignore exception handling for now, isn't this simply an (asynchronous) sequence of operations?
  • lock, wait until acquired
  • send a few bytes, wait until sent (or perhaps queued for sending)
  • unlock, wait until released
  • resume execution of the caller
In another context, this could be done with callbacks (with the risk of callback hell). Naively, ignoring the lock's waiting:

Code: Select all

def ping(self, cb):
    self.lock()
    self.write(b"\xc0\0", lambda x: (self.unlock(), cb(x)))
What if there were a way to work one's way through a series of calls, asynchronously?

Code: Select all

def ping(self, cb):
    Seq(self.lock, (self.write, b"\xc0\0"), self.unlock, (-1, cb))
This is now much more like a data structure, ready to be interpreted by some sort of sequential async processor. I've thrown in a few conventions, i.e. if the item is a tuple, then that specifies extra arguments to pass to the function at call time. And the "(-1, cb)" item at the end could be interpreted as: append the result of the previous step as argument when calling cb.
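For illustration, here's one naive way such a Seq could look (a sketch with made-up conventions, no more than a toy; CPython asyncio, though uasyncio should be similar):

```python
import asyncio

# Hypothetical Seq: a scheduled task walks the step list.  A tuple
# supplies extra arguments; (-1, cb) passes the previous result to a
# final callback.  All conventions here are invented for this sketch.
class Seq:
    def __init__(self, *steps):
        self.steps = steps
        asyncio.get_running_loop().create_task(self.run())  # fork, don't wait

    async def run(self):
        result = None
        for step in self.steps:
            if isinstance(step, tuple) and step[0] == -1:
                step[1](result)                 # final callback, gets result
                continue
            fn, args = (step[0], step[1:]) if isinstance(step, tuple) else (step, ())
            result = fn(*args)
            if asyncio.iscoroutine(result):     # awaitable steps suspend here
                result = await result

out = []
async def demo():
    Seq(lambda: out.append("lock"),
        (lambda payload: out.append(payload), b"\xc0\0"),
        lambda: out.append("unlock"),
        (-1, lambda res: out.append("cb")))
    out.append("ping returned")     # the caller returns before the work starts
    await asyncio.sleep(0)          # yield once so the Seq task can run

asyncio.run(demo())
print(out)
```

Note how "ping returned" comes first: the Seq is only a scheduled task, so the caller still returns before any of the steps run, which is exactly the fork-but-not-wait limitation discussed above.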

Could this point to a possible way out of "async def" creep? I don't know. I've not coded anything up yet. All I know is that I don't think I want to turn my own (highly event- and message-oriented) application into a minefield of awaitable/callable mixups.

It would be nice if there were some elegant way out. I'm not sure there is, many people must have looked into this.

Perhaps this lengthy post will help find a practical approach to very async and very responsive, but also understandable, apps.

Thanks for reading this far.
Jean-Claude Wippler

deshipu
Posts: 1362
Joined: Thu May 28, 2015 5:54 pm

Re: Avoiding "async def" creep

Post by deshipu » Mon May 25, 2020 12:13 am

I think there is one detail that you failed to mention here, I'm not sure if that's because you missed it, or if you simply consider it unimportant in this case. But I think that it is important, because it complicates the situation considerably and explains why your Seq would still need to be called from inside an async function.

You see, it's not enough to simply wait until the lock becomes available, and then wait until the write operation finishes, etc. — the whole point of using async is that while you wait, you execute all the other tasks that have been scheduled. And to do that you have to return control to the reactor, the main loop that controls running of all the async tasks. And of course to be able to do that, you have to be able to suspend the execution of the current code.

There is a way around this, but it's not pretty. Some implementations of the async loop let you run one or more iterations of that loop from your regular, non-async code. So you can call your async function from inside non-async code by first scheduling that function's execution, and then running the async loop until that async function finishes. The caveat is that you should never call such code from an async function — funny things might happen then.
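For example, with CPython's asyncio the bridge can look like this (a minimal sketch: spin up a loop just to drive one coroutine to completion from plain code):

```python
import asyncio

async def fetch():
    await asyncio.sleep(0)      # stand-in for real asynchronous I/O
    return 42

def sync_bridge():
    # Run the loop from regular, non-async code until the coroutine
    # finishes.  Never call this while a loop is already running.
    loop = asyncio.new_event_loop()
    try:
        return loop.run_until_complete(fetch())
    finally:
        loop.close()

print(sync_bridge())
```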

jcw
Posts: 35
Joined: Sat Dec 21, 2019 12:08 pm
Location: CET/CEST

Re: Avoiding "async def" creep

Post by jcw » Mon May 25, 2020 12:41 am

Yes, I understand. Seq would create a task to process the calls listed, asynchronously, one by one. This is definitely not a complete solution, as ping will have to return before all this work can be done (started, even). So this does not remove the need for asyncs/awaits, but it perhaps avoids most of them, with callbacks or message sends used to signal completion.

What I'm trying to find is a way to keep the async part separate from my own message-/dataflow-driven code, with messages getting picked up in the "message pump" logic I currently use to drive successive processing steps.

Again, not a solution yet, as far as I can tell. Maybe it can't be done. Time will tell, in its unique asynchronous way.

pythoncoder
Posts: 4242
Joined: Fri Jul 18, 2014 8:01 am
Location: UK

What "async def" creep?

Post by pythoncoder » Mon May 25, 2020 6:13 am

@jcw This is known as the red and blue function problem and has been widely discussed.

Python's asyncio draws a distinction between synchronous and asynchronous functions - given that uasyncio is (largely) a subset, it does too. There are other asynchronous programming paradigms that fix this supposed problem. As an engineer rather than a computer scientist it strikes me as more of a theoretical problem than a practical one.
Before you know it, everything becomes an async-this and an await-that. Even for functionality which really doesn't have to deal with scheduling and the flow of time.
Well, maybe, but the code you cited is not an example of this.

Code: Select all

async def ping(self):
    async with self._lock:
        await self._as_write(b"\xc0\0")
Another transmission may be in progress when ping is scheduled. Waiting on the lock allows this to proceed to completion before the ping packet is written, preserving the integrity of the protocol: this method is asynchronous of necessity. Incidentally, the implied release of the lock is synchronous.

In practical code I don't think it's a common error to write asynchronous functions which could be synchronous.
Peter Hinch

jcw
Posts: 35
Joined: Sat Dec 21, 2019 12:08 pm
Location: CET/CEST

Re: Avoiding "async def" creep

Post by jcw » Mon May 25, 2020 9:12 am

Thanks for that link, Peter! Precisely my point, and worded so much nicer and better. I went into this more from the implementation side, because I was still wrapping my head around why this issue plays such a central role. My conclusion is the same: Go probably nailed this best. As well as Clojure and ClojureScript (which actually solves it for JavaScript, including NodeJS, go figure). But this is Python-land, with await snakes and common-stack ladders.

After writing this up, I now think I can actually prevent async/await from permeating all my code, because of its particular nature, i.e. being deeply message-/dataflow-based. I'll share that code once it has evolved a bit further.

jcw
Posts: 35
Joined: Sat Dec 21, 2019 12:08 pm
Location: CET/CEST

Re: Avoiding "async def" creep

Post by jcw » Wed May 27, 2020 10:50 am

As an experiment, I wrote a basic MQTT client which is "minimally blocking". The idea being that blocking is not always a problem (even hangs and failures aren't necessarily). The one essential suspend point that matters in a network client such as this is being notified when a new message comes in.

The code can be found at https://git.jeelabs.org/jcw/pythonx/src ... aster/mqtt. Use or ignore at your own peril.

My intention is to find a way to code with only the "main" script using await/async, to help reason about the scheduling behaviour of the application. Existing async libraries are fine, but I'm going to try wrapping them up at the top level (with a few callbacks).

xt6master
Posts: 3
Joined: Sat May 30, 2020 7:37 am

Re: Avoiding "async def" creep

Post by xt6master » Sat May 30, 2020 9:03 am

async def "creep" is not a problem. Don't over-elevate Bob's 2015-era blog post. If you want your stack to yield to other stacks within the current pre-emptive thread, you are opting into a behavior. A super convenient behavior, but a distinct behavior nonetheless. You are asking the compiler to interleave the following code until it reaches an await. This is insanely useful, and extremely easy to reason about in one's programs.

Bob lost me when he suggested that threads were an out for red/blue. Threads color your entire application mutually distinct colors and you better make damned sure you only call rainbow-safe functions from multiple threads. It's just silly to submit threads as a simpler alternative to coroutines.

If you see an await on a function that you don't think needs one, look at what it does. Probably it does something that involves waiting. If it does something that involves waiting, it can and should (so update the docs or update your brain) be able to suspend until later, if the caller is okay with it. The caller can redeem the wait whenever it wants with `await`. Crucially (please re-read this sentence 4 times), this property is transitive up the stack.

When you write a piece of code will it be reused? (maybe) Do you know at that moment? (no) You don't know what your callers will care about w.r.t. your piece of code's waiting around. Instead of blocking all the time, encode the things you need to wait on via `await` and let your caller do the same.
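A tiny sketch of "redeeming the wait" later (CPython asyncio; names made up): start the awaitable now, do unrelated work, and only await when the result is actually needed.

```python
import asyncio

async def slow_value(v):
    await asyncio.sleep(0.01)               # something that involves waiting
    return v

async def caller():
    t = asyncio.create_task(slow_value(7))  # start the wait now
    local = 1 + 1                           # unrelated work in the meantime
    return local + await t                  # redeem the wait here

print(asyncio.run(caller()))
```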


Read PEP 20. Here's a link just in case: https://www.python.org/dev/peps/pep-0020/

This is Python. Here, explicit (await) is better than implicit (when will the next line of code execute?) and readability counts. If you do not know whether a call stack will yield, how can you know whether and when it will modify state you see? `await` points are quiescent in Python. They are easy to reason about and they clearly communicate the dependency of the following code on some action that is deferrable, during which other deferrable actions may occur. This is so obvious (I'm not even Dutch) that you can explain it in simple words to a 5th grade child.

> This assumes a perfect network world, i.e. no major hiccups, no hangs, no loss of connectivity.

These are unconscionable assumptions ;)


> For those cases, a hardware watchdog can be used to recover - with each restart dealing with network health issues, etc.

Versus exceptions and await? You didn't address "imperfect transfer rate" either. Blocking after receiving the first byte is a pretty surprising choice for an async client library, and sounds more like a bodged workaround for a synchronous library of a highly latent service.



Blocking in async method for a socket. This method invocation should be async and it should await its network activities.
https://git.jeelabs.org/jcw/pythonx/src ... try.py#L25

Blocking in async method for a send/reply message pair. This method invocation should be async and it should await its network activities.
https://git.jeelabs.org/jcw/pythonx/src ... try.py#L29

Blocking in async stack for receive of unknown length. This method invocation should be async and it should await its network activities.
https://git.jeelabs.org/jcw/pythonx/src ... qtt.py#L66

This strategy is highly limiting; employing Python's async/await functionality will increase the durability and performance of your application at the cost of... writing a word or two to denote each asynchronous pointcut's exact location. And not only _your_ application - it will free the applications your library ends up in to do things while your thing is waiting for bytes over a NIC _you do not control_.

Why can't a consumer of your library update a counter or read a sensor while your library is waiting for a byte from the network? Why choose to stop the processor and wait?
In Go, they force the issue and you're stuck with having to know which stacks will do this. With Python, the end application developer chooses when the suspend point occurs. In fact, in Python that end developer is transitive! You can write a library that uses libraries and suspend _from your perspective_ but that doesn't force the user of your 2nd-layer library to suspend at the same point. It's explicit, it's elegant and it is utterly _fine_ to write async on your special async methods, and await on your expressions that need the results of asynchronous invocations.

jcw
Posts: 35
Joined: Sat Dec 21, 2019 12:08 pm
Location: CET/CEST

Re: Avoiding "async def" creep

Post by jcw » Sat May 30, 2020 10:16 am

Thanks for your thoughtful comments. We may not reach agreement, but I appreciate the insights.

The distinction of what blocks and what doesn't can be vague. Await says "something else" may happen here (e.g. elapsed time, side effects). I don't see how that is fundamentally different from say a "print(...)" or a function call. The code "print(x); someFun(); print(x)" can take time, and there is no guarantee that x remains the same.

W.r.t. using networks in a mostly blocking fashion: that's a design choice, I'd say. It depends on context whether that makes sense. Slow "nodes" on a LAN or WLAN may not care about rare slowdowns. A watchdog can capture critical failures, reset the node, with logic in the startup to decide whether to wait, panic, or continue running in offline mode.

Blocking on just the first byte is a choice, it could be changed to the final separator character, i.e. a newline in line-based text protocols. The idea is the same: don't bother the main application code unless there is actionable news.

Note that I'm not arguing against async/await, on the contrary: it's an essential and clean mechanism. But I'm not convinced it leads to "durable" code when let loose unchecked. Every read/readline (and usually also write) is potentially blocking. I've been isolating all such unavoidable cases in my first explorations (they are no more than that at this stage), to be able to write the rest of my code in a synchronous fashion. This does NOT imply blocking - what I'm working on is message-based and dataflow-driven. I now also have a static-file webserver with websocket support, coded in the same style. And so far, it has all been coming together nicely. But it's early days.

xt6master
Posts: 3
Joined: Sat May 30, 2020 7:37 am

Re: Avoiding "async def" creep

Post by xt6master » Sat May 30, 2020 7:18 pm

jcw wrote:
Sat May 30, 2020 10:16 am
The distinction of what blocks and what doesn't can be vague.
Friend, consider what "blocking" means. Every line of code blocks the following line. Some lines simply take longer than others.
jcw wrote:
Sat May 30, 2020 10:16 am
Await says "something else" may happen here (e.g. elapsed time, side effects). I don't see how that is fundamentally different from say a "print(...)" or a function call. The code "print(x); someFun(); print(x)" can take time, and there is no guarantee that x remains the same.
It's not; and the caller gets to choose when those effects are reified from the caller's perspective by choosing when to await.
jcw wrote:
Sat May 30, 2020 10:16 am
W.r.t. using networks in a mostly blocking fashion: that's a design choice, I'd say. It depends on context whether that makes sense. Slow "nodes" on a LAN or WLAN may not care about rare slowdowns. A watchdog can capture critical failures, reset the node, with logic in the startup to decide whether to wait, panic, or continue running in offline mode.
A single slow transfer results in hitches in user interfaces with this pattern, and you can only service 1 client request at a time. It's not a design choice, it's a needless limitation. Consider reflecting on `aiohttp`.
jcw wrote:
Sat May 30, 2020 10:16 am
Blocking on just the first byte is a choice, it could be changed to the final separator character, i.e. a newline in line-based text protocols. The idea is the same: don't bother the main application code unless there is actionable news.
Sure, but then you also block for the duration of sending the reply despite the underlying transport supporting non-blocking transfer integrated into language primitives.
jcw wrote:
Sat May 30, 2020 10:16 am
Note that I'm not arguing against async/await, on the contrary: it's an essential and clean mechanism. But I'm not convinced it leads to "durable" code when let loose unchecked.
"Let loose unchecked" is very imprecise language and I am unsure what you mean. Sure, when a person writes bad code without understanding, the result will tend to be fragile and inscrutable. That has nothing to do with async/await. You're suggesting that some global watchdog be employed to nanny individual messages' timings and insert itself implicitly into the error handling path without knowledge of each message's call-site semantics (unless you also provide a CallSiteSemantic to the watchdog...).
A statement like `await asyncio.wait_for(send(), timeout=3)` is clear at the call site what is happening, and offers graceful exception handling of a timeout case, or raising if a timeout is not able to be handled at the level of the sender. It also allows the processor to get other time-sensitive things done. Asynchrony is beautiful and coroutines should be willingly embraced.
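A runnable sketch of exactly that pattern (the hanging send is simulated with a long sleep):

```python
import asyncio

async def send():
    await asyncio.sleep(10)     # stand-in for a write that never completes

async def main():
    try:
        await asyncio.wait_for(send(), timeout=0.01)
        return "sent"
    except asyncio.TimeoutError:
        return "timed out"      # handled gracefully, right at the call site

print(asyncio.run(main()))
```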
jcw wrote:
Sat May 30, 2020 10:16 am
Every read/readline (and usually also write) is potentially blocking. I've been isolating all such unavoidable cases in my first explorations (they are no more than that at this stage), to be able to write the rest of my code in a synchronous fashion. This does NOT imply blocking - what I'm working on is message-based and dataflow-driven. I now also have a static-file webserver with websocket support, coded in the same style. And so far, it has all been coming together nicely. But it's early days.
Nothing remotely hinders you from calling long-blocking methods from an async method. If that's what you want to do, I mean go for it! Sure, you could `await` a fully materialized message in your framework before passing it along to your handler at this point https://git.jeelabs.org/jcw/pythonx/src ... try.py#L20 but the moment your handler needs to do something that involves waiting you've needlessly and utterly constrained your implementation options. If you make that line 20 `await` your handler (and wait for full messages before invoking it, for this simple example) you are able to (not required to) await things in handlers. You've gone to pains to strip out a feature that is not even intrusive into implementations. For realsies, you would do yourself a great favor to just stop worrying and love async.

jcw
Posts: 35
Joined: Sat Dec 21, 2019 12:08 pm
Location: CET/CEST

Re: Avoiding "async def" creep

Post by jcw » Sat May 30, 2020 10:29 pm

The watchdog I mentioned is based on hardware which resets the node. When it fires, it's "fatal" for the app - a full hardware reset happens. This is not an exception or error mechanism, it's a hardware panic button.

As for writes: with small messages and TCP/IP's internal buffering, there's rarely blocking involved, I expect.

Let me re-state what I'm doing in different terms: I have N tasks with async/await code running, one of them has all my app logic, the others do nothing but wait on incoming network packets and other data sources. When that happens, they do a callback, thus affecting the app logic (which is at that moment by necessity blocked in await).

So it's all single-threaded, with several tasks, i.e. coroutines, running cooperatively. Lots of async/await activity (i.e. task switching) may be going on. But the bulk of my application is written as if it's running the show, without awaits.
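A toy sketch of that layout (CPython asyncio, with a queue standing in for the network; all names made up): the listener task owns the one await, and the app logic stays synchronous, driven by callbacks.

```python
import asyncio

def on_packet(app_state, packet):
    app_state.append(packet.upper())     # ordinary synchronous app logic

async def listener(queue, app_state):
    # The only task that awaits: it waits on the (mock) data source and
    # fires a plain callback whenever something arrives.
    while True:
        pkt = await queue.get()
        if pkt is None:                  # sentinel: shut down
            break
        on_packet(app_state, pkt)

async def main():
    q, state = asyncio.Queue(), []
    task = asyncio.create_task(listener(q, state))
    for pkt in ("ping", "pong", None):   # stand-in for network arrivals
        await q.put(pkt)
    await task
    return state

print(asyncio.run(main()))
```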
Asynchrony is beautiful and coroutines should be willingly embraced.
Sure, and they are. In moderation.
