RuntimeError from PEP479 violation - need help

General discussions and questions about development of code with MicroPython that is not hardware specific.
Target audience: MicroPython Users.
cefn
Posts: 230
Joined: Tue Aug 09, 2016 10:58 am

RuntimeError from PEP479 violation - need help

Post by cefn » Sun Feb 17, 2019 12:14 pm

Hi all,

The medea low-memory JSON parser I developed in MicroPython for the ESP8266 has stopped working since v1.10.

I'm struggling to work out how to modify the library to satisfy the requirements of PEP479, which was adopted in v1.10. The result is that all cases where I use nested generators now terminate in a "RuntimeError: generator raised StopIteration", even when I'm apparently catching the StopIteration.

Can anyone suggest how I can trace where the PEP479-non-compliant code is in my generator stack, and where extra 'except StopIteration' clauses might be needed? I haven't been able to solve it myself, and I suspect there isn't enough information in the RuntimeError to do so.
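For anyone unfamiliar with the change, here is a generic illustration of the failure class (not the medea code, just a sketch): an inner generator is exhausted, the outer generator lets the resulting StopIteration escape its own body, and under PEP479 (MicroPython v1.10, CPython 3.7) that gets converted into the RuntimeError.

Code:

def inner():
    yield 1

def outer():
    it = inner()
    while True:
        yield next(it)  # second next() raises StopIteration, which escapes outer()

for value in outer():
    print(value)  # prints 1, then RuntimeError: generator raised StopIteration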

REFERENCE CASE

I'm currently getting REPL console logs like the one below. It shows a successful read of an account's tweets (a single tweet was requested from Twitter to keep the packet size low), followed by a RuntimeError when the generators run out of bytes to process from the HTTP REST response provided by the Twitter API...

Code:

1096982979107278850 : '....The U.S. does not want to watch as these ISIS fighters permeate Europe, which is where they are expected to go. We do so much, and spend so much - Time for others to step up and do the job that they are so capable of doing. We are pulling back after 100% Caliphate victory!'
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "examples/scripts/twitterTimelinePollFields.py", line 22, in loop
  File "examples/scripts/twitterTimelinePollFields.py", line 17, in loop
  File "examples/scripts/twitterTimelineExtractFields.py", line 36, in generateTweets
  File "examples/scripts/twitterTimelineExtractFields.py", line 32, in generateTweets
RuntimeError: generator raised StopIteration

See https://github.com/ShrimpingIt/medea/bl ... tFields.py for this example. It's a main.py that I would expect to work without raising RuntimeErrors, since the raised StopIteration 'errors' should be caught. Both line 32 and line 36 of twitterTimelineExtractFields.py are guarded by an 'except StopIteration'.

The diff at https://github.com/ShrimpingIt/medea/co ... 5982c0?w=1 shows the extra 'except' clauses I added, expecting to solve the problem.

I think the actual line where a StopIteration is explicitly or implicitly raised can't be traced currently, meaning it's impossible to fix.

WORKAROUNDS

I can work around the issue by building with v1.9.4 which doesn't have PEP479, but this is not sustainable.

An invocation like the following in the REPL successfully swallows the RuntimeError, but heapRe doesn't seem to carry any knowledge of the StopIteration, or of the line number where it was raised further down the stack. In any case, I don't feel good about catching a RuntimeError as normal behaviour every time I use the library.

Code:

heapRe = None
from examples.scripts.twitterTimelinePollFields import loop
try:
    loop()
except RuntimeError as re:
    heapRe = re

BACKGROUND

Before PEP479, StopIterations would propagate through a series of generators in the stack without being explicitly caught. Each raised StopIteration caused the calling generators to also stop implicitly. They bubbled up from a tree of generators like the one below (a simplified code sketch of the same shape follows the tree)...

Code:

* Tokenizer#tokenizeValue (generator for SAX-style JSON tokens)
    * calls next() on a byteGenerator which yields bytes
    * yields tokens for matching literals like true, false, null
    * calls 'yield from' to hand over byte processing to e.g...
        * Tokenizer#tokenizeObject()
            * Tokenizer#tokenizeKey()
            * Tokenizer#tokenizeValue()
        * Tokenizer#tokenizeArray()
            * Tokenizer#tokenizeValue()            
        * Tokenizer#tokenizeString()
        * Tokenizer#tokenizeNumber()
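A heavily simplified, hypothetical sketch of that shape (not the real medea source): the bare next() calls on the byte generator are the only places a StopIteration can originate, while everything delegated with yield from handles termination of its sub-generator itself.

Code:

def byteGenerator(data):
    for b in data:  # yields one byte (int) at a time
        yield b

def tokenizeString(byteGen):
    # assumes the opening quote has already been consumed
    yield ('STRING_START',)
    while True:
        b = next(byteGen)  # mid-string: valid JSON cannot end here
        if b == ord('"'):
            break
        yield ('CHAR', b)
    yield ('STRING_END',)

def tokenizeValue(byteGen):
    first = next(byteGen)  # the stream CAN legitimately end here
    if first == ord('"'):
        yield from tokenizeString(byteGen)
    else:
        yield ('LITERAL', first)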
PEP479 was focused on making it explicit which generator in the stack actually terminated, but unfortunately the way MicroPython reports the RuntimeError makes it impossible for me to trace where the uncaught StopIteration originates, even though I'm forced to fix it!

Can anyone help?

pfalcon
Posts: 1155
Joined: Fri Feb 28, 2014 2:05 pm

Re: RuntimeError from PEP479 violation - need help

Post by pfalcon » Sun Feb 17, 2019 9:10 pm

Did you hear about http://sscce.org/ ?
Awesome MicroPython list
Pycopy - A better MicroPython https://github.com/pfalcon/micropython
MicroPython standard library for all ports and forks - https://github.com/pfalcon/micropython-lib
More up to date docs - http://pycopy.readthedocs.io/

cefn
Posts: 230
Joined: Tue Aug 09, 2016 10:58 am

Re: RuntimeError from PEP479 violation - need help

Post by cefn » Sun Feb 17, 2019 10:35 pm

Sorry for not being clear. I would love to provide a minimal example, but the problem is that I can't trace the exception to the line it originates from, so I can't recreate a simple example from the problematic code. Any ideas?

pythoncoder
Posts: 5956
Joined: Fri Jul 18, 2014 8:01 am
Location: UK

Re: RuntimeError from PEP479 violation - need help

Post by pythoncoder » Mon Feb 18, 2019 6:51 am

At risk of suggesting the obvious, have you tried running the code under CPython to see if it offers a more comprehensive traceback?
Peter Hinch
Index to my micropython libraries.

cefn
Posts: 230
Joined: Tue Aug 09, 2016 10:58 am

Re: RuntimeError from PEP479 violation - need help

Post by cefn » Thu Mar 21, 2019 6:19 pm

@pythoncoder that was the route I took eventually, though it would be great for these exceptions to be traceable some other way within MicroPython. By coincidence I'd maintained compatibility with CPython since first prototyping the library, so I was able to follow your guidance.
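For anyone hitting the same wall: under CPython the uncaught RuntimeError is printed together with its chained cause ("The above exception was the direct cause of the following exception:"), so the StopIteration's own traceback points straight at the offending next()/send() call. The chain can also be inspected programmatically, as in the sketch below (CPython only, since MicroPython doesn't appear to preserve __cause__):

Code:

import traceback
from examples.scripts.twitterTimelinePollFields import loop

try:
    loop()
except RuntimeError as re:
    cause = re.__cause__  # the original StopIteration (CPython keeps the chain)
    if cause is not None:
        traceback.print_exception(type(cause), cause, cause.__traceback__)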

Perhaps obviously, the issue arose when the bytestream from which the JSON data was being read came to an end.

This is where next() and generator.send() were being used to pull individual bytes from a byte source, before sequences of bytes were matched to JSON tokens. All other layers were using yield from to consume from token generators, and were therefore immune to the issue.

Seeing the error trace helped me to understand PEP479 better altogether:

In summary, a StopIteration may be raised by next(generator) or generator.send(msg). It's illegal for generator1 to allow a StopIteration raised by a call to generator2 to propagate to the 'caller' of generator1, since it is ambiguous whether it was generator1 itself that stopped, or generator2. Python's BDFL didn't like this ambiguity, so legislated to remove it.

Language structures like 'for item in generator:' or 'yield from' explicitly handle the case where an embedded generator raises a StopIteration, and turn it into a well-behaved loop termination or delegation completion without you having to handle the error yourself. You can therefore ignore them as possible origins for these exceptions.
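Roughly speaking (and ignoring send()/throw() delegation and generator return values), 'yield from sub' behaves like the sketch below with respect to StopIteration, which is why delegation never leaks it:

Code:

def outer(sub):
    it = iter(sub)
    while True:
        try:
            item = next(it)
        except StopIteration:
            return  # sub-generator finished cleanly; outer just stops yielding
        yield item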

However, if you write generator logic which calls next(otherGenerator) or otherGenerator.send(msg) directly, then those calls can raise a StopIteration. The enclosing generator therefore has to explicitly catch the StopIteration and return, to avoid breaking the BDFL's new law.
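Applied to a byte-level consumer like the ones in medea, the fix is mechanical; an illustrative sketch rather than the actual patch:

Code:

def tokenizeValue(byteGen):
    try:
        first = next(byteGen)
    except StopIteration:
        return  # byte stream ended: finish cleanly instead of leaking StopIteration
    yield ('LITERAL', first)
    # ... dispatch to the sub-tokenizers as before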

So I needed to look for places where next() and send() were used directly, AND which could legitimately coincide with the end of the byte stream. When parsing valid JSON there are a lot of places which COULDN'T be the last byte, as you're still in the middle of a structure.

Thanks all for your help on this! I should be able to make a 1.10 compliant release in a bit.
