Multi-threading support, sponsored by Pycom

Announcements and news related to MicroPython.
Damien
Site Admin
Posts: 497
Joined: Mon Dec 09, 2013 5:02 pm

Multi-threading support, sponsored by Pycom

Postby Damien » Thu May 05, 2016 12:03 pm

Hi everyone,

Pycom, the people behind the WiPy and LoPy boards, are very generously providing financial support to get multi-threading implemented in MicroPython. This will be a really fantastic feature to have!

Right now this is work in progress. The development is happening on the "threading" branch of the main repository, found here: https://github.com/micropython/micropyt ... /threading . Once it is working smoothly it will be merged into the master branch. [UPDATE: threading branch was merged into master]

The plan is to implement the _thread module, which provides the fundamental functionality for multi-threading: starting new threads and creating mutex objects. For example you will be able to do the following:

Code:

import _thread

def thread_entry(arg):
    print('thread start', arg)

for i in range(4):
    _thread.start_new_thread(thread_entry, (i,))


At the time of writing this post the following is implemented:
  • a new configuration option: MICROPY_PY_THREAD
  • generic _thread module in py/ core (see py/modthread.c, py/mpthread.h)
  • a thread safe memory manager and garbage collector (see py/gc.c, especially the GC_ENTER and GC_EXIT macros)
  • thread safe NLR handlers (exception handling) for x86 and x86-64
  • unix implementation of necessary thread hook functions using pthreads (see unix/mpthreadport.c)
  • a test suite (see tests/thread/)

All tests pass on the unix port (x86 and x86-64 CPUs only). You can try them out by doing the following in the root directory of the main repository:

Code:

git checkout threading
cd unix
make
cd ../tests
./run-tests -d thread


The current implementation does not use a GIL (global interpreter lock). So if you have a 4-core machine then you can have all 4 cores running a Python thread! The threads can even share constant data like ints, floats, tuples and strings. It works without a GIL because MicroPython, unlike CPython, does not use reference counting.

That said, in its current form the unix implementation is not safe to use: there are many operations that will crash the interpreter. For example, modifying a list that is shared across threads will crash it (multiple threads can read a shared list without problems). You can protect against such crashes by using a mutex/lock object (which you probably want to do anyway). For examples that do work, look at the thread tests.
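As a minimal sketch of that pattern, using the `_thread` API shown above (with `allocate_lock` providing the mutex, as in CPython's `_thread` module). The counter-based "join" at the end is a workaround of my own, since `_thread` has no join primitive:

```python
import _thread
import time

lock = _thread.allocate_lock()  # mutex guarding `shared` and `done`
shared = []
done = 0

def worker(tid):
    global done
    for _ in range(100):
        lock.acquire()
        shared.append(tid)      # safe: only one thread mutates the list at a time
        lock.release()
    lock.acquire()
    done += 1
    lock.release()

for tid in range(4):
    _thread.start_new_thread(worker, (tid,))

while True:                     # crude join: wait for all workers to finish
    lock.acquire()
    finished = done == 4
    lock.release()
    if finished:
        break
    time.sleep(0.01)

print(len(shared))              # 400
```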

Right now it's not clear whether the VM can remain GIL free. It can in principle, but it will require a lot of work to make everything safe (eg list, dict, set modifications). Certainly though it would be very interesting if MicroPython can have threading without a GIL.

The medium term goal is to apply a simple GIL to make everything safe, and then get threading working on the WiPy. Also pyboard will get threading soon enough. A GIL free VM/runtime may follow in the future. Note that having a GIL on bare-metal ports like WiPy and pyboard doesn't really make a difference (compared with no GIL) because there is only 1 CPU core to make use of.

Regards,
Damien.

mianos
Posts: 84
Joined: Sat Aug 22, 2015 6:42 am

Re: Multi-threading support, sponsored by Pycom

Postby mianos » Thu May 05, 2016 9:58 pm

It would be an interesting outcome if entity-level locking were added to the MicroPython data structures. MicroPython could well become a choice over CPython, even under Linux, for situations where real threading is needed. Now that ordinary CPUs have more and more cores, and removing CPython's GIL has proven a near-impossible task, this is a really exciting future.

stijn
Posts: 111
Joined: Thu Apr 24, 2014 9:13 am

Re: Multi-threading support, sponsored by Pycom

Postby stijn » Fri May 06, 2016 7:01 am

Sounds great!
So in its current form, you need to protect variables shared amongst threads with mutexes and then all is fine - basically just like what you need to do in C? I'd say that sure has its benefits (mainly not using mutexes unless you have to, and it's familiar to a lot of people), so if possible an option to switch between this and a GIL would be my preference.
Once I have some time in the next few weeks I'll add an mpthreadport for windows. (In theory we could use C++'s std::thread and have all PC ports covered in one go, but it might blow up the binary size, needs build modifications, and as such probably takes more time getting things done than just doing it per-port.)

pythoncoder
Posts: 1262
Joined: Fri Jul 18, 2014 8:01 am

Re: Multi-threading support, sponsored by Pycom

Postby pythoncoder » Fri May 06, 2016 8:51 am

Impressive! Do you have a feel for timings on the Pyboard yet? If you start N continuously-running threads, at what rate, and for how long, will thread x get CPU time? Doubtless timings will be code-dependent, but some sort of finger in the air figure would be good.
Peter Hinch

Damien
Site Admin
Posts: 497
Joined: Mon Dec 09, 2013 5:02 pm

Re: Multi-threading support, sponsored by Pycom

Postby Damien » Fri May 06, 2016 9:37 am

It would be an interesting outcome if entity-level locking were added to the MicroPython data structures.


That is indeed a path I would like to investigate, but it will take some time to research and develop.

So in its current form, you need to protect variables shared amongst threads with mutexes and then all is fine - basically just like what you need to do in C? I'd say that sure has its benefits (mainly not using mutexes unless you have to, and it's familiar to a lot of people), so if possible an option to switch between this and a GIL would be my preference.


Yes, that's exactly right: treat it like a C program and you'll be fine. Keeping it raw like this is the most efficient way to make use of multiple cores running in parallel: you only create and use mutexes for the things that need sharing (which are generally very few data structures). If you do this at the Python level (eg "with lock: ...") then it's much more efficient than locking every mutable object at the C (VM/runtime) level, because most of those locks are unnecessary.
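A minimal sketch of the Python-level approach, assuming (as in CPython's `_thread`) that lock objects support the context-manager protocol, which Damien's "with lock: ..." snippet suggests:

```python
import _thread

lock = _thread.allocate_lock()
counter = 0

def bump():
    global counter
    # One explicit lock around the critical section, instead of the
    # runtime locking every mutable object behind the scenes.
    with lock:
        counter += 1

for _ in range(5):
    bump()
print(counter)  # 5
```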

Maybe we don't need a GIL at all and can just tell people to write thread-safe scripts using mutexes?

Do you have a feel for timings on the Pyboard yet?


No, nothing yet. I will need to write a custom scheduler for stmhal. I guess there could be some Python functions to tune the parameters of the scheduler.

But note that threading is memory hungry: you'll need at least 2k of stack per thread so you don't want to make heaps of them. Cooperative multi-tasking (asyncio) is the way to go for light-weight tasks.
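For comparison, the cooperative style looks like this (modern CPython asyncio syntax shown for illustration; a MicroPython asyncio port would follow the same shape):

```python
import asyncio

results = []

async def task(name, delay):
    # Each task gives up control at the await, so switches happen only
    # at well-defined points and no locks are needed.
    for i in range(3):
        await asyncio.sleep(delay)
        results.append((name, i))

async def main():
    # Run two tasks concurrently on a single thread.
    await asyncio.gather(task('a', 0.01), task('b', 0.05))

asyncio.run(main())
print(results)
```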

stijn
Posts: 111
Joined: Thu Apr 24, 2014 9:13 am

Re: Multi-threading support, sponsored by Pycom

Postby stijn » Fri May 06, 2016 10:37 am

Damien wrote:Yes, that's exactly right: treat it like a C program and you'll be fine. Keeping it raw like this is the most efficient way to make use of multiple cores running in parallel: you only create and use mutexes for the things that need sharing (which are generally very few data structures). If you do this at the Python level (eg "with lock: ...") then it's much more efficient than locking every mutable object at the C (VM/runtime) level, because most of those locks are unnecessary.

Maybe we don't need a GIL at all and can just tell people to write thread-safe scripts using mutexes?


That's the typical discussion of course. This way can be hard to do correctly but it's efficient. The other way is less efficient, less 'micro' maybe, but easier to program.
I'm all +1, but only because after years of creating subtle and hard-to-debug bugs due to data races and deadlocks, and then investigating and fixing them, I now (mostly) know how to write multithreaded programs properly. An optional GIL seems best, but I can't judge how much work it is, or whether it would be used enough to justify the effort.

mianos
Posts: 84
Joined: Sat Aug 22, 2015 6:42 am

Re: Multi-threading support, sponsored by Pycom

Postby mianos » Fri May 06, 2016 11:17 am

If you only have native queues managed by a mutex (like I did in my mp port), you can get a long way. (AmigaOS was basically co-operative threads with messages passed between threads in queues, with all data shared in a single address space. The scheduler simply switched between threads on an I/O block or a manual yield. No memory protection and extremely limited pre-emption.)
This makes a fun, light operating-system model if you don't mind threads, lots of message passing and the odd callback.
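The queue-plus-mutex model described here can be sketched with the `_thread` primitives from earlier in the thread. `Channel` is a hypothetical helper, not a MicroPython API, and the busy-wait in `get()` is a simplification (a real port would block instead):

```python
import _thread
import time

class Channel:
    """A minimal thread-safe message queue: a list guarded by a mutex."""

    def __init__(self):
        self._lock = _thread.allocate_lock()
        self._items = []

    def put(self, msg):
        self._lock.acquire()
        self._items.append(msg)
        self._lock.release()

    def get(self):
        # Poll until a message arrives, releasing the lock between checks.
        while True:
            self._lock.acquire()
            if self._items:
                msg = self._items.pop(0)
                self._lock.release()
                return msg
            self._lock.release()
            time.sleep(0.001)

ch = Channel()

def producer():
    for i in range(3):
        ch.put(i)

_thread.start_new_thread(producer, ())
received = [ch.get() for _ in range(3)]
print(received)   # [0, 1, 2]
```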

pythoncoder
Posts: 1262
Joined: Fri Jul 18, 2014 8:01 am

Re: Multi-threading support, sponsored by Pycom

Postby pythoncoder » Fri May 06, 2016 3:57 pm

Damien wrote:
...Cooperative multi-tasking (asyncio) is the way to go for light-weight tasks.
Indeed, and most embedded projects I've been involved with have used it, not least because it's much less prone to the evil bugs which crop up once a month when nobody's on site to investigate. Alas, asyncio isn't there yet: when I recently looked at it again, timing granularity was limited to one second. This has its uses, but is too slow for many applications with hardware interfaces.
Peter Hinch

Cornben
Posts: 1
Joined: Sat Jun 11, 2016 12:24 am

Re: Multi-threading support, sponsored by Pycom

Postby Cornben » Sat Jun 11, 2016 1:04 am

Hey, I found this thread via Google. I was trying to solve the problem of driving 2 servos independently as threads. Does MicroPython currently support any threading at all? If not, I'd love to help out.

Thanks

pythoncoder
Posts: 1262
Joined: Fri Jul 18, 2014 8:01 am

Re: Multi-threading support, sponsored by Pycom

Postby pythoncoder » Sat Jun 11, 2016 5:17 am

@Cornben The topic of this thread is pre-emptive scheduling, which is probably overkill for your application. Assuming you're talking about cooperative scheduling, asyncio is supported and is the official approach. An alternative is to use lightweight threads (coroutines using generators). Such a library (which overcomes some performance limitations in the current implementation of asyncio) is available here https://github.com/peterhinch/Micropython-scheduler.git.
Peter Hinch

