Run AI application on Pyboard

C programming, build, interpreter/VM.
Target audience: MicroPython Developers.
BOB63
Posts: 58
Joined: Sat Jul 25, 2015 8:24 pm
Location: Monza , Italy

Run AI application on Pyboard

Post by BOB63 » Thu Dec 10, 2020 9:41 pm

Hi,
I've recently started to play with "Edge Impulse", a TinyML-as-a-service platform that lets you define and deploy AI applications on edge devices.
The platform supports boards such as the Arduino Nano 33 BLE Sense, the OpenMV camera (which is built around MicroPython), and some others.
It would be really interesting to run AI applications on the Pyboards as well, considering they are boards with higher performance than an Arduino Nano.

I know this is probably a question to address to the Edge Impulse team, but does the MicroPython team have any future plan to run TensorFlow / Keras models on the Pyboard as well?
Thanks. Roberto

michael.o
Posts: 15
Joined: Sun Oct 25, 2020 12:38 am

Re: Run AI application on Pyboard

Post by michael.o » Wed Feb 03, 2021 5:50 am

I'm working on a custom MicroPython firmware that can run the TensorFlow examples and facilitate running completely custom TensorFlow Lite for Microcontrollers models.

https://github.com/mocleiri/tensorflow- ... n-examples

I've only just started on this; the firmware can be built and runs the hello-world example:
https://github.com/mocleiri/tensorflow- ... ello-world
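For context, the TFLM hello-world example runs a tiny model trained to approximate sin(x); the inference flow is: copy a value into the input tensor, call invoke, read the output tensor. A rough pure-Python sketch of that cycle (the `FakeInterpreter` class here is a stand-in I made up to show the flow, not the real interpreter API):

```python
import math

class FakeInterpreter:
    """Stand-in for a TFLite Micro interpreter: one input, one output tensor."""
    def __init__(self, model_fn):
        self.model_fn = model_fn   # the "model" is just a function here
        self.input = [0.0]
        self.output = [0.0]

    def invoke(self):
        # A real interpreter would run the model graph; we call the stand-in.
        self.output[0] = self.model_fn(self.input[0])

# The hello-world model approximates sin(x) over one period.
interp = FakeInterpreter(math.sin)

for i in range(4):
    x = i * 2 * math.pi / 4
    interp.input[0] = x          # set input tensor
    interp.invoke()              # run inference
    print("x=%.2f y=%.2f" % (x, interp.output[0]))
```

In the real example the firmware does the same loop against the compiled-in sine model and uses the output to drive an LED.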

There are lots of areas to improve on.

I only have esp32 boards, so I don't know if it will work, or what needs to be changed to get it working with Pyboards.

I had to make some changes to the esp32 Makefile to let me build the TensorFlow library separately and then link it in, as well as some changes related to building C++ files (I need to upstream these and/or find out the correct way I should have implemented things).

At the moment it needs 1.7 MB for the firmware on esp32.

bootloader 21440
partitions 3072
application 1639264
total 1704800
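Worth a quick sanity check before trying: the PYBv1.x Pyboards use an STM32F405, which has 1 MiB of internal flash (a fact from the STM32 datasheet, not from this build output), and part of that is normally reserved for the internal filesystem. Comparing just the application image against that limit:

```python
# Firmware component sizes from the esp32 build output above (bytes)
sizes = {"bootloader": 21440, "partitions": 3072, "application": 1639264}

PYBOARD_FLASH = 1024 * 1024  # STM32F405 on PYBv1.x: 1 MiB internal flash

# The bootloader and partition table are esp32-specific; the application
# image alone is the relevant number for an STM32 port.
needed = sizes["application"]
print(needed, "bytes needed;", "fits" if needed <= PYBOARD_FLASH else "does not fit")
# prints: 1639264 bytes needed; does not fit
```

So as it stands the firmware would need to shrink substantially (see the ops discussion below) before it could fit a stock Pyboard.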

If you think it would fit on the Pyboard, perhaps you can try to compile it and file an issue with what you find?

The size is due to including all of the tensor ops on the firmware side. I want to find a way to externalize the ops into native Python modules so that the firmware can shrink; the implementer would then choose which ops to include and put them on the file system.
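To illustrate the idea (all names here are made up for illustration, not the actual firmware API): the firmware would ship with an empty op resolver, and the user's code on the file system would register only the ops their model actually needs:

```python
# Hypothetical sketch of op externalization; none of these names exist
# in the firmware today.

class OpResolver:
    """Maps op names to implementations; starts empty, so the firmware
    carries no op code itself."""
    def __init__(self):
        self.ops = {}

    def register(self, name, fn):
        self.ops[name] = fn

    def lookup(self, name):
        if name not in self.ops:
            raise KeyError("op %r not registered" % name)
        return self.ops[name]

# The implementer includes only the ops their model needs, e.g. loaded
# from separate native .mpy modules on the file system:
def fully_connected(x, w, b):
    # Toy dense layer: dot product plus bias
    return sum(xi * wi for xi, wi in zip(x, w)) + b

resolver = OpResolver()
resolver.register("FULLY_CONNECTED", fully_connected)

op = resolver.lookup("FULLY_CONNECTED")
print(op([1.0, 2.0], [0.5, 0.5], 0.1))
```

This mirrors how TFLite Micro's op resolver works in C++, except the registration would happen from Python at runtime instead of being compiled into the firmware image.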
