ulab, or what you will - numpy on bare metal

v923z
Posts: 86
Joined: Mon Dec 28, 2015 6:19 pm

Post by v923z » Sat Oct 05, 2019 12:05 pm

pythoncoder wrote:
Sat Oct 05, 2019 11:11 am
Would it be possible to trap the exception; if it occurs use the integer to instantiate an object for which you define __mul__ etc?
But I am afraid all of this must happen at the micropython level, and not in ulab. Or have I misunderstood you?

pythoncoder
Posts: 3607
Joined: Fri Jul 18, 2014 8:01 am
Location: UK
Contact:

Post by pythoncoder » Sat Oct 05, 2019 12:39 pm

I was wondering if this could be done by ulab. I wonder how numpy does it?
Peter Hinch


Post by v923z » Sat Oct 05, 2019 12:59 pm

pythoncoder wrote:
Sat Oct 05, 2019 12:39 pm
I was wondering if this could be done by ulab. I wonder how numpy does it?
The problem is that binary operators are evaluated in runtime.c, which calls the relevant implementation for list, tuple, etc. instances. The example that I showed,

Code:

[0]*5
works, because the runtime.c function properly recognises the left-hand side as a list, and then calls the multiplication operator from objlist.c. Now, if you have

Code:

5*[0]
instead, then runtime.c calls the operator from objint.c; objint.c then inspects the right-hand side, classifies it as a list, and calls the multiplication operator of objlist.c with swapped arguments. Here is the relevant function: https://github.com/micropython/micropyt ... int.c#L370 .

And this is exactly the snag: the inspection of the operands is hard-wired, and I don't see at the moment how one can bypass it. I haven't yet found out how numpy does this, but I assume that there must be a tool for such cases in the Python implementation itself.
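In standard CPython, the tool in question is the reflected-operator protocol: when int.__mul__ does not recognise the right-hand operand, it returns NotImplemented, and the interpreter then tries the right operand's __rmul__. A minimal sketch (the Vector class below is only an illustration; numpy's ndarray implements __rmul__ and friends in the same spirit):

```python
# How CPython resolves 5 * obj for a user-defined type: int.__mul__
# returns NotImplemented for an unknown operand, so the interpreter
# falls back to the reflected method obj.__rmul__.

class Vector:
    def __init__(self, data):
        self.data = list(data)

    def __mul__(self, scalar):      # Vector * 5
        return Vector(x * scalar for x in self.data)

    __rmul__ = __mul__              # 5 * Vector is routed here

v = Vector([1, 2, 3])
print((v * 5).data)   # [5, 10, 15]
print((5 * v).data)   # [5, 10, 15]
```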


Post by pythoncoder » Sat Oct 05, 2019 5:55 pm

Ah, I see. I don't know the answer to that one.

To settle the confusion over DFT benchmarks I've pushed an update to my repo which includes dftbench.py. This acquires data from an ADC and times a forward transform, timing being from when data acquisition ends to when conversion is complete. On a Pyboard 1.x (which runs at 168MHz) the time is 12.9ms. On a Pyboard D SF2W running at 214MHz it is 3.6ms, also on the SF6W.

On the Pyboard 1.x over 12ms of that time is spent in the assembler DFT routine so the small amount of other code (Python and asm) is not having a significant impact. Note the twiddle factors are computed once only in the constructor so that isn't a factor.

This improvement is remarkable and, given that the time consuming code is in assembler, must surely be down to architectural improvements in the chip.
Peter Hinch


Post by v923z » Sat Oct 19, 2019 2:00 pm

pythoncoder wrote:
Sat Oct 05, 2019 5:55 pm
To settle the confusion over DFT benchmarks I've pushed an update to my repo which includes dftbench.py. This acquires data from an ADC and times a forward transform, timing being from when data acquisition ends to when conversion is complete. On a Pyboard 1.x (which runs at 168MHz) the time is 12.9ms. On a Pyboard D SF2W running at 214MHz it is 3.6ms, also on the SF6W.

On the Pyboard 1.x over 12ms of that time is spent in the assembler DFT routine so the small amount of other code (Python and asm) is not having a significant impact. Note the twiddle factors are computed once only in the constructor so that isn't a factor.

This improvement is remarkable and, given that the time consuming code is in assembler, must surely be down to architectural improvements in the chip.
Peter, thanks for summarising your observations! I think this gives a useful overview of the various approaches.
Last edited by v923z on Sat Oct 19, 2019 6:13 pm, edited 1 time in total.


Post by v923z » Sat Oct 19, 2019 2:33 pm

Hi all,

In the past two weeks, I have implemented a number of features and made ulab's behaviour more consistent with that of numpy. This adds a bit of overhead to the size of the firmware; however, I wouldn't consider the difference significant, especially as some of it comes from new functionality. A large chunk of the work involved behind-the-scenes changes, but I find, e.g., the slicing improvements significant. @Damien, thanks for the pointers on that subject! You can now do really wicked things, e.g., print those elements of an array that are smaller than 5:

Code:

import ulab as np

a = np.array(range(9), dtype=np.float)
print("a:\t", a)
print("a < 5:\t", a[a < 5])
and you can take this even further, as in

Code:

import ulab as np

a = np.array(range(9), dtype=np.uint8)
b = np.array([4, 4, 4, 3, 3, 3, 13, 13, 13], dtype=np.uint8)
print("a:\t", a)
print("\na**2:\t", a*a)
print("\nb:\t", b)
print("\n100*sin(b):\t", np.sin(b)*100.0)
print("\na[a*a > np.sin(b)*100.0]:\t", a[a*a > np.sin(b)*100.0])
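For readers new to boolean-mask indexing, the last line above is conceptually equivalent to the following pure-Python filter. This is a mental model only; ulab performs the comparison and selection in C over typed arrays:

```python
import math

# Plain lists stand in for the ulab arrays of the example above.
a = list(range(9))
b = [4, 4, 4, 3, 3, 3, 13, 13, 13]

# a[a*a > np.sin(b)*100.0]: build an elementwise boolean mask,
# then keep the elements of a where the mask is true.
mask = [x * x > math.sin(y) * 100.0 for x, y in zip(a, b)]
selected = [x for x, m in zip(a, mask) if m]
print(selected)   # [0, 1, 2, 4, 5, 7, 8]
```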
I have, in addition, managed to add a couple of new functions: e.g., you can now flip arrays, calculate the determinant of matrices, and get the eigenvalues. (Sorry, I am a physicist; I couldn't resist the temptation.) But more seriously, with the help of eigenvectors, one can implement quite sophisticated feedback routines for balancing robots and whatnot.

But perhaps the most significant improvement is a half-decent user manual, which you can find here: https://micropython-ulab.readthedocs.io/. The manual contains tons of examples, benchmarks, comments on costs, and warnings wherever the behaviour differs from numpy's, as well as a short section on how ulab can be extended. I hope this will encourage people ;)

I would, however, like to raise an issue, namely the problem with binary operators: viewtopic.php?f=3&t=7005&start=20#p40044. In short,

Code:

import ulab as np
a = np.array([1, 2, 3])
1 + a
will fail, while

Code:

import ulab as np
a = np.array([1, 2, 3])
a+1
produces the expected result. I would like to propose, or at least float, the idea of the following change in micropython itself. The difficulty is in mp_binary_op, where the object types are hard-wired, so it is impossible to override the default behaviour. My suggestion would be to add a Boolean tag to the base type, own_binary_handler = true/false, or something similar, which could be set to true in external modules. Then, at the very beginning of mp_binary_op, this tag of the right-hand side would be inspected, and if it is true, handling would immediately be passed to the external handler, which can always be accessed through

Code:

type = mp_obj_get_type(rhs);
if (type->binary_op != NULL) {
    ...
}
As far as I see, such an addition would not break micropython, but I might very well be mistaken. @Damien, @jimmo, @pfalcon? Such an approach would also render the special handling of lists and tuples unnecessary, i.e.,

Code:

5*[0]
could immediately be resolved.

Another option could be to check the operand's type against the native types, and if it is not in that set, call the external handler immediately. This wouldn't even require defining a new tag, so it is perhaps easier to implement.
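The two dispatch strategies can be modelled in Python. All names below (NATIVE_TYPES, own_binary_handler, binary_op) are hypothetical illustrations of the suggestion, not MicroPython's actual C API:

```python
# Toy model of the proposed dispatch in mp_binary_op: if the right-hand
# operand declares its own handler (or, in the second variant, is simply
# not a native type), delegate to it immediately with swapped operands.

NATIVE_TYPES = (int, float, list, tuple)

class ExtArray:
    """Stand-in for an externally defined type such as ulab's ndarray."""
    own_binary_handler = True            # the proposed Boolean tag

    def __init__(self, data):
        self.data = list(data)

    def binary_op(self, op, other, swapped=False):
        # Addition is commutative, so swapped does not matter here.
        if op == "+":
            return ExtArray(x + other for x in self.data)
        raise NotImplementedError(op)

def mp_binary_op(op, lhs, rhs):
    # Proposed early check at the top of mp_binary_op:
    if getattr(type(rhs), "own_binary_handler", False):
        return rhs.binary_op(op, lhs, swapped=True)
    # Variant two: no tag, just a membership test against native types.
    if not isinstance(rhs, NATIVE_TYPES):
        return rhs.binary_op(op, lhs, swapped=True)
    raise TypeError  # fall through to the existing hard-wired dispatch

a = ExtArray([1, 2, 3])
print(mp_binary_op("+", 1, a).data)   # [2, 3, 4]
```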

I would like to stress that this issue is not ulab-specific: any other externally defined type will run into the same problem, which is why I think it should somehow be resolved. I am willing to take a stab at the implementation, but I would first like to know whether the ideas above are reasonable at all.

Cheers,

Zoltán
