ulab, or what you will - numpy on bare metal

C programming, build, interpreter/VM.
Target audience: MicroPython Developers.
v923z
Posts: 168
Joined: Mon Dec 28, 2015 6:19 pm

ulab, or what you will - numpy on bare metal

Post by v923z » Wed Sep 25, 2019 4:12 pm

Hi all,

As advertised in the first post of viewtopic.php?f=3&t=6874, and also under https://micropython-usermod.readthedocs ... ds_14.html, I am releasing a C module with a numpy-like interface.
For a whopping 12 kB of extra flash space, you are going to get the following:

- compact, iterable and sliceable containers of numerical data in one and two dimensions (arrays and matrices). Arrays and matrices can be initialised by passing iterables, or iterables of iterables, into the constructor. The type of the array can be restricted by supplying the dtype keyword argument.

Code: Select all

import ulab
a = ulab.ndarray([1, 2, 3], dtype=ulab.uint8)
b = ulab.ndarray([[1, 2, 3], [4, 5, 6]], dtype=ulab.float)
Arrays and matrices are pretty-printed, and can be sliced and iterated over, even if the container is two-dimensional (in this case the iterator returns a new ndarray).

Code: Select all

import ulab

a = ulab.ndarray([1, 2, 3, 4, 5], dtype=ulab.uint8)

print(a[-1], a[0], a[0:2])

b = ulab.ndarray([[1, 2, 3], [4, 5, 6]], dtype=ulab.float)
print(b[0])
When it makes sense for a particular matrix operator, the axis keyword argument can be supplied. Thus, e.g.,

Code: Select all

import ulab

b = ulab.ndarray([[1, 2, 3], [4, 5, 6]], dtype=ulab.float)
ulab.max(b, axis=0)
will return a new ndarray, whose elements are the maxima along the vertical axis. The axis keyword is defined for the min/max, argmin/argmax, sum, mean, std, and roll functions.
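For comparison, the same axis=0 reduction can be sketched in plain Python (illustrative only; ulab computes this in C over the flat data buffer):

```python
# plain-Python sketch of a column-wise (axis=0) maximum;
# illustrative only -- not ulab code
def max_axis0(matrix):
    # zip(*matrix) yields the columns; take the maximum of each
    return [max(col) for col in zip(*matrix)]

b = [[1, 2, 3], [4, 5, 6]]
print(max_axis0(b))  # [4, 5, 6]
```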

- In addition, ndarrays support all the relevant unary and binary operators (e.g., len, ==, +, *, etc.), i.e., the following is valid:

Code: Select all

from ulab import ndarray, float

a = ndarray([1, 2, 3], dtype=float)
print(a + a*5.0)
Binary operators work element-wise. For this reason, you can easily get a weighted rolling average as follows:

Code: Select all

from ulab import ndarray, roll, mean

weight = ndarray([1, 2, 3, 4, 5]) # These are the weights; the last entry is the most dominant

samples = ndarray([0]*5) # initial array of samples

for i in range(5):
    # a new datum is inserted on the right hand side. This simply overwrites whatever was in the last slot
    samples[-1] = 5-i
    print(mean(samples*weight))
    # the data are then shifted by one position to the left
    roll(samples, 1)
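The same weighted rolling average can be written out in plain Python, which may help clarify what the loop computes (a sketch, not ulab code; the list rotation stands in for roll):

```python
# plain-Python sketch of the weighted rolling average above;
# the last weight is the most dominant, and rotating the list
# left by one position stands in for ulab's roll()
def weighted_rolling_mean(data, weight):
    samples = [0] * len(weight)
    means = []
    for x in data:
        samples[-1] = x  # newest datum goes into the last slot
        means.append(sum(s * w for s, w in zip(samples, weight)) / len(weight))
        samples = samples[1:] + samples[:1]  # shift left by one
    return means

print(weighted_rolling_mean([5, 4, 3, 2, 1], [1, 2, 3, 4, 5]))
# [5.0, 8.0, 9.2, 8.8, 7.0]
```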
- universal functions and vectorised computations on micropython iterables and numerical arrays/matrices. Amongst other things, this means that the following all work

Code: Select all

from ulab import exp, ndarray

exp(1.5) # single numerical value
exp([1.5, 2.5]) # a list
exp(range(5)) # a range
a = ndarray([1, 2, 3]) # a 1D ndarray
exp(a) 
a = ndarray([[1, 2, 3], [4, 5, 6]]) # a 2D ndarray
exp(a)
- basic linear algebra routines (matrix inversion, matrix reshaping, and transposition)

Code: Select all

from ulab import ndarray, inv

a = ndarray([[1, 2], [3, 4]])
b = inv(a)
b.transpose()
b.reshape((1, 4))
- polynomial fits to numerical data

Code: Select all

import ulab

x = ulab.ndarray([-3, -2, -1, 0, 1, 2, 3])
y = ulab.ndarray([10, 5, 1, 0, 1, 4.2, 9.1])

p = ulab.polyfit(x, y, 2)
ulab.polyval(p, x)
- fast Fourier transforms of one-dimensional arrays (I don't know whether 2D transforms are worth the trouble): since one is often interested only in the power spectrum of a signal, beyond the actual FFT this sub-module also implements a function called spectrum, which returns only the absolute value of the transformed signal. The FFT routine calculates the transform in place, so there should be very little RAM overhead.

Code: Select all

import ulab

a = ulab.ndarray([0, 1, 2, 3, 0, 1, 2, 3])

re, im = ulab.fft(a) # this returns two new ndarrays
print('real part: ', re)
print('imag part: ', im)

ulab.spectrum(a) # this overwrites the input array
print(ulab.log10(a)) # the spectrum on a log scale (scale by 20 for dB)
@pythoncoder would surely like to know if an FFT costs a king's ransom. No, it doesn't. In fact, a 1024-point float transform can be gotten in less than 2 ms on the pyboard.

Code: Select all

>>> import ulab, utime

>>> x = ulab.linspace(0, 100, 1024)
>>> y = ulab.sin(x)

>>> t = utime.ticks_us()
>>> a, b = ulab.fft(y)
>>> print(utime.ticks_diff(utime.ticks_us(), t))
1948
A couple of very general remarks:

1. ndarrays are built on top of the micropython binary arrays, thus some of the binary facilities are used. However, in the long run, ndarrays could be detached from binary arrays. I bring up this point, because the binary array utilities are actually not exposed in the C code, hence, I had to copy parts verbatim. If I might ask a favour of the maintainers of micropython, then this would be it: think about what could/should be exposed at the C level. In other words, which functions could find applications in external modules. array_new is definitely such a function.

2. upcasting:

Since ndarrays can be of different types, one has to decide what should happen if two dissimilar arrays are added, multiplied, etc. The upcasting rules of numpy are not entirely consistent; I chose my own, but they might not be perfect.
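To illustrate the kind of rule set involved, a simplistic "highest-priority type wins" scheme could be sketched as follows. These are emphatically not ulab's actual rules; a real scheme also has to handle sign/width interactions (e.g., numpy upcasts uint8 + int8 to int16):

```python
# illustrative upcasting sketch -- NOT ulab's actual rules:
# the result type is simply the highest-priority type among
# the two operands
PRIORITY = ['float', 'int16', 'uint16', 'int8', 'uint8']

def upcast(t1, t2):
    for t in PRIORITY:
        if t in (t1, t2):
            return t

print(upcast('uint8', 'int16'))  # int16
print(upcast('int8', 'float'))   # float
```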

3. slicing and indexing:

While a single slice will work as expected,

Code: Select all

from ulab import ndarray, uint8

a = ndarray([range(10), range(10), range(10)], dtype=uint8)

a[0, 1]
won't be valid. This has to do with the fact that the slice object in micropython does not support the comma. I think a fix for this might be difficult, and would probably have to happen in micropython itself. I would be happy to get feedback on this from more knowledgeable people.

At present, not all possible combinations of slices are defined, e.g., backwards slices won't work (though, you can still retrieve the very last element of an array, see the rolling average example above).

4. parsing of arguments:

A huge chunk of the code is really boring stuff, namely, parsing of the input arguments and building a decision tree based upon whether the input is a standard python iterable or an ndarray. Now, the problem is not the iterator itself, because ndarrays are also iterable (though iterating over the C array is faster). But depending on the exact C type of the input argument, different intermediate containers have to be defined. So, e.g., when finding the maximum of an array, one must define a temporary variable that holds the current maximum, and the C type of that temporary depends on the input's dtype. This applies to all operations. If anyone knows a sensible universal (i.e., independent of the input data type) solution to this problem, I would like to hear it.
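In Python the problem is invisible, because every value is a generic object; a sketch of the reduction shows the loop that the C code has to replicate once per dtype:

```python
# in Python one generic loop serves every input type, because
# everything is a boxed object; in C, the type of the temporary
# `best` has to match the dtype of the container, so the same
# loop must be written out once per supported dtype
def generic_max(obj):
    it = iter(obj)
    best = next(it)  # running maximum; its C type depends on the dtype
    for x in it:
        if x > best:
            best = x
    return best

print(generic_max([3, 1, 4, 1, 5]))  # 5
print(generic_max(range(10)))        # 9
```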

5. additional functions

I would also like to know which other numpy functions you would find useful. At the moment, only 12 kB of extra space is used, so there is still a lot of room for additional tools. My vision was to separate functions thematically into sub-modules that one can easily exclude from the compilation, but I don't see flash becoming a problem anytime soon.


For the impatient: you can download the firmware for the pyboard v.1.1 from https://github.com/v923z/micropython-ul ... rmware.dfu . Otherwise, the source code can be found under https://github.com/v923z/micropython-ul ... aster/code .

I should like to hope that you find something useful in this module, and I would be glad to have feedback. If it pertains to general discussion, post here; if it is related to a very specific implementation detail, then github is probably a better place.

You will find a lot of examples under https://github.com/v923z/micropython-ul ... ulab.ipynb. Incidentally, the notebook also contains the heavily commented source code itself.

If you don't fancy the jupyter notebook, you can read its content under https://github.com/v923z/micropython-ul ... e/ulab.rst. I would like to stress that the notebook should be treated as a technical document, or developer manual, and not as a user guide. It contains mostly my verbose comments on various implementation details and difficulties. As soon as I have a bit more time, and all the dust settles, I will write up a proper user manual, though the conventions follow those of numpy.

As stated on github, the MIT licence applies.

Cheers,

Zoltán
Last edited by v923z on Fri Sep 27, 2019 6:55 am, edited 3 times in total.

marfis
Posts: 215
Joined: Fri Oct 31, 2014 10:29 am
Location: Zurich / Switzerland

Re: ulab, or what you will - numpy on bare metal

Post by marfis » Wed Sep 25, 2019 7:12 pm

awesome work, thanks a lot for sharing this!

OutoftheBOTS_
Posts: 847
Joined: Mon Nov 20, 2017 10:18 am

Re: ulab, or what you will - numpy on bare metal

Post by OutoftheBOTS_ » Wed Sep 25, 2019 8:32 pm

This is great functionality that MP has been missing compared to cpython.

jimmo
Posts: 2754
Joined: Tue Aug 08, 2017 1:57 am
Location: Sydney, Australia

Re: ulab, or what you will - numpy on bare metal

Post by jimmo » Thu Sep 26, 2019 3:53 am

This is amazing!! Thank you!!!

I saw your hints at this earlier and have been excitedly waiting for an announcement like this :)

I look forward to being able to use this in conjunction with the native loadable shared library feature (as discussed in that thread viewtopic.php?f=3&t=6884), i.e. ulab.mpy (Especially down the track, when that feature doesn't require copying the executable data into RAM, but on devices like ESP32 with external SPIRAM it would be fine today).

Damien
Site Admin
Posts: 647
Joined: Mon Dec 09, 2013 5:02 pm

Re: ulab, or what you will - numpy on bare metal

Post by Damien » Thu Sep 26, 2019 4:22 am

Wow, that looks really great, thanks for sharing it!

IMO it would be good to match numpy naming and semantics as much as possible, even for things that seem a little awkward. People who already use numpy already know about the subtleties like upcasting and so it'd make sense to just copy the numpy behaviour so people don't need to re-learn.

Re slicing with a comma: I'm pretty sure this is supported in MicroPython. When you do a[1, 2] the index object is just a tuple and you should be able to access it, test that it's a tuple, and extract its entries.
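Damien's point can be checked with a plain Python class (this works in CPython, and should behave the same in MicroPython):

```python
# demonstrates that a[1, 2] passes a tuple to __getitem__,
# which is what an ndarray implementation would have to unpack
class Demo:
    def __getitem__(self, index):
        if isinstance(index, tuple):
            return 'tuple: {}'.format(index)
        return 'single: {}'.format(index)

a = Demo()
print(a[1, 2])  # tuple: (1, 2)
print(a[1])     # single: 1
```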

v923z
Posts: 168
Joined: Mon Dec 28, 2015 6:19 pm

Re: ulab, or what you will - numpy on bare metal

Post by v923z » Thu Sep 26, 2019 6:27 am

Hi all,

First, thanks a lot for the encouraging words.

Second, yesterday I uploaded the announcement in such a hurry that I managed to hide a couple of mistakes in the code examples. You have probably figured this out already, but in any case, uint8, float, etc., and functions should be preceded by ulab. I will fix this in the OP.
jimmo wrote:
Thu Sep 26, 2019 3:53 am
I look forward to being able to use this in conjunction with the native loadable shared library feature (as discussed in that thread viewtopic.php?f=3&t=6884), i.e. ulab.mpy (Especially down the track, when that feature doesn't require copying the executable data into RAM, but on devices like ESP32 with external SPIRAM it would be fine today).
At one point, we should discuss what the interface should look like, but perhaps it is too early. Please, raise this issue once more, when the time comes, and if I happen to miss it.
Damien wrote:
Thu Sep 26, 2019 4:22 am
IMO it would be good to match numpy naming and semantics as much as possible, even for things that seem a little awkward. People who already use numpy already know about the subtleties like upcasting and so it'd make sense to just copy the numpy behaviour so people don't need to re-learn.
I certainly agree. Here are a couple of points to consider.

1. As discussed in https://github.com/v923z/micropython-ul ... s-in-numpy in numpy, you can return properties by saying

Code: Select all

a = array([[1, 2, 3], [4, 5, 6]])
a.dtype
i.e., as a proper property of the object. I haven't yet found how one is supposed to do that in micropython. I would appreciate it if you could give me a pointer as to where I have to look.

2. As mentioned in https://github.com/v923z/micropython-ul ... #upcasting, I don't think that numpy's upcasting is completely consistent. Also, numpy supports many more types, e.g., int32, double, etc. These would probably not have much use on a microcontroller, and somewhere you have to truncate the sequence of types, at which point "compatibility" with numpy will be broken. I appreciate that this is a difficult issue, and this is why I would like to have discussions like this. Also note that by adding more types, operators become more cumbersome to implement. E.g., if you want to add two ndarrays, each with 5 possible types, you end up with 25 combinations that you have to handle in C. If you add two more types, you are already at 49.

Now, micropython solves this issue by pushing everything into an 8-byte container, but that has an overhead that I would like to avoid, if possible. So, if someone has a feasible idea as to how to work with an arbitrary number of C types, and still keep the code reasonable, I would be more than happy to implement it.
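The quadratic growth is easy to see with a quick sketch of the combinatorics (this is just counting, not ulab code):

```python
# each (lhs, rhs) dtype pair needs its own branch in the C
# implementation of a binary operator, so the branch count
# grows quadratically with the number of supported dtypes
from itertools import product

dtypes = ['uint8', 'int8', 'uint16', 'int16', 'float']
print(len(list(product(dtypes, repeat=2))))  # 25 combinations for 5 types

dtypes += ['int32', 'double']
print(len(list(product(dtypes, repeat=2))))  # 49 combinations for 7 types
```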

One last comment on this subject: supporting complex numbers would open up a nasty can of worms, but as soon as you have FFT, you can't escape that. This is why I actually return the real and imaginary parts separately, while numpy does that in a single complex array. I am not sure I am ready to go down that route.

3. I floated the option of passing/accepting raw data in an ndarray in https://github.com/v923z/micropython-ul ... -raw-bytes. What I had in mind there was direct access to measurement data. E.g., you could calculate the FFT of timed ADC measurements with absolutely no RAM overhead, if you can expose the bare binary array of an ndarray. This would definitely be a break from numpy convention, but I think that on a microcontroller this makes a lot of sense.
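The zero-copy idea can be sketched in plain Python with the array module and the buffer protocol: a memoryview exposes the underlying storage without copying, which is the same mechanism an ndarray's data buffer could use (the ADC sample values below are made up for illustration):

```python
# sketch of zero-copy access via the buffer protocol:
# a memoryview reads and writes the array's storage directly
import array

adc_samples = array.array('h', [0, 512, 1023, 512])  # int16 data (made up)
view = memoryview(adc_samples)

view[0] = 100          # writes through to the underlying array
print(adc_samples[0])  # 100
print(view.nbytes)     # 8 bytes: four 2-byte integers, no copy made
```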
Damien wrote:
Thu Sep 26, 2019 4:22 am
Re slicing with a comma: I'm pretty sure this is supported in MicroPython. When you do a[1, 2] the index object is just a tuple and you should be able to access it, test that it's a tuple, and extract its entries.
OK, thanks! I will look into that.

Cheers,

Zoltán

stijn
Posts: 735
Joined: Thu Apr 24, 2014 9:13 am

Re: ulab, or what you will - numpy on bare metal

Post by stijn » Thu Sep 26, 2019 8:38 am

This is a great start!
Agree with Damien: in order not to hinder adoption it should match numpy as closely as possible. In fact, if you see it as feasible, you could add the tests in the same way the MicroPython tests themselves are written (and hence also using its run-tests): for supported features, write valid numpy code, run with CPython, compare output with what ulab produces. This would also be much easier than writing tests from scratch, because then you'd have to come up with the computation results yourself.

Wrt point 4 about argument passing, can you point to the lines of code which you mean? It sounds like something for which you can write a generic function and/or macro, but I'm not sure.

Wrt the implementation reusing MicroPython internals: typically most of this isn't exposed as an API because the larger the API, the more chances of breaking changes. Which is a risk you are taking now, but I assume you considered that. Now there's a way to avoid some duplication: non-public methods are guarded with STATIC so if you override the definition in mpconfigport.h you can actually access those functions by declaring them extern in your code, instead of copying. That does mean your code won't work with standard builds though, not sure if that is really a problem. Another possible problem is that everything then becomes non-static, which is overkill but could eventually be remedied by having e.g. ARRAY_STATIC instead of STATIC just for the array type.

Also, just an unverified idea, but MicroPython's array_new can be accessed indirectly through mp_type_array.make_new, I think?

Another idea since you talk about eventually detaching the array type: have you considered offloading actual calculations to 'standard' libraries like Lapack? I know this might make the build process very complicated and potentially blow up code size, but it would mean the calculations are done using de facto standards.

pythoncoder
Posts: 5956
Joined: Fri Jul 18, 2014 8:01 am
Location: UK

Re: ulab, or what you will - numpy on bare metal

Post by pythoncoder » Thu Sep 26, 2019 9:01 am

v923z wrote:
Wed Sep 25, 2019 4:12 pm
...
@pythoncoder would surely like to know if an FFT costs a king's ransom. No, it doesn't. In fact, a 1024-point float transform can be gotten in less than 2 ms on the pyboard...
I'm astounded. When I find time I'll study your code: are you using a better algorithm than my Cooley-Tukey code?

I'm unfamiliar with numpy so I can't comment on compatibility issues but the functionality is very impressive. :D
Peter Hinch
Index to my micropython libraries.

v923z
Posts: 168
Joined: Mon Dec 28, 2015 6:19 pm

Re: ulab, or what you will - numpy on bare metal

Post by v923z » Thu Sep 26, 2019 9:15 am

pythoncoder wrote:
Thu Sep 26, 2019 9:01 am
v923z wrote:
Wed Sep 25, 2019 4:12 pm
...
@pythoncoder would surely like to know if an FFT costs a king's ransom. No, it doesn't. In fact, a 1024-point float transform can be gotten in less than 2 ms on the pyboard...
I'm astounded. When I find time I'll study your code: are you using a better algorithm than my Cooley-Tukey code?
Peter, it is pure Cooley-Tukey, but in C. Here it is:

https://github.com/v923z/micropython-ul ... /fft.c#L22
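For the curious, the radix-2 Cooley-Tukey recursion can be sketched in a few lines of Python. (The C version in fft.c works in place and iteratively; this is only the algorithmic skeleton, not a translation of that code.)

```python
# recursive radix-2 Cooley-Tukey FFT sketch; len(x) must be a
# power of two -- the algorithmic skeleton only, the C version
# in ulab works in place
import cmath

def fft_recursive(x):
    n = len(x)
    if n == 1:
        return [complex(x[0])]
    even = fft_recursive(x[0::2])  # transform of even-indexed samples
    odd = fft_recursive(x[1::2])   # transform of odd-indexed samples
    out = [0j] * n
    for k in range(n // 2):
        # twiddle factor combines the two half-size transforms
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out

print(fft_recursive([1, 1, 1, 1]))  # [(4+0j), 0j, 0j, 0j] up to rounding
```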

Cheers,
Zoltán

v923z
Posts: 168
Joined: Mon Dec 28, 2015 6:19 pm

Re: ulab, or what you will - numpy on bare metal

Post by v923z » Thu Sep 26, 2019 9:29 am

stijn wrote:
Thu Sep 26, 2019 8:38 am
Agree with Damien: in order not to hinder adoption it should match numpy as closely as possible. In fact, if you see it as feasible, you could add the tests in the same way the MicroPython tests themselves are written (and hence also using its run-tests): for supported features, write valid numpy code, run with CPython, compare output with what ulab produces. This would also be much easier than writing tests from scratch, because then you'd have to come up with the computation results yourself.
The question of testing is a good point, thanks for bringing that up. As for compatibility with numpy, I took the numpy manual, and went by that. There might still be rough edges, but those are not intentional, and should be ironed out in the long run. Of course, there are a couple of extra functions that you won't find in numpy, but I think there are good arguments in their favour.
stijn wrote:
Thu Sep 26, 2019 8:38 am
Wrt point 4 about argument passing, can you point to the lines of code which you mean? It sounds like something for which you can write a generic function and/or macro, but I'm not sure.
Here is an ugly example: https://github.com/v923z/micropython-ul ... ray.c#L402
stijn wrote:
Thu Sep 26, 2019 8:38 am
Wrt the implementation reusing MicroPython internals: typically most of this isn't exposed as an API because the larger the API, the more chances of breaking changes. Which is a risk you are taking now, but I assume you considered that. Now there's a way to avoid some duplication: non-public methods are guarded with STATIC so if you override the definition in mpconfigport.h you can actually access those functions by declaring them extern in your code, instead of copying. That does mean your code won't work with standard builds though, not sure if that is really a problem. Another possible problem is that everything then becomes non-static, which is overkill but could eventually be remedied by having e.g. ARRAY_STATIC instead of STATIC just for the array type.
I understand your concern, in fact, a couple of months ago I brought up this question on this very forum. That time there didn't seem to be pressing enough reasons for exposing certain functions. I feel that the situation is different now.
stijn wrote:
Thu Sep 26, 2019 8:38 am
Also just an unverified idea, but MicroPython's array_new can be accessed indirectly through mp_type_array.make_new I think?
I will look into that, thanks!
stijn wrote:
Thu Sep 26, 2019 8:38 am
Another idea since you talk about eventually detaching the array type: have you considered offloading actual calculations to 'standard' libraries like Lapack? I know this might make the build process very complicated and potentially blow up code size, but it would mean the calculations are done using de facto standards.
While I see your point, I would like to keep ulab free from external code. First, as you mentioned, compilation will become difficult. Second, we would bring in dependencies that might cause trouble down the road. Third, licensing might be problematic.

As for getting rid of the binary array: I am more inclined to keep it now. My main reason is that as long as the underlying container of an ndarray is a binary array, you can pass ndarrays (with the help of an extra function) to anything that uses the buffer protocol. I think one should not underestimate the advantages of that.

Cheers,

Zoltán
