Complex Numbers

General discussions and questions about development of code with MicroPython that is not hardware specific.
Target audience: MicroPython Users.
nande
Posts: 13
Joined: Fri Oct 10, 2014 6:11 pm
Location: Argentina
Contact:

Re: Complex Numbers

Post by nande » Thu Nov 06, 2014 7:48 am

Well, I'm not a pro on MicroPython (I just got my board a few days ago), but I do know something about Python's speed.
Just like in any other language (even C++), the more you do, the more CPU it takes.
Recursion really is slower, because the interpreter has to save the state of the current function each time it calls itself, and if you recurse too many times you run out of memory. So usually recursion doesn't save memory; instead it requires more, because every call adds some overhead on top of the data.
If you call several functions it's the same story.
If you have complex data it's the same story: it adds hidden complexity (the "there's no such thing as magic" principle).
Same thing with objects, simply because of the overhead of working with them (also, I've seen that Python, CPython at least, takes some time to create the so-called bound methods).
So in the long run, having a flat structure will always (in theory at least) be faster. The difference is that in other languages (like C) you don't actually have a choice.
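To make the recursion point concrete, here is a minimal sketch (plain CPython, nothing board specific; rsum and isum are just illustrative names, and MicroPython reports a RuntimeError rather than a RecursionError when it runs out of stack):

Code: Select all

import sys

# Recursive sum: every call adds a stack frame, so deep inputs hit the
# recursion limit (CPython) or can exhaust RAM on a small board.
def rsum(n):
    if n == 0:
        return 0
    return n + rsum(n - 1)

# Iterative sum: constant memory, no per-call overhead.
def isum(n):
    total = 0
    for k in range(n + 1):
        total += k
    return total

print(isum(100000))             # fine at any size
print(sys.getrecursionlimit())  # ~1000 by default in CPython
# print(rsum(100000))           # would blow the recursion limit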

I don't quite understand what you mean by not being able to do multidimensional arrays. You can do them, it's just not written the same way as in other languages.

Code: Select all

from math import sqrt

# A "multidimensional array" is just a nested list.
mda = [
    [0, 1, 2, 3],
    [0, 1, 2, 3]
]
print(mda[0][1])

# Complex values stored as [real, imag] pairs.
r = 0  # index of the real part
i = 1  # index of the imaginary part
a = array_of_complexes = [
    [0, 1],
    [0, -1]
]
m = sqrt(a[0][r]**2 + a[0][i]**2)  # magnitude of the first value
print(m)
I would try to avoid functions as much as I can to gain more speed; also, lambdas are functions.
I don't know how much of an improvement Damien George's (iirc) implementation has, but I bet an object is slower and more "heavy" than an array of simple types. If I had to bet, I would say that storing pairs of floats instead of a "complex" number will save time and space.
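For illustration, one way the flat-pairs idea could look, using an array.array of interleaved real/imag floats (the names are just made up, and whether it is actually faster would need measuring on the board):

Code: Select all

import array
from math import sqrt

# Interleaved real/imag pairs in a flat array of machine floats:
# element 2*k is the real part of sample k, element 2*k + 1 its imaginary part.
samples = array.array('f', [0.0, 1.0, 0.0, -1.0])

def mag(buf, k):
    # Magnitude of sample k, computed straight from the two stored floats.
    re = buf[2 * k]
    im = buf[2 * k + 1]
    return sqrt(re * re + im * im)

print(mag(samples, 0))  # 1.0
print(mag(samples, 1))  # 1.0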

Not quite surprised: http://screencloud.net/v/qsNs
Though the size of complex appears to be smaller, maybe because it's a built-in.
Storing the pairs does seem to take more space, but I wouldn't care that much.

Code: Select all

>>> import sys
>>> b = list(complex() for i in range(1000000))
>>> a = list((0.0,0.1) for i in range(1000000))
>>> sum(map(sys.getsizeof, b))
32000000
>>> sum(map(sys.getsizeof, a))
64000000
>>> sys.getsizeof(complex())
32
>>> sys.getsizeof(0.0)
24
>>> sum([sum(map(sys.getsizeof, i)) for i in a])
48000000
>>> sum(map(sys.getsizeof, [0.0 for i in range(2000000)]))
48000000
~namida de ashita ga mienai~

pythoncoder
Posts: 5956
Joined: Fri Jul 18, 2014 8:01 am
Location: UK
Contact:

Re: Complex Numbers

Post by pythoncoder » Fri Nov 07, 2014 7:45 am

I agree entirely about recursion, functions, lambdas and objects, which is why, in my view, your code is an excellent solution. I was simply speculating whether it could be improved further in terms of RAM usage and speed by using arrays. I have mixed feelings about this, since Python arrays can't hold complex numbers and it seems perverse not to use the complex type in this application, especially when, as you point out, a complex number is stored more efficiently than a pair of floats. I'm becoming convinced that these factors settle the issue in favour of a list of complex numbers, but I'd be interested to hear any other views.
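For what it's worth, this is roughly what the list-of-complex representation looks like in use (a sketch only, with made-up sample values; abs() and cmath.phase() do the per-element maths):

Code: Select all

import cmath

# A small list of complex samples (values are made up for illustration).
samples = [complex(0, 1), complex(0, -1), 1 + 2j]

# Magnitude and phase come straight from the built-in complex support.
mags = [abs(z) for z in samples]
phases = [cmath.phase(z) for z in samples]
print(mags)
print(phases)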

Your code is filed on my server as the best solution I've found to date.

Incidentally, you may be interested in the following observations in CPython:

Code: Select all

>>> import sys, array
>>> sys.getsizeof(float())
24
>>> sys.getsizeof((float(),float()))
64
>>> sys.getsizeof([float(),float()])
80
>>> l =[float(1.2)]*1024
>>> sys.getsizeof(l)
8256
>>> a = array.array('f',l)
>>> sys.getsizeof(a)
4160
>>> 
In other words, arrays of floats are substantially smaller than lists of floats, presumably because each element is stored in a single 32-bit word with only a small fixed per-array overhead. Storing pairs of floats as two-element mutable lists is particularly inefficient. But this is perhaps particularly significant:

Code: Select all

>>> sys.getsizeof(l)/sys.getsizeof(float)
20.64
>>> sys.getsizeof(a)/sys.getsizeof(float)
10.4
>>> 
IOW the array is very compact: each element costs only about 4 bytes plus a small fixed overhead, whereas every individual float costs 24 bytes (note that sys.getsizeof(float) above measures the float type object rather than an instance). I gather from general reading that the reason is that, under the hood, a full object is created whenever a float or double is instantiated, bearing out your point about objects.
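For what it's worth, here's a small follow-up sketch (CPython; exact byte counts vary by build and platform) that isolates the per-element cost of the array by subtracting the empty-array overhead:

Code: Select all

import sys
import array

l = [float(1.2)] * 1024
a = array.array('f', l)

# Per-element cost of the array: subtract the empty-array overhead and
# divide by the element count; roughly 4 bytes per 'f' element.
print((sys.getsizeof(a) - sys.getsizeof(array.array('f'))) / len(a))

# A single boxed float object, by comparison (24 bytes on 64-bit CPython).
print(sys.getsizeof(0.0))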

Regards, Pete
Peter Hinch
Index to my micropython libraries.
