Just like in any other language (even C++), the more you do, the more CPU it takes.
Recursion really is slower, because each call has to save the current function's state before calling itself again, and if you recurse too many times you run out of stack space. So recursion usually doesn't save memory; it actually requires more, because of the per-call overhead it adds to the data.
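To make that concrete, here's a minimal sketch (assuming CPython's default recursion limit, usually around 1000) comparing a recursive sum with an iterative one; the deep recursive call blows up while the loop is fine:

```python
import sys

# Recursive sum: every call pushes a new frame, so a large n
# exceeds the interpreter's recursion limit.
def rec_sum(n):
    if n == 0:
        return 0
    return n + rec_sum(n - 1)

# Iterative sum: constant stack usage, no per-call overhead.
def iter_sum(n):
    total = 0
    for k in range(n + 1):
        total += k
    return total

print(sys.getrecursionlimit())   # typically 1000 on CPython
print(iter_sum(10**6))           # works fine
try:
    rec_sum(10**6)               # far past the limit
except RecursionError as e:
    print("recursion failed:", e)
```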
If you call several functions, it's the same story.
If you have complex data, it's the same story: it adds hidden complexity (the "there's no such thing as magic" principle).
Same thing with objects, just because of the overhead of working with them. (Also, I've seen that Python (CPython at least) takes some time to create the so-called bound methods.)
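A rough way to see the bound-method cost is to time a loop that looks the method up on the object every call against one that hoists the bound method into a local first. The exact numbers depend on your machine; this is just a sketch:

```python
import timeit

class Counter:
    def __init__(self):
        self.n = 0
    def bump(self):
        self.n += 1

c = Counter()

# Looking up c.bump on every call creates a fresh bound-method object.
slow = timeit.timeit(lambda: [c.bump() for _ in range(1000)], number=100)

# Hoisting the bound method to a local skips the repeated lookup.
bump = c.bump
fast = timeit.timeit(lambda: [bump() for _ in range(1000)], number=100)

print(slow, fast)  # the hoisted version is typically a bit faster
```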
So in the long run, having a flat structure will always (in theory at least) be faster. The difference is that in other languages (like C) you don't actually have a choice.
I don't quite understand what you mean by not being able to do multidimensional arrays. You can do them; it's just not written the same way as in other languages.
Code: Select all
mda = [
[0,1,2,3],
[0,1,2,3]
]
print(mda[0][1])
from math import sqrt

r = 0  # index of the real part
i = 1  # index of the imaginary part
a = array_of_complexes = [
    [0, 1],
    [0, -1]
]
m = sqrt(a[0][r]**2 + a[0][i]**2)
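For comparison, the built-in complex type gives the same magnitude via abs(). A small sketch, using made-up values 3.0 and 4.0:

```python
from math import sqrt

# Magnitude from a plain pair of floats, like the array above.
pair = (3.0, 4.0)
m_pair = sqrt(pair[0]**2 + pair[1]**2)

# The same value from the built-in complex type.
z = complex(3.0, 4.0)
m_complex = abs(z)

print(m_pair, m_complex)  # both 5.0
```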
I don't know how much improvement Damien George's (iirc) implementation has, but I bet an object is slower and "heavier" than an array of simple types. If I had to bet, I'd say that storing pairs of floats instead of a "complex" number will save time and space.
Not quite surprised: http://screencloud.net/v/qsNs
Though the size of complex appears to be smaller, maybe because it's a built-in.
It does seem to take more space, but I wouldn't care that much.
Code: Select all
>>> import sys
>>> b = list(complex() for i in range(1000000))
>>> a = list((0.0,0.1) for i in range(1000000))
>>> sum(map(sys.getsizeof, b))
32000000
>>> sum(map(sys.getsizeof, a))
64000000
>>> sys.getsizeof(complex())
32
>>> sys.getsizeof(0.0)
24
>>> sum([sum(map(sys.getsizeof, i)) for i in a])
48000000
>>> sum(map(sys.getsizeof, [0.0 for i in range(2000000)]))
48000000
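If you really wanted to pack pairs of floats flat, the stdlib array module stores raw C doubles (8 bytes each on typical builds) instead of boxed float objects, so it comes out well below both the tuple and the complex versions. A sketch, using a smaller made-up element count:

```python
import sys
from array import array

n = 1000
floats = [0.0] * n              # list of boxed float objects
packed = array('d', [0.0] * n)  # contiguous raw C doubles

# Boxed total: the list itself plus roughly 24 bytes per float object.
boxed_total = sys.getsizeof(floats) + sum(map(sys.getsizeof, floats))
print(boxed_total)
# Packed total: roughly 8 bytes per double plus a small header.
print(sys.getsizeof(packed))
```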