Module for MPU9150

Showroom for MicroPython related hardware projects.
Target audience: Users wanting to show off their project!
dhylands
Posts: 2256
Joined: Mon Jan 06, 2014 6:08 pm
Location: Shuswap, BC, Canada

Re: Module for MPU9150

Postby dhylands » Tue Oct 21, 2014 6:49 pm

OK - so you want to be able to distinguish between MPU9150_Exception and Exception? I'm still new to some of the Python subtleties...

Turbinenreiter
Posts: 255
Joined: Sun May 04, 2014 8:54 am

Re: Module for MPU9150

Postby Turbinenreiter » Tue Oct 21, 2014 7:49 pm

@dave:

kinda.

consider:

Code: Select all

>>> g
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
NameError: name 'g' is not defined
>>> g===1
  File "<stdin>", line 1
    g===1
       ^
SyntaxError: invalid syntax


I think it would be nice to have something like I2CError.

so, instead of:

Code: Select all

>>> i.mem_write(12, 12, 12)   
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
Exception: HAL_I2C_Mem_Write failed with code 1


it would be:

Code: Select all

>>> i.mem_write(12, 12, 12)   
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
I2CError: HAL_I2C_Mem_Write failed with code 1


Also, 'code 1' should be replaced with something more meaningful. This issue is similar to things like 'OSError: 2': it just doesn't provide helpful information.
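A minimal sketch of what such a dedicated exception could look like. The code-to-message table here is invented for illustration (it is not the actual HAL error code list), and the name I2CError is just the one suggested above:

```python
# Hypothetical sketch: an I2C-specific exception that maps numeric HAL
# error codes to readable messages, so callers can catch I2C failures
# without a blanket `except Exception`.
class I2CError(Exception):
    MESSAGES = {1: "bus error", 2: "arbitration lost", 3: "ACK failure"}

    def __init__(self, func, code):
        self.code = code
        msg = self.MESSAGES.get(code, "unknown error")
        super().__init__("{} failed: {} (code {})".format(func, msg, code))

try:
    raise I2CError("HAL_I2C_Mem_Write", 1)
except I2CError as e:
    print(e)            # HAL_I2C_Mem_Write failed: bus error (code 1)
except Exception:
    print("some other problem")   # not reached: I2CError is caught above
```

Because I2CError subclasses Exception, existing catch-all handlers still work, but code that cares can catch the narrower type.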

pythoncoder
Posts: 1381
Joined: Fri Jul 18, 2014 8:01 am

Re: Module for MPU9150

Postby pythoncoder » Wed Oct 22, 2014 7:01 am

It also allows for nested exception handling. To be specific, we have an MPU9150 class with read() and write() methods: these use I2C and raise a custom exception MPU9150_Exception if the I2C access fails (ideally this would come from the I2C code itself). The following method allows the I2C error to be trapped over a write and a read, even though another exception is handled within the try-except section. This would work equally well if there were 100 reads: the specific issue of an I2C failure can be handled at the outermost scope of the function, crucially without masking the reporting of other errors.

Code: Select all

    def accel_range(self, accel_range=None):
        try:
            if accel_range is None:
                pass
            else:
                ar = (0x00, 0x08, 0x10, 0x18)
                try:
                    self.write(ar[accel_range], 0x1C) # Performs I2C activity
                except IndexError:
                    print('accel_range can only be 0, 1, 2 or 3')
            # get range
            ari = int(unp('<H', self.read(1, 0x1C))[0]/8) # More I2C stuff
        except MPU9150_Exception as response:
            print(response)
            ari = None
        if ari is not None:
            self._ar = ari
        return ari

Or it might not be handled within the method at all: rather the caller might handle that particular exception. In an earlier post I gave an example where the caller handles the exception. It deals with the case where the I2C device is probably absent or broken:

Code: Select all

from mpu9150 import MPU9150, MPU9150_Exception
try:
    obj_mpu = MPU9150()
except MPU9150_Exception as response:
    print(response) # Then presumably shut down the application in an orderly fashion

In this instance the constructor performs a number of I2C operations. Delegating the exception handling to the caller allows appropriate action to be taken in the case where the device couldn't be contacted at all. Meanwhile, if other exceptions occur, they can be trapped in the constructor itself. If the caller had been forced to use a catch-all except Exception formulation it would hide any unexpected errors in the constructor code: they would assume a device problem when in fact the constructor itself had a bug.

The Python exception handling system is powerful and flexible, so long as one isn't forced to use a catch-all except: or except Exception: formulation. In our code I've simulated the effect of an I2C exception by raising a custom exception in the read() and write() methods. While this works, it still means that the (simple) read and write methods use the except Exception formulation.

Code: Select all

    def write(self, data, memaddr, devaddr = None):
        if devaddr is None:
            devaddr = self.mpu_addr
        irq_state = True
        if self.disable_interrupts:
            irq_state = pyb.disable_irq()
        try:
            result = self._mpu_i2c.mem_write(data, devaddr, memaddr, timeout = self.timeout)
        except: # I'd really rather not!
            raise MPU9150_Exception("I2C Memory write failed")
        finally:
            pyb.enable_irq(irq_state) # Re-enable IRQs even if the write raised
        return result
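One way to avoid the bare except while still raising the custom exception is to catch the narrower error the driver actually raises and chain it. This is a hypothetical sketch using CPython's `raise ... from` syntax (MicroPython's support for exception chaining has historically been limited); `mem_write` here is a stand-in that simulates a driver failure, not the real pyb call:

```python
# Sketch: narrow the catch to OSError (what an I2C driver typically
# raises) and chain the original cause instead of discarding it.
class MPU9150Exception(Exception):
    pass

def mem_write(data, devaddr, memaddr):
    # Stand-in for the real driver call: always fails for this demo.
    raise OSError(110, "I2C timeout")

def write(data, memaddr, devaddr):
    try:
        return mem_write(data, devaddr, memaddr)
    except OSError as e:                      # narrow catch, not `except:`
        raise MPU9150Exception("I2C memory write failed") from e

try:
    write(12, 0x1C, 0x68)
except MPU9150Exception as e:
    print(e)                # I2C memory write failed
    print(e.__cause__)      # [Errno 110] I2C timeout
```

The chained `__cause__` means a bug that raises something other than OSError is no longer silently relabelled as an I2C failure.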

Thus endeth the sermon ;)

Regards, Pete
Peter Hinch

pythoncoder

Re: Module for MPU9150

Postby pythoncoder » Wed Oct 22, 2014 2:43 pm

@Turbinenreiter
If you're happy with my efforts to date and want some help, I'm prepared to code up the changes to the magnetometer. I could have some code in about a week.
1. Enable non-blocking reads.
2. Read and implement the factory set of compensation data.
I have a slight preference for using a closure rather than a generator for 1, for reasons I'll explain if you want me to have a crack at this.

Regards, Pete
Peter Hinch

Turbinenreiter

Re: Module for MPU9150

Postby Turbinenreiter » Wed Oct 22, 2014 2:53 pm

Sure, hack away!
Merging your stuff right now. Take a look at it after, I will rename some stuff.
Thanks for the help.

And yes, explain! I don't even know what a closure is ;) Same was true for generators, but now I use them in the bmp180 module. Learning is fun!

Turbinenreiter

Re: Module for MPU9150

Postby Turbinenreiter » Wed Oct 22, 2014 3:25 pm

Ok, I merged your code with some slight changes.

You now have to make sure that you are working with the newest version of the code, as it is in my repo now.
You can either:
* delete your fork and your local repo, and make a new fork
* add a remote for the upstream

quick guide for the second option:
add upstream:
https://help.github.com/articles/config ... or-a-fork/

sync the fork:
https://help.github.com/articles/syncing-a-fork/

pythoncoder

Re: Module for MPU9150

Postby pythoncoder » Thu Oct 23, 2014 6:00 am

@Turbinenreiter
Thanks for the pointers re github - it's very unfamiliar territory for me. I'll probably delete the fork and create a new one when I have some code to offer, as that seems a simpler concept for a n00b like myself.

Re closures. The requirement is for a function-like object which retains state between calls. This is because the first call has to trigger the hardware, while subsequent calls need to test and return its state. As far as I know there are four ways of doing this in Python.
1. Function attributes. Nobody seems to use these and I've never known why. They've been in the language for years.
2. Generators
3. Functors: these are classes having a __call__ method. Very powerful, well worth studying if you've not encountered them, but overkill for such a simple task.
4. Closures.
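For comparison, here is a hypothetical sketch of option 3, a functor doing the same elapsed-time job as the closure example which follows. It uses the standard time module rather than pyb so it runs anywhere; the class name is invented:

```python
import time

# Functor: a class with __call__ whose state (the last timestamp) lives
# in an instance attribute rather than an enclosing scope.
class DeltaTimer:
    def __init__(self):
        self.t = time.monotonic()     # creation time

    def __call__(self):
        end = time.monotonic()
        dt = end - self.t             # seconds since creation or last call
        self.t = end
        return dt

getdt = DeltaTimer()
time.sleep(0.01)
print(getdt())                        # elapsed seconds, roughly 0.01
```

The behaviour is identical to the closure version; the trade-off is that the state is explicit and inspectable (getdt.t), at the cost of a little more boilerplate.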

Closures are functions which return a function. For example, here is one I use. It facilitates measuring the time between two events, notably in timing bits of code and performing realtime calculus.

Code: Select all

def makedt():     # Produce a function returning the time since it was created or last called
    t = pyb.micros() # Creation time
    def gdt(): # Here we define the function which will be returned to the caller
        nonlocal t # This is its persistent state, outside the body of the function
        end = pyb.micros()
        dt = ((end - t) & TIMERPERIOD)/1000000.0
        t = end
        return dt # The function instance returns the time since last called
    return gdt # Return a function instance

The nonlocal keyword defines a variable which is rather like a global to the returned function, yet is unique to that function instance and invisible to the code which calls it. And here is a code fragment which uses it:

Code: Select all

getdt = makedt() # getdt is an instance of the inner function, initialised with the time of its creation
       # code omitted
while True:
    yield rr # Yield to scheduler: takes an arbitrary amount of time
    dt = getdt() # Time since created or last called
    # Code using dt omitted for clarity

The getdt = makedt() produces a function instance getdt which retains state (the time it was created or last called). So the repeated call to dt = getdt() will provide the time elapsed since the previous call. As with generators you can have multiple instances of functions created by makedt() each with its own local state.

The reason I have a slight preference for a closure over a generator in this instance is that, from the caller's point of view, the syntax is slightly clearer. There is also the selfish reason that a function returning None when not ready, and an integer indicating ready or an error state, integrates neatly with my scheduler, in particular where polling can be delegated to the scheduler itself. Here are a couple of examples showing the usage of a generator and a closure.

Code: Select all

def getit_using_generator():
    gm = mpu9150.get_mag_status() # Create the generator instance
    state = next(gm)                         # 1st iteration starts conversion and returns None
    while state is None:
        # do something useful e.g. yield control to another thread
        state = next(gm)  # subsequent iterations test status and return None if not ready
                                  # else an integer indicating ready or error
    # error handling involving checking state omitted
    return mpu9150.get_mag() # Will now return immediately

def getit_using_closure():
    gm = mpu9150.get_mag_status() # Create function instance
    state = gm()                               # first call  starts mag
    while state is None:
        # do something useful e.g. yield control to another thread
        state = gm()                           # subsequent calls test its status
    # error handling involving checking state omitted
    return mpu9150.get_mag() # Will now return immediately

It's simply a minor preference for function call syntax in this instance. In either case the user must create an object instance prior to each nonblocking read and the solutions are very similar. I hope this clarifies.

Regards, Pete
Peter Hinch

pythoncoder

Re: Module for MPU9150

Postby pythoncoder » Thu Oct 23, 2014 6:04 am

Incidentally I did a lot of testing of my project yesterday and never saw a timeout or I2C bus crash. Previously I would have encountered numerous instances so I'm convinced that disabling interrupts for the duration of I2C access has fixed the problem. I wish I knew why, but life's too short...

Regards, Pete
Peter Hinch

pythoncoder

Re: Module for MPU9150

Postby pythoncoder » Thu Oct 23, 2014 8:09 am

@Turbinenreiter
Alas you've broken it ;)
To conform with the change you made to the read() method, line 70 should read:

Code: Select all

self.chip_id = int(unp('>h', self._read(1, 0x75, self.mpu_addr))[0])

I suggest you also remove the devaddr default from read() and write() so:

Code: Select all

def _read(self, count, memaddr, devaddr):

since the default of None is now meaningless and bombs if applied. I appreciate the reasons for your changes, by the way. No criticism implied.

Ill-advisedly I'd deleted my fork before testing, hence reporting here.

Regards, Pete
Peter Hinch

Turbinenreiter

Re: Module for MPU9150

Postby Turbinenreiter » Thu Oct 23, 2014 5:06 pm

oh, thanks, it's fixed now!

Thanks for the explanation. I used decorators before, which do something else but look similar. Will experiment with that!
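For what it's worth, a decorator is usually itself built from a closure, which is why they look similar. A minimal invented sketch, using the same nonlocal trick as makedt() above:

```python
# A decorator as a closure: count_calls captures both the wrapped
# function and a counter in its enclosing scope.
def count_calls(func):
    n = 0
    def wrapper(*args, **kwargs):
        nonlocal n                 # persistent state, as in makedt()
        n += 1
        print("call", n)
        return func(*args, **kwargs)
    return wrapper

@count_calls
def double(x):
    return 2 * x

print(double(3))   # prints "call 1" then 6
print(double(5))   # prints "call 2" then 10
```

The @count_calls line is just shorthand for double = count_calls(double), i.e. calling the outer function and keeping the inner one it returns.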

