asyncio: synchronization in async programming

Zoland
Posts: 20
Joined: Wed Jan 23, 2019 2:12 pm
Location: Russia, Altai

asyncio: synchronization in async programming

Post by Zoland » Thu Feb 28, 2019 11:04 am

Here is a short study of the possibilities of asynchronous programming, aimed mostly at understanding some key definitions and solutions.

In sequential programming, I constantly run into the obvious desire not to stall the program while individual tasks (processes) perform periodic actions - for example, polling sensor values, transferring data to a server on a schedule, or input/output of a large amount of data. The simplest approach, of course, is to wait for the periodic event to complete and only then, unhurriedly, continue executing the other tasks.

Code: Select all

    from time import sleep

    while True:
        do_ext_proc_before()    # work that should not be delayed
        do_internal_proc()      # the periodic task, e.g. polling the sensors
        sleep(5)                # blocks the whole loop for 5 seconds
        do_ext_proc_after()
You can drop 'sleep()' and instead check conditions inside the loop, so the main loop is not delayed, at least until the periodic event actually occurs:

Code: Select all

		
    start = time()
    set_timer(start, wait=5)        # set schedule
    set_timeout(start, wait_to=7)   # set timeout
    set_irq(alarm)                  # set interrupt

    while True:
        curTime = time()
        do_ext_proc_before()

        if timer(curTime) or timeout(curTime) or alarm:
            # if several events happen to fire simultaneously - reset them all
            start = time()
            set_timer(start, wait=5)        # set schedule
            set_timeout(start, wait_to=7)   # set timeout
            set_irq(alarm)                  # set interrupt

            do_internal_proc()

        do_ext_proc_after()
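Just to make the idea concrete (set_timer(), timer(), set_timeout() and set_irq() above are only pseudocode of mine), here is a minimal runnable sketch of the same non-blocking check written with MicroPython's time.ticks_ms()/ticks_diff(); the do_* functions are still placeholders:

Code: Select all

    import time

    PERIOD_MS = 5000                        # run do_internal_proc() every 5 s
    deadline = time.ticks_add(time.ticks_ms(), PERIOD_MS)

    while True:
        do_ext_proc_before()                # never delayed by the periodic task

        # non-blocking check: has the scheduled moment arrived?
        if time.ticks_diff(deadline, time.ticks_ms()) <= 0:
            deadline = time.ticks_add(time.ticks_ms(), PERIOD_MS)
            do_internal_proc()              # the periodic task

        do_ext_proc_after()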
In asynchronous programming, each task becomes an independent process and is executed, depending on the specific implementation, in parallel or pseudo-parallel, yielding control at naturally occurring or artificially established wait points, or while using a limited resource such as a disk or a communication channel.

Code: Select all

    setTask(do_ext_proc_before())
    setTask(do_internal_proc(),timer=5,timeout=7,alarm_handler=alarm)
    setTask(do_ext_proc_after())

    runTasks()
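setTask()/runTasks() above are pseudocode of my own. With the current uasyncio API (V3) the same structure could be sketched roughly like this - the do_* functions remain placeholders:

Code: Select all

    import uasyncio as asyncio

    async def ext_proc_before():
        while True:
            do_ext_proc_before()
            await asyncio.sleep_ms(0)       # yield to the other tasks

    async def internal_proc():
        while True:
            do_internal_proc()
            await asyncio.sleep(5)          # the 5 s schedule, non-blocking

    async def ext_proc_after():
        while True:
            do_ext_proc_after()
            await asyncio.sleep_ms(0)

    async def main():
        asyncio.create_task(ext_proc_before())
        asyncio.create_task(internal_proc())
        asyncio.create_task(ext_proc_after())
        while True:
            await asyncio.sleep(10)         # keep the scheduler alive

    asyncio.run(main())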
Now there is a problem that does not exist in sequential programming - what if some processes need to be synchronized during their asynchronous execution? For example, after receiving data from the sensors, initiate the process of sending the data to the server, or react to an emergency. Asynchronous I/O is solved organically in the language standard, while other situations are resolved in libraries. I studied this question using the extended 'asyncio' MicroPython library published by Peter Hinch (https://github.com/peterhinch/micropyth ... UTORIAL.md).

The simplest solution is to signal an event to an interested process. For this there is the Event() class, which provides several methods:
  • Event.Set(timeout=None, data=None) - set the event (Event = True), for example on completion of data acquisition from the sensors
  • Event.IsSet() - check whether the event has occurred; returns True if it is set and False if not
  • Event.Wait() - wait for the event to occur; returns the reason for completion - Done, Timeout, Cancel
  • Event.Data() - get the data associated with this event
  • Event.Clear() - mark the event as handled (Event = False)
The event is cleared, as a rule, by a process that was waiting for it, for example the process that displays the data on the screen or saves it to disk. It may also be cleared on a timeout, when there is nothing to update or save because the data was never refreshed, or when acquisition was interrupted by another important event such as going to sleep or rebooting, which may require releasing all pending processes by resetting the corresponding events.
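
For reference, the plain Event shipped with current uasyncio/asyncio offers only set(), is_set(), wait() and clear() - the timeout and data arguments above belong to the extended library. A minimal sketch of the "sensor data is ready" signalling, with read_temperature() and upload() as hypothetical helpers:

Code: Select all

    import uasyncio as asyncio

    data_ready = asyncio.Event()
    sensor_data = {}

    async def poll_sensors():
        while True:
            sensor_data['t'] = read_temperature()   # hypothetical sensor read
            data_ready.set()                        # signal the interested process
            await asyncio.sleep(5)

    async def send_to_server():
        while True:
            await data_ready.wait()                 # block until the event is set
            upload(sensor_data)                     # hypothetical upload
            data_ready.clear()                      # the single waiter clears it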

Keep in mind that Event.Clear() should preferably be called by only one process, if the algorithm allows it. Otherwise, if several processes are waiting on the same Event.Set(), one of the interested processes has to perform Event.Clear(), but only after making sure that all interested processes have already responded to the event, which complicates the decision logic when an event is awaited by multiple processes. This situation is resolved by the Barrier class, which requires a certain number of Clear() calls before the event is actually reset:
  • Barrier.Set(quantity=1, timeout=None, data=None) - with quantity=1 it is identical to Event.Set()
  • Barrier.IsSet() - check whether the event has occurred; returns True if it is set and False if not
  • Barrier.Wait() - wait for the event to occur; returns the reason for completion - Done, Timeout, Cancel
  • Barrier.Data() - get the data associated with this event
  • Barrier.qty - number of processes that have not yet responded
  • Barrier.Clear() - reset the event (Event = False); executed by each process waiting for the event, it decrements the Barrier.quantity counter by one, and only when the counter reaches zero is the event actually reset

It does not keep track of which process has already responded, which may let a process respond to the event twice if that matters for a given algorithm. If, instead of Barrier.quantity, a list of names of the interested processes is passed, such a conflict can be avoided; also, on a timeout or interruption of the event it then becomes possible to determine which pending processes have not yet run. All of the above applies to the situation where one or more processes wait for the occurrence of a single event - the 'one-to-many' situation. It corresponds to the case where, in sequential programming, the process or processes do_ext_proc_after() would only be executed after do_internal_proc() has completed.
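
Peter Hinch's library provides a ready-made Barrier; purely to illustrate the counting idea described above, here is a toy sketch (my own, not the library class) built from a counter plus a plain Event:

Code: Select all

    import uasyncio as asyncio

    class CountingEvent:
        # toy illustration only: the event stays set until
        # 'quantity' interested processes have called clear()
        def __init__(self, quantity):
            self.qty = quantity
            self._evt = asyncio.Event()

        def set(self):
            self._evt.set()

        async def wait(self):
            await self._evt.wait()

        def clear(self):
            self.qty -= 1
            if self.qty <= 0:       # the last waiter really resets the event
                self._evt.clear()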

For the convenience of further understanding, let us extend the existing Event and Barrier classes into a new EEvent class and make it, or the objects it creates, global. Here 'creators' is the name or list of names of the processes that initiate the event or lock the resource, and 'followers' is the name or list of names of the processes awaiting the event or the release of the resource:
  • EEvent.Set(creators, followers, timeout=None, data=None) - returns True if the event could be set or the resource could be locked
  • EEvent.IsSet(procName) - procName is the name or ID of the process that has finished
  • EEvent.Wait(procName)
  • EEvent.Clear(procName)
  • EEvent.Followers() - returns the list of pending processes still queued for execution; Barrier.qty = len(EEvent.Followers())
  • EEvent.Creators() - returns the list of processes that have already initiated the event
Using the EEvent class methods, the previously described situation can be written like this:

Code: Select all

    def do_internal_proc():
        ...
        EEvent.Set('p_Creator', ('p_Follower1', 'p_Follower2'))   # run 'p_Follower1', 'p_Follower2' after the event from 'p_Creator'
        ...

    def do_ext_proc_after1():
        ...
        EEvent.Wait('p_Creator')
        ...
        EEvent.Clear('p_Follower1')

    def do_ext_proc_after2():
        ...
        EEvent.Wait('p_Creator')
        ...
        EEvent.Clear('p_Follower2')
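The same "one creator, several followers" pattern can also be sketched with two plain Event objects from uasyncio - this is only an illustration of mine, not the proposed EEvent class:

Code: Select all

    import uasyncio as asyncio

    evt1 = asyncio.Event()      # one event per interested follower
    evt2 = asyncio.Event()

    async def internal_proc():
        ...
        evt1.set()              # wake both followers
        evt2.set()

    async def ext_proc_after1():
        await evt1.wait()
        ...
        evt1.clear()            # each follower clears only its own event

    async def ext_proc_after2():
        await evt2.wait()
        ...
        evt2.clear()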
Consider the opposite situation - one process waiting for the completion of several events, the 'many-to-one' situation. In other words, what if do_internal_proc() may only run after do_ext_proc_before() has finished? In the limiting case, when one process waits for a single event, the problem is solved by the plain Event class. When the completion of several events is expected - for example, the data should be saved to disk only after it has been displayed and transferred to the server - each participating process must register its part of the expected event, and the waiting process must wait until all of them have completed.

Code: Select all

    def do_ext_proc_before1():
        ...
        EEvent.Set('p_Creator1', 'p_Follower')
        ...

    def do_ext_proc_before2():
        ...
        EEvent.Set('p_Creator2', 'p_Follower')
        ...

    def do_internal_proc():
        ...
        EEvent.Wait(('p_Creator1', 'p_Creator2'))
        ...
        EEvent.Clear('p_Follower')
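With standard primitives the "many-to-one" case can be sketched by simply awaiting every prerequisite event in turn (or by awaiting the producing tasks with asyncio.gather()); again only a sketch of mine:

Code: Select all

    import uasyncio as asyncio

    ready1 = asyncio.Event()    # set by the first preparatory process
    ready2 = asyncio.Event()    # set by the second preparatory process

    async def internal_proc():
        # wait for *all* prerequisite events before doing the work
        await ready1.wait()
        await ready2.wait()
        ...
        ready1.clear()
        ready2.clear()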
 
Another important aspect of asynchronous programming is sharing a limited resource. For example, data updating should be carried out by only one process at a time; the remaining processes wanting to do the same must queue up or wait until the data has been updated. At the same time, merely reading the data for display or transfer may well be non-critical, so it is necessary to know the list of competing processes when organizing the relevant events. In standard asynchronous programming this problem is solved by the Lock class; in the general case it can also be handled similarly to the 'one-to-many' situation:

Code: Select all

    def do_internal_proc():                                        # locks the activity of all 'followers' in the list
        ...
        EEvent.Set('p_Creator', ('p_Follower1', 'p_Follower2'))    # run 'p_Follower1', 'p_Follower2' after the event from 'p_Creator'
        ...

    def do_ext_proc_after1():
        ...
        EEvent.Wait('p_Creator')                                   # wait for the resource to be released
        if EEvent.Set('p_Follower1', 'p_Follower2'):               # lock the resource: 'p_Follower1' now acts as the 'creator'
            ...
        else:
            EEvent.Wait('p_Follower2')                             # keep waiting for the resource release
            ...
        EEvent.Clear('p_Follower1')                                # release the resource

    def do_ext_proc_after2():
        ...
        EEvent.Wait('p_Creator')
        if EEvent.Set('p_Follower2', 'p_Follower1'):               # lock the resource: 'p_Follower2' now acts as the 'creator'
            ...
        else:
            EEvent.Wait('p_Follower1')                             # keep waiting for the resource release
            ...
        EEvent.Clear('p_Follower2')                                # release the resource
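For comparison, the standard Lock class mentioned above handles the same mutual exclusion much more compactly; a minimal sketch with current uasyncio, where update_data() and send_data() are hypothetical critical sections:

Code: Select all

    import uasyncio as asyncio

    data_lock = asyncio.Lock()

    async def updater():
        async with data_lock:       # only one task may update at a time
            update_data()           # hypothetical critical section

    async def reader():
        async with data_lock:       # others queue here until the lock is free
            send_data()             # hypothetical critical section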
In addition to the options considered, there are solutions for limiting throughput, queuing and managed dispatching of processes, but in my work they have not yet been necessary, so I have not studied them deeply enough, although I do not exclude that more elegant or economical solutions exist.

In conclusion, I want to say that the sequential and asynchronous approaches have an equal right to exist and both successfully implement the specified algorithms. The choice of approach is therefore determined by the priorities of the author - what matters more when implementing the algorithms: transparency and readability, speed, or the size of the resulting code.
