
First approach to Python 3.6 async programming: a simple consumer/producer (async programming with Python 3.6, step 1)

As a first approach to async programming, I'm developing small scripts with Python.


The main thing I need to learn is how to develop an asynchronous consumer/producer using an async Queue and Python 3.6.

First, a reproduction of the problem with synchronous programming; then, the solution I found with async programming.

Synchronous


# First attempt, synchronous

from queue import Queue
from time import sleep, perf_counter


def produce(q: Queue, n: int):
    for i in range(n):
        print(f'produce {i}')
        q.put(i)
        sleep(0.1)


def consume(q: Queue):
    # Drain the queue item by item; iterating q.queue directly would read
    # the internal deque without actually removing the items
    while not q.empty():
        i = q.get()
        print(f'consume {i}')
        sleep(0.1)


start = perf_counter()
q = Queue()
produce(q, 10)
consume(q)
print(perf_counter() - start)


# Output

produce 0
produce 1
produce 2
... # imagine that
consume 7
consume 8
consume 9
2.005685806274414

It takes about 2 seconds: all 20 sleep(0.1) calls (10 in produce, 10 in consume) run one after another.
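The 2 seconds are easy to account for, since every sleep runs sequentially. A quick back-of-the-envelope check (not part of the original script):

```python
# Back-of-the-envelope check of the synchronous timing: every sleep(0.1)
# runs one after another, so the total is just the sum of all the sleeps.
n = 10
produce_time = n * 0.1   # 10 sleeps in produce()
consume_time = n * 0.1   # 10 sleeps in consume()
total = produce_time + consume_time
print(total)  # 2.0, matching the measured ~2.005 seconds
```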

Asynchronous

# Second attempt, asynchronous

import asyncio
from time import perf_counter


async def produce(queue: asyncio.Queue, n: int):
    for x in range(n):
        print(f'produce {x}')
        await asyncio.sleep(0.1)
        await queue.put(x)


async def consume(queue: asyncio.Queue):
    while True:
        x = await queue.get()
        print(f'consume {x}')
        await asyncio.sleep(0.1)
        queue.task_done()  # mark the item as processed, so queue.join() can return


async def run(n: int = 10):
    queue = asyncio.Queue()
    # Schedule the consumer; it starts running at the first await
    consumer = asyncio.ensure_future(consume(queue))

    await produce(queue, n)
    await queue.join()  # wait until every produced item has been consumed

    consumer.cancel()  # the consumer loops forever, so cancel it explicitly


start = perf_counter()
loop = asyncio.get_event_loop()
loop.run_until_complete(run())
loop.close()
print(perf_counter() - start)

# Output

produce 0
produce 1
consume 0
produce 2
consume 1
produce 3
...
consume 7
produce 9
consume 8
consume 9
1.1065495014190674

It takes about 1.1 seconds. Much better, no?
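As a side note, on Python 3.7 and newer the same pattern can be written without managing the event loop by hand, using asyncio.run() and asyncio.create_task(). A minimal sketch of that variant (these APIs don't exist on 3.6, which this post targets), collecting results into a list instead of printing:

```python
import asyncio


async def produce(queue: asyncio.Queue, n: int):
    for x in range(n):
        await asyncio.sleep(0.01)
        await queue.put(x)


async def consume(queue: asyncio.Queue, results: list):
    while True:
        x = await queue.get()
        await asyncio.sleep(0.01)
        results.append(x)
        queue.task_done()  # lets queue.join() know this item is done


async def run(n: int = 10):
    queue = asyncio.Queue()
    results = []
    # create_task() (3.7+) plays the role of ensure_future() above
    consumer = asyncio.create_task(consume(queue, results))
    await produce(queue, n)
    await queue.join()
    consumer.cancel()
    return results


# asyncio.run() creates, runs, and closes the event loop for us
results = asyncio.run(run())
print(results)
```

The items still come out in FIFO order, since asyncio.Queue preserves insertion order.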
