
A simple Python async spider (async programming with Python 3.6, step 2)

As a second step in learning async programming, I developed a very simple spider with Python 3.6 and asynchronous code.

In this case the spider requests a bunch of URLs. The server behind them waits before answering each request:

http://localhost:8000/1

The number '1' indicates how many seconds the server should wait:

http://localhost:8000/2 -> makes server wait for 2 seconds
http://localhost:8000/5 -> makes server wait for 5 seconds

I'm testing against a server that makes my consumer wait; feel free to use random waits or whatever you prefer.
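To reproduce the setup, here is a minimal sketch of such a delay server, assuming aiohttp's web module (the original server is not shown in this post):

# delay_server.py -- hypothetical test server: GET /N sleeps N seconds before answering.
import asyncio

from aiohttp import web

async def handle(request):
    seconds = int(request.match_info['seconds'])
    await asyncio.sleep(seconds)  # simulate a slow response
    return web.Response(text=f'waited {seconds} seconds')

app = web.Application()
app.router.add_get('/{seconds}', handle)
web.run_app(app, port=8000)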

The consumer accepts two queues: a queue of URLs to retrieve and a queue to store the results.

At the moment the consumer doesn't store anything in the urls queue; it only retrieves the URLs configured in the hardcoded urls list.
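Once the workers finish, the results queue can be emptied synchronously. A small hypothetical helper, not part of the spider itself, could look like this:

import asyncio

def drain(queue: asyncio.Queue) -> list:
    # Empty the queue without awaiting; safe once the event loop has stopped.
    pages = []
    while not queue.empty():
        pages.append(queue.get_nowait())
    return pages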

More functionalities will be added in the future.

In this example aiohttp==2.3.10 is used.

(iospider) $ pip install "aiohttp==2.3.10"


import asyncio
from contextlib import closing
from time import perf_counter

import aiohttp

SPIDER_WORKERS = 16


async def consume(client: aiohttp.ClientSession, queue_results: asyncio.Queue, queue_urls: asyncio.Queue):
    while True:
        # Stop this worker when there are no URLs left to fetch.
        if queue_urls.empty():
            break
        url = await queue_urls.get()
        print(f'consumed {url}')
        # aiohttp.Timeout is the aiohttp 2.x timeout context manager.
        with aiohttp.Timeout(10):
            async with client.get(url) as response:
                if response.status == 200:
                    page = await response.text()
                    await queue_results.put(page)


def run(queue_results: asyncio.Queue, queue_urls: asyncio.Queue, workers: int):
    with closing(asyncio.get_event_loop()) as loop:
        with aiohttp.ClientSession() as client:
            # Spawn one consumer coroutine per worker and wait for all of them.
            tasks = [consume(client, queue_results, queue_urls) for _ in range(workers)]
            loop.run_until_complete(asyncio.gather(*tasks))


urls = ['http://localhost:8000/1', 'http://localhost:8000/2', 'http://localhost:8000/3', 'http://localhost:8000/4'] * 6
start = perf_counter()
queue_urls = asyncio.Queue()
queue_results = asyncio.Queue()
for url in urls:
    queue_urls.put_nowait(url)
# Never start more workers than there are URLs to fetch.
run(queue_results, queue_urls, min(queue_urls.qsize(), SPIDER_WORKERS))
print(f'Retrieved {queue_results.qsize()} pages in {perf_counter() - start}')
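
Note that in aiohttp 2.x, creating the ClientSession outside of a coroutine triggers the warning you can see in the output below. A minimal sketch of a warning-free variant of run, assuming the same consume coroutine as above:

async def main(queue_results: asyncio.Queue, queue_urls: asyncio.Queue, workers: int):
    # Creating the session inside a coroutine avoids the aiohttp warning.
    async with aiohttp.ClientSession() as client:
        tasks = [consume(client, queue_results, queue_urls) for _ in range(workers)]
        await asyncio.gather(*tasks)

# Then: loop.run_until_complete(main(queue_results, queue_urls, workers))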

Example output for 24 URLs, with the spider configured with 16 workers (the same default as Scrapy):

# IOSpider output

(iospider) jesus@laptop:~/iospider$ time python iospider.py
Creating a client session outside of coroutine
client_session: <aiohttp.client.ClientSession object at 0x7f25c55ea208>
consumed http://localhost:8000/1
consumed http://localhost:8000/4
...
consumed http://localhost:8000/2
consumed http://localhost:8000/3
...
consumed http://localhost:8000/1
consumed http://localhost:8000/4
Retrieved 24 pages in 6.038133026999731

real    0m6,222s
user    0m0,217s
sys    0m0,024s

Using Scrapy to retrieve the same number of URLs with the same number of workers:

(scrapy) jesus@laptop:~/scrapy$ time scrapy runspider -s CONCURRENT_REQUESTS=16 -s CONCURRENT_REQUESTS_PER_DOMAIN=16 client_scrapy.py

# Scrapy output
...

real    0m6,870s
user    0m0,860s
sys    0m0,036s
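
For reference, a minimal sketch of what client_scrapy.py could contain (hypothetical; the original spider file is not shown in this post):

import scrapy

class ClientSpider(scrapy.Spider):
    name = 'client'
    # The same 24 URLs used by IOSpider; Scrapy does not de-duplicate start URLs.
    start_urls = ['http://localhost:8000/1', 'http://localhost:8000/2',
                  'http://localhost:8000/3', 'http://localhost:8000/4'] * 6

    def parse(self, response):
        self.logger.info('consumed %s', response.url)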

The average times show that IOSpider is faster than Scrapy, but keep in mind that IOSpider is a very simplistic approach with only one main feature.
