Architecture overview

This document describes the architecture of Scrapy and how its components interact.

Overview

The following diagram shows an overview of the Scrapy architecture with its components and an outline of the data flow that takes place inside the system (shown by the green arrows). A brief description of the components is included below with links for more detailed information about them. The data flow is also described below.

Data flow

[Figure: Scrapy architecture diagram]

The data flow in Scrapy is controlled by the execution engine, and goes like this:

  1. The Engine gets the initial Requests to crawl from the Spider.
  2. The Engine schedules the Requests in the Scheduler and asks for the next Requests to crawl.
  3. The Scheduler returns the next Requests to the Engine.
  4. The Engine sends the Requests to the Downloader, passing through the Downloader Middlewares (request direction).
  5. Once the page finishes downloading, the Downloader generates a Response (with that page) and sends it to the Engine, passing through the Downloader Middlewares (response direction).
  6. The Engine receives the Response from the Downloader and sends it to the Spider for processing, passing through the Spider Middleware (input direction).
  7. The Spider processes the Response and returns scraped Items and new Requests (to follow) to the Engine, passing through the Spider Middleware (output direction).
  8. The Engine sends the processed Items to the Item Pipelines, then sends the processed Requests to the Scheduler and asks for the next Requests to crawl.
  9. The process repeats (from step 1) until there are no more Requests from the Scheduler.
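To make the cycle concrete, here is a minimal spider sketch annotated with the step numbers above (the URLs and names are hypothetical): everything the spider yields goes back to the Engine, which routes items to the Item Pipelines and requests to the Scheduler.

    import scrapy

    class DataFlowSpider(scrapy.Spider):
        name = "dataflow_example"  # hypothetical spider name

        def start_requests(self):
            # Step 1: the Engine pulls these initial Requests from the Spider.
            yield scrapy.Request("https://example.com/", callback=self.parse)

        def parse(self, response):
            # Step 6: the Engine delivered this Response via the Spider Middleware.
            # Step 7: everything yielded here returns to the Engine; items go on
            # to the Item Pipelines (step 8) and Requests to the Scheduler.
            yield {"url": response.url, "title": response.css("title::text").get()}
            yield scrapy.Request("https://example.com/next", callback=self.parse)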

Components

Scrapy Engine

The engine is responsible for controlling the data flow between all components of the system, and for triggering events when certain actions occur. See the Data Flow section above for more details.

Scheduler

The Scheduler receives requests from the engine and enqueues them, feeding them back to the engine later, when the engine asks for them.
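As a sketch of the one scheduling knob exposed directly on requests: the priority argument of Request (a real parameter; higher values are dequeued first) lets a spider influence the order in which the Scheduler feeds requests back to the engine. The URLs and names here are hypothetical.

    import scrapy

    class PrioritySpider(scrapy.Spider):
        name = "priority_example"  # hypothetical spider name

        def start_requests(self):
            # The Scheduler dequeues higher-priority requests first (default: 0).
            yield scrapy.Request("https://example.com/important", priority=10)
            yield scrapy.Request("https://example.com/whenever", priority=0)

        def parse(self, response):
            self.logger.info("Fetched %s", response.url)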

Downloader

The Downloader is responsible for fetching web pages and feeding them to the engine which, in turn, feeds them to the spiders.
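The Downloader is tuned through settings rather than code. A sketch of a settings.py excerpt using real Scrapy settings (the values are illustrative, not recommendations):

    # settings.py (excerpt)
    CONCURRENT_REQUESTS = 16             # max requests Scrapy downloads in parallel
    CONCURRENT_REQUESTS_PER_DOMAIN = 8   # cap per target domain
    DOWNLOAD_DELAY = 0.5                 # seconds to wait between requests to the same site
    DOWNLOAD_TIMEOUT = 180               # give up on a download after this many seconds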

Spiders

Spiders are custom classes written by Scrapy users to parse responses and extract items (aka scraped items) from them, or additional requests to follow. For more information see Spiders.
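A minimal sketch of such a spider, extracting items and following pagination links (the site and CSS selectors follow the quotes.toscrape.com example used in the Scrapy tutorial):

    import scrapy

    class QuotesSpider(scrapy.Spider):
        name = "quotes"
        start_urls = ["https://quotes.toscrape.com/"]

        def parse(self, response):
            # Extract one item per quote block on the page.
            for quote in response.css("div.quote"):
                yield {
                    "text": quote.css("span.text::text").get(),
                    "author": quote.css("small.author::text").get(),
                }
            # Follow the "next page" link, if there is one.
            next_page = response.css("li.next a::attr(href)").get()
            if next_page is not None:
                yield response.follow(next_page, callback=self.parse)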

Item Pipeline

The Item Pipeline is responsible for processing the items once they have been extracted (or scraped) by the spiders. Typical tasks include cleansing, validation and persistence (like storing the item in a database). For more information see Item Pipeline.
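A minimal pipeline sketch showing validation and cleansing; the class name and the price field are hypothetical. A pipeline implements process_item() and either returns the (possibly modified) item or raises DropItem:

    from scrapy.exceptions import DropItem

    class PricePipeline:  # hypothetical pipeline
        def process_item(self, item, spider):
            if item.get("price") is None:
                # Dropped items are not processed by any further pipeline.
                raise DropItem(f"missing price in {item!r}")
            item["price"] = float(item["price"])  # cleansing: normalize to float
            return item

To take effect, a pipeline must be enabled in settings, e.g. ITEM_PIPELINES = {"myproject.pipelines.PricePipeline": 300} (the path is hypothetical; lower numbers run earlier).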

Downloader middlewares

Downloader middlewares are specific hooks that sit between the Engine and the Downloader and process requests as they pass from the Engine to the Downloader, and responses as they pass from the Downloader to the Engine.

Use a Downloader middleware if you need to do one of the following:

  • process a request just before it is sent to the Downloader (i.e. right before Scrapy sends the request to the website);
  • change received response before passing it to a spider;
  • send a new Request instead of passing received response to a spider;
  • pass response to a spider without fetching a web page;
  • silently drop some requests.

For more information see Downloader Middleware.
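As a sketch covering two of the cases above, silently dropping some requests and tweaking the rest before they reach the Downloader (the class name, header, and filtering rule are hypothetical):

    from scrapy.exceptions import IgnoreRequest

    class SketchDownloaderMiddleware:  # hypothetical middleware
        def process_request(self, request, spider):
            # Called for each request on its way to the Downloader. Returning
            # None continues processing; raising IgnoreRequest drops it silently.
            if request.url.endswith(".pdf"):
                raise IgnoreRequest("skipping PDF downloads")
            request.headers.setdefault(b"X-Example", b"1")
            return None

        def process_response(self, request, response, spider):
            # Called for each response on its way back to the Engine; must
            # return a Response (possibly modified) or a new Request.
            return response

Like pipelines, middlewares are enabled in settings, e.g. DOWNLOADER_MIDDLEWARES = {"myproject.middlewares.SketchDownloaderMiddleware": 543} (hypothetical path).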

Spider middlewares

Spider middlewares are specific hooks that sit between the Engine and the Spiders and are able to process spider input (responses) and output (items and requests).

Use a Spider middleware if you need to:

  • post-process output of spider callbacks - change/add/remove requests or items;
  • post-process start_requests;
  • handle spider exceptions;
  • call errback instead of callback for some of the requests based on response content.

For more information see Spider Middleware.
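A sketch covering post-processing of callback output and exception handling (the class name and the added field are hypothetical):

    class SketchSpiderMiddleware:  # hypothetical middleware
        def process_spider_output(self, response, result, spider):
            # Post-process everything a spider callback yielded: items and
            # requests pass through here on their way back to the Engine.
            for obj in result:
                if isinstance(obj, dict):
                    obj.setdefault("source_url", response.url)  # hypothetical field
                yield obj

        def process_spider_exception(self, response, exception, spider):
            # Handle exceptions raised in spider callbacks; returning None
            # defers to other middlewares and the default error handling.
            spider.logger.warning("callback failed for %s: %r", response.url, exception)
            return None

Spider middlewares are enabled via the SPIDER_MIDDLEWARES setting, analogous to DOWNLOADER_MIDDLEWARES.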

Event-driven networking

Scrapy is written with Twisted, a popular event-driven networking framework for Python. Thus, it’s implemented using non-blocking (aka asynchronous) code for concurrency.
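In practice this means a crawl runs inside Twisted's reactor loop rather than in threads. A sketch of running a spider from a script with CrawlerProcess, which starts and stops the reactor for you (the spider import path is hypothetical):

    from scrapy.crawler import CrawlerProcess

    from myproject.spiders.quotes import QuotesSpider  # hypothetical import path

    process = CrawlerProcess(settings={"LOG_LEVEL": "INFO"})
    process.crawl(QuotesSpider)
    process.start()  # blocks while the reactor runs; downloads proceed
                     # concurrently without extra threads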

For more information about asynchronous programming and Twisted see these links: