Downloader Middleware

The downloader middleware is a framework of hooks into Scrapy’s request/response processing. It’s a light, low-level system for globally altering Scrapy’s requests and responses.

Activating a downloader middleware

To activate a downloader middleware component, add it to the DOWNLOADER_MIDDLEWARES setting, which is a dict whose keys are the middleware class paths and whose values are the middleware orders.

Here’s an example:

DOWNLOADER_MIDDLEWARES = {
    'myproject.middlewares.CustomDownloaderMiddleware': 543,
}

The DOWNLOADER_MIDDLEWARES setting is merged with the DOWNLOADER_MIDDLEWARES_BASE setting defined in Scrapy (and not meant to be overridden) and then sorted by order to get the final sorted list of enabled middlewares: the first middleware is the one closer to the engine and the last is the one closer to the downloader.

To decide which order to assign to your middleware see the DOWNLOADER_MIDDLEWARES_BASE setting and pick a value according to where you want to insert the middleware. The order does matter because each middleware performs a different action and your middleware could depend on some previous (or subsequent) middleware being applied.

If you want to disable a built-in middleware (the ones defined in DOWNLOADER_MIDDLEWARES_BASE and enabled by default) you must define it in your project’s DOWNLOADER_MIDDLEWARES setting and assign None as its value. For example, if you want to disable the user-agent middleware:

DOWNLOADER_MIDDLEWARES = {
    'myproject.middlewares.CustomDownloaderMiddleware': 543,
    'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': None,
}

Finally, keep in mind that some middlewares may need to be enabled through a particular setting. See each middleware documentation for more info.

Writing your own downloader middleware

Each middleware component is a Python class that defines one or more of the following methods:

class scrapy.downloadermiddlewares.DownloaderMiddleware

Any of the downloader middleware methods may also return a deferred.

process_request(request, spider)

This method is called for each request that goes through the download middleware.

process_request() should either: return None, return a Response object, return a Request object, or raise IgnoreRequest.

If it returns None, Scrapy will continue processing this request, executing all other middlewares until, finally, the appropriate downloader handler is called and the request is performed (and its response downloaded).

If it returns a Response object, Scrapy won't bother calling any other process_request() or process_exception() methods, or the appropriate download function; it'll return that response. The process_response() methods of installed middleware are always called on every response.

If it returns a Request object, Scrapy will stop calling process_request methods and reschedule the returned request. Once the newly returned request is performed, the appropriate middleware chain will be called on the downloaded response.

If it raises an IgnoreRequest exception, the process_exception() methods of installed downloader middleware will be called. If none of them handle the exception, the errback function of the request (Request.errback) is called. If no code handles the raised exception, it is ignored and not logged (unlike other exceptions).

Parameters:
  • request (Request object) – the request being processed
  • spider (Spider object) – the spider for which this request is intended
process_response(request, response, spider)

process_response() should either: return a Response object, return a Request object, or raise an IgnoreRequest exception.

If it returns a Response (it could be the same response passed in, or a brand-new one), that response will continue to be processed with the process_response() methods of the other middlewares in the chain.

If it returns a Request object, the middleware chain is halted and the returned request is rescheduled to be downloaded in the future. This is the same behavior as when a request is returned from process_request().

If it raises an IgnoreRequest exception, the errback function of the request (Request.errback) is called. If no code handles the raised exception, it is ignored and not logged (unlike other exceptions).

Parameters:
  • request (Request object) – the request that originated the response
  • response (Response object) – the response being processed
  • spider (Spider object) – the spider for which this response is intended
process_exception(request, exception, spider)

Scrapy calls process_exception() when a download handler or a process_request() (from a downloader middleware) raises an exception (including an IgnoreRequest exception).

process_exception() should return: either None, a Response object, or a Request object.

If it returns None, Scrapy will continue processing this exception, executing any other process_exception() methods of installed middleware, until no middleware is left and the default exception handling kicks in.

If it returns a Response object, the process_response() method chain of installed middleware is started, and Scrapy won't bother calling any other process_exception() methods of middleware.

If it returns a Request object, the returned request is rescheduled to be downloaded in the future. This stops the execution of the process_exception() methods of the middleware, the same as returning a response would.

Parameters:
  • request (Request object) – the request that generated the exception
  • exception (an Exception object) – the raised exception
  • spider (Spider object) – the spider for which this request is intended
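
To tie the three methods together, here is a minimal sketch of the CustomDownloaderMiddleware referenced in the activation example above. The body is illustrative only (the X-Example header is a made-up assumption), not a middleware that ships with Scrapy:

class CustomDownloaderMiddleware(object):

    def process_request(self, request, spider):
        # Returning None continues processing this request through the
        # remaining middlewares and, finally, the downloader.
        request.headers.setdefault('X-Example', 'value')
        return None

    def process_response(self, request, response, spider):
        # Returning the response passes it to the next middleware's
        # process_response() and, eventually, to the spider.
        return response

    def process_exception(self, request, exception, spider):
        # Returning None lets the other middlewares' process_exception()
        # methods and the default exception handling take over.
        return None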

Built-in downloader middleware reference

This page describes all downloader middleware components that come with Scrapy. For information on how to use them and how to write your own downloader middleware, see the downloader middleware usage guide.

For a list of the components enabled by default (and their orders) see the DOWNLOADER_MIDDLEWARES_BASE setting.

CookiesMiddleware

class scrapy.downloadermiddlewares.cookies.CookiesMiddleware

This middleware enables working with sites that require cookies, such as those that use sessions. It keeps track of cookies sent by web servers, and sends them back on subsequent requests (from that spider), just like web browsers do.

The following settings can be used to configure the cookie middleware:

COOKIES_ENABLED

Default: True

Whether to enable the cookies middleware. If disabled, no cookies will be sent to web servers.

COOKIES_DEBUG

Default: False

If enabled, Scrapy will log all cookies sent in requests (i.e. Cookie header) and all cookies received in responses (i.e. Set-Cookie header).

Here’s an example of a log with COOKIES_DEBUG enabled:

2011-04-06 14:35:10-0300 [scrapy] INFO: Spider opened
2011-04-06 14:35:10-0300 [scrapy] DEBUG: Sending cookies to: <GET http://www.diningcity.com/netherlands/index.html>
        Cookie: clientlanguage_nl=en_EN
2011-04-06 14:35:14-0300 [scrapy] DEBUG: Received cookies from: <200 http://www.diningcity.com/netherlands/index.html>
        Set-Cookie: JSESSIONID=B~FA4DC0C496C8762AE4F1A620EAB34F38; Path=/
        Set-Cookie: ip_isocode=US
        Set-Cookie: clientlanguage_nl=en_EN; Expires=Thu, 07-Apr-2011 21:21:34 GMT; Path=/
2011-04-06 14:49:50-0300 [scrapy] DEBUG: Crawled (200) <GET http://www.diningcity.com/netherlands/index.html> (referer: None)
[...]

DefaultHeadersMiddleware

class scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware

This middleware sets all default request headers specified in the DEFAULT_REQUEST_HEADERS setting.

DownloadTimeoutMiddleware

class scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware

This middleware sets the download timeout for requests specified in the DOWNLOAD_TIMEOUT setting or download_timeout spider attribute.

Note

You can also set download timeout per-request using download_timeout Request.meta key; this is supported even when DownloadTimeoutMiddleware is disabled.
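
For example, a hypothetical spider could set a 30-second timeout for a single request like this (spider name and URL are placeholders):

import scrapy

class SlowSiteSpider(scrapy.Spider):
    name = 'slow_site_example'

    def start_requests(self):
        # a 30-second download timeout applied to this request only
        yield scrapy.Request('http://www.example.com/', meta={'download_timeout': 30})

    def parse(self, response):
        pass  # handle the response here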

HttpAuthMiddleware

class scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware

This middleware authenticates requests generated from certain spiders using Basic access authentication (aka. HTTP auth).

To enable HTTP authentication from certain spiders, set the http_user and http_pass attributes of those spiders.

Example:

from scrapy.spiders import CrawlSpider

class SomeIntranetSiteSpider(CrawlSpider):

    http_user = 'someuser'
    http_pass = 'somepass'
    name = 'intranet.example.com'

    # .. rest of the spider code omitted ...

HttpCacheMiddleware

class scrapy.downloadermiddlewares.httpcache.HttpCacheMiddleware

This middleware provides a low-level cache for all HTTP requests and responses. It has to be combined with a cache storage backend as well as a cache policy.

Scrapy ships with the following HTTP cache storage backends, described below: the Filesystem storage backend (default), the DBM storage backend and the LevelDB storage backend.

You can change the HTTP cache storage backend with the HTTPCACHE_STORAGE setting. Or you can also implement your own storage backend.

Scrapy ships with two HTTP cache policies, described below: the Dummy policy (default) and the RFC2616 policy.

You can change the HTTP cache policy with the HTTPCACHE_POLICY setting. Or you can also implement your own policy.

You can also avoid caching a response under every policy by setting the dont_cache meta key to True.
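
For instance, a hypothetical spider could exclude a single request from the cache like this (spider name and URL are placeholders):

import scrapy

class NoCacheSpider(scrapy.Spider):
    name = 'nocache_example'

    def start_requests(self):
        # never store or serve this response from the HTTP cache
        yield scrapy.Request('http://www.example.com/', meta={'dont_cache': True})

    def parse(self, response):
        pass  # handle the response here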

Dummy policy (default)

This policy has no awareness of any HTTP Cache-Control directives. Every request and its corresponding response are cached. When the same request is seen again, the response is returned without transferring anything from the Internet.

The Dummy policy is useful for testing spiders faster (without having to wait for downloads every time) and for trying your spider offline, when an Internet connection is not available. The goal is to be able to “replay” a spider run exactly as it ran before.

In order to use this policy, set HTTPCACHE_POLICY to scrapy.extensions.httpcache.DummyPolicy.

RFC2616 policy

This policy provides an RFC2616 compliant HTTP cache, i.e. with HTTP Cache-Control awareness. It is aimed at production use in continuous runs, where it avoids downloading unmodified data (to save bandwidth and speed up crawls).

What is implemented:

  • Do not attempt to store responses/requests with the no-store cache-control directive set

  • Do not serve responses from cache if no-cache cache-control directive is set even for fresh responses

  • Compute freshness lifetime from max-age cache-control directive

  • Compute freshness lifetime from Expires response header

  • Compute freshness lifetime from Last-Modified response header (heuristic used by Firefox)

  • Compute current age from Age response header

  • Compute current age from Date header

  • Revalidate stale responses based on Last-Modified response header

  • Revalidate stale responses based on ETag response header

  • Set Date header for any received response missing it

  • Support max-stale cache-control directive in requests

    This allows spiders to be configured with the full RFC2616 cache policy, but avoid revalidation on a request-by-request basis, while remaining conformant with the HTTP spec.

    Example:

    Add Cache-Control: max-stale=600 to Request headers to accept responses that have exceeded their expiration time by no more than 600 seconds.

    See also: RFC2616, 14.9.3
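
    A minimal sketch of that example in spider code (spider name and URL are placeholders):

    import scrapy

    class MaxStaleSpider(scrapy.Spider):
        name = 'maxstale_example'

        def start_requests(self):
            # accept cached responses that expired no more than 600 seconds ago
            yield scrapy.Request('http://www.example.com/',
                                 headers={'Cache-Control': 'max-stale=600'})

        def parse(self, response):
            pass  # handle the response here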

What is still missing:

In order to use this policy, set HTTPCACHE_POLICY to scrapy.extensions.httpcache.RFC2616Policy.

Filesystem storage backend (default)

File system storage backend is available for the HTTP cache middleware.

In order to use this storage backend, set HTTPCACHE_STORAGE to scrapy.extensions.httpcache.FilesystemCacheStorage (the default).

Each request/response pair is stored in a different directory containing the following files:

  • request_body - the plain request body
  • request_headers - the request headers (in raw HTTP format)
  • response_body - the plain response body
  • response_headers - the response headers (in raw HTTP format)
  • meta - some metadata of this cache resource in Python repr() format (grep-friendly format)
  • pickled_meta - the same metadata in meta but pickled for more efficient deserialization

The directory name is made from the request fingerprint (see scrapy.utils.request.fingerprint), and one level of subdirectories is used to avoid creating too many files into the same directory (which is inefficient in many file systems). An example directory could be:

/path/to/cache/dir/example.com/72/72811f648e718090f041317756c03adb0ada46c7

DBM storage backend

New in version 0.13.

A DBM storage backend is also available for the HTTP cache middleware.

By default, it uses the anydbm module, but you can change it with the HTTPCACHE_DBM_MODULE setting.

In order to use this storage backend, set HTTPCACHE_STORAGE to scrapy.extensions.httpcache.DbmCacheStorage.

LevelDB storage backend

New in version 0.23.

A LevelDB storage backend is also available for the HTTP cache middleware.

This backend is not recommended for development because only one process can access LevelDB databases at the same time, so you can’t run a crawl and open the scrapy shell in parallel for the same spider.

In order to use this storage backend, set HTTPCACHE_STORAGE to scrapy.extensions.httpcache.LeveldbCacheStorage and install the LevelDB python bindings (e.g. pip install leveldb).

HTTPCache middleware settings

The HttpCacheMiddleware can be configured through the following settings:

HTTPCACHE_ENABLED

New in version 0.11.

Default: False

Whether the HTTP cache will be enabled.

Changed in version 0.11: Before 0.11, HTTPCACHE_DIR was used to enable cache.

HTTPCACHE_EXPIRATION_SECS

Default: 0

Expiration time for cached requests, in seconds.

Cached requests older than this time will be re-downloaded. If zero, cached requests will never expire.

Changed in version 0.11: Before 0.11, zero meant cached requests always expire.

HTTPCACHE_DIR

Default: 'httpcache'

The directory to use for storing the (low-level) HTTP cache. If empty, the HTTP cache will be disabled. If a relative path is given, it is taken relative to the project data dir. For more info see: Default structure of Scrapy projects.

HTTPCACHE_IGNORE_HTTP_CODES

New in version 0.10.

Default: []

Don’t cache responses with these HTTP codes.

HTTPCACHE_IGNORE_MISSING

Default: False

If enabled, requests not found in the cache will be ignored instead of downloaded.

HTTPCACHE_IGNORE_SCHEMES

New in version 0.10.

Default: ['file']

Don’t cache responses with these URI schemes.

HTTPCACHE_STORAGE

Default: 'scrapy.extensions.httpcache.FilesystemCacheStorage'

The class which implements the cache storage backend.

HTTPCACHE_DBM_MODULE

New in version 0.13.

Default: 'anydbm'

The database module to use in the DBM storage backend. This setting is specific to the DBM backend.

HTTPCACHE_POLICY

New in version 0.18.

Default: 'scrapy.extensions.httpcache.DummyPolicy'

The class which implements the cache policy.

HTTPCACHE_GZIP

New in version 1.0.

Default: False

If enabled, all cached data will be compressed with gzip. This setting is specific to the Filesystem backend.

HTTPCACHE_ALWAYS_STORE

New in version 1.1.

Default: False

If enabled, will cache pages unconditionally.

A spider may wish to have all responses available in the cache, for future use with Cache-Control: max-stale, for instance. The DummyPolicy caches all responses but never revalidates them, and sometimes a more nuanced policy is desirable.

This setting still respects Cache-Control: no-store directives in responses. If you don’t want that, filter no-store out of the Cache-Control headers in responses you feed to the cache middleware.

HTTPCACHE_IGNORE_RESPONSE_CACHE_CONTROLS

New in version 1.1.

Default: []

List of Cache-Control directives in responses to be ignored.

Sites often set “no-store”, “no-cache”, “must-revalidate”, etc., but get upset at the traffic a spider can generate if it respects those directives. This setting allows you to selectively ignore Cache-Control directives that are known to be unimportant for the sites being crawled.

We assume that the spider will not issue Cache-Control directives in requests unless it actually needs them, so directives in requests are not filtered.
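
For example, in a project's settings.py (the specific directives to ignore here are an illustrative choice, not a recommendation):

HTTPCACHE_IGNORE_RESPONSE_CACHE_CONTROLS = ['no-cache', 'no-store', 'must-revalidate']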

HttpCompressionMiddleware

class scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware

This middleware allows compressed (gzip, deflate) traffic to be sent/received from web sites.

HttpCompressionMiddleware Settings

COMPRESSION_ENABLED

Default: True

Whether the Compression middleware will be enabled.

ChunkedTransferMiddleware

class scrapy.downloadermiddlewares.chunked.ChunkedTransferMiddleware

This middleware adds support for chunked transfer encoding.

HttpProxyMiddleware

New in version 0.8.

class scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware

This middleware sets the HTTP proxy to use for requests, by setting the proxy meta value for Request objects.

Like the Python standard library modules urllib and urllib2, it obeys the following environment variables:

  • http_proxy
  • https_proxy
  • no_proxy

You can also set the meta key proxy per-request, to a value like http://some_proxy_server:port.
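
A minimal sketch of per-request proxy assignment (spider name, URL and proxy address are placeholders):

import scrapy

class ProxiedSpider(scrapy.Spider):
    name = 'proxy_example'

    def start_requests(self):
        # route this request through the given proxy
        yield scrapy.Request('http://www.example.com/',
                             meta={'proxy': 'http://some_proxy_server:3128'})

    def parse(self, response):
        pass  # handle the response here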

RedirectMiddleware

class scrapy.downloadermiddlewares.redirect.RedirectMiddleware

This middleware handles redirection of requests based on response status.

The urls which the request goes through (while being redirected) can be found in the redirect_urls Request.meta key.

The RedirectMiddleware can be configured through the following settings (see the settings documentation for more info):

If Request.meta has dont_redirect key set to True, the request will be ignored by this middleware.

If you want to handle some redirect status codes in your spider, you can specify these in the handle_httpstatus_list spider attribute.

For example, if you want the redirect middleware to ignore 301 and 302 responses (and pass them through to your spider) you can do this:

class MySpider(CrawlSpider):
    handle_httpstatus_list = [301, 302]

The handle_httpstatus_list key of Request.meta can also be used to specify which response codes to allow on a per-request basis. You can also set the meta key handle_httpstatus_all to True if you want to allow any response code for a request.
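
For example, a hypothetical request that receives 301 and 302 responses directly in its callback instead of having the middleware follow them (spider name and URL are placeholders):

import scrapy

class ManualRedirectSpider(scrapy.Spider):
    name = 'manual_redirect_example'

    def start_requests(self):
        # deliver 301/302 responses to the callback for this request only
        yield scrapy.Request('http://www.example.com/',
                             meta={'handle_httpstatus_list': [301, 302]})

    def parse(self, response):
        pass  # response.status may be 301 or 302 here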

RedirectMiddleware settings

REDIRECT_ENABLED

New in version 0.13.

Default: True

Whether the Redirect middleware will be enabled.

REDIRECT_MAX_TIMES

Default: 20

The maximum number of redirections that will be followed for a single request.

MetaRefreshMiddleware

class scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware

This middleware handles redirection of requests based on meta-refresh html tag.

The MetaRefreshMiddleware can be configured through the following settings (see the settings documentation for more info):

This middleware obeys the REDIRECT_MAX_TIMES setting, and the dont_redirect and redirect_urls request meta keys, as described for RedirectMiddleware.

MetaRefreshMiddleware settings

METAREFRESH_ENABLED

New in version 0.17.

Default: True

Whether the Meta Refresh middleware will be enabled.

METAREFRESH_MAXDELAY

Default: 100

The maximum meta-refresh delay (in seconds) to follow the redirection. Some sites use meta-refresh for redirecting to a session expired page, so we restrict automatic redirection to the maximum delay.

RetryMiddleware

class scrapy.downloadermiddlewares.retry.RetryMiddleware

A middleware to retry failed requests that are potentially caused by temporary problems such as a connection timeout or HTTP 500 error.

Failed pages are collected during the scraping process and rescheduled at the end, once the spider has finished crawling all regular (non-failed) pages. Once there are no more failed pages to retry, this middleware sends a signal (retry_complete), so other extensions could connect to that signal.

The RetryMiddleware can be configured through the following settings (see the settings documentation for more info):

If Request.meta has dont_retry key set to True, the request will be ignored by this middleware.
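
For example, a hypothetical request that should never be retried (spider name and URL are placeholders):

import scrapy

class NoRetrySpider(scrapy.Spider):
    name = 'noretry_example'

    def start_requests(self):
        # skip retries for this request, even on normally retryable errors
        yield scrapy.Request('http://www.example.com/', meta={'dont_retry': True})

    def parse(self, response):
        pass  # handle the response here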

RetryMiddleware Settings

RETRY_ENABLED

New in version 0.13.

Default: True

Whether the Retry middleware will be enabled.

RETRY_TIMES

Default: 2

Maximum number of times to retry, in addition to the first download.

RETRY_HTTP_CODES

Default: [500, 502, 503, 504, 408]

Which HTTP response codes to retry. Other errors (DNS lookup issues, connections lost, etc) are always retried.

In some cases you may want to add 400 to RETRY_HTTP_CODES because it is a common code used to indicate server overload. It is not included by default because HTTP specs say so.
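
For example, in a project's settings.py (adding 400 here is an illustrative choice, as discussed above):

RETRY_HTTP_CODES = [500, 502, 503, 504, 408, 400]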

RobotsTxtMiddleware

class scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware

This middleware filters out requests forbidden by the robots.txt exclusion standard.

To make sure Scrapy respects robots.txt make sure the middleware is enabled and the ROBOTSTXT_OBEY setting is enabled.

If Request.meta has dont_obey_robotstxt key set to True the request will be ignored by this middleware even if ROBOTSTXT_OBEY is enabled.
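
For example, a hypothetical request that bypasses robots.txt filtering (spider name and URL are placeholders):

import scrapy

class IgnoreRobotsSpider(scrapy.Spider):
    name = 'ignore_robots_example'

    def start_requests(self):
        # ignore robots.txt rules for this request only
        yield scrapy.Request('http://www.example.com/',
                             meta={'dont_obey_robotstxt': True})

    def parse(self, response):
        pass  # handle the response here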

DownloaderStats

class scrapy.downloadermiddlewares.stats.DownloaderStats

Middleware that stores stats of all requests, responses and exceptions that pass through it.

To use this middleware you must enable the DOWNLOADER_STATS setting.

UserAgentMiddleware

class scrapy.downloadermiddlewares.useragent.UserAgentMiddleware

Middleware that allows spiders to override the default user agent.

In order for a spider to override the default user agent, its user_agent attribute must be set.
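
For example (spider name, URL and user agent string are placeholders):

import scrapy

class CustomUserAgentSpider(scrapy.Spider):
    name = 'custom_ua_example'
    # this value overrides the project-wide USER_AGENT for this spider
    user_agent = 'MyCrawler/1.0 (+http://www.example.com/bot-info)'
    start_urls = ['http://www.example.com/']

    def parse(self, response):
        pass  # handle the response here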

AjaxCrawlMiddleware

class scrapy.downloadermiddlewares.ajaxcrawl.AjaxCrawlMiddleware

Middleware that finds ‘AJAX crawlable’ page variants based on meta-fragment html tag. See https://developers.google.com/webmasters/ajax-crawling/docs/getting-started for more info.

Note

Scrapy finds ‘AJAX crawlable’ pages for URLs like 'http://example.com/#!foo=bar' even without this middleware. AjaxCrawlMiddleware is necessary when the URL doesn’t contain '#!'. This is often the case for ‘index’ or ‘main’ website pages.

AjaxCrawlMiddleware Settings

AJAXCRAWL_ENABLED

New in version 0.21.

Default: False

Whether the AjaxCrawlMiddleware will be enabled. You may want to enable it for broad crawls.

HttpProxyMiddleware settings

HTTPPROXY_AUTH_ENCODING

Default: "latin-1"

The default encoding for proxy authentication on HttpProxyMiddleware.