Settings¶
The Scrapy settings allow you to customize the behaviour of all Scrapy components, including the core, extensions, pipelines and spiders themselves.
The infrastructure of the settings provides a global namespace of key-value mappings that the code can use to pull configuration values from. The settings can be populated through different mechanisms, which are described below.
The settings are also the mechanism for selecting the currently active Scrapy project (in case you have many).
For a list of available built-in settings see: Built-in settings reference.
Designating the settings¶
When you use Scrapy, you have to tell it which settings you're using. You can do this by using an environment variable, SCRAPY_SETTINGS_MODULE.
The value of SCRAPY_SETTINGS_MODULE should be in Python path syntax, e.g. myproject.settings. Note that the settings module should be on the Python import search path.
Populating the settings¶
Settings can be populated using different mechanisms, each of which has a different precedence. Here is the list of them in decreasing order of precedence:
- Command line options (most precedence)
- Settings per-spider
- Project settings module
- Default settings per-command
- Default global settings (less precedence)
The population of these settings sources is taken care of internally, but manual handling is possible using API calls. See the Settings API topic for reference.
These mechanisms are described in more detail below.
1. Command line options¶
Arguments provided by the command line are the ones that take most precedence, overriding any other options. You can explicitly override one (or more) settings using the -s (or --set) command line option.
Example:
scrapy crawl myspider -s LOG_FILE=scrapy.log
2. Settings per-spider¶
Spiders (see the Spiders chapter for reference) can define their own settings that will take precedence and override the project ones. They can do so by setting their custom_settings attribute:
class MySpider(scrapy.Spider):
    name = 'myspider'

    custom_settings = {
        'SOME_SETTING': 'some value',
    }
3. Project settings module¶
The project settings module is the standard configuration file for your Scrapy project, and it's where most of your custom settings will be populated. For a standard Scrapy project, this means you'll be adding or changing the settings in the settings.py file created for your project.
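For illustration, a minimal settings.py might look like the sketch below (the project name and the chosen values are hypothetical):
# settings.py -- a hypothetical project settings module
BOT_NAME = 'myproject'

SPIDER_MODULES = ['myproject.spiders']
NEWSPIDER_MODULE = 'myproject.spiders'

# Project-specific overrides of the global defaults
ROBOTSTXT_OBEY = True
DOWNLOAD_DELAY = 0.5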
4. Default settings per-command¶
Each Scrapy tool command can have its own default settings, which override the global default settings. Those custom command settings are specified in the default_settings attribute of the command class.
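As a rough sketch of that mechanism (the command itself is made up; ScrapyCommand and its default_settings attribute are the real hooks), a custom command could ship its own defaults like this:
from scrapy.commands import ScrapyCommand

class MyCommand(ScrapyCommand):  # hypothetical custom command
    # These defaults apply only while this command runs, and are still
    # overridden by the project settings module, per-spider settings
    # and command line -s options.
    default_settings = {'LOG_ENABLED': False}

    def short_desc(self):
        return "A custom command with logging disabled by default"

    def run(self, args, opts):
        pass  # the command logic would go here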
5. Default global settings¶
The global defaults are located in the scrapy.settings.default_settings module and documented in the Built-in settings reference section.
How to access settings¶
In a spider, the settings are available through self.settings:
class MySpider(scrapy.Spider):
    name = 'myspider'
    start_urls = ['http://example.com']

    def parse(self, response):
        print("Existing settings: %s" % self.settings.attributes.keys())
Note
The settings attribute is set in the base Spider class after the spider is initialized. If you want to use the settings before the initialization (e.g., in your spider's __init__() method), you'll need to override the from_crawler() method.
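For example, a minimal sketch of that pattern (the delay argument is hypothetical; from_crawler() and crawler.settings are the real APIs):
import scrapy

class MySpider(scrapy.Spider):
    name = 'myspider'

    def __init__(self, delay=0.0, *args, **kwargs):
        super(MySpider, self).__init__(*args, **kwargs)
        self.delay = delay  # already usable here, unlike self.settings

    @classmethod
    def from_crawler(cls, crawler, *args, **kwargs):
        # crawler.settings is available here, before __init__() runs
        kwargs['delay'] = crawler.settings.getfloat('DOWNLOAD_DELAY')
        return super(MySpider, cls).from_crawler(crawler, *args, **kwargs)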
Settings can be accessed through the scrapy.crawler.Crawler.settings attribute of the Crawler that is passed to the from_crawler method in extensions, middlewares and item pipelines:
class MyExtension(object):
    def __init__(self, log_is_enabled=False):
        if log_is_enabled:
            print("log is enabled!")

    @classmethod
    def from_crawler(cls, crawler):
        settings = crawler.settings
        return cls(settings.getbool('LOG_ENABLED'))
The settings object can be used like a dict (e.g., settings['LOG_ENABLED']), but it's usually preferred to extract the setting in the format you need it to avoid type errors, using one of the methods provided by the Settings API.
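For instance, a short sketch of those typed accessors, reusing the crawler.settings object from the example above:
settings = crawler.settings
settings.getbool('LOG_ENABLED')         # a real bool, even if stored as the string 'True'
settings.getint('CONCURRENT_REQUESTS')
settings.getfloat('DOWNLOAD_DELAY')
settings.getlist('SPIDER_MODULES')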
Rationale for setting names¶
Setting names are usually prefixed with the component that they configure. For example, proper setting names for a fictional robots.txt extension would be ROBOTSTXT_ENABLED, ROBOTSTXT_OBEY, ROBOTSTXT_CACHEDIR, etc.
Built-in settings reference¶
Here’s a list of all available Scrapy settings, in alphabetical order, along with their default values and the scope where they apply.
The scope, where available, shows where the setting is being used, if it’s tied to any particular component. In that case the module of that component will be shown, typically an extension, middleware or pipeline. It also means that the component must be enabled in order for the setting to have any effect.
BOT_NAME¶
Default: 'scrapybot'
The name of the bot implemented by this Scrapy project (also known as the project name). This will be used to construct the User-Agent by default, and also for logging.
It's automatically populated with your project name when you create your project with the startproject command.
CONCURRENT_ITEMS¶
Default: 100
Maximum number of concurrent items (per response) to process in parallel in the Item Processor (also known as the Item Pipeline).
CONCURRENT_REQUESTS¶
Default: 16
The maximum number of concurrent (i.e. simultaneous) requests that will be performed by the Scrapy downloader.
CONCURRENT_REQUESTS_PER_DOMAIN¶
Default: 8
The maximum number of concurrent (i.e. simultaneous) requests that will be performed to any single domain.
See also: the AutoThrottle extension and its AUTOTHROTTLE_TARGET_CONCURRENCY option.
CONCURRENT_REQUESTS_PER_IP¶
Default: 0
The maximum number of concurrent (i.e. simultaneous) requests that will be performed to any single IP. If non-zero, the CONCURRENT_REQUESTS_PER_DOMAIN setting is ignored, and this one is used instead. In other words, concurrency limits will be applied per IP, not per domain.
This setting also affects DOWNLOAD_DELAY and the AutoThrottle extension: if CONCURRENT_REQUESTS_PER_IP is non-zero, the download delay is enforced per IP, not per domain.
DEFAULT_ITEM_CLASS¶
Default: 'scrapy.item.Item'
The default class that will be used for instantiating items in the Scrapy shell.
DEFAULT_REQUEST_HEADERS¶
Default:
{
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    'Accept-Language': 'en',
}
The default headers used for Scrapy HTTP Requests. They're populated in the DefaultHeadersMiddleware.
DEPTH_LIMIT¶
Default: 0
Scope: scrapy.spidermiddlewares.depth.DepthMiddleware
The maximum depth that will be allowed to crawl for any site. If zero, no limit will be imposed.
DEPTH_PRIORITY¶
Default: 0
Scope: scrapy.spidermiddlewares.depth.DepthMiddleware
An integer that is used to adjust the request priority based on its depth:
- if zero (default), no priority adjustment is made from depth
- a positive value will decrease the priority, i.e. higher-depth requests will be processed later; this is commonly used when doing breadth-first crawls (BFO)
- a negative value will increase the priority, i.e. higher-depth requests will be processed sooner (DFO)
See also: Does Scrapy crawl in breadth-first or depth-first order? about tuning Scrapy for BFO or DFO.
Note
This setting adjusts priority in the opposite way compared to other priority settings REDIRECT_PRIORITY_ADJUST and RETRY_PRIORITY_ADJUST.
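For example, the FAQ entry linked above suggests a configuration along these lines for breadth-first crawling (switching the scheduler to FIFO queues):
DEPTH_PRIORITY = 1
SCHEDULER_DISK_QUEUE = 'scrapy.squeues.PickleFifoDiskQueue'
SCHEDULER_MEMORY_QUEUE = 'scrapy.squeues.FifoMemoryQueue'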
DEPTH_STATS_VERBOSE¶
Default: False
Scope: scrapy.spidermiddlewares.depth.DepthMiddleware
Whether to collect verbose depth stats. If this is enabled, the number of requests for each depth is collected in the stats.
DOWNLOADER_HTTPCLIENTFACTORY¶
Default: 'scrapy.core.downloader.webclient.ScrapyHTTPClientFactory'
Defines a Twisted protocol.ClientFactory class to use for HTTP/1.0 connections (for HTTP10DownloadHandler).
Note
HTTP/1.0 is rarely used nowadays so you can safely ignore this setting, unless you use Twisted<11.1, or if you really want to use HTTP/1.0 and override DOWNLOAD_HANDLERS_BASE for http(s) scheme accordingly, i.e. to 'scrapy.core.downloader.handlers.http.HTTP10DownloadHandler'.
DOWNLOADER_CLIENTCONTEXTFACTORY¶
Default: 'scrapy.core.downloader.contextfactory.ScrapyClientContextFactory'
Represents the classpath to the ContextFactory to use.
Here, “ContextFactory” is a Twisted term for SSL/TLS contexts, defining the TLS/SSL protocol version to use, whether to do certificate verification, or even enable client-side authentication (and various other things).
Note
Scrapy's default context factory does NOT perform remote server certificate verification. This is usually fine for web scraping.
If you do need remote server certificate verification enabled, Scrapy also has another context factory class that you can set, 'scrapy.core.downloader.contextfactory.BrowserLikeContextFactory', which uses the platform's certificates to validate remote endpoints. This is only available if you use Twisted>=14.0.
If you do use a custom ContextFactory, make sure it accepts a method parameter at init (this is the OpenSSL.SSL method mapping DOWNLOADER_CLIENT_TLS_METHOD).
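For example, to opt into certificate verification as described above, set the class path quoted in this section:
DOWNLOADER_CLIENTCONTEXTFACTORY = 'scrapy.core.downloader.contextfactory.BrowserLikeContextFactory'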
DOWNLOADER_CLIENT_TLS_METHOD¶
Default: 'TLS'
Use this setting to customize the TLS/SSL method used by the default HTTP/1.1 downloader.
This setting must be one of these string values:
- 'TLS': maps to OpenSSL's TLS_method() (a.k.a. SSLv23_method()), which allows protocol negotiation, starting from the highest supported by the platform; default, recommended
- 'TLSv1.0': this value forces HTTPS connections to use TLS version 1.0; set this if you want the behavior of Scrapy<1.1
- 'TLSv1.1': forces TLS version 1.1
- 'TLSv1.2': forces TLS version 1.2
- 'SSLv3': forces SSL version 3 (not recommended)
Note
We recommend that you use PyOpenSSL>=0.13 and Twisted>=0.13 or above (Twisted>=14.0 if you can).
DOWNLOADER_MIDDLEWARES¶
Default: {}
A dict containing the downloader middlewares enabled in your project, and their orders. For more info see Activating a downloader middleware.
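For example, a sketch that enables a custom middleware and disables a built-in one (the custom class path is hypothetical; assigning None disables a middleware from DOWNLOADER_MIDDLEWARES_BASE):
DOWNLOADER_MIDDLEWARES = {
    'myproject.middlewares.CustomDownloaderMiddleware': 543,  # hypothetical
    'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': None,
}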
DOWNLOADER_MIDDLEWARES_BASE¶
Default:
{
    'scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware': 100,
    'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware': 300,
    'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware': 350,
    'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware': 400,
    'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': 500,
    'scrapy.downloadermiddlewares.retry.RetryMiddleware': 550,
    'scrapy.downloadermiddlewares.ajaxcrawl.AjaxCrawlMiddleware': 560,
    'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware': 580,
    'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 590,
    'scrapy.downloadermiddlewares.redirect.RedirectMiddleware': 600,
    'scrapy.downloadermiddlewares.cookies.CookiesMiddleware': 700,
    'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware': 750,
    'scrapy.downloadermiddlewares.chunked.ChunkedTransferMiddleware': 830,
    'scrapy.downloadermiddlewares.stats.DownloaderStats': 850,
    'scrapy.downloadermiddlewares.httpcache.HttpCacheMiddleware': 900,
}
A dict containing the downloader middlewares enabled by default in Scrapy. Low orders are closer to the engine, high orders are closer to the downloader. You should never modify this setting in your project, modify DOWNLOADER_MIDDLEWARES instead. For more info see Activating a downloader middleware.
DOWNLOAD_DELAY¶
Default: 0
The amount of time (in secs) that the downloader should wait before downloading consecutive pages from the same website. This can be used to throttle the crawling speed to avoid hitting servers too hard. Decimal numbers are supported. Example:
DOWNLOAD_DELAY = 0.25 # 250 ms of delay
This setting is also affected by the RANDOMIZE_DOWNLOAD_DELAY setting (which is enabled by default). By default, Scrapy doesn't wait a fixed amount of time between requests, but uses a random interval between 0.5 * DOWNLOAD_DELAY and 1.5 * DOWNLOAD_DELAY.
When CONCURRENT_REQUESTS_PER_IP is non-zero, delays are enforced per IP address instead of per domain.
You can also change this setting per spider by setting the download_delay spider attribute.
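For example, a minimal sketch of the per-spider attribute:
import scrapy

class MySpider(scrapy.Spider):
    name = 'myspider'
    download_delay = 2.0  # overrides DOWNLOAD_DELAY for this spider only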
DOWNLOAD_HANDLERS¶
Default: {}
A dict containing the request downloader handlers enabled in your project. See DOWNLOAD_HANDLERS_BASE for example format.
DOWNLOAD_HANDLERS_BASE¶
Default:
{
    'file': 'scrapy.core.downloader.handlers.file.FileDownloadHandler',
    'http': 'scrapy.core.downloader.handlers.http.HTTPDownloadHandler',
    'https': 'scrapy.core.downloader.handlers.http.HTTPDownloadHandler',
    's3': 'scrapy.core.downloader.handlers.s3.S3DownloadHandler',
    'ftp': 'scrapy.core.downloader.handlers.ftp.FTPDownloadHandler',
}
A dict containing the request download handlers enabled by default in Scrapy. You should never modify this setting in your project, modify DOWNLOAD_HANDLERS instead.
You can disable any of these download handlers by assigning None to their URI scheme in DOWNLOAD_HANDLERS. For example, to disable the built-in FTP handler (with no replacement), place this in your settings.py:
DOWNLOAD_HANDLERS = {
    'ftp': None,
}
DOWNLOAD_TIMEOUT¶
Default: 180
The amount of time (in secs) that the downloader will wait before timing out.
Note
This timeout can be set per spider using the download_timeout spider attribute and per-request using the download_timeout Request.meta key.
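For example, a sketch of the per-request form inside a spider callback (the URL is just an illustration):
def parse(self, response):
    # give this particular request 30 seconds instead of the global default
    yield scrapy.Request('http://example.com/slow-page',
                         meta={'download_timeout': 30})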
DOWNLOAD_MAXSIZE¶
Default: 1073741824 (1024MB)
The maximum response size (in bytes) that the downloader will download.
If you want to disable it, set it to 0.
Note
This size can be set per spider using the download_maxsize spider attribute and per-request using the download_maxsize Request.meta key.
This feature needs Twisted >= 11.1.
DOWNLOAD_WARNSIZE¶
Default: 33554432 (32MB)
The response size (in bytes) at which the downloader will start to warn.
If you want to disable it, set it to 0.
Note
This size can be set per spider using the download_warnsize spider attribute and per-request using the download_warnsize Request.meta key.
This feature needs Twisted >= 11.1.
DUPEFILTER_CLASS¶
Default: 'scrapy.dupefilters.RFPDupeFilter'
The class used to detect and filter duplicate requests.
The default (RFPDupeFilter) filters based on request fingerprint using the scrapy.utils.request.request_fingerprint function. In order to change the way duplicates are checked you could subclass RFPDupeFilter and override its request_fingerprint method. This method should accept a scrapy Request object and return its fingerprint (a string).
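As an illustration, a minimal sketch of such a subclass (the class name and behavior are hypothetical):
from scrapy.dupefilters import RFPDupeFilter
from scrapy.utils.request import request_fingerprint

class CaseInsensitiveDupeFilter(RFPDupeFilter):  # hypothetical
    """Treat URLs that differ only in case as duplicates."""
    def request_fingerprint(self, request):
        return request_fingerprint(request.replace(url=request.url.lower()))
It would then be enabled by pointing DUPEFILTER_CLASS at the class path, e.g. 'myproject.dupefilters.CaseInsensitiveDupeFilter' (a hypothetical module).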
DUPEFILTER_DEBUG¶
Default: False
By default, RFPDupeFilter only logs the first duplicate request. Setting DUPEFILTER_DEBUG to True will make it log all duplicate requests.
EDITOR¶
Default: depends on the environment
The editor to use for editing spiders with the edit command. It defaults to the EDITOR environment variable, if set. Otherwise, it defaults to vi (on Unix systems) or the IDLE editor (on Windows).
EXTENSIONS_BASE¶
Default:
{
    'scrapy.extensions.corestats.CoreStats': 0,
    'scrapy.extensions.telnet.TelnetConsole': 0,
    'scrapy.extensions.memusage.MemoryUsage': 0,
    'scrapy.extensions.memdebug.MemoryDebugger': 0,
    'scrapy.extensions.closespider.CloseSpider': 0,
    'scrapy.extensions.feedexport.FeedExporter': 0,
    'scrapy.extensions.logstats.LogStats': 0,
    'scrapy.extensions.spiderstate.SpiderState': 0,
    'scrapy.extensions.throttle.AutoThrottle': 0,
}
A dict containing the extensions available by default in Scrapy, and their orders. This setting contains all stable built-in extensions. Keep in mind that some of them need to be enabled through a setting.
For more information see the extensions user guide and the list of available extensions.
FEED_TEMPDIR¶
The Feed Temp dir allows you to set a custom folder to save crawler temporary files before uploading with FTP feed storage and Amazon S3.
ITEM_PIPELINES¶
Default: {}
A dict containing the item pipelines to use, and their orders. Order values are arbitrary, but it is customary to define them in the 0-1000 range. Lower orders process before higher orders.
Example:
ITEM_PIPELINES = {
    'mybot.pipelines.validate.ValidateMyItem': 300,
    'mybot.pipelines.validate.StoreMyItem': 800,
}
ITEM_PIPELINES_BASE¶
Default: {}
A dict containing the pipelines enabled by default in Scrapy. You should never modify this setting in your project, modify ITEM_PIPELINES instead.
LOG_FORMAT¶
Default: '%(asctime)s [%(name)s] %(levelname)s: %(message)s'
String for formatting log messages. Refer to the Python logging documentation for the whole list of available placeholders.
LOG_DATEFORMAT¶
Default: '%Y-%m-%d %H:%M:%S'
String for formatting date/time, expansion of the %(asctime)s placeholder in LOG_FORMAT. Refer to the Python datetime documentation for the whole list of available directives.
LOG_LEVEL¶
Default: 'DEBUG'
Minimum level to log. Available levels are: CRITICAL, ERROR, WARNING, INFO, DEBUG. For more info see Logging.
LOG_STDOUT¶
Default: False
If True, all standard output (and error) of your process will be redirected to the log. For example, if you print 'hello' it will appear in the Scrapy log.
MEMDEBUG_NOTIFY¶
Default: []
When memory debugging is enabled a memory report will be sent to the specified addresses if this setting is not empty, otherwise the report will be written to the log.
Example:
MEMDEBUG_NOTIFY = ['user@example.com']
MEMUSAGE_ENABLED¶
Default: False
Scope: scrapy.extensions.memusage
Whether to enable the memory usage extension, which will shut down the Scrapy process when it exceeds a memory limit, and also notify by email when that has happened.
MEMUSAGE_LIMIT_MB¶
Default: 0
Scope: scrapy.extensions.memusage
The maximum amount of memory to allow (in megabytes) before shutting down Scrapy (if MEMUSAGE_ENABLED is True). If zero, no check will be performed.
MEMUSAGE_CHECK_INTERVAL_SECONDS¶
New in version 1.1.
Default: 60.0
Scope: scrapy.extensions.memusage
The Memory usage extension checks the current memory usage, versus the limits set by MEMUSAGE_LIMIT_MB and MEMUSAGE_WARNING_MB, at fixed time intervals.
This sets the length of these intervals, in seconds.
MEMUSAGE_NOTIFY_MAIL¶
Default: False
Scope: scrapy.extensions.memusage
A list of emails to notify if the memory limit has been reached.
Example:
MEMUSAGE_NOTIFY_MAIL = ['user@example.com']
MEMUSAGE_REPORT¶
Default: False
Scope: scrapy.extensions.memusage
Whether to send a memory usage report after each spider has been closed.
MEMUSAGE_WARNING_MB¶
Default: 0
Scope: scrapy.extensions.memusage
The maximum amount of memory to allow (in megabytes) before sending a warning email notifying about it. If zero, no warning will be produced.
NEWSPIDER_MODULE¶
Default: ''
Module where to create new spiders using the genspider command.
Example:
NEWSPIDER_MODULE = 'mybot.spiders_dev'
RANDOMIZE_DOWNLOAD_DELAY¶
Default: True
If enabled, Scrapy will wait a random amount of time (between 0.5 * DOWNLOAD_DELAY and 1.5 * DOWNLOAD_DELAY) while fetching requests from the same website.
This randomization decreases the chance of the crawler being detected (and subsequently blocked) by sites which analyze requests looking for statistically significant similarities in the time between their requests.
The randomization policy is the same used by the wget --random-wait option.
If DOWNLOAD_DELAY is zero (default) this option has no effect.
REACTOR_THREADPOOL_MAXSIZE¶
Default: 10
The maximum limit for the Twisted Reactor thread pool size. This is a common multi-purpose thread pool used by various Scrapy components: the threaded DNS resolver, BlockingFeedStorage and S3FilesStore, just to name a few. Increase this value if you're experiencing problems with insufficient blocking IO.
REDIRECT_MAX_TIMES¶
Default: 20
Defines the maximum number of times a request can be redirected. After this maximum, the request's response is returned as is. We used the Firefox default value for the same task.
REDIRECT_PRIORITY_ADJUST¶
Default: +2
Scope: scrapy.downloadermiddlewares.redirect.RedirectMiddleware
Adjust redirect request priority relative to original request:
- a positive priority adjust (default) means higher priority.
- a negative priority adjust means lower priority.
RETRY_PRIORITY_ADJUST¶
Default: -1
Scope: scrapy.downloadermiddlewares.retry.RetryMiddleware
Adjust retry request priority relative to original request:
- a positive priority adjust means higher priority.
- a negative priority adjust (default) means lower priority.
ROBOTSTXT_OBEY¶
Default: False
Scope: scrapy.downloadermiddlewares.robotstxt
If enabled, Scrapy will respect robots.txt policies. For more information see RobotsTxtMiddleware.
Note
While the default value is False for historical reasons, this option is enabled by default in the settings.py file generated by the scrapy startproject command.
SCHEDULER_DEBUG¶
Default: False
Setting this to True will log debug information about the requests scheduler. This currently logs (only once) if the requests cannot be serialized to disk. The stats counter (scheduler/unserializable) tracks the number of times this happens.
Example entry in logs:
1956-01-31 00:00:00+0800 [scrapy] ERROR: Unable to serialize request:
<GET http://example.com> - reason: cannot serialize <Request at 0x9a7c7ec>
(type Request)> - no more unserializable requests will be logged
(see 'scheduler/unserializable' stats counter)
SPIDER_CONTRACTS¶
Default: {}
A dict containing the spider contracts enabled in your project, used for testing spiders. For more info see Spiders Contracts.
SPIDER_CONTRACTS_BASE¶
Default:
{
    'scrapy.contracts.default.UrlContract': 1,
    'scrapy.contracts.default.ReturnsContract': 2,
    'scrapy.contracts.default.ScrapesContract': 3,
}
A dict containing the Scrapy contracts enabled by default in Scrapy. You should never modify this setting in your project, modify SPIDER_CONTRACTS instead. For more info see Spiders Contracts.
You can disable any of these contracts by assigning None to their class path in SPIDER_CONTRACTS. E.g., to disable the built-in ScrapesContract, place this in your settings.py:
SPIDER_CONTRACTS = {
    'scrapy.contracts.default.ScrapesContract': None,
}
SPIDER_LOADER_CLASS¶
Default: 'scrapy.spiderloader.SpiderLoader'
The class that will be used for loading spiders, which must implement the SpiderLoader API.
SPIDER_MIDDLEWARES¶
Default: {}
A dict containing the spider middlewares enabled in your project, and their orders. For more info see Activating a spider middleware.
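For example, a sketch mirroring the downloader middleware case (the custom class path is hypothetical; assigning None disables a middleware from SPIDER_MIDDLEWARES_BASE):
SPIDER_MIDDLEWARES = {
    'myproject.middlewares.CustomSpiderMiddleware': 543,  # hypothetical
    'scrapy.spidermiddlewares.offsite.OffsiteMiddleware': None,
}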
SPIDER_MIDDLEWARES_BASE¶
Default:
{
    'scrapy.spidermiddlewares.httperror.HttpErrorMiddleware': 50,
    'scrapy.spidermiddlewares.offsite.OffsiteMiddleware': 500,
    'scrapy.spidermiddlewares.referer.RefererMiddleware': 700,
    'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware': 800,
    'scrapy.spidermiddlewares.depth.DepthMiddleware': 900,
}
A dict containing the spider middlewares enabled by default in Scrapy, and their orders. Low orders are closer to the engine, high orders are closer to the spider. For more info see Activating a spider middleware.
SPIDER_MODULES¶
Default: []
A list of modules where Scrapy will look for spiders.
Example:
SPIDER_MODULES = ['mybot.spiders_prod', 'mybot.spiders_dev']
STATS_CLASS¶
Default: 'scrapy.statscollectors.MemoryStatsCollector'
The class to use for collecting stats, which must implement the Stats Collector API.
STATS_DUMP¶
Default: True
Dump the Scrapy stats (to the Scrapy log) once the spider finishes.
For more info see: Stats Collection.
TELNETCONSOLE_ENABLED¶
Default: True
A boolean which specifies if the telnet console will be enabled (provided its extension is also enabled).
TELNETCONSOLE_PORT¶
Default: [6023, 6073]
The port range to use for the telnet console. If set to None or 0, a dynamically assigned port is used. For more info see Telnet Console.
TEMPLATES_DIR¶
Default: templates dir inside scrapy module
The directory where to look for templates when creating new projects with the startproject command and new spiders with the genspider command.
The project name must not conflict with the name of custom files or directories in the project subdirectory.
URLLENGTH_LIMIT¶
Default: 2083
Scope: scrapy.spidermiddlewares.urllength
The maximum URL length to allow for crawled URLs. For more information about the default value for this setting see: http://www.boutell.com/newfaq/misc/urllength.html
USER_AGENT¶
Default: "Scrapy/VERSION (+http://scrapy.org)"
The default User-Agent to use when crawling, unless overridden.
Settings documented elsewhere:¶
The following settings are documented elsewhere, please check each specific case to see how to enable and use them.
- AJAXCRAWL_ENABLED
- AUTOTHROTTLE_DEBUG
- AUTOTHROTTLE_ENABLED
- AUTOTHROTTLE_MAX_DELAY
- AUTOTHROTTLE_START_DELAY
- AUTOTHROTTLE_TARGET_CONCURRENCY
- CLOSESPIDER_ERRORCOUNT
- CLOSESPIDER_ITEMCOUNT
- CLOSESPIDER_PAGECOUNT
- CLOSESPIDER_TIMEOUT
- COMMANDS_MODULE
- COMPRESSION_ENABLED
- COOKIES_DEBUG
- COOKIES_ENABLED
- FEED_EXPORTERS
- FEED_EXPORTERS_BASE
- FEED_EXPORT_ENCODING
- FEED_EXPORT_FIELDS
- FEED_FORMAT
- FEED_STORAGES
- FEED_STORAGES_BASE
- FEED_STORE_EMPTY
- FEED_URI
- FILES_EXPIRES
- FILES_RESULT_FIELD
- FILES_STORE
- FILES_STORE_S3_ACL
- FILES_URLS_FIELD
- HTTPCACHE_ALWAYS_STORE
- HTTPCACHE_DBM_MODULE
- HTTPCACHE_DIR
- HTTPCACHE_ENABLED
- HTTPCACHE_EXPIRATION_SECS
- HTTPCACHE_GZIP
- HTTPCACHE_IGNORE_HTTP_CODES
- HTTPCACHE_IGNORE_MISSING
- HTTPCACHE_IGNORE_RESPONSE_CACHE_CONTROLS
- HTTPCACHE_IGNORE_SCHEMES
- HTTPCACHE_POLICY
- HTTPCACHE_STORAGE
- HTTPERROR_ALLOWED_CODES
- HTTPERROR_ALLOW_ALL
- HTTPPROXY_AUTH_ENCODING
- IMAGES_EXPIRES
- IMAGES_MIN_HEIGHT
- IMAGES_MIN_WIDTH
- IMAGES_RESULT_FIELD
- IMAGES_STORE
- IMAGES_STORE_S3_ACL
- IMAGES_THUMBS
- IMAGES_URLS_FIELD
- MAIL_FROM
- MAIL_HOST
- MAIL_PASS
- MAIL_PORT
- MAIL_SSL
- MAIL_TLS
- MAIL_USER
- METAREFRESH_ENABLED
- METAREFRESH_MAXDELAY
- REDIRECT_ENABLED
- REDIRECT_MAX_TIMES
- REFERER_ENABLED
- RETRY_ENABLED
- RETRY_HTTP_CODES
- RETRY_TIMES
- TELNETCONSOLE_HOST
- TELNETCONSOLE_PORT