Item Exporters

Once you have scraped your items, you often want to persist or export those items, to use the data in some other application. That is, after all, the whole purpose of the scraping process.

For this purpose Scrapy provides a collection of Item Exporters for different output formats, such as XML, CSV or JSON.

Using Item Exporters

If you are in a hurry, and just want to use an Item Exporter to output scraped data, see the Feed exports. Otherwise, if you want to know how Item Exporters work, or need more custom functionality not covered by the default exports, keep reading below.

In order to use an Item Exporter, you must instantiate it with its required args. Each Item Exporter requires different arguments, so check each exporter's documentation in the Built-in Item Exporters reference. After you have instantiated your exporter, you have to (see the sketch after this list):

1. Call the start_exporting() method to signal the beginning of the exporting process

2. Call the export_item() method for each item you want to export

3. Finally, call finish_exporting() to signal the end of the exporting process
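
Putting those three steps together, here is a minimal standalone sketch of the exporting lifecycle outside of any pipeline (the file name items.xml is just a placeholder):

from scrapy.exporters import XmlItemExporter

# exporters expect a file-like object opened in binary mode
with open('items.xml', 'w+b') as f:
    exporter = XmlItemExporter(f)
    exporter.start_exporting()
    exporter.export_item({'name': 'Color TV', 'price': '1200'})
    exporter.export_item({'name': 'DVD player', 'price': '200'})
    exporter.finish_exporting()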

Here you can see an Item Pipeline which uses an Item Exporter to export scraped items to different files, one per spider:

from scrapy import signals
from scrapy.exporters import XmlItemExporter

class XmlExportPipeline(object):

    def __init__(self):
        self.files = {}

    @classmethod
    def from_crawler(cls, crawler):
        pipeline = cls()
        crawler.signals.connect(pipeline.spider_opened, signals.spider_opened)
        crawler.signals.connect(pipeline.spider_closed, signals.spider_closed)
        return pipeline

    def spider_opened(self, spider):
        file = open('%s_products.xml' % spider.name, 'w+b')
        self.files[spider] = file
        self.exporter = XmlItemExporter(file)
        self.exporter.start_exporting()

    def spider_closed(self, spider):
        self.exporter.finish_exporting()
        file = self.files.pop(spider)
        file.close()

    def process_item(self, item, spider):
        self.exporter.export_item(item)
        return item

Serialization of item fields

By default, the field values are passed unmodified to the underlying serialization library, and the decision of how to serialize them is delegated to each particular serialization library.

However, you can customize how each field value is serialized before it is passed to the serialization library.

There are two ways to customize how a field will be serialized, which are described next.

1. Declaring a serializer in the field

If you use Item, you can declare a serializer in the field metadata. The serializer must be a callable which receives a value and returns its serialized form.

Example:

import scrapy

def serialize_price(value):
    return '$ %s' % str(value)

class Product(scrapy.Item):
    name = scrapy.Field()
    price = scrapy.Field(serializer=serialize_price)
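
For illustration, applying this serializer by hand (exporters call it for you through serialize_field()) would give:

serialize_price('1200')  # returns '$ 1200'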

2. Overriding the serialize_field() method

You can also override the serialize_field() method to customize how your field value will be exported.

Make sure you call the base class serialize_field() method after your custom code.

Example:

from scrapy.exporters import XmlItemExporter

class ProductXmlExporter(XmlItemExporter):

    def serialize_field(self, field, name, value):
        if name == 'price':
            return '$ %s' % str(value)
        return super(ProductXmlExporter, self).serialize_field(field, name, value)
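
ProductXmlExporter can then be used anywhere XmlItemExporter is expected, for example as a drop-in replacement in the pipeline shown earlier:

self.exporter = ProductXmlExporter(file)  # instead of XmlItemExporter(file)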

Built-in Item Exporters reference

Here is a list of the Item Exporters bundled with Scrapy. Some of them contain output examples, which assume you're exporting these two items:

Item(name='Color TV', price='1200')
Item(name='DVD player', price='200')

BaseItemExporter

class scrapy.exporters.BaseItemExporter(fields_to_export=None, export_empty_fields=False, encoding='utf-8')

This is the (abstract) base class for all Item Exporters. It provides support for common features used by all (concrete) Item Exporters, such as defining what fields to export, whether to export empty fields, or which encoding to use.

These features can be configured through the constructor arguments which populate their respective instance attributes: fields_to_export, export_empty_fields, encoding.

export_item(item)

Exports the given item. This method must be implemented in subclasses.

serialize_field(field, name, value)

Return the serialized value for the given field. You can override this method (in your custom Item Exporters) if you want to control how a particular field or value will be serialized/exported.

By default, this method looks for a serializer declared in the item field and returns the result of applying that serializer to the value. If no serializer is found, it returns the value unchanged, except for unicode values, which are encoded to str using the encoding declared in the encoding attribute.

Parameters:
  • field (Field object or an empty dict) – the field being serialized. If a raw dict is being exported (not Item) field value is an empty dict.
  • name (str) – the name of the field being serialized
  • value – the value being serialized

start_exporting()

Signal the beginning of the exporting process. Some exporters may use this to generate some required header (for example, the XmlItemExporter). You must call this method before exporting any items.

finish_exporting()

Signal the end of the exporting process. Some exporters may use this to generate some required footer (for example, the XmlItemExporter). You must always call this method after you have no more items to export.

fields_to_export

A list with the names of the fields that will be exported, or None if you want to export all fields. Defaults to None.

Some exporters (like CsvItemExporter) respect the order of the fields defined in this attribute.

Some exporters may require fields_to_export list in order to export the data properly when spiders return dicts (not Item instances).

export_empty_fields

Whether to include empty/unpopulated item fields in the exported data. Defaults to False. Some exporters (like CsvItemExporter) ignore this attribute and always export all empty fields.

This option is ignored for dict items.

encoding

The encoding that will be used to encode unicode values. This only affects unicode values (which are always serialized to str using this encoding). Other value types are passed unchanged to the specific serialization library.
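
As an illustration of subclassing, here is a minimal sketch of a custom exporter that writes one tab-separated line per item. It relies only on the attributes documented above; the class name and format are hypothetical, and start_exporting()/finish_exporting() are left at their default (no-op) implementations:

from scrapy.exporters import BaseItemExporter

class TsvItemExporter(BaseItemExporter):

    def __init__(self, file, **kwargs):
        super(TsvItemExporter, self).__init__(**kwargs)
        # file is expected to be opened in binary mode, like with the built-in exporters
        self.file = file

    def export_item(self, item):
        item = dict(item)
        # Respect fields_to_export (and its order) if set, otherwise export every field found
        names = self.fields_to_export or sorted(item.keys())
        row = '\t'.join(str(item.get(name, '')) for name in names)
        self.file.write((row + '\n').encode(self.encoding or 'utf-8'))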

XmlItemExporter

class scrapy.exporters.XmlItemExporter(file, item_element='item', root_element='items', **kwargs)

Exports Items in XML format to the specified file object.

Parameters:
  • file – the file-like object to use for exporting the data.
  • root_element (str) – The name of root element in the exported XML.
  • item_element (str) – The name of each item element in the exported XML.

The additional keyword arguments of this constructor are passed to the BaseItemExporter constructor.

A typical output of this exporter would be:

<?xml version="1.0" encoding="utf-8"?>
<items>
  <item>
    <name>Color TV</name>
    <price>1200</price>
  </item>
  <item>
    <name>DVD player</name>
    <price>200</price>
  </item>
</items>

Unless overridden in the serialize_field() method, multi-valued fields are exported by serializing each value inside a <value> element. This is for convenience, as multi-valued fields are very common.

For example, the item:

Item(name=['John', 'Doe'], age='23')

Would be serialized as:

<?xml version="1.0" encoding="utf-8"?>
<items>
  <item>
    <name>
      <value>John</value>
      <value>Doe</value>
    </name>
    <age>23</age>
  </item>
</items>
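
If the default element names do not fit your needs, they can be changed through the constructor arguments described above; a brief sketch (the file name is a placeholder):

from scrapy.exporters import XmlItemExporter

with open('products.xml', 'w+b') as f:
    # items will be wrapped in <products> ... </products>, each one inside a <product> element
    exporter = XmlItemExporter(f, root_element='products', item_element='product')
    exporter.start_exporting()
    exporter.export_item({'name': 'Color TV', 'price': '1200'})
    exporter.finish_exporting()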

CsvItemExporter

class scrapy.exporters.CsvItemExporter(file, include_headers_line=True, join_multivalued=', ', **kwargs)

Exports Items in CSV format to the given file-like object. If the fields_to_export attribute is set, it will be used to define the CSV columns and their order. The export_empty_fields attribute has no effect on this exporter.

Parameters:
  • file – the file-like object to use for exporting the data.
  • include_headers_line (bool) – If enabled, makes the exporter output a header line with the field names taken from BaseItemExporter.fields_to_export or the first exported item's fields.
  • join_multivalued – The char (or chars) that will be used for joining multi-valued fields, if found.

The additional keyword arguments of this constructor are passed to the BaseItemExporter constructor, and the leftover arguments to the csv.writer constructor, so you can use any csv.writer constructor argument to customize this exporter.

A typical output of this exporter would be:

name,price
Color TV,1200
DVD player,200
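
Since leftover keyword arguments are forwarded to csv.writer, standard csv options such as delimiter can be combined with fields_to_export; a brief sketch (the file name is a placeholder):

from scrapy.exporters import CsvItemExporter

with open('items.csv', 'w+b') as f:
    # semicolon-separated output with a fixed column order
    exporter = CsvItemExporter(f, fields_to_export=['name', 'price'], delimiter=';')
    exporter.start_exporting()
    exporter.export_item({'name': 'Color TV', 'price': '1200'})
    exporter.finish_exporting()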

PickleItemExporter

class scrapy.exporters.PickleItemExporter(file, protocol=0, **kwargs)

Exports Items in pickle format to the given file-like object.

Parameters:
  • file – the file-like object to use for exporting the data.
  • protocol (int) – The pickle protocol to use.

For more information, refer to the pickle module documentation.

The additional keyword arguments of this constructor are passed to the BaseItemExporter constructor.

Pickle isn’t a human readable format, so no output examples are provided.
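
Although no output example can be shown, each item appears to be written as an individual pickle record, so the data can be read back one record at a time; a minimal sketch (the file name is a placeholder):

import pickle

# Read back items written by PickleItemExporter, one record per item
with open('items.pickle', 'rb') as f:
    while True:
        try:
            print(pickle.load(f))
        except EOFError:
            break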

PprintItemExporter

class scrapy.exporters.PprintItemExporter(file, **kwargs)

Exports Items in pretty print format to the specified file object.

Parameters: file – the file-like object to use for exporting the data.

The additional keyword arguments of this constructor are passed to the BaseItemExporter constructor.

A typical output of this exporter would be:

{'name': 'Color TV', 'price': '1200'}
{'name': 'DVD player', 'price': '200'}

Longer lines (when present) are pretty-formatted.

JsonItemExporter

class scrapy.exporters.JsonItemExporter(file, **kwargs)

Exports Items in JSON format to the specified file-like object, writing all objects as a list of objects. The additional constructor arguments are passed to the BaseItemExporter constructor, and the leftover arguments to the JSONEncoder constructor, so you can use any JSONEncoder constructor argument to customize this exporter.

Parameters: file – the file-like object to use for exporting the data.

A typical output of this exporter would be:

[{"name": "Color TV", "price": "1200"},
{"name": "DVD player", "price": "200"}]

Warning

JSON is a very simple and flexible serialization format, but it doesn't scale well for large amounts of data, since incremental (a.k.a. stream-mode) parsing is not well supported (if at all) among JSON parsers (in any language), and most of them just parse the entire object in memory. If you want the power and simplicity of JSON with a more stream-friendly format, consider using JsonLinesItemExporter instead, or splitting the output in multiple chunks.

JsonLinesItemExporter

class scrapy.exporters.JsonLinesItemExporter(file, **kwargs)

Exports Items in JSON format to the specified file-like object, writing one JSON-encoded item per line. The additional constructor arguments are passed to the BaseItemExporter constructor, and the leftover arguments to the JSONEncoder constructor, so you can use any JSONEncoder constructor argument to customize this exporter.

Parameters: file – the file-like object to use for exporting the data.

A typical output of this exporter would be:

{"name": "Color TV", "price": "1200"}
{"name": "DVD player", "price": "200"}

Unlike the one produced by JsonItemExporter, the format produced by this exporter is well suited for serializing large amounts of data.
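
Because every line is an independent JSON object, the output can be consumed incrementally without loading the whole file into memory; a minimal sketch (the file name is a placeholder):

import json

# Process items one line at a time
with open('items.jl') as f:
    for line in f:
        item = json.loads(line)
        print(item['name'], item['price'])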