Logging Cookbook

Author: Vinay Sajip

This page contains a number of recipes related to logging, which have been found useful in the past.

Using logging in multiple modules

Multiple calls to logging.getLogger('someLogger') return a reference to the same logger object. This is true not only within the same module, but also across modules as long as it is in the same Python interpreter process. It is true for references to the same object; additionally, application code can define and configure a parent logger in one module and create (but not configure) a child logger in a separate module, and all logger calls to the child will pass up to the parent. Here is a main module:

import logging
import auxiliary_module

# create logger with 'spam_application'
logger = logging.getLogger('spam_application')
logger.setLevel(logging.DEBUG)
# create file handler which logs even debug messages
fh = logging.FileHandler('spam.log')
fh.setLevel(logging.DEBUG)
# create console handler with a higher log level
ch = logging.StreamHandler()
ch.setLevel(logging.ERROR)
# create formatter and add it to the handlers
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
fh.setFormatter(formatter)
ch.setFormatter(formatter)
# add the handlers to the logger
logger.addHandler(fh)
logger.addHandler(ch)

logger.info('creating an instance of auxiliary_module.Auxiliary')
a = auxiliary_module.Auxiliary()
logger.info('created an instance of auxiliary_module.Auxiliary')
logger.info('calling auxiliary_module.Auxiliary.do_something')
a.do_something()
logger.info('finished auxiliary_module.Auxiliary.do_something')
logger.info('calling auxiliary_module.some_function()')
auxiliary_module.some_function()
logger.info('done with auxiliary_module.some_function()')

Here is the auxiliary module:

import logging

# create logger
module_logger = logging.getLogger('spam_application.auxiliary')

class Auxiliary:
    def __init__(self):
        self.logger = logging.getLogger('spam_application.auxiliary.Auxiliary')
        self.logger.info('creating an instance of Auxiliary')

    def do_something(self):
        self.logger.info('doing something')
        a = 1 + 1
        self.logger.info('done doing something')

def some_function():
    module_logger.info('received a call to "some_function"')

The output looks like this:

2005-03-23 23:47:11,663 - spam_application - INFO -
   creating an instance of auxiliary_module.Auxiliary
2005-03-23 23:47:11,665 - spam_application.auxiliary.Auxiliary - INFO -
   creating an instance of Auxiliary
2005-03-23 23:47:11,665 - spam_application - INFO -
   created an instance of auxiliary_module.Auxiliary
2005-03-23 23:47:11,668 - spam_application - INFO -
   calling auxiliary_module.Auxiliary.do_something
2005-03-23 23:47:11,668 - spam_application.auxiliary.Auxiliary - INFO -
   doing something
2005-03-23 23:47:11,669 - spam_application.auxiliary.Auxiliary - INFO -
   done doing something
2005-03-23 23:47:11,670 - spam_application - INFO -
   finished auxiliary_module.Auxiliary.do_something
2005-03-23 23:47:11,671 - spam_application - INFO -
   calling auxiliary_module.some_function()
2005-03-23 23:47:11,672 - spam_application.auxiliary - INFO -
   received a call to "some_function"
2005-03-23 23:47:11,673 - spam_application - INFO -
   done with auxiliary_module.some_function()

Logging from multiple threads

Logging from multiple threads requires no special effort. The following example shows logging from the main (initial) thread and another thread:

import logging
import threading
import time

def worker(arg):
    while not arg['stop']:
        logging.debug('Hi from myfunc')
        time.sleep(0.5)

def main():
    logging.basicConfig(level=logging.DEBUG, format='%(relativeCreated)6d %(threadName)s %(message)s')
    info = {'stop': False}
    thread = threading.Thread(target=worker, args=(info,))
    thread.start()
    while True:
        try:
            logging.debug('Hello from main')
            time.sleep(0.75)
        except KeyboardInterrupt:
            info['stop'] = True
            break
    thread.join()

if __name__ == '__main__':
    main()

When run, the script should print something like the following:

   0 Thread-1 Hi from myfunc
   3 MainThread Hello from main
 505 Thread-1 Hi from myfunc
 755 MainThread Hello from main
1007 Thread-1 Hi from myfunc
1507 MainThread Hello from main
1508 Thread-1 Hi from myfunc
2010 Thread-1 Hi from myfunc
2258 MainThread Hello from main
2512 Thread-1 Hi from myfunc
3009 MainThread Hello from main
3013 Thread-1 Hi from myfunc
3515 Thread-1 Hi from myfunc
3761 MainThread Hello from main
4017 Thread-1 Hi from myfunc
4513 MainThread Hello from main
4518 Thread-1 Hi from myfunc

This shows the logging output interspersed as one might expect. This approach works for more threads than shown here, of course.

Multiple handlers and formatters

Loggers are plain Python objects. The addHandler() method has no minimum or maximum quota for the number of handlers you may add. Sometimes it will be beneficial for an application to log all messages of all severities to a text file while simultaneously logging errors or above to the console. To set this up, simply configure the appropriate handlers. The logging calls in the application code will remain unchanged. Here is a slight modification to the previous simple module-based configuration example:

import logging

logger = logging.getLogger('simple_example')
logger.setLevel(logging.DEBUG)
# create file handler which logs even debug messages
fh = logging.FileHandler('spam.log')
fh.setLevel(logging.DEBUG)
# create console handler with a higher log level
ch = logging.StreamHandler()
ch.setLevel(logging.ERROR)
# create formatter and add it to the handlers
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
ch.setFormatter(formatter)
fh.setFormatter(formatter)
# add the handlers to logger
logger.addHandler(ch)
logger.addHandler(fh)

# 'application' code
logger.debug('debug message')
logger.info('info message')
logger.warning('warn message')
logger.error('error message')
logger.critical('critical message')

Notice that the 'application' code does not care about multiple handlers. All that changed was the addition and configuration of a new handler named fh.

The ability to create new handlers with higher- or lower-severity filters can be very helpful when writing and testing an application. Instead of using many print statements for debugging, use logger.debug: unlike the print statements, which you will have to delete or comment out later, the logger.debug statements can remain intact in the source code and remain dormant until you need them again. At that point, the only change that needs to happen is to modify the severity level of the logger and/or handler to debug.
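
For example (a minimal sketch, reusing the ch handler from the listing above), switching verbosity is then a one-line change rather than an edit at every call site:

# during development: surface the dormant debug messages on the console
ch.setLevel(logging.DEBUG)
# back in production: hide them again, without touching the call sites
ch.setLevel(logging.ERROR)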

Logging to multiple destinations

Let's say you want to log to console and file with different message formats and in differing circumstances. Say you want to log messages with levels of DEBUG and higher to file, and those messages at level INFO and higher to the console. Let's also assume that the file should contain timestamps, but the console messages should not. Here's how you can achieve this:

import logging

# set up logging to file - see previous section for more details
logging.basicConfig(level=logging.DEBUG,
                    format='%(asctime)s %(name)-12s %(levelname)-8s %(message)s',
                    datefmt='%m-%d %H:%M',
                    filename='/temp/myapp.log',
                    filemode='w')
# define a Handler which writes INFO messages or higher to the sys.stderr
console = logging.StreamHandler()
console.setLevel(logging.INFO)
# set a format which is simpler for console use
formatter = logging.Formatter('%(name)-12s: %(levelname)-8s %(message)s')
# tell the handler to use this format
console.setFormatter(formatter)
# add the handler to the root logger
logging.getLogger('').addHandler(console)

# Now, we can log to the root logger, or any other logger. First the root...
logging.info('Jackdaws love my big sphinx of quartz.')

# Now, define a couple of other loggers which might represent areas in your
# application:

logger1 = logging.getLogger('myapp.area1')
logger2 = logging.getLogger('myapp.area2')

logger1.debug('Quick zephyrs blow, vexing daft Jim.')
logger1.info('How quickly daft jumping zebras vex.')
logger2.warning('Jail zesty vixen who grabbed pay from quack.')
logger2.error('The five boxing wizards jump quickly.')

When you run this, on the console you will see:

root        : INFO     Jackdaws love my big sphinx of quartz.
myapp.area1 : INFO     How quickly daft jumping zebras vex.
myapp.area2 : WARNING  Jail zesty vixen who grabbed pay from quack.
myapp.area2 : ERROR    The five boxing wizards jump quickly.

and in the file you will see something like:

10-22 22:19 root         INFO     Jackdaws love my big sphinx of quartz.
10-22 22:19 myapp.area1  DEBUG    Quick zephyrs blow, vexing daft Jim.
10-22 22:19 myapp.area1  INFO     How quickly daft jumping zebras vex.
10-22 22:19 myapp.area2  WARNING  Jail zesty vixen who grabbed pay from quack.
10-22 22:19 myapp.area2  ERROR    The five boxing wizards jump quickly.

As you can see, the DEBUG message only shows up in the file. The other messages are sent to both destinations.

This example uses console and file handlers, but you can use any number and combination of handlers you choose.

Configuration server example

Here is an example of a module using the logging configuration server:

import logging
import logging.config
import time
import os

# read initial config file
logging.config.fileConfig('logging.conf')

# create and start listener on port 9999
t = logging.config.listen(9999)
t.start()

logger = logging.getLogger('simpleExample')

try:
    # loop through logging calls to see the difference
    # new configurations make, until Ctrl+C is pressed
    while True:
        logger.debug('debug message')
        logger.info('info message')
        logger.warning('warn message')
        logger.error('error message')
        logger.critical('critical message')
        time.sleep(5)
except KeyboardInterrupt:
    # cleanup
    logging.config.stopListening()
    t.join()

And here is a script that takes a filename and sends that file to the server, properly preceded with the binary-encoded length, as the new logging configuration:

#!/usr/bin/env python
import socket, sys, struct

with open(sys.argv[1], 'rb') as f:
    data_to_send = f.read()

HOST = 'localhost'
PORT = 9999
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
print('connecting...')
s.connect((HOST, PORT))
print('sending config...')
s.send(struct.pack('>L', len(data_to_send)))
s.send(data_to_send)
s.close()
print('complete')

Dealing with handlers that block

Sometimes you have to get your logging handlers to do their work without blocking the thread you're logging from. This is common in web applications, though of course it also occurs in other scenarios.

A common culprit which demonstrates sluggish behaviour is the SMTPHandler: sending emails can take a long time, for a number of reasons outside the developer's control (for example, a poorly performing mail or network infrastructure). But almost any network-based handler can block: even a SocketHandler operation may do a DNS query under the hood which is too slow (and this query can be deep in the socket library code, below the Python layer, and outside your control).

One solution is to use a two-part approach. For the first part, attach only a QueueHandler to those loggers which are accessed from performance-critical threads. They simply write to their queue, which can be sized to a large enough capacity or initialized with no upper bound to its size. The write to the queue will typically be accepted quickly, though you will probably need to catch the queue.Full exception as a precaution in your code (as sketched below). If you are a library developer who has performance-critical threads in your code, be sure to document this (together with a suggestion to attach only QueueHandlers to your loggers) for the benefit of other developers who will use your code.
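
As a sketch of that precaution, assuming a bounded queue (NonBlockingQueueHandler is a name invented here for illustration):

import queue
from logging.handlers import QueueHandler

class NonBlockingQueueHandler(QueueHandler):
    # A QueueHandler variant that drops records instead of raising
    # when a bounded queue is full, so callers never block or fail.
    def enqueue(self, record):
        try:
            self.queue.put_nowait(record)
        except queue.Full:
            pass  # deliberately drop the record rather than block the caller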

The second part of the solution is QueueListener, which has been designed as the counterpart to QueueHandler. A QueueListener is very simple: it's passed a queue and some handlers, and it fires up an internal thread which listens to its queue for LogRecords sent from QueueHandlers (or any other source of LogRecords, for that matter). The LogRecords are removed from the queue and passed to the handlers for processing.

The advantage of having a separate QueueListener class is that you can use the same instance to service multiple QueueHandlers. This is more resource-friendly than, say, having threaded versions of the existing handler classes, which would eat up one thread per handler for no particular benefit.

An example of using these two classes follows (imports omitted):

que = queue.Queue(-1)  # no limit on size
queue_handler = QueueHandler(que)
handler = logging.StreamHandler()
listener = QueueListener(que, handler)
root = logging.getLogger()
root.addHandler(queue_handler)
formatter = logging.Formatter('%(threadName)s: %(message)s')
handler.setFormatter(formatter)
listener.start()
# The log output will display the thread which generated
# the event (the main thread) rather than the internal
# thread which monitors the internal queue. This is what
# you want to happen.
root.warning('Look out!')
listener.stop()

which, when run, will produce:

MainThread: Look out!

Changed in version 3.5: prior to Python 3.5, the QueueListener always passed every message received from the queue to every handler it was initialized with. (This was because it was assumed that level filtering was all done on the other side, where the queue is filled.) From 3.5 onwards, this behaviour can be changed by passing a keyword argument respect_handler_level=True to the listener's constructor. When this is done, the listener compares the level of each message with the handler's level, and only passes a message to a handler if it's appropriate to do so.
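
For example, a minimal sketch of this on Python 3.5 or later - the console handler's WARNING level is honoured by the listener, so the DEBUG record is dropped:

import logging
import queue
from logging.handlers import QueueHandler, QueueListener

que = queue.Queue(-1)
console = logging.StreamHandler()
console.setLevel(logging.WARNING)
listener = QueueListener(que, console, respect_handler_level=True)
root = logging.getLogger()
root.setLevel(logging.DEBUG)
root.addHandler(QueueHandler(que))
listener.start()
root.debug('not passed on - below the handler level')
root.warning('passed on - at or above the handler level')
listener.stop()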

Sending and receiving logging events across a network

Let's say you want to send logging events across a network, and handle them at the receiving end. A simple way of doing this is attaching a SocketHandler instance to the root logger at the sending end:

import logging, logging.handlers

rootLogger = logging.getLogger('')
rootLogger.setLevel(logging.DEBUG)
socketHandler = logging.handlers.SocketHandler('localhost',
                    logging.handlers.DEFAULT_TCP_LOGGING_PORT)
# don't bother with a formatter, since a socket handler sends the event as
# an unformatted pickle
rootLogger.addHandler(socketHandler)

# Now, we can log to the root logger, or any other logger. First the root...
logging.info('Jackdaws love my big sphinx of quartz.')

# Now, define a couple of other loggers which might represent areas in your
# application:

logger1 = logging.getLogger('myapp.area1')
logger2 = logging.getLogger('myapp.area2')

logger1.debug('Quick zephyrs blow, vexing daft Jim.')
logger1.info('How quickly daft jumping zebras vex.')
logger2.warning('Jail zesty vixen who grabbed pay from quack.')
logger2.error('The five boxing wizards jump quickly.')

At the receiving end, you can set up a receiver using the socketserver module. Here is a basic working example:

import pickle
import logging
import logging.handlers
import socketserver
import struct


class LogRecordStreamHandler(socketserver.StreamRequestHandler):
    """Handler for a streaming logging request.

    This basically logs the record using whatever logging policy is
    configured locally.
    """

    def handle(self):
        """
        Handle multiple requests - each expected to be a 4-byte length,
        followed by the LogRecord in pickle format. Logs the record
        according to whatever policy is configured locally.
        """
        while True:
            chunk = self.connection.recv(4)
            if len(chunk) < 4:
                break
            slen = struct.unpack('>L', chunk)[0]
            chunk = self.connection.recv(slen)
            while len(chunk) < slen:
                chunk = chunk + self.connection.recv(slen - len(chunk))
            obj = self.unPickle(chunk)
            record = logging.makeLogRecord(obj)
            self.handleLogRecord(record)

    def unPickle(self, data):
        return pickle.loads(data)

    def handleLogRecord(self, record):
        # if a name is specified, we use the named logger rather than the one
        # implied by the record.
        if self.server.logname is not None:
            name = self.server.logname
        else:
            name = record.name
        logger = logging.getLogger(name)
        # N.B. EVERY record gets logged. This is because Logger.handle
        # is normally called AFTER logger-level filtering. If you want
        # to do filtering, do it at the client end to save wasting
        # cycles and network bandwidth!
        logger.handle(record)

class LogRecordSocketReceiver(socketserver.ThreadingTCPServer):
    """
    Simple TCP socket-based logging receiver suitable for testing.
    """

    allow_reuse_address = True

    def __init__(self, host='localhost',
                 port=logging.handlers.DEFAULT_TCP_LOGGING_PORT,
                 handler=LogRecordStreamHandler):
        socketserver.ThreadingTCPServer.__init__(self, (host, port), handler)
        self.abort = 0
        self.timeout = 1
        self.logname = None

    def serve_until_stopped(self):
        import select
        abort = 0
        while not abort:
            rd, wr, ex = select.select([self.socket.fileno()],
                                       [], [],
                                       self.timeout)
            if rd:
                self.handle_request()
            abort = self.abort

def main():
    logging.basicConfig(
        format='%(relativeCreated)5d %(name)-15s %(levelname)-8s %(message)s')
    tcpserver = LogRecordSocketReceiver()
    print('About to start TCP server...')
    tcpserver.serve_until_stopped()

if __name__ == '__main__':
    main()

First run the server, and then the client. On the client side, nothing is printed on the console; on the server side, you should see something like:

About to start TCP server...
   59 root            INFO     Jackdaws love my big sphinx of quartz.
   59 myapp.area1     DEBUG    Quick zephyrs blow, vexing daft Jim.
   69 myapp.area1     INFO     How quickly daft jumping zebras vex.
   69 myapp.area2     WARNING  Jail zesty vixen who grabbed pay from quack.
   69 myapp.area2     ERROR    The five boxing wizards jump quickly.

Note that there are some security issues with pickle in some scenarios. If these affect you, you can use an alternative serialization scheme by overriding the makePickle() method and implementing your alternative there, as well as adapting the above script to use your alternative serialization.
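
For instance, a minimal sketch of a JSON-based alternative might look like the following (JSONSocketHandler is a name invented for illustration; the receiver's unPickle() would need a matching change to use json.loads()):

import json
import struct
import logging.handlers

class JSONSocketHandler(logging.handlers.SocketHandler):
    def makePickle(self, record):
        # serialize the record's attribute dict as JSON instead of pickle
        d = dict(record.__dict__)
        d['msg'] = record.getMessage()  # merge args into the message
        d['args'] = None
        d['exc_info'] = None
        data = json.dumps(d, default=str).encode('utf-8')
        # keep the 4-byte big-endian length prefix the receiver expects
        return struct.pack('>L', len(data)) + data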

Adding contextual information to your logging output

Sometimes you want logging output to contain contextual information in addition to the parameters passed to the logging call. For example, in a networked application, it may be desirable to log client-specific information (e.g. the remote client's username, or IP address). Although you could use the extra parameter to achieve this, it's not always convenient to pass the information in this way. While it might be tempting to create Logger instances on a per-connection basis, this is not a good idea because these instances are not garbage collected. While this is not a problem in practice, when the number of Logger instances is dependent on the level of granularity you want to use in logging an application, it could be hard to manage if the number of Logger instances becomes effectively unbounded.

Using LoggerAdapters to impart contextual information

An easy way in which you can pass contextual information to be output along with logging event information is to use the LoggerAdapter class. This class is designed to look like a Logger, so that you can call debug(), info(), warning(), error(), exception(), critical() and log(). These methods have the same signatures as their counterparts in Logger, so you can use the two types of instances interchangeably.

When you create an instance of LoggerAdapter, you pass it a Logger instance and a dict-like object which contains your contextual information. When you call one of the logging methods on an instance of LoggerAdapter, it delegates the call to the underlying instance of Logger passed to its constructor, and arranges to pass the contextual information in the delegated call. Here's a snippet from the code of LoggerAdapter:

def debug(self, msg, *args, **kwargs):
    """
    Delegate a debug call to the underlying logger, after adding
    contextual information from this adapter instance.
    """
    msg, kwargs = self.process(msg, kwargs)
    self.logger.debug(msg, *args, **kwargs)

The process() method of LoggerAdapter is where the contextual information is added to the logging output. It's passed the message and keyword arguments of the logging call, and it passes back (potentially) modified versions of these to use in the call to the underlying logger. The default implementation of this method leaves the message alone, but inserts an 'extra' key in the keyword arguments whose value is the dict-like object passed to the constructor. Of course, if you passed an 'extra' keyword argument in the call to the adapter, it will be silently overwritten.

The advantage of using 'extra' is that the values in the dict-like object are merged into the LogRecord instance's __dict__, allowing you to use customized strings with your Formatter instances which know about the keys of the dict-like object. If you need a different method, e.g. if you want to prepend or append the contextual information to the message string, you just need to subclass LoggerAdapter and override process() to do what you need. Here is a simple example:

class CustomAdapter(logging.LoggerAdapter):
    """
    This example adapter expects the passed in dict-like object to have a
    'connid' key, whose value in brackets is prepended to the log message.
    """
    def process(self, msg, kwargs):
        return '[%s] %s' % (self.extra['connid'], msg), kwargs

which you can use like this:

logger = logging.getLogger(__name__)
adapter = CustomAdapter(logger, {'connid': some_conn_id})

Then any events that you log to the adapter will have the value of some_conn_id prepended to the log messages.

Using objects other than dicts to pass contextual information

You don't need to pass an actual dict to a LoggerAdapter - you could pass an instance of a class which implements __getitem__ and __iter__ so that it looks like a dict to logging. This would be useful if you want to generate values dynamically (whereas the values in a dict would be constant).
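
For example, a minimal sketch (RuntimeContext is a class invented here for illustration) in which the 'user' value is computed afresh each time an event is logged:

import getpass
import logging

class RuntimeContext:
    # Looks like a dict to logging, but computes values on demand.
    def __getitem__(self, name):
        if name == 'user':
            return getpass.getuser()  # evaluated at logging time
        raise KeyError(name)

    def __iter__(self):
        return iter(['user'])

logging.basicConfig(format='%(user)-8s %(message)s', level=logging.INFO)
adapter = logging.LoggerAdapter(logging.getLogger(__name__), RuntimeContext())
adapter.info('dynamic contextual information')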

Using Filters to impart contextual information

You can also add contextual information to log output using a user-defined Filter. Filter instances are allowed to modify the LogRecords passed to them, including adding additional attributes which can then be output using a suitable format string, or if needed a custom Formatter.

For example, in a web application, the request being processed (or at least, the interesting parts of it) can be stored in a threadlocal (threading.local) variable, and then accessed from a Filter to add, say, information from the request - such as the remote IP address and remote user's username - to the LogRecord, using the attribute names 'ip' and 'user' as in the LoggerAdapter example above. In that case, the same format string can be used to get similar output to that shown above. Here's an example script:

import logging
from random import choice

class ContextFilter(logging.Filter):
    """
    This is a filter which injects contextual information into the log.

    Rather than use actual contextual information, we just use random
    data in this demo.
    """

    USERS = ['jim', 'fred', 'sheila']
    IPS = ['123.231.231.123', '127.0.0.1', '192.168.0.1']

    def filter(self, record):

        record.ip = choice(ContextFilter.IPS)
        record.user = choice(ContextFilter.USERS)
        return True

if __name__ == '__main__':
    levels = (logging.DEBUG, logging.INFO, logging.WARNING, logging.ERROR, logging.CRITICAL)
    logging.basicConfig(level=logging.DEBUG,
                        format='%(asctime)-15s %(name)-5s %(levelname)-8s IP: %(ip)-15s User: %(user)-8s %(message)s')
    a1 = logging.getLogger('a.b.c')
    a2 = logging.getLogger('d.e.f')

    f = ContextFilter()
    a1.addFilter(f)
    a2.addFilter(f)
    a1.debug('A debug message')
    a1.info('An info message with %s', 'some parameters')
    for x in range(10):
        lvl = choice(levels)
        lvlname = logging.getLevelName(lvl)
        a2.log(lvl, 'A message at %s level with %d %s', lvlname, 2, 'parameters')

which, when run, produces something like:

2010-09-06 22:38:15,292 a.b.c DEBUG    IP: 123.231.231.123 User: fred     A debug message
2010-09-06 22:38:15,300 a.b.c INFO     IP: 192.168.0.1     User: sheila   An info message with some parameters
2010-09-06 22:38:15,300 d.e.f CRITICAL IP: 127.0.0.1       User: sheila   A message at CRITICAL level with 2 parameters
2010-09-06 22:38:15,300 d.e.f ERROR    IP: 127.0.0.1       User: jim      A message at ERROR level with 2 parameters
2010-09-06 22:38:15,300 d.e.f DEBUG    IP: 127.0.0.1       User: sheila   A message at DEBUG level with 2 parameters
2010-09-06 22:38:15,300 d.e.f ERROR    IP: 123.231.231.123 User: fred     A message at ERROR level with 2 parameters
2010-09-06 22:38:15,300 d.e.f CRITICAL IP: 192.168.0.1     User: jim      A message at CRITICAL level with 2 parameters
2010-09-06 22:38:15,300 d.e.f CRITICAL IP: 127.0.0.1       User: sheila   A message at CRITICAL level with 2 parameters
2010-09-06 22:38:15,300 d.e.f DEBUG    IP: 192.168.0.1     User: jim      A message at DEBUG level with 2 parameters
2010-09-06 22:38:15,301 d.e.f ERROR    IP: 127.0.0.1       User: sheila   A message at ERROR level with 2 parameters
2010-09-06 22:38:15,301 d.e.f DEBUG    IP: 123.231.231.123 User: fred     A message at DEBUG level with 2 parameters
2010-09-06 22:38:15,301 d.e.f INFO     IP: 123.231.231.123 User: fred     A message at INFO level with 2 parameters

Logging to a single file from multiple processes

Although logging is thread-safe, and logging to a single file from multiple threads in a single process is supported, logging to a single file from multiple processes is not supported, because there is no standard way to serialize access to a single file across multiple processes in Python. If you need to log to a single file from multiple processes, one way of doing this is to have all the processes log to a SocketHandler, and have a separate process which implements a socket server which reads from the socket and logs to file. (If you prefer, you can dedicate one thread in one of the existing processes to perform this function.) The section above on sending and receiving logging events across a network documents this approach in more detail and includes a working socket receiver which can be used as a starting point for you to adapt in your own applications.

If you are using a recent version of Python which includes the multiprocessing module, you could write your own handler which uses the Lock class from this module to serialize access to the file from your processes. The existing FileHandler and subclasses do not make use of multiprocessing at present, though they may do so in the future. Note that at present, the multiprocessing module does not provide working lock functionality on all platforms (see https://bugs.python.org/issue3770).
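
A minimal sketch of that idea follows (LockingFileHandler is a name invented here; it assumes the handler is created before the worker processes are started, so that they all inherit the same lock, and that multiprocessing locks actually work on your platform):

import logging
import multiprocessing

class LockingFileHandler(logging.FileHandler):
    def __init__(self, filename, mode='a', encoding=None, delay=False):
        logging.FileHandler.__init__(self, filename, mode, encoding, delay)
        self.mp_lock = multiprocessing.Lock()  # shared with forked children

    def emit(self, record):
        # serialize writes across processes, not just threads
        with self.mp_lock:
            logging.FileHandler.emit(self, record)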

Alternatively, you can use a Queue and a QueueHandler to send all logging events to one of the processes in your multi-process application. The following example script demonstrates how you can do this; in the example a separate listener process listens for events sent by other processes and logs them according to its own logging configuration. Although the example only demonstrates one way of doing it (for example, you may want to use a listener thread rather than a separate listener process - the implementation would be analogous), it does allow for completely different logging configurations for the listener and the other processes in your application, and can be used as the basis for code meeting your own specific requirements:

# You'll need these imports in your own code
import logging
import logging.handlers
import multiprocessing

# Next two import lines for this demo only
from random import choice, random
import time

#
# Because you'll want to define the logging configurations for listener and workers, the
# listener and worker process functions take a configurer parameter which is a callable
# for configuring logging for that process. These functions are also passed the queue,
# which they use for communication.
#
# In practice, you can configure the listener however you want, but note that in this
# simple example, the listener does not apply level or filter logic to received records.
# In practice, you would probably want to do this logic in the worker processes, to avoid
# sending events which would be filtered out between processes.
#
# The size of the rotated files is made small so you can see the results easily.
def listener_configurer():
    root = logging.getLogger()
    h = logging.handlers.RotatingFileHandler('mptest.log', 'a', 300, 10)
    f = logging.Formatter('%(asctime)s %(processName)-10s %(name)s %(levelname)-8s %(message)s')
    h.setFormatter(f)
    root.addHandler(h)

# This is the listener process top-level loop: wait for logging events
# (LogRecords)on the queue and handle them, quit when you get a None for a
# LogRecord.
def listener_process(queue, configurer):
    configurer()
    while True:
        try:
            record = queue.get()
            if record is None:  # We send this as a sentinel to tell the listener to quit.
                break
            logger = logging.getLogger(record.name)
            logger.handle(record)  # No level or filter logic applied - just do it!
        except Exception:
            import sys, traceback
            print('Whoops! Problem:', file=sys.stderr)
            traceback.print_exc(file=sys.stderr)

# Arrays used for random selections in this demo

LEVELS = [logging.DEBUG, logging.INFO, logging.WARNING,
          logging.ERROR, logging.CRITICAL]

LOGGERS = ['a.b.c', 'd.e.f']

MESSAGES = [
    'Random message #1',
    'Random message #2',
    'Random message #3',
]

# The worker configuration is done at the start of the worker process run.
# Note that on Windows you can't rely on fork semantics, so each process
# will run the logging configuration code when it starts.
def worker_configurer(queue):
    h = logging.handlers.QueueHandler(queue)  # Just the one handler needed
    root = logging.getLogger()
    root.addHandler(h)
    # send all messages, for demo; no other level or filter logic applied.
    root.setLevel(logging.DEBUG)

# This is the worker process top-level loop, which just logs ten events with
# random intervening delays before terminating.
# The print messages are just so you know it's doing something!
def worker_process(queue, configurer):
    configurer(queue)
    name = multiprocessing.current_process().name
    print('Worker started: %s' % name)
    for i in range(10):
        time.sleep(random())
        logger = logging.getLogger(choice(LOGGERS))
        level = choice(LEVELS)
        message = choice(MESSAGES)
        logger.log(level, message)
    print('Worker finished: %s' % name)

# Here's where the demo gets orchestrated. Create the queue, create and start
# the listener, create ten workers and start them, wait for them to finish,
# then send a None to the queue to tell the listener to finish.
def main():
    queue = multiprocessing.Queue(-1)
    listener = multiprocessing.Process(target=listener_process,
                                       args=(queue, listener_configurer))
    listener.start()
    workers = []
    for i in range(10):
        worker = multiprocessing.Process(target=worker_process,
                                         args=(queue, worker_configurer))
        workers.append(worker)
        worker.start()
    for w in workers:
        w.join()
    queue.put_nowait(None)
    listener.join()

if __name__ == '__main__':
    main()

A variant of the above script keeps the logging in the main process, in a separate thread:

import logging
import logging.config
import logging.handlers
from multiprocessing import Process, Queue
import random
import threading
import time

def logger_thread(q):
    while True:
        record = q.get()
        if record is None:
            break
        logger = logging.getLogger(record.name)
        logger.handle(record)


def worker_process(q):
    qh = logging.handlers.QueueHandler(q)
    root = logging.getLogger()
    root.setLevel(logging.DEBUG)
    root.addHandler(qh)
    levels = [logging.DEBUG, logging.INFO, logging.WARNING, logging.ERROR,
              logging.CRITICAL]
    loggers = ['foo', 'foo.bar', 'foo.bar.baz',
               'spam', 'spam.ham', 'spam.ham.eggs']
    for i in range(100):
        lvl = random.choice(levels)
        logger = logging.getLogger(random.choice(loggers))
        logger.log(lvl, 'Message no. %d', i)

if __name__ == '__main__':
    q = Queue()
    d = {
        'version': 1,
        'formatters': {
            'detailed': {
                'class': 'logging.Formatter',
                'format': '%(asctime)s %(name)-15s %(levelname)-8s %(processName)-10s %(message)s'
            }
        },
        'handlers': {
            'console': {
                'class': 'logging.StreamHandler',
                'level': 'INFO',
            },
            'file': {
                'class': 'logging.FileHandler',
                'filename': 'mplog.log',
                'mode': 'w',
                'formatter': 'detailed',
            },
            'foofile': {
                'class': 'logging.FileHandler',
                'filename': 'mplog-foo.log',
                'mode': 'w',
                'formatter': 'detailed',
            },
            'errors': {
                'class': 'logging.FileHandler',
                'filename': 'mplog-errors.log',
                'mode': 'w',
                'level': 'ERROR',
                'formatter': 'detailed',
            },
        },
        'loggers': {
            'foo': {
                'handlers': ['foofile']
            }
        },
        'root': {
            'level': 'DEBUG',
            'handlers': ['console', 'file', 'errors']
        },
    }
    workers = []
    for i in range(5):
        wp = Process(target=worker_process, name='worker %d' % (i + 1), args=(q,))
        workers.append(wp)
        wp.start()
    logging.config.dictConfig(d)
    lp = threading.Thread(target=logger_thread, args=(q,))
    lp.start()
    # At this point, the main process could do some useful work of its own
    # Once it's done that, it can wait for the workers to terminate...
    for wp in workers:
        wp.join()
    # And now tell the logging thread to finish up, too
    q.put(None)
    lp.join()

This variant shows how you can apply configuration for particular loggers - e.g. the foo logger has a special handler which stores all events in the foo subsystem in the file mplog-foo.log. This will be used by the logging machinery in the main process (even though the logging events are generated in the worker processes) to direct the messages to the appropriate destinations.

Using file rotation

Sometimes you want to let a log file grow to a certain size, then open a new file and log to that. You may want to keep a certain number of these files, and when that many files have been created, rotate the files so that the number of files and the size of the files both remain bounded. For this usage pattern, the logging package provides a RotatingFileHandler:

import glob
import logging
import logging.handlers

LOG_FILENAME = 'logging_rotatingfile_example.out'

# Set up a specific logger with our desired output level
my_logger = logging.getLogger('MyLogger')
my_logger.setLevel(logging.DEBUG)

# Add the log message handler to the logger
handler = logging.handlers.RotatingFileHandler(
              LOG_FILENAME, maxBytes=20, backupCount=5)

my_logger.addHandler(handler)

# Log some messages
for i in range(20):
    my_logger.debug('i = %d' % i)

# See what files are created
logfiles = glob.glob('%s*' % LOG_FILENAME)

for filename in logfiles:
    print(filename)

The result should be 6 separate files, each with part of the log history for the application:

logging_rotatingfile_example.out
logging_rotatingfile_example.out.1
logging_rotatingfile_example.out.2
logging_rotatingfile_example.out.3
logging_rotatingfile_example.out.4
logging_rotatingfile_example.out.5

The most current file is always logging_rotatingfile_example.out, and each time it reaches the size limit it is renamed with the suffix .1. Each of the existing backup files is renamed to increment the suffix (.1 becomes .2, etc.) and the .6 file is erased.

Obviously, this example sets the log length much too small as an extreme example. You would want to set maxBytes to an appropriate value.

Use of alternative formatting styles

When logging was added to the Python standard library, the only way of formatting messages with variable content was to use the %-formatting method. Since then, Python has gained two new formatting approaches: string.Template (added in Python 2.4) and str.format() (added in Python 2.6).

Logging (as of 3.2) provides improved support for these two additional formatting styles. The Formatter class has been enhanced to take an additional, optional keyword parameter named style. This defaults to '%', but other possible values are '{' and '$', which correspond to the other two formatting styles. Backwards compatibility is maintained by default (as you would expect), but by explicitly specifying a style parameter, you get the ability to specify format strings which work with str.format() or string.Template. Here's an example console session to show the possibilities:

>>> import logging
>>> root = logging.getLogger()
>>> root.setLevel(logging.DEBUG)
>>> handler = logging.StreamHandler()
>>> bf = logging.Formatter('{asctime} {name} {levelname:8s} {message}',
...                        style='{')
>>> handler.setFormatter(bf)
>>> root.addHandler(handler)
>>> logger = logging.getLogger('foo.bar')
>>> logger.debug('This is a DEBUG message')
2010-10-28 15:11:55,341 foo.bar DEBUG    This is a DEBUG message
>>> logger.critical('This is a CRITICAL message')
2010-10-28 15:12:11,526 foo.bar CRITICAL This is a CRITICAL message
>>> df = logging.Formatter('$asctime $name ${levelname} $message',
...                        style='$')
>>> handler.setFormatter(df)
>>> logger.debug('This is a DEBUG message')
2010-10-28 15:13:06,924 foo.bar DEBUG This is a DEBUG message
>>> logger.critical('This is a CRITICAL message')
2010-10-28 15:13:11,494 foo.bar CRITICAL This is a CRITICAL message
>>>

Note that the formatting of logging messages for final output to logs is completely independent of how an individual logging message is constructed. That can still use %-formatting, as shown here:

>>> logger.error('This is an%s %s %s', 'other,', 'ERROR,', 'message')
2010-10-28 15:19:29,833 foo.bar ERROR This is another, ERROR, message
>>>

Logging calls (logger.debug(), logger.info() etc.) only take positional parameters for the actual logging message itself, with keyword parameters used only for determining options for how to handle the actual logging call (e.g. the exc_info keyword parameter to indicate that traceback information should be logged, or the extra keyword parameter to indicate additional contextual information to be added to the log). So you cannot directly make logging calls using str.format() or string.Template syntax, because internally the logging package uses %-formatting to merge the format string and the variable arguments. There would be no changing this while preserving backward compatibility, since all logging calls which are out there in existing code will be using %-format strings.

There is, however, a way that you can use {}- and $-formatting to construct your individual log messages. Recall that for a message you can use an arbitrary object as a message format string, and that the logging package will call str() on that object to get the actual format string. Consider the following two classes:

class BraceMessage:
    def __init__(self, fmt, *args, **kwargs):
        self.fmt = fmt
        self.args = args
        self.kwargs = kwargs

    def __str__(self):
        return self.fmt.format(*self.args, **self.kwargs)

class DollarMessage:
    def __init__(self, fmt, **kwargs):
        self.fmt = fmt
        self.kwargs = kwargs

    def __str__(self):
        from string import Template
        return Template(self.fmt).substitute(**self.kwargs)

Either of these can be used in place of a format string, to allow {}- or $-formatting to be used to build the actual "message" part which appears in the formatted log output in place of "%(message)s" or "{message}" or "$message". It's a little unwieldy to use the class names whenever you want to log something, but it's quite palatable if you use an alias such as __ (double underscore - not to be confused with _, the single underscore used as a synonym/alias for gettext.gettext() or its brethren).

The above classes are not included in Python, though they're easy enough to copy and paste into your own code. They can be used as follows (assuming that they're declared in a module called wherever):

>>> from wherever import BraceMessage as __
>>> print(__('Message with {0} {name}', 2, name='placeholders'))
Message with 2 placeholders
>>> class Point: pass
...
>>> p = Point()
>>> p.x = 0.5
>>> p.y = 0.5
>>> print(__('Message with coordinates: ({point.x:.2f}, {point.y:.2f})',
...       point=p))
Message with coordinates: (0.50, 0.50)
>>> from wherever import DollarMessage as __
>>> print(__('Message with $num $what', num=2, what='placeholders'))
Message with 2 placeholders
>>>

While the above examples use print() to show how the formatting works, you would of course use logger.debug() or similar to actually log using this approach.

One thing to note is that you pay no significant performance penalty with this approach: the actual formatting happens not when you make the logging call, but when (and if) the logged message is actually about to be output to a log by a handler. So the only slightly unusual thing which might trip you up is that the parentheses go around the format string and the arguments, not just the format string. That's because the __ notation is just syntax sugar for a constructor call to one of the XXXMessage classes.

If you prefer, you can use a LoggerAdapter to achieve a similar effect to the above, as in the following example:

import logging

class Message(object):
    def __init__(self, fmt, args):
        self.fmt = fmt
        self.args = args

    def __str__(self):
        return self.fmt.format(*self.args)

class StyleAdapter(logging.LoggerAdapter):
    def __init__(self, logger, extra=None):
        super(StyleAdapter, self).__init__(logger, extra or {})

    def log(self, level, msg, *args, **kwargs):
        if self.isEnabledFor(level):
            msg, kwargs = self.process(msg, kwargs)
            self.logger._log(level, Message(msg, args), (), **kwargs)

logger = StyleAdapter(logging.getLogger(__name__))

def main():
    logger.debug('Hello, {}', 'world!')

if __name__ == '__main__':
    logging.basicConfig(level=logging.DEBUG)
    main()

The above script should log the message Hello, world! when run with Python 3.2 or later.

Customizing LogRecord

Every logging event is represented by a LogRecord instance. When an event is logged and not filtered out by a logger's level, a LogRecord is created, populated with information about the event and then passed to the handlers for that logger (and its ancestors, up to and including the logger where further propagation up the hierarchy is disabled). Before Python 3.2, there were only two places where this creation was done:

  • Logger.makeRecord(), which is called in the normal process of logging an event. This invoked LogRecord directly to create an instance.
  • makeLogRecord(), which is called with a dictionary containing attributes to be added to the LogRecord. This is typically invoked when a suitable dictionary has been received over the network (e.g. in pickle form via a SocketHandler, or in JSON form via an HTTPHandler).

This has usually meant that if you need to do anything special with a LogRecord, you've had to do one of the following:

  • Create your own Logger subclass, which overrides Logger.makeRecord(), and set it using setLoggerClass() before any loggers that you care about are instantiated.
  • Add a Filter to a logger or handler, which does the necessary special manipulation you need when its filter() method is called.

The first approach would be a little unwieldy in the scenario where (say) several different libraries wanted to do different things. Each would attempt to set its own Logger subclass, and the one which did this last would win.

The second approach works reasonably well for many cases, but does not allow you to e.g. use a specialized subclass of LogRecord. Library developers can set a suitable filter on their loggers, but they would have to remember to do this every time they introduced a new logger (which they would do simply by adding new packages or modules and doing

logger = logging.getLogger(__name__)

at module level). It's probably one too many things to think about. Developers could also add the filter to a NullHandler attached to their top-level logger, but this would not be invoked if an application developer attached a handler to a lower-level library logger - so output from that handler would not reflect the intentions of the library developer.

In Python 3.2 and later, LogRecord creation is done through a factory, which you can specify. The factory is just a callable you can set with setLogRecordFactory(), and interrogate with getLogRecordFactory(). The factory is invoked with the same signature as the LogRecord constructor, as LogRecord is the default setting for the factory.

This approach allows a custom factory to control all aspects of LogRecord creation. For example, you could return a subclass, or just add some additional attributes to the record once created, using a pattern similar to this:

old_factory = logging.getLogRecordFactory()

def record_factory(*args, **kwargs):
    record = old_factory(*args, **kwargs)
    record.custom_attribute = 0xdecafbad
    return record

logging.setLogRecordFactory(record_factory)

This pattern allows different libraries to chain factories together, and as long as they don't overwrite each other's attributes or unintentionally overwrite the attributes provided as standard, there should be no surprises. However, it should be borne in mind that each link in the chain adds run-time overhead to all logging operations, and the technique should only be used when the use of a Filter does not provide the desired result.

Subclassing QueueHandler - a ZeroMQ example

You can use a QueueHandler subclass to send messages to other kinds of queues, for example a ZeroMQ 'publish' socket. In the example below, the socket is created separately and passed to the handler (as its 'queue'):

import zmq   # using pyzmq, the Python binding for ZeroMQ
import json  # for serializing records portably
from logging.handlers import QueueHandler

ctx = zmq.Context()
sock = zmq.Socket(ctx, zmq.PUB)  # or zmq.PUSH, or other suitable value
sock.bind('tcp://*:5556')        # or wherever

class ZeroMQSocketHandler(QueueHandler):
    def enqueue(self, record):
        # encode to bytes, as pyzmq sockets send byte strings
        data = json.dumps(record.__dict__)
        self.queue.send(data.encode('utf-8'))

handler = ZeroMQSocketHandler(sock)

Of course there are other ways of organizing this, for example passing in the data needed by the handler to create the socket:

class ZeroMQSocketHandler(QueueHandler):
    def __init__(self, uri, socktype=zmq.PUB, ctx=None):
        self.ctx = ctx or zmq.Context()
        socket = zmq.Socket(self.ctx, socktype)
        socket.bind(uri)
        QueueHandler.__init__(self, socket)

    def enqueue(self, record):
        # encode to bytes, as pyzmq sockets send byte strings
        data = json.dumps(record.__dict__)
        self.queue.send(data.encode('utf-8'))

    def close(self):
        self.queue.close()

Subclassing QueueListener - a ZeroMQ example

You can also subclass QueueListener to get messages from other kinds of queues, for example a ZeroMQ 'subscribe' socket. Here's an example:

class ZeroMQSocketListener(QueueListener):
    def __init__(self, uri, *handlers, **kwargs):
        self.ctx = kwargs.pop('ctx', None) or zmq.Context()
        socket = zmq.Socket(self.ctx, zmq.SUB)
        socket.setsockopt(zmq.SUBSCRIBE, b'')  # subscribe to everything
        socket.connect(uri)
        # pass the socket in as this listener's 'queue'
        QueueListener.__init__(self, socket, *handlers, **kwargs)

    def dequeue(self):
        msg = self.queue.recv()
        return logging.makeLogRecord(json.loads(msg))

See also

Module logging
API reference for the logging module.
Module logging.config
Configuration API for the logging module.
Module logging.handlers
Useful handlers included with the logging module.

A basic logging tutorial

A more advanced logging tutorial

An example dictionary-based configuration

Below is an example of a logging configuration dictionary - it's taken from the documentation on the Django project. This dictionary is passed to dictConfig() to put the configuration into effect:

LOGGING = {
    'version': 1,
    'disable_existing_loggers': True,
    'formatters': {
        'verbose': {
            'format': '%(levelname)s %(asctime)s %(module)s %(process)d %(thread)d %(message)s'
        },
        'simple': {
            'format': '%(levelname)s %(message)s'
        },
    },
    'filters': {
        'special': {
            '()': 'project.logging.SpecialFilter',
            'foo': 'bar',
        }
    },
    'handlers': {
        'null': {
            'level':'DEBUG',
            'class':'django.utils.log.NullHandler',
        },
        'console':{
            'level':'DEBUG',
            'class':'logging.StreamHandler',
            'formatter': 'simple'
        },
        'mail_admins': {
            'level': 'ERROR',
            'class': 'django.utils.log.AdminEmailHandler',
            'filters': ['special']
        }
    },
    'loggers': {
        'django': {
            'handlers':['null'],
            'propagate': True,
            'level':'INFO',
        },
        'django.request': {
            'handlers': ['mail_admins'],
            'level': 'ERROR',
            'propagate': False,
        },
        'myproject.custom': {
            'handlers': ['console', 'mail_admins'],
            'level': 'INFO',
            'filters': ['special']
        }
    }
}

For more information about this configuration, you can see the relevant section of the Django documentation.

Using a rotator and namer to customize log rotation processing

An example of how you can define a namer and rotator is given in the following snippet, which shows zlib-based compression of the log file:

import logging.handlers
import os
import zlib

def namer(name):
    return name + ".gz"

def rotator(source, dest):
    with open(source, "rb") as sf:
        data = sf.read()
        compressed = zlib.compress(data, 9)
        with open(dest, "wb") as df:
            df.write(compressed)
    os.remove(source)

rh = logging.handlers.RotatingFileHandler(...)
rh.rotator = rotator
rh.namer = namer

These are not "true" .gz files, as they are bare compressed data, with no "container" such as you'd find in an actual gzip file. This snippet is just for illustration purposes.

A more elaborate multiprocessing example

The following working example shows how logging can be used with multiprocessing using configuration files. The configurations are fairly simple, but serve to illustrate how more complex ones could be implemented in a real multiprocessing scenario.

In the example, the main process spawns a listener process and some worker processes. Each of the main process, the listener and the workers have three separate configurations (the workers all share the same configuration). We can see logging in the main process, how the workers log to a QueueHandler and how the listener implements a QueueListener and a more complex logging configuration, and arranges to dispatch events received via the queue to the handlers specified in the configuration. Note that these configurations are purely illustrative, but you should be able to adapt this example to your own scenario.

Here's the script - the docstrings and the comments hopefully explain how it works:

import logging
import logging.config
import logging.handlers
from multiprocessing import Process, Queue, Event, current_process
import os
import random
import time

class MyHandler:
    """
    A simple handler for logging events. It runs in the listener process and
    dispatches events to loggers based on the name in the received record,
    which then get dispatched, by the logging system, to the handlers
    configured for those loggers.
    """
    def handle(self, record):
        logger = logging.getLogger(record.name)
        # The process name is transformed just to show that it's the listener
        # doing the logging to files and console
        record.processName = '%s (for %s)' % (current_process().name, record.processName)
        logger.handle(record)

def listener_process(q, stop_event, config):
    """
    This could be done in the main process, but is just done in a separate
    process for illustrative purposes.

    This initialises logging according to the specified configuration,
    starts the listener and waits for the main process to signal completion
    via the event. The listener is then stopped, and the process exits.
    """
    logging.config.dictConfig(config)
    listener = logging.handlers.QueueListener(q, MyHandler())
    listener.start()
    if os.name == 'posix':
        # On POSIX, the setup logger will have been configured in the
        # parent process, but should have been disabled following the
        # dictConfig call.
        # On Windows, since fork isn't used, the setup logger won't
        # exist in the child, so it would be created and the message
        # would appear - hence the "if posix" clause.
        logger = logging.getLogger('setup')
        logger.critical('Should not appear, because of disabled logger ...')
    stop_event.wait()
    listener.stop()

def worker_process(config):
    """
    A number of these are spawned for the purpose of illustration. In
    practice, they could be a heterogeneous bunch of processes rather than
    ones which are identical to each other.

    This initialises logging according to the specified configuration,
    and logs a hundred messages with random levels to randomly selected
    loggers.

    A small sleep is added to allow other processes a chance to run. This
    is not strictly needed, but it mixes the output from the different
    processes a bit more than if it's left out.
    """
    logging.config.dictConfig(config)
    levels = [logging.DEBUG, logging.INFO, logging.WARNING, logging.ERROR,
              logging.CRITICAL]
    loggers = ['foo', 'foo.bar', 'foo.bar.baz',
               'spam', 'spam.ham', 'spam.ham.eggs']
    if os.name == 'posix':
        # On POSIX, the setup logger will have been configured in the
        # parent process, but should have been disabled following the
        # dictConfig call.
        # On Windows, since fork isn't used, the setup logger won't
        # exist in the child, so it would be created and the message
        # would appear - hence the "if posix" clause.
        logger = logging.getLogger('setup')
        logger.critical('Should not appear, because of disabled logger ...')
    for i in range(100):
        lvl = random.choice(levels)
        logger = logging.getLogger(random.choice(loggers))
        logger.log(lvl, 'Message no. %d', i)
        time.sleep(0.01)

def main():
    q = Queue()
    # The main process gets a simple configuration which prints to the console.
    config_initial = {
        'version': 1,
        'formatters': {
            'detailed': {
                'class': 'logging.Formatter',
                'format': '%(asctime)s %(name)-15s %(levelname)-8s %(processName)-10s %(message)s'
            }
        },
        'handlers': {
            'console': {
                'class': 'logging.StreamHandler',
                'level': 'INFO',
            },
        },
        'root': {
            'level': 'DEBUG',
            'handlers': ['console']
        },
    }
    # The worker process configuration is just a QueueHandler attached to the
    # root logger, which allows all messages to be sent to the queue.
    # We disable existing loggers to disable the "setup" logger used in the
    # parent process. This is needed on POSIX because the logger will
    # be there in the child following a fork().
    config_worker = {
        'version': 1,
        'disable_existing_loggers': True,
        'handlers': {
            'queue': {
                'class': 'logging.handlers.QueueHandler',
                'queue': q,
            },
        },
        'root': {
            'level': 'DEBUG',
            'handlers': ['queue']
        },
    }
    # The listener process configuration shows that the full flexibility of
    # logging configuration is available to dispatch events to handlers however
    # you want.
    # We disable existing loggers to disable the "setup" logger used in the
    # parent process. This is needed on POSIX because the logger will
    # be there in the child following a fork().
    config_listener = {
        'version': 1,
        'disable_existing_loggers': True,
        'formatters': {
            'detailed': {
                'class': 'logging.Formatter',
                'format': '%(asctime)s %(name)-15s %(levelname)-8s %(processName)-10s %(message)s'
            },
            'simple': {
                'class': 'logging.Formatter',
                'format': '%(name)-15s %(levelname)-8s %(processName)-10s %(message)s'
            }
        },
        'handlers': {
            'console': {
                'class': 'logging.StreamHandler',
                'level': 'INFO',
                'formatter': 'simple',
            },
            'file': {
                'class': 'logging.FileHandler',
                'filename': 'mplog.log',
                'mode': 'w',
                'formatter': 'detailed',
            },
            'foofile': {
                'class': 'logging.FileHandler',
                'filename': 'mplog-foo.log',
                'mode': 'w',
                'formatter': 'detailed',
            },
            'errors': {
                'class': 'logging.FileHandler',
                'filename': 'mplog-errors.log',
                'mode': 'w',
                'level': 'ERROR',
                'formatter': 'detailed',
            },
        },
        'loggers': {
            'foo': {
                'handlers': ['foofile']
            }
        },
        'root': {
            'level': 'DEBUG',
            'handlers': ['console', 'file', 'errors']
        },
    }
    # Log some initial events, just to show that logging in the parent works
    # normally.
    logging.config.dictConfig(config_initial)
    logger = logging.getLogger('setup')
    logger.info('About to create workers ...')
    workers = []
    for i in range(5):
        wp = Process(target=worker_process, name='worker %d' % (i + 1),
                     args=(config_worker,))
        workers.append(wp)
        wp.start()
        logger.info('Started worker: %s', wp.name)
    logger.info('About to create listener ...')
    stop_event = Event()
    lp = Process(target=listener_process, name='listener',
                 args=(q, stop_event, config_listener))
    lp.start()
    logger.info('Started listener')
    # We now hang around for the workers to finish their work.
    for wp in workers:
        wp.join()
    # Workers all done, listening can now stop.
    # Logging in the parent still works normally.
    logger.info('Telling listener to stop ...')
    stop_event.set()
    lp.join()
    logger.info('All done.')

if __name__ == '__main__':
    main()

Inserting a BOM into messages sent to a SysLogHandler

RFC 5424 requires that a Unicode message be sent to a syslog daemon as a set of bytes which have the following structure: an optional pure-ASCII component, followed by a UTF-8 Byte Order Mark (BOM), followed by Unicode encoded using UTF-8. (See the relevant section of the specification.)

In Python 3.1, code was added to SysLogHandler to insert a BOM into the message, but unfortunately, it was implemented incorrectly, with the BOM appearing at the beginning of the message and hence not allowing any pure-ASCII component to appear before it.

As this behaviour is broken, the incorrect BOM insertion code is being removed from Python 3.2.4 and later. However, it is not being replaced, and if you want to produce RFC 5424-compliant messages which include a BOM, an optional pure-ASCII sequence before it and arbitrary Unicode after it, encoded using UTF-8, then you need to do the following:

  1. Attach a Formatter instance to your SysLogHandler instance, with a format string such as:

    'ASCII section\ufeffUnicode section'
    

    The Unicode code point U+FEFF, when encoded using UTF-8, will be encoded as a UTF-8 BOM - the byte-string b'\xef\xbb\xbf'.

  2. Replace the ASCII section with whatever placeholders you like, but make sure that the data that appears in there after substitution is always ASCII (that way, it will remain unchanged after UTF-8 encoding).

  3. Replace the Unicode section with whatever placeholders you like; if the data which appears there after substitution contains characters outside the ASCII range, that's fine - it will be encoded using UTF-8.

The formatted message will be encoded using UTF-8 by SysLogHandler. If you follow the above rules, you should be able to produce RFC 5424-compliant messages. If you don't, logging may not complain, but your messages will not be RFC 5424-compliant, and your syslog daemon may complain.
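
A minimal sketch of these three steps (the address '/dev/log' is an assumption for a typical Linux system; adjust it for your platform):

import logging
from logging.handlers import SysLogHandler

handler = SysLogHandler(address='/dev/log')
# %(levelname)s substitutes to pure ASCII; anything after U+FEFF may be
# arbitrary Unicode, which will be encoded as UTF-8
handler.setFormatter(logging.Formatter('%(levelname)s \ufeff%(message)s'))
logger = logging.getLogger('rfc5424-demo')
logger.addHandler(handler)
logger.error('message with non-ASCII text: caf\u00e9')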

Implementing structured logging

Although most logging messages are intended for reading by humans, and thus not readily machine-parseable, there might be circumstances where you want to output messages in a structured format which is capable of being parsed by a program (without needing complex regular expressions to parse the log message). This is straightforward to achieve using the logging package. There are a number of ways in which this could be achieved, but the following is a simple approach which uses JSON to serialise the event in a machine-parseable manner:

import json
import logging

class StructuredMessage(object):
    def __init__(self, message, **kwargs):
        self.message = message
        self.kwargs = kwargs

    def __str__(self):
        return '%s >>> %s' % (self.message, json.dumps(self.kwargs))

_ = StructuredMessage   # optional, to improve readability

logging.basicConfig(level=logging.INFO, format='%(message)s')
logging.info(_('message 1', foo='bar', bar='baz', num=123, fnum=123.456))

If the above script is run, it prints:

message 1 >>> {"fnum": 123.456, "num": 123, "bar": "baz", "foo": "bar"}

Note that the order of items might be different according to the version of Python used.
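
If you need a stable ordering (say, to compare logs produced by different Python versions), one option is a small variation on the class above which asks the encoder to sort keys:

class StructuredMessage(object):
    def __init__(self, message, **kwargs):
        self.message = message
        self.kwargs = kwargs

    def __str__(self):
        # sort_keys=True makes the key order deterministic
        return '%s >>> %s' % (self.message, json.dumps(self.kwargs, sort_keys=True))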

If you need more specialised processing, you can use a custom JSON encoder, as in the following complete example:

from __future__ import unicode_literals

import json
import logging

# This next bit is to ensure the script runs unchanged on 2.x and 3.x
try:
    unicode
except NameError:
    unicode = str

class Encoder(json.JSONEncoder):
    def default(self, o):
        if isinstance(o, set):
            return tuple(o)
        elif isinstance(o, unicode):
            return o.encode('unicode_escape').decode('ascii')
        return super(Encoder, self).default(o)

class StructuredMessage(object):
    def __init__(self, message, **kwargs):
        self.message = message
        self.kwargs = kwargs

    def __str__(self):
        s = Encoder().encode(self.kwargs)
        return '%s >>> %s' % (self.message, s)

_ = StructuredMessage   # optional, to improve readability

def main():
    logging.basicConfig(level=logging.INFO, format='%(message)s')
    logging.info(_('message 1', set_value={1, 2, 3}, snowman='\u2603'))

if __name__ == '__main__':
    main()

When the above script is run, it prints:

message 1 >>> {"snowman": "\u2603", "set_value": [1, 2, 3]}

As before, note that the order of items might be different according to the version of Python used.

Customizing handlers with dictConfig()

There are times when you want to customize logging handlers in particular ways, and if you use dictConfig() you may be able to do this without subclassing. As an example, consider that you may want to set the ownership of a log file. On POSIX, this is easily done using shutil.chown(), but the file handlers in the stdlib don't offer built-in support. You can customize handler creation using a plain function such as:

def owned_file_handler(filename, mode='a', encoding=None, owner=None):
    if owner:
        if not os.path.exists(filename):
            open(filename, 'a').close()
        shutil.chown(filename, *owner)
    return logging.FileHandler(filename, mode, encoding)

You can then specify, in a logging configuration passed to dictConfig(), that a logging handler be created by calling this function:

LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'default': {
            'format': '%(asctime)s %(levelname)s %(name)s %(message)s'
        },
    },
    'handlers': {
        'file':{
            # The values below are popped from this dictionary and
            # used to create the handler, set the handler's level and
            # its formatter.
            '()': owned_file_handler,
            'level':'DEBUG',
            'formatter': 'default',
            # The values below are passed to the handler creator callable
            # as keyword arguments.
            'owner': ['pulse', 'pulse'],
            'filename': 'chowntest.log',
            'mode': 'w',
            'encoding': 'utf-8',
        },
    },
    'root': {
        'handlers': ['file'],
        'level': 'DEBUG',
    },
}

In this example the user and group are set using the pulse user and group, purely for the purposes of illustration. Putting it together into a working script, chowntest.py:

import logging, logging.config, os, shutil

def owned_file_handler(filename, mode='a', encoding=None, owner=None):
    if owner:
        if not os.path.exists(filename):
            open(filename, 'a').close()
        shutil.chown(filename, *owner)
    return logging.FileHandler(filename, mode, encoding)

LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'default': {
            'format': '%(asctime)s %(levelname)s %(name)s %(message)s'
        },
    },
    'handlers': {
        'file':{
            # The values below are popped from this dictionary and
            # used to create the handler, set the handler's level and
            # its formatter.
            '()': owned_file_handler,
            'level':'DEBUG',
            'formatter': 'default',
            # The values below are passed to the handler creator callable
            # as keyword arguments.
            'owner': ['pulse', 'pulse'],
            'filename': 'chowntest.log',
            'mode': 'w',
            'encoding': 'utf-8',
        },
    },
    'root': {
        'handlers': ['file'],
        'level': 'DEBUG',
    },
}

logging.config.dictConfig(LOGGING)
logger = logging.getLogger('mylogger')
logger.debug('A debug message')

To run this, you will probably need to run as root:

$ sudo python3.3 chowntest.py
$ cat chowntest.log
2013-11-05 09:34:51,128 DEBUG mylogger A debug message
$ ls -l chowntest.log
-rw-r--r-- 1 pulse pulse 55 2013-11-05 09:34 chowntest.log

Note that this example uses Python 3.3 because that's where shutil.chown() makes an appearance. This approach should work with any Python version that supports dictConfig() - namely, Python 2.7, 3.2 or later. With pre-3.3 versions, you would need to implement the actual ownership change using e.g. os.chown().
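
For those older versions, a sketch of the same function using os.chown() might look like this (the pwd and grp modules are POSIX-only; the owner names are resolved to numeric ids before the ownership change):

import logging
import os
import pwd
import grp

def owned_file_handler(filename, mode='a', encoding=None, owner=None):
    if owner:
        if not os.path.exists(filename):
            open(filename, 'a').close()
        uid = pwd.getpwnam(owner[0]).pw_uid   # look up numeric user id
        gid = grp.getgrnam(owner[1]).gr_gid   # look up numeric group id
        os.chown(filename, uid, gid)
    return logging.FileHandler(filename, mode, encoding)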

In practice, the handler-creating function may be located in a utility module somewhere in your project. Instead of the line in the configuration:

'()': owned_file_handler,

you could use e.g.:

'()': 'ext://project.util.owned_file_handler',

where project.util can be replaced with the actual name of the package where the function resides. In the working script above, using 'ext://__main__.owned_file_handler' should work. Here, the actual callable is resolved by dictConfig() from the ext:// specification.

This example hopefully also points the way to how you could implement other types of file change - e.g. setting specific POSIX permission bits - in the same way, using os.chmod().

Of course, the approach could also be extended to types of handler other than a FileHandler - for example, one of the rotating file handlers, or a different type of handler altogether.

Using particular formatting styles throughout your application

In Python 3.2, the Formatter gained a style keyword parameter which, while defaulting to % for backward compatibility, allowed the specification of { or $ to support the formatting approaches supported by str.format() and string.Template. Note that this governs the formatting of logging messages for final output to logs, and is completely orthogonal to how an individual logging message is constructed.

Logging calls (debug(), info() etc.) only take positional parameters for the actual logging message itself, with keyword parameters used only for determining options for how to handle the logging call (e.g. the exc_info keyword parameter to indicate that traceback information should be logged, or the extra keyword parameter to indicate additional contextual information to be added to the log). So you cannot directly make logging calls using str.format() or string.Template syntax, because internally the logging package uses %-formatting to merge the format string and the variable arguments. There would be no changing this while preserving backward compatibility, since all logging calls which are out there in existing code will be using %-format strings.

There have been suggestions to associate format styles with specific loggers, but that approach also runs into backward compatibility problems, because any existing code could be using a given logger name and using %-formatting.

For logging to work interoperably between any third-party libraries and your code, decisions about formatting need to be made at the level of the individual logging call. This opens up a couple of ways in which alternative formatting styles can be accommodated.

Using LogRecord factories

In Python 3.2, along with the Formatter changes mentioned above, the logging package gained the ability to allow users to set their own LogRecord subclasses, using the setLogRecordFactory() function. You can use this to set your own subclass of LogRecord, which does the Right Thing by overriding the getMessage() method. The base class implementation of this method is where the msg % args formatting happens, and where you can substitute your alternative formatting; however, you should be careful to support all formatting styles and allow %-formatting as the default, to ensure interoperability with other code. Care should also be taken to call str(self.msg), just as the base implementation does.

Refer to the reference documentation on setLogRecordFactory() and LogRecord for more information.
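
A minimal sketch of the idea (BraceLogRecord is a name invented for illustration), trying %-formatting first for interoperability and falling back to str.format():

import logging

class BraceLogRecord(logging.LogRecord):
    def getMessage(self):
        msg = str(self.msg)  # call str(self.msg), as the base class does
        if self.args:
            try:
                return msg % self.args         # default %-formatting first
            except TypeError:
                return msg.format(*self.args)  # fall back to str.format()
        return msg

logging.setLogRecordFactory(BraceLogRecord)
logging.basicConfig(level=logging.INFO, format='%(message)s')
logging.info('Hello, {}!', 'world')  # formatted via the str.format() fallback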

Using custom message objects

There is another, perhaps simpler way that you can use {}- and $-formatting to construct your individual log messages. You may recall (from Using arbitrary objects as messages) that when logging you can use an arbitrary object as a message format string, and that the logging package will call str() on that object to get the actual format string. Consider the following two classes:

class BraceMessage(object):
    def __init__(self, fmt, *args, **kwargs):
        self.fmt = fmt
        self.args = args
        self.kwargs = kwargs

    def __str__(self):
        return self.fmt.format(*self.args, **self.kwargs)

class DollarMessage(object):
    def __init__(self, fmt, **kwargs):
        self.fmt = fmt
        self.kwargs = kwargs

    def __str__(self):
        from string import Template
        return Template(self.fmt).substitute(**self.kwargs)

Either of these can be used in place of a format string, to allow {}- or $-formatting to be used to build the actual "message" part which appears in the formatted log output in place of "%(message)s" or "{message}" or "$message". If you find it a little unwieldy to use the class names whenever you want to log something, you can make it more palatable if you use an alias such as M or _ for the message (or perhaps __, if you are using _ for localization).

Examples of this approach are given below. Firstly, formatting with str.format():

>>> __ = BraceMessage
>>> print(__('Message with {0} {1}', 2, 'placeholders'))
Message with 2 placeholders
>>> class Point: pass
...
>>> p = Point()
>>> p.x = 0.5
>>> p.y = 0.5
>>> print(__('Message with coordinates: ({point.x:.2f}, {point.y:.2f})', point=p))
Message with coordinates: (0.50, 0.50)

Secondly, formatting with string.Template:

>>> __ = DollarMessage
>>> print(__('Message with $num $what', num=2, what='placeholders'))
Message with 2 placeholders
>>>

One thing to note is that you pay no significant performance penalty with this approach: the actual formatting happens not when you make the logging call, but when (and if) the logged message is actually about to be output to a log by a handler. So the only slightly unusual thing which might trip you up is that the parentheses go around the format string and the arguments, not just the format string. That's because the __ notation is just syntax sugar for a constructor call to one of the XXXMessage classes shown above.

Configuring filters with dictConfig()

You can configure filters using dictConfig(), though it might not be obvious at first glance how to do it (hence this recipe). Since Filter is the only filter class included in the standard library, and it is unlikely to cater to many requirements (it's only there as a base class), you will typically need to define your own Filter subclass with an overridden filter() method. To do this, specify the '()' key in the configuration dictionary for the filter, specifying a callable which will be used to create the filter (a class is the most obvious, but you can provide any callable which returns a Filter instance). Here is a complete example:

import logging
import logging.config
import sys

class MyFilter(logging.Filter):
    def __init__(self, param=None):
        self.param = param

    def filter(self, record):
        if self.param is None:
            allow = True
        else:
            allow = self.param not in record.msg
        if allow:
            record.msg = 'changed: ' + record.msg
        return allow

LOGGING = {
    'version': 1,
    'filters': {
        'myfilter': {
            '()': MyFilter,
            'param': 'noshow',
        }
    },
    'handlers': {
        'console': {
            'class': 'logging.StreamHandler',
            'filters': ['myfilter']
        }
    },
    'root': {
        'level': 'DEBUG',
        'handlers': ['console']
    },
}

if __name__ == '__main__':
    logging.config.dictConfig(LOGGING)
    logging.debug('hello')
    logging.debug('hello - noshow')

This example shows how you can pass configuration data to the callable which constructs the instance, in the form of keyword parameters. When run, the above script will print:

changed: hello

which shows that the filter is working as configured.

A couple of extra points to note:

  • If you can't refer to the callable directly in the configuration (e.g. if it lives in a different module, and you can't import it directly where the configuration dictionary is), you can use the form ext://... as described in Access to external objects. For example, you could have used the text 'ext://__main__.MyFilter' instead of MyFilter in the above example.
  • As well as for filters, this technique can also be used to configure custom handlers and formatters. See User-defined objects for more information on how logging supports using user-defined objects in its configuration, and see the other cookbook recipe Customizing handlers with dictConfig() above.

Customized exception formatting

Sometimes you may want to do customized exception formatting - for argument's sake, let's say you want exactly one line per logged event, even when exception information is present. You can do this with a custom formatter class, as shown in the following example:

import logging

class OneLineExceptionFormatter(logging.Formatter):
    def formatException(self, exc_info):
        """
        Format an exception so that it prints on a single line.
        """
        result = super(OneLineExceptionFormatter, self).formatException(exc_info)
        return repr(result)  # or format into one line however you want to

    def format(self, record):
        s = super(OneLineExceptionFormatter, self).format(record)
        if record.exc_text:
            s = s.replace('\n', '') + '|'
        return s

def configure_logging():
    fh = logging.FileHandler('output.txt', 'w')
    f = OneLineExceptionFormatter('%(asctime)s|%(levelname)s|%(message)s|',
                                  '%d/%m/%Y %H:%M:%S')
    fh.setFormatter(f)
    root = logging.getLogger()
    root.setLevel(logging.DEBUG)
    root.addHandler(fh)

def main():
    configure_logging()
    logging.info('Sample message')
    try:
        x = 1 / 0
    except ZeroDivisionError as e:
        logging.exception('ZeroDivisionError: %s', e)

if __name__ == '__main__':
    main()

When run, this produces a file with exactly two lines:

28/01/2015 07:21:23|INFO|Sample message|
28/01/2015 07:21:23|ERROR|ZeroDivisionError: integer division or modulo by zero|'Traceback (most recent call last):\n  File "logtest7.py", line 30, in main\n    x = 1 / 0\nZeroDivisionError: integer division or modulo by zero'|

While the above treatment is simplistic, it points the way to how exception information can be formatted to your liking. The traceback module may be helpful for more specialized needs; one possibility is sketched below.
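
For example, a variant of the formatException() override above which keeps only the final 'ExceptionType: message' line of the traceback (a sketch; adjust the slicing to keep as much detail as you need):

import logging
import traceback

class TerseExceptionFormatter(logging.Formatter):
    def formatException(self, exc_info):
        # format_exception() returns a list of lines; the last one is
        # the 'ExceptionType: message' summary
        lines = traceback.format_exception(*exc_info)
        return lines[-1].strip()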

Speaking logging messages

There might be situations when it is desirable to have logging messages rendered in an audible rather than a visible format. This is easy to do if you have text-to-speech (TTS) functionality available in your system, even if it doesn't have a Python binding. Most TTS systems have a command line program you can run, and this can be invoked from a handler using subprocess. It's assumed here that TTS command line programs won't expect to interact with users or take a long time to complete, that the frequency of logged messages will be not so high as to swamp the user with messages, and that it's acceptable to have the messages spoken one at a time rather than concurrently. The example implementation below waits for one message to be spoken before the next is processed, and this might cause other handlers to be kept waiting. Here is a short example showing the approach, which assumes that the espeak TTS package is available:

import logging
import subprocess
import sys

class TTSHandler(logging.Handler):
    def emit(self, record):
        msg = self.format(record)
        # Speak slowly in a female English voice
        cmd = ['espeak', '-s150', '-ven+f3', msg]
        p = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                             stderr=subprocess.STDOUT)
        # wait for the program to finish
        p.communicate()

def configure_logging():
    h = TTSHandler()
    root = logging.getLogger()
    root.addHandler(h)
    # the default formatter just returns the message
    root.setLevel(logging.DEBUG)

def main():
    logging.info('Hello')
    logging.debug('Goodbye')

if __name__ == '__main__':
    configure_logging()
    sys.exit(main())

When run, this script should say 'Hello' and then 'Goodbye' in a female voice.

The above approach can, of course, be adapted to other TTS systems and even other systems altogether which can process messages via external programs run from a command line, for instance as sketched below.
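
A handler parameterised by an arbitrary command line might look like the following minimal sketch; the 'say' program mentioned in the comment is an assumption - substitute whatever your system provides:

import logging
import subprocess

class ExternalProgramHandler(logging.Handler):
    """Hypothetical handler that hands each formatted record to an
    external program as its final command-line argument."""
    def __init__(self, command):
        logging.Handler.__init__(self)
        self.command = list(command)

    def emit(self, record):
        msg = self.format(record)
        p = subprocess.Popen(self.command + [msg],
                             stdout=subprocess.PIPE,
                             stderr=subprocess.STDOUT)
        # wait for the program to finish, as in the espeak example above
        p.communicate()

# e.g. on macOS you might try: root.addHandler(ExternalProgramHandler(['say']))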

Buffering logging messages and outputting them conditionally

There might be situations where you want to log messages in a temporary area and only output them if a certain condition occurs. For example, you may want to start logging debug events in a function, and if the function completes without errors, you don't want to clutter the log with the collected debug information - but if there is an error, you want all the debug information to be output as well as the error.

Here is an example which shows how you can do this using a decorator for the functions where you want this behaviour to happen. It makes use of logging.handlers.MemoryHandler, which allows buffering of logged events until some condition occurs, at which point the buffered events are flushed - passed to another handler (the target handler) for processing. By default, the MemoryHandler is flushed when its buffer fills up or when an event whose level is greater than or equal to a specified threshold is seen. You can use this recipe with a more specialised subclass of MemoryHandler if you want custom flushing behaviour, for instance as sketched below.
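
As an illustration of such a subclass, here is a minimal sketch which also flushes when it sees a marker substring in a message; the class name and marker convention are assumptions made for this example:

import logging
from logging.handlers import MemoryHandler

class MarkerFlushingHandler(MemoryHandler):
    """Hypothetical MemoryHandler subclass: flushes when a record's
    message contains a given marker string, in addition to the usual
    capacity/flushLevel conditions."""
    def __init__(self, capacity, marker='FLUSH-NOW', **kwargs):
        MemoryHandler.__init__(self, capacity, **kwargs)
        self.marker = marker

    def shouldFlush(self, record):
        if self.marker in record.getMessage():
            return True
        # fall back to the default behaviour (buffer full, or level
        # greater than or equal to flushLevel)
        return MemoryHandler.shouldFlush(self, record)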

The example script has a simple function, foo, which just cycles through all the logging levels, writing to sys.stderr to say at what level it's about to log, and then actually logging a message at that level. You can pass a parameter to foo which, if true, will log at the ERROR and CRITICAL levels as well - otherwise, it only logs at the DEBUG, INFO and WARNING levels.

The script just arranges to decorate foo with a decorator which will do the conditional logging that's required. The decorator takes a logger as a parameter and attaches a memory handler for the duration of the call to the decorated function. The decorator can additionally be parameterised with a target handler, a level at which flushing should occur, and a capacity for the buffer. These default to a StreamHandler which writes to sys.stderr, logging.ERROR and 100 respectively.

Here's the script:

import logging
from logging.handlers import MemoryHandler
import sys

logger = logging.getLogger(__name__)
logger.addHandler(logging.NullHandler())

def log_if_errors(logger, target_handler=None, flush_level=None, capacity=None):
    if target_handler is None:
        target_handler = logging.StreamHandler()
    if flush_level is None:
        flush_level = logging.ERROR
    if capacity is None:
        capacity = 100
    handler = MemoryHandler(capacity, flushLevel=flush_level, target=target_handler)

    def decorator(fn):
        def wrapper(*args, **kwargs):
            logger.addHandler(handler)
            try:
                return fn(*args, **kwargs)
            except Exception:
                logger.exception('call failed')
                raise
            finally:
                super(MemoryHandler, handler).flush()
                logger.removeHandler(handler)
        return wrapper

    return decorator

def write_line(s):
    sys.stderr.write('%s\n' % s)

def foo(fail=False):
    write_line('about to log at DEBUG ...')
    logger.debug('Actually logged at DEBUG')
    write_line('about to log at INFO ...')
    logger.info('Actually logged at INFO')
    write_line('about to log at WARNING ...')
    logger.warning('Actually logged at WARNING')
    if fail:
        write_line('about to log at ERROR ...')
        logger.error('Actually logged at ERROR')
        write_line('about to log at CRITICAL ...')
        logger.critical('Actually logged at CRITICAL')
    return fail

decorated_foo = log_if_errors(logger)(foo)

if __name__ == '__main__':
    logger.setLevel(logging.DEBUG)
    write_line('Calling undecorated foo with False')
    assert not foo(False)
    write_line('Calling undecorated foo with True')
    assert foo(True)
    write_line('Calling decorated foo with False')
    assert not decorated_foo(False)
    write_line('Calling decorated foo with True')
    assert decorated_foo(True)

When this script is run, the following output should be observed:

Calling undecorated foo with False
about to log at DEBUG ...
about to log at INFO ...
about to log at WARNING ...
Calling undecorated foo with True
about to log at DEBUG ...
about to log at INFO ...
about to log at WARNING ...
about to log at ERROR ...
about to log at CRITICAL ...
Calling decorated foo with False
about to log at DEBUG ...
about to log at INFO ...
about to log at WARNING ...
Calling decorated foo with True
about to log at DEBUG ...
about to log at INFO ...
about to log at WARNING ...
about to log at ERROR ...
Actually logged at DEBUG
Actually logged at INFO
Actually logged at WARNING
Actually logged at ERROR
about to log at CRITICAL ...
Actually logged at CRITICAL

As you can see, actual logging output only occurs when an event is logged whose severity is ERROR or greater, but in that case, any previous events at lower severities are also logged.

You can of course use the conventional means of decoration:

@log_if_errors(logger)
def foo(fail=False):
    ...

Formatting times using UTC (GMT) via configuration

Sometimes you want to format times using UTC, which can be done using a class such as UTCFormatter, shown below:

import logging
import time

class UTCFormatter(logging.Formatter):
    converter = time.gmtime
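
You can then use the UTCFormatter in your code instead of Formatter. Used directly, that might look like this minimal sketch (the handler and message are incidental; it assumes the UTCFormatter class above is in scope):

import logging

handler = logging.StreamHandler()
# attach the UTC-converting formatter defined above
handler.setFormatter(UTCFormatter('%(asctime)s %(message)s'))
root = logging.getLogger()
root.addHandler(handler)
root.warning('This timestamp should be in UTC')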

If you want to do that via configuration instead, you can use the dictConfig() API with an approach illustrated by the following complete example:

import logging
import logging.config
import time

class UTCFormatter(logging.Formatter):
    converter = time.gmtime

LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'utc': {
            '()': UTCFormatter,
            'format': '%(asctime)s %(message)s',
        },
        'local': {
            'format': '%(asctime)s %(message)s',
        }
    },
    'handlers': {
        'console1': {
            'class': 'logging.StreamHandler',
            'formatter': 'utc',
        },
        'console2': {
            'class': 'logging.StreamHandler',
            'formatter': 'local',
        },
    },
    'root': {
        'handlers': ['console1', 'console2'],
    }
}

if __name__ == '__main__':
    logging.config.dictConfig(LOGGING)
    logging.warning('The local time is %s', time.asctime())

When this script is run, it should print something like:

2015-10-17 12:53:29,501 The local time is Sat Oct 17 13:53:29 2015
2015-10-17 13:53:29,501 The local time is Sat Oct 17 13:53:29 2015

showing how the time is formatted both as local time and UTC, one for each handler.

Using a context manager for selective logging

Sometimes it would be useful to temporarily change the logging configuration and revert it back after doing something. For this, a context manager is the most obvious way of saving and restoring the logging context. Here is a simple example of such a context manager, which allows you to optionally change the logging level and add a logging handler purely within the scope of the context manager:

import logging
import sys

class LoggingContext(object):
    def __init__(self, logger, level=None, handler=None, close=True):
        self.logger = logger
        self.level = level
        self.handler = handler
        self.close = close

    def __enter__(self):
        if self.level is not None:
            self.old_level = self.logger.level
            self.logger.setLevel(self.level)
        if self.handler:
            self.logger.addHandler(self.handler)

    def __exit__(self, et, ev, tb):
        if self.level is not None:
            self.logger.setLevel(self.old_level)
        if self.handler:
            self.logger.removeHandler(self.handler)
        if self.handler and self.close:
            self.handler.close()
        # implicit return of None => don't swallow exceptions

If you specify a level value, the logger's level is set to that value within the scope of the with block covered by the context manager. If you specify a handler, it is added to the logger on entry to the block and removed on exit from the block. You can also ask the manager to close the handler for you on block exit - you could do that if you don't need the handler any more.

To illustrate how it works, we can add the following block of code to the above:

if __name__ == '__main__':
    logger = logging.getLogger('foo')
    logger.addHandler(logging.StreamHandler())
    logger.setLevel(logging.INFO)
    logger.info('1. This should appear just once on stderr.')
    logger.debug('2. This should not appear.')
    with LoggingContext(logger, level=logging.DEBUG):
        logger.debug('3. This should appear once on stderr.')
    logger.debug('4. This should not appear.')
    h = logging.StreamHandler(sys.stdout)
    with LoggingContext(logger, level=logging.DEBUG, handler=h, close=True):
        logger.debug('5. This should appear twice - once on stderr and once on stdout.')
    logger.info('6. This should appear just once on stderr.')
    logger.debug('7. This should not appear.')

We initially set the logger's level to INFO, so message #1 appears and message #2 doesn't. We then change the level to DEBUG temporarily within the following with block, and so message #3 appears. After the block exits, the logger's level is restored to INFO and so message #4 doesn't appear. In the next with block, we set the level to DEBUG again but also add a handler writing to sys.stdout. Thus, message #5 appears twice on the console (once via stderr and once via stdout). After completion of the with statement, the status is as it was before, so message #6 appears (like message #1) whereas message #7 doesn't (just like message #2).

If we run the resulting script, the result is as follows:

$ python logctx.py
1. This should appear just once on stderr.
3. This should appear once on stderr.
5. This should appear twice - once on stderr and once on stdout.
5. This should appear twice - once on stderr and once on stdout.
6. This should appear just once on stderr.

If we run it again, but pipe stderr to /dev/null, we see the following, which is the only message written to stdout:

$ python logctx.py 2>/dev/null
5. This should appear twice - once on stderr and once on stdout.

Once again, but piping stdout to /dev/null, we get:

$ python logctx.py >/dev/null
1. This should appear just once on stderr.
3. This should appear once on stderr.
5. This should appear twice - once on stderr and once on stdout.
6. This should appear just once on stderr.

In this case, the message #5 printed to stdout doesn't appear, as expected.

Of course, the approach described here can be generalised, for example to attach logging filters temporarily; a sketch follows. Note that the above code works in Python 2 as well as Python 3.
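
For example, a variant of the context manager that temporarily attaches a filter might look like this minimal sketch, following the same save-and-restore pattern (the class name is an assumption):

class FilteringContext(object):
    """Hypothetical context manager that attaches a filter to a logger
    only for the duration of a with block."""
    def __init__(self, logger, filter=None):
        self.logger = logger
        self.filter = filter

    def __enter__(self):
        if self.filter:
            self.logger.addFilter(self.filter)

    def __exit__(self, et, ev, tb):
        if self.filter:
            self.logger.removeFilter(self.filter)
        # implicit return of None => don't swallow exceptions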