
Detailed explanation of log processing (logging) during Python development

When I developed Python scripts or programs in the past, I was always troubled by runtime logs: writing them by hand was laborious and the output format looked poor. Searching online, I found this article well written, so I am sharing it with everyone. In fact, Python itself ships with a standard module for log handling, logging, that many of us simply did not know about.

Outline of log handling in Python development:

  1. Log related concepts

  2. Introduction to the logging module

  3. Use the module-level functions provided by logging to log

  4. Logging module log stream processing flow

  5. Use the four major components of logging to record logs

  6. Several ways to configure logging

  7. Add context information to log output

1. Log related concepts

Logging is a means of tracking events that occur while software is running. Developers add logging calls to their code to indicate that certain events have happened. An event is described by a message, which can optionally contain variable data. Events also carry a notion of importance, also called the severity level (level).

1. The function of the log

By analyzing logs, it is easy to understand the running status of a system, piece of software, or application. If an application's logs are rich enough, you can also analyze past user behavior: operation patterns, preferences, geographical distribution, and more. If the logs are additionally split into multiple levels, it becomes easy to assess the application's health, discover problems in time, locate and fix them quickly, and limit the damage.
Simply put, by recording and analyzing logs we can tell whether a system or program is running normally, and quickly pinpoint the problem when the application fails. For example, when operations engineers receive an alert or a problem report, they usually check the various logs first, and most questions are answered there. Likewise, developers debug programs through the logs printed to the IDE console. Experienced operations or development engineers can often find the root cause of a problem from the logs alone. Clearly, the importance of logs should not be underestimated.

The role of the log can be briefly summarized as the following three points:

  •  program debugging

  •  Understand the running status of the software program, whether it is normal

  •  Software program operation failure analysis and problem location

If the application's log information is sufficiently detailed and rich, it can also be used for user behavior analysis, such as analyzing users' operation patterns, preferences, geographical distribution, and other information, in order to improve the business and increase its benefits.

2. Log levels

Let's think about the following two questions first:

As a developer, what log information do you need while developing an application? What log information do you need after the application goes live?
As an operations engineer, what log information do you need when deploying a development environment? What about when deploying a production environment?

During development, or when deploying a development environment, we may want to record every detail of the application's behavior for analysis, to ensure stability after going live; this is very costly in machine performance. Once the application is officially released or deployed to production, we usually only need to record exceptions, errors, and the like. This both reduces the I/O pressure on the server and keeps us from drowning in a sea of logs when troubleshooting. So how can we record logs at different levels of detail in different environments without changing the application code? That is exactly what log levels are for: we can specify the desired log level through a configuration file.

The log levels defined by different applications may vary; a fine-grained scheme (such as syslog's) includes the following levels:

  1. DEBUG

  2. INFO

  3. NOTICE

  4. WARNING

  5. ERROR

  6. CRITICAL

  7. ALERT

  8. EMERGENCY

3. Log field information and log format

As mentioned in the questions at the beginning of this section, a log message corresponds to the occurrence of an event, and an event usually needs to include the following contents:

  •  event time

  •  location of the incident

  •  Severity of the event -- log level

  •  event content

The above are the fields that a log record may include; of course it may also contain other information, such as the process ID, process name, thread ID, thread name, and so on. The log format defines which fields a log record contains, and it is usually customizable.


When outputting a log, the developer must explicitly specify the log content and the log level; for the other fields, one only decides whether they appear in the log format, and the logging system fills them in automatically.

4. Realization of log function

Almost every development language has built-in logging facilities or mature third-party libraries providing them, such as log4j and log4php. They are powerful and easy to use. Python likewise provides a standard library module for logging -- logging.

2. Introduction to logging module

The functions and classes defined by the logging module implement a flexible event logging system for applications and libraries. logging is a Python standard library module, and the key benefit of a logging API provided by the standard library is that all Python modules can participate in it, so your application's log can integrate your own messages with those from third-party modules.

1. The log levels of the logging module

The logging module defines the log levels below by default. It also allows developers to define custom levels, but this is not recommended, especially when developing a library for others to use, because it causes log level confusion.

log level (level)
DEBUG
The most detailed log information; the typical use case is problem diagnosis
INFO
Detail second only to DEBUG; usually only key-node information is recorded, to confirm that everything is working as expected
WARNING
Logged when something unexpected happens (for example, low disk space) but the application is still running normally
ERROR
Logged when something does not work properly because of a more serious problem
CRITICAL
Logged when a critical error occurs that prevents the application from continuing

When developing an application or deploying a development environment, you can use logs at the DEBUG or INFO level to obtain as detailed log information as possible for development or deployment debugging; when the application goes online or deploys a production environment, you should use logs at the WARNING, ERROR, or CRITICAL level to Reduce the I/O pressure of the machine and improve the efficiency of obtaining error log information. The specification of the log level is usually specified in the configuration file of the application.


The log levels in the list above increase from top to bottom, i.e. DEBUG < INFO < WARNING < ERROR < CRITICAL, while the number of messages at each level decreases in the same order.
When a log level is specified for an application, all log records whose level is greater than or equal to the specified level are recorded, not just records at exactly that level. Applications such as nginx and php, and the Python logging module discussed here, all behave this way. Likewise, the logging module lets you set a log level on a logger: only records at or above that level are output, and records below it are discarded.
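This cumulative level filtering can be sketched in a few lines (the logger name here is illustrative):

```python
import logging

# A minimal sketch: a logger whose level is WARNING discards DEBUG and INFO
# records and processes WARNING, ERROR and CRITICAL ones.
logger = logging.getLogger("level_demo")   # illustrative name
logger.setLevel(logging.WARNING)

# isEnabledFor() reports whether a record at the given level would be processed.
print(logger.isEnabledFor(logging.DEBUG))    # False
print(logger.isEnabledFor(logging.INFO))     # False
print(logger.isEnabledFor(logging.WARNING))  # True
print(logger.isEnabledFor(logging.ERROR))    # True
```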

2. Introduction to the usage of logging module

The logging module provides two logging methods:

The first way is to use the module-level functions provided by logging.
The second way is to use the four major components of the logging system.

In fact, the module-level functions provided by logging are just wrappers around the related classes of the logging system.

Module-level common functions defined by the logging module

logging.debug(msg, *args, **kwargs)
Create a log record with severity level DEBUG
logging.info(msg, *args, **kwargs)
Create a log record with severity level INFO
logging.warning(msg, *args, **kwargs)
Create a log record with severity level WARNING
logging.error(msg, *args, **kwargs)
Create a log record with severity level ERROR
logging.critical(msg, *args, **kwargs)
Create a log record with severity level CRITICAL
logging.log(level, msg, *args, **kwargs)
Create a log record with the specified severity level
logging.basicConfig(**kwargs)
One-time basic configuration of the root logger

Among them, the logging.basicConfig(**kwargs) function is used to specify things such as the log level to record, the log format, the log output location, and the open mode of the log file; the other functions are used to record log messages at the various levels.

The four major components of the logging module

Loggers provide the interface used directly by application code
Handlers send log records to the specified destination
Filters provide finer-grained filtering to determine which log records are output (and which are ignored)
Formatters control the final output format of log records

Note: The module-level functions provided by the logging module actually record logs through the related implementation classes of these components, but some default values are set when creating instances of these classes.

3. Use the module-level functions provided by logging to record logs

Review the important information mentioned above:

you can use the module-level functions defined by the logging module to produce simple log records.
Only log records whose level is greater than or equal to the level set on the logger are output; records with a lower level are discarded.

1. The simplest log output

First, try to output a log record with different log levels:

import logging

logging.debug("This is a debug log.")
logging.info("This is an info log.")
logging.warning("This is a warning log.")
logging.error("This is an error log.")
logging.critical("This is a critical log.")

It can also be written like this:

import logging

logging.log(logging.DEBUG, "This is a debug log.")
logging.log(logging.INFO, "This is an info log.")
logging.log(logging.WARNING, "This is a warning log.")
logging.log(logging.ERROR, "This is an error log.")
logging.log(logging.CRITICAL, "This is a critical log.")

Output result:

WARNING:root:This is a warning log.
ERROR:root:This is an error log.
CRITICAL:root:This is a critical log.

2. Some questions

Question 1: Why were the first two log records not printed?

This is because the logger used by logging's module-level functions has its level set to WARNING by default, so only records at WARNING level and above (ERROR and CRITICAL) are output, while records below it (DEBUG and INFO) are discarded.

Question 2: What do the fields in the printed log information mean? Why is this output?

The meanings of the fields of each line of log records in the above output results are:

log level:logger name:log content

It is output this way because the default log format of the logger behind logging's module-level functions is BASIC_FORMAT, whose value is:

"%(levelname)s:%(name)s:%(message)s"

Question 3: Why is the log output to the console rather than to a file?

Because the log output location of the handler on the logger used by logging's module-level functions defaults to sys.stderr, i.e. the console.


Question 4: How do I know this?

Looking at the implementation code of these logging functions, we can find that: when we do not provide any configuration information, these functions will call the logging.basicConfig(**kwargs) method, and will not pass any parameters to the method. Continue to look at the code of the basicConfig() method to find the answers to the above questions.

Question 5: How to modify these default settings?

It is actually very simple: before calling the logging functions above, call the basicConfig() method manually and pass the settings we want as keyword arguments.

3. Function description of logging.basicConfig()

This method performs some basic configuration of the logging system. It is defined as follows:

logging.basicConfig(**kwargs)
The keyword arguments accepted by this function are as follows:

parameter name
filename
Specifies the file name of the log output target file. Once this option is set, log content is no longer output to the console
filemode
Specifies the mode in which the log file is opened; defaults to 'a'. Note that this option only takes effect when filename is specified
format
Specifies the log format string, i.e. the fields contained in the log output and their order. The format fields defined by the logging module are listed below
datefmt
Specifies the date/time format. Note that this option only takes effect when format contains the time field %(asctime)s
level
Specifies the log level of the root logger
stream
Specifies the log output target stream, such as sys.stdout, sys.stderr, or a network stream. Note that stream and filename cannot both be provided, otherwise a ValueError is raised
style
New in Python 3.2. Specifies the style of the format string; possible values are '%', '{' and '$'; the default is '%'
handlers
New in Python 3.3. If specified, it should be an iterable of already-created handlers, which are added to the root logger. Note that at most one of filename, stream, and handlers may be given; providing two or all three raises a ValueError

4. Format string fields defined by the logging module

Let’s list the fields defined in the logging module that can be used in the format string:

field/property name
usage format
%(asctime)s
The time when the log event occurred -- human-readable, e.g. 2003-07-08 16:49:45,896
%(created)f
The time when the log event occurred -- a timestamp, i.e. the value returned by time.time() at that moment
%(relativeCreated)d
The time of the log event, in milliseconds, relative to when the logging module was loaded
%(msecs)d
The millisecond portion of the time at which the log event occurred
%(levelname)s
The textual log level of the record ('DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL')
%(levelno)s
The numeric log level of the record (10, 20, 30, 40, 50)
%(name)s
The name of the logger used; defaults to 'root', because the root logger is used by default
%(message)s
The text content of the record, computed as msg % args
%(pathname)s
The full path of the source file in which the logging call was made
%(filename)s
The file name portion of pathname, including the suffix
%(module)s
The name portion of filename, without the suffix
%(lineno)d
The line number of the source line on which the logging call was made
%(funcName)s
The name of the function containing the logging call
%(process)d
Process ID
%(processName)s
Process name; new in Python 3.1
%(thread)d
Thread ID
%(threadName)s
Thread name

5. Configured log output

First, simply configure the logger's log level:

import logging

logging.basicConfig(level=logging.DEBUG)
logging.debug("This is a debug log.")
logging.info("This is an info log.")
logging.warning("This is a warning log.")
logging.error("This is an error log.")
logging.critical("This is a critical log.")

Output result:

DEBUG:root:This is a debug log.
INFO:root:This is an info log.
WARNING:root:This is a warning log.
ERROR:root:This is an error log.
CRITICAL:root:This is a critical log.

Log information of all levels is output, indicating that the configuration has taken effect.

On top of the logger's log level, now also configure the log output target file and the log format:

LOG_FORMAT="%(asctime)s - %(levelname)s - %(message)s"
logging.basicConfig(filename='my.log', level=logging.DEBUG, format=LOG_FORMAT)
logging.debug("This is a debug log.")
logging.info("This is an info log.")
logging.warning("This is a warning log.")
logging.error("This is an error log.")
logging.critical("This is a critical log.")

At this point, you will find that there is no output log content in the console, but a log file named 'my.log' will be generated in the same directory as the python code file, and the content of the file is:

2017-05-08 14:29:53,783 - DEBUG - This is a debug log.
2017-05-08 14:29:53,784 - INFO - This is an info log.
2017-05-08 14:29:53,784 - WARNING - This is a warning log.
2017-05-08 14:29:53,784 - ERROR - This is an error log.
2017-05-08 14:29:53,784 - CRITICAL - This is a critical log.

On top of the above, let's also set the date/time format:

LOG_FORMAT = "%(asctime)s - %(levelname)s - %(message)s"
DATE_FORMAT = "%m/%d/%Y %H:%M:%S %p"

logging.basicConfig(filename='my.log', level=logging.DEBUG, format=LOG_FORMAT, datefmt=DATE_FORMAT)

logging.debug("This is a debug log.")
logging.info("This is an info log.")
logging.warning("This is a warning log.")
logging.error("This is an error log.")
logging.critical("This is a critical log.")

At this point, you will see the following output in the my.log log file:

05/08/2017 14:29:04 PM - DEBUG - This is a debug log.
05/08/2017 14:29:04 PM - INFO - This is an info log.
05/08/2017 14:29:04 PM - WARNING - This is a warning log.
05/08/2017 14:29:04 PM - ERROR - This is an error log.
05/08/2017 14:29:04 PM - CRITICAL - This is a critical log.

Once you have mastered the content above, it is enough to cover the logging needs of everyday development.

6. Other instructions

A few things to note:

The logging.basicConfig() function is a one-shot simple configuration tool: only the first call actually performs any configuration; subsequent calls do nothing, and multiple calls are not cumulative.

Loggers form a hierarchy. The logger used by the module-level functions called above is an instance of the RootLogger class, named 'root'; it sits at the top of the logger hierarchy, and the instance exists as a singleton.

If the log to be recorded contains variable data, use a format string as the event's description message (the first parameter of functions such as logging.debug and logging.info) and pass the variable data as arguments. For example: logging.warning('%s is %d years old.', 'Tom', 10) outputs WARNING:root:Tom is 10 years old.
In the definitions of logging.debug(), logging.info(), and the other methods, besides the msg and args parameters there is also a **kwargs parameter, which supports 3 keyword arguments: exc_info, stack_info, and extra. They are described below.

Description of the exc_info, stack_info, and extra keyword parameters:

exc_info: a boolean value. If set to True, exception information is added to the log message; if there is no exception information, None is added instead.
stack_info: also a boolean, defaulting to False. If set to True, stack information is added to the log message.
extra: a dictionary (dict) used to add custom fields to the message format; its keys must not conflict with the fields defined by the logging module.

An example:

Add exc_info and stack_info information to the log message, and add two custom fields, ip and user:

LOG_FORMAT = "%(asctime)s - %(levelname)s - %(user)s[%(ip)s] - %(message)s"
DATE_FORMAT = "%m/%d/%Y %H:%M:%S %p"

logging.basicConfig(format=LOG_FORMAT, datefmt=DATE_FORMAT)
logging.warning("Some one delete the log file.", exc_info=True, stack_info=True, extra={'user': 'Tom', 'ip':''})

Output result:

05/08/2017 16:35:00 PM - WARNING - Tom[] - Some one delete the log file.
Stack (most recent call last):
   File "C:/Users/wader/PycharmProjects/LearnPython/day06/log.py", line 45, in <module>
     logging.warning("Some one delete the log file.", exc_info=True, stack_info=True, extra={'user': 'Tom', 'ip':''})

4. Logging module log stream processing flow

Before introducing the advanced usage of the logging module, it is worth giving a brief, comprehensive introduction to its important components and their workflow, which will help us better understand what operations our code triggers.

1. The four major components of the logging module

Before introducing the log stream processing flow of the logging module, let's first introduce its four major components:

component name
corresponding class name
functional description
loggers
Logger
Provide the interface that application code uses directly
handlers
Handler
Send the log records created by loggers to the appropriate destination
filters
Filter
Provide finer-grained control to decide which log records to output and which to discard
formatters
Formatter
Determine the final output format of log records

The logging module completes log processing through these components, and the functions at the logging module level used above are also implemented through the classes corresponding to these components.

The relationship between these components is described in:

  1. The logger (logger) needs to output the log information to the target location through the handler (handler), such as: file, sys.stdout, network, etc.;

  2. Different handlers can output logs to different locations;

  3. The logger can set multiple handlers to output the same log record to different locations;

  4. Each handler (handler) can set its own filter (filter) to achieve log filtering, so as to keep only the logs of interest;

  5. Each handler can set its own formatter to output the same log to different places in different formats.

To put it simply: the logger is the entry point, and the real work is done by the handlers; a handler can in turn use a filter and a formatter to filter and format the log content to be output.
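A sketch wiring the four components together: one logger with two handlers, each with its own level and formatter (the logger name and file name are illustrative):

```python
import logging
import sys

logger = logging.getLogger("app")            # illustrative logger name
logger.setLevel(logging.DEBUG)               # the logger's own gate

# Handler 1: everything to the console, short format.
console = logging.StreamHandler(sys.stdout)
console.setLevel(logging.DEBUG)
console.setFormatter(logging.Formatter("%(levelname)s - %(message)s"))

# Handler 2: only errors to a file, timestamped format.
errfile = logging.FileHandler("app-errors.log")   # illustrative file name
errfile.setLevel(logging.ERROR)
errfile.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))

logger.addHandler(console)
logger.addHandler(errfile)

logger.debug("goes to the console only")
logger.error("goes to the console and the file")
```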

2. Introduction to the related classes of the logging module and their common methods

The following introduces the classes related to the four major components of logging:

  1. Logger

  2. Handler

  3. Filter

  4. Formatter

Logger class

A Logger object has 3 jobs:

1) Expose several methods to application code, so that the application can record log messages at runtime;
2) Decide, based on the severity level (the default filtering facility) or filter objects, which log records to act on;
3) Deliver the log records to all interested handlers.

The most commonly used methods of Logger objects fall into two categories: configuration methods and message-sending methods.

The most commonly used configuration methods are as follows:

Logger.setLevel()
Sets the minimum severity level of log messages the logger will process
Logger.addHandler() and Logger.removeHandler()
Add and remove a handler object for the logger object
Logger.addFilter() and Logger.removeFilter()
Add and remove a filter object for the logger object

A note on the Logger.setLevel() method:

Among the built-in levels, the lowest is DEBUG and the highest is CRITICAL. For example, after setLevel(logging.INFO) the logger only processes logs at INFO, WARNING, ERROR, and CRITICAL levels, while DEBUG-level messages are ignored/discarded.

After the logger object is configured, you can use the following methods to create log records:

Logger.debug(), Logger.info(), Logger.warning(), Logger.error(), Logger.critical()
Create a log record at the level corresponding to the method name
Logger.exception()
Create a log message similar to Logger.error() (at ERROR level, with exception information)
Logger.log()
Takes an explicit log level parameter to create a log record


The difference between Logger.exception() and Logger.error() is that Logger.exception() also outputs stack trace information; it is usually called only from an exception handler.

Compared with Logger.debug(), Logger.info(), and the like, Logger.log() requires an extra level argument, which is less convenient, but it is still the method to use when logging at custom levels.

So how do we get a Logger object? One way is to create an instance through the Logger class's constructor, but the usual way is the second one: the logging.getLogger() method.
logging.getLogger() takes an optional name argument indicating the name of the logger to return. If the argument is omitted, the root logger is returned, whose name is 'root'. Calling getLogger() multiple times with the same name returns a reference to the same logger object.
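This singleton behavior is easy to check:

```python
import logging

# A sketch: getLogger() with the same name returns the very same object,
# and getLogger() with no name returns the root logger.
a = logging.getLogger("my.app")   # illustrative name
b = logging.getLogger("my.app")
print(a is b)                     # True

root = logging.getLogger()
print(root.name)                  # root
```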

Explanation on the hierarchical structure and effective levels of loggers:

The logger name is a dot-separated hierarchy: a logger whose name continues after a '.' is a child of the logger named before the '.'. For example, given a logger named foo, loggers named foo.bar, foo.bar.baz, and foo.bam are all descendants of foo.

Loggers have the concept of an "effective level". If a level is not explicitly set on a logger, it uses its parent's level; if the parent has no explicit level either, the search continues up the chain of ancestors until one with an explicitly set level is found. Note that the root logger always has an explicit level (WARNING by default). When deciding whether to process an event, the logger's effective level determines whether the event is passed to its handlers.

By default, after a child logger finishes processing a log message, it passes the message on to the handlers of its ancestor loggers. We therefore do not have to define and configure handlers for every logger used in an application: configuring handlers on one top-level logger and creating child loggers as needed is enough. This delivery mechanism can be turned off by setting a logger's propagate attribute to False.
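A sketch of hierarchy, effective levels, and propagation (logger names are illustrative):

```python
import logging
import sys

# Configure only the top-level logger; its descendants inherit the effective
# level and, by default, propagate their records up to its handler.
top = logging.getLogger("foo")
top.setLevel(logging.DEBUG)
top.addHandler(logging.StreamHandler(sys.stdout))

child = logging.getLogger("foo.bar.baz")    # no level, no handlers of its own
print(child.getEffectiveLevel() == logging.DEBUG)  # True: inherited from 'foo'
child.info("handled by foo's handler via propagation")

child.propagate = False
child.info("dropped: propagation off and no handlers of its own")
```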

Handler class

The function of the Handler object is (based on the level of the log message) to distribute the message to the location specified by the handler (file, network, mail, etc.). The Logger object can add 0 or more handler objects for itself through the addHandler() method. For example, an application may want to implement the following log requirements:

1) send all logs to a log file;
2) send all logs with severity level error or above to stdout (standard output);
3) send all logs with severity level critical to an email address.

This scenario requires 3 different handlers, each responsible for sending logs of a particular severity to a particular location.
Only a small number of handler methods matter to application developers. For those using the built-in handler objects, the only relevant methods are the following configuration methods:

Handler.setLevel()
Sets the minimum severity level of log messages the handler will handle
Handler.setFormatter()
Sets a formatter object for the handler
Handler.addFilter() and Handler.removeFilter()
Add and remove a filter object for the handler

It should be noted that application code should not directly instantiate and use Handler instances. Because Handler is a base class, it only defines the interfaces that all handlers should have, and provides some default behaviors that subclasses can directly use or override. The following are some commonly used Handlers:

logging.StreamHandler
Sends log messages to an output stream such as sys.stdout, sys.stderr, or any file-like object
logging.FileHandler
Sends log messages to a disk file; by default the file grows without bound
logging.handlers.RotatingFileHandler
Sends log messages to a disk file, with support for rotating the log file by size
logging.handlers.TimedRotatingFileHandler
Sends log messages to a disk file, with support for rotating the log file by time
logging.handlers.HTTPHandler
Sends log messages to an HTTP server via GET or POST
logging.handlers.SMTPHandler
Sends log messages to a specified email address
logging.NullHandler
A handler that does nothing with its records; it is used by library developers who use logging, to avoid the 'No handlers could be found for logger XXX' message
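A sketch using one of these, RotatingFileHandler (the file name and size limits are illustrative):

```python
import logging
from logging.handlers import RotatingFileHandler

# Rotate at roughly 1 MB, keeping 3 backups (app.log.1 .. app.log.3).
handler = RotatingFileHandler("app.log", maxBytes=1_000_000, backupCount=3)
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))

logger = logging.getLogger("rotating_demo")
logger.setLevel(logging.INFO)
logger.addHandler(handler)

logger.info("written to app.log; older data rotates into app.log.1, .2, .3")
```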

Formatter class

Formatter objects configure the final order, structure, and content of the log message. Unlike the logging.Handler base class, application code may instantiate the Formatter class directly. If your application needs special behavior, you can also subclass Formatter.

The constructor of the Formatter class is defined as follows:

logging.Formatter.__init__(fmt=None, datefmt=None, style='%')

As you can see, the constructor accepts 3 optional parameters:

fmt: the message format string; if omitted, the raw value of message is used
datefmt: the date format string; if omitted, "%Y-%m-%d %H:%M:%S" is used by default
style: new in Python 3.2; possible values are '%', '{' and '$'; if omitted, '%' is used
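A sketch instantiating Formatter directly; the hand-built LogRecord is only there to show the formatting without going through a logger:

```python
import logging

fmt = logging.Formatter(
    fmt="{asctime} | {levelname:<8} | {message}",
    datefmt="%H:%M:%S",
    style="{",                  # Python 3.2+; the default style is '%'
)

# Build a record by hand purely to demonstrate the formatter.
record = logging.LogRecord("demo", logging.INFO, "demo.py", 1,
                           "formatted by hand", None, None)
print(fmt.format(record))
```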

Filter class

Filters can be used by Handlers and Loggers for filtering that is finer-grained and more complex than levels alone. Filter is the base filter class; it only lets through log events originating from a given point in the logger hierarchy. The class is defined as follows:

class logging.Filter(name='')

For example, if a filter is instantiated with name='A.B', the filter instance only allows through log records produced by loggers whose names match the following pattern: 'A.B', 'A.B.C', 'A.B.C.D', 'A.B.D'; records from loggers named 'A.BB' or 'B.A.B' are filtered out. If name is an empty string, all log events pass the filter.

The filter(record) method controls whether a given record passes the filter: a return value of 0 means it does not pass; a nonzero return value means it does.


If necessary, the record can also be modified inside the filter(record) method, for example by adding, deleting, or changing some of its attributes.
We can also use filters for bookkeeping, for example counting how many records a particular logger or handler has processed.
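A sketch of name-based filtering with logging.Filter (the logger names follow the 'A.B' example above):

```python
import logging
import sys

handler = logging.StreamHandler(sys.stdout)
handler.addFilter(logging.Filter(name="A.B"))   # pass 'A.B' and its children

for logger_name in ("A.B", "A.B.C", "A.BB", "B.A.B"):
    lg = logging.getLogger(logger_name)
    lg.setLevel(logging.INFO)
    lg.propagate = False
    lg.addHandler(handler)
    lg.info("from %s", logger_name)
# Only "from A.B" and "from A.B.C" are printed; A.BB and B.A.B are filtered out.
```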
3. Logging log stream processing flow

The flow of a log record through the logging system (shown as a flow chart in the original article) can be described as follows:

1) (in user code) logging function calls, such as: logger.info (...), logger.debug(...), etc.;

2) Determine whether the log level to be recorded meets the level requirements set by the logger (the log level to be recorded must be greater than or equal to the level set by the logger to meet the requirements), if not, the log record will be discarded and the follow-up will be terminated operation, if satisfied, proceed to the next step;

3) A log record (LogRecord) object is created from the arguments passed to the logging call;

4) The logger checks whether any of its filters reject this record. If a filter on the logger rejects it, the record is discarded and processing stops. If no filter rejects it (or the logger has no filters), continue to the next step: the record is handed to each handler added to the logger;

5) Each handler checks whether the record's level meets the level set on the handler (it must be greater than or equal to it). If not, that handler discards the record and its processing stops; if so, continue to the next step;

6) The handler checks whether any of its filters reject this record. If a filter on the handler rejects it, the current handler discards the record and its processing stops. If no filter rejects it (or the handler has no filters), continue to the next step;

7) A record that reaches this step has passed every checkpoint and is allowed to be output. The current handler formats it with the formatter it was given (or the default format if none was set) and writes the result to its destination (a file, the network, a file-like stream, etc.);

8) If the logger has multiple handlers configured, steps 5-7 above are executed once for each handler;

9) The last step of the complete flow is propagation: deciding whether the record should also be passed to the handlers of ancestor loggers (as mentioned before, loggers form a hierarchy). If the propagate attribute is 1, the record, besides being output by the current logger's handlers, is also passed to the parent logger's handlers for processing, and so on up the hierarchy until a logger whose propagate attribute is 0 is reached. If propagate is 0, the record is not passed to the parent's handlers, and processing ends.

It can be seen that for a log message to be output, it must pass the following checks in order:

the logger's level check;
the logger's filters;
each handler's level check;
each handler's filters;

It should be noted that, regarding step 9 above, when propagate is 1 the record is passed directly to the ancestor loggers' handlers; the levels set on those ancestor loggers themselves are not consulted and do not filter the message (the levels set on their handlers still apply).
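A minimal sketch of step 9, under assumed logger names ('app', 'app.db') and a toy list-collecting handler, shows what propagate changes:

```python
import logging

# Toy handler that collects messages into a list so the effect of
# propagate is easy to observe.
class ListHandler(logging.Handler):
    def __init__(self, sink):
        super().__init__()
        self.sink = sink

    def emit(self, record):
        self.sink.append(record.getMessage())

parent_out, child_out = [], []

parent = logging.getLogger('app')
parent.setLevel(logging.DEBUG)
parent.addHandler(ListHandler(parent_out))

child = logging.getLogger('app.db')
child.addHandler(ListHandler(child_out))

child.warning('one')   # propagate defaults to True: parent's handler sees it too
child.propagate = False
child.warning('two')   # now only the child's own handler sees it

print(parent_out)  # ['one']
print(child_out)   # ['one', 'two']
```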

5. Use the four major components of logging to record logs

By now we should have a fairly complete picture of the logging module's major components and the log stream processing flow. Let's look at an example.

1. Requirements

Suppose we have the following logging requirements:

1) Logs of all levels must be written to disk files;
2) All log messages are recorded in the all.log file, with the format: date and time - log level - log message;
3) Messages of level ERROR and above are additionally recorded in the error.log file, with the format: date and time - log level - file name[:line number] - log message;
4) all.log must be rotated at midnight every day.

2. Analysis

1) To record logs of all levels, the logger's effective level must be set to the lowest level, DEBUG;
2) The logs must be sent to two different destinations, so the logger needs two handlers; both destinations are disk files, so both handlers are FileHandler-related;
3) all.log requires time-based rotation, so it needs logging.handlers.TimedRotatingFileHandler; error.log does not require rotation, so a plain FileHandler is enough;
4) The two log files use different formats, so each handler needs its own formatter; in addition, error.log only records ERROR and above, so that handler's level must be set to ERROR.

3. Code implementation

import logging
import logging.handlers
import datetime

logger = logging.getLogger('mylogger')
logger.setLevel(logging.DEBUG)

rf_handler = logging.handlers.TimedRotatingFileHandler('all.log', when='midnight', interval=1, backupCount=7, atTime=datetime.time(0, 0, 0, 0))
rf_handler.setFormatter(logging.Formatter("%(asctime)s - %(levelname)s - %(message)s"))

f_handler = logging.FileHandler('error.log')
f_handler.setLevel(logging.ERROR)  # only ERROR and above go to error.log
f_handler.setFormatter(logging.Formatter("%(asctime)s - %(levelname)s - %(filename)s[:%(lineno)d] - %(message)s"))

logger.addHandler(rf_handler)
logger.addHandler(f_handler)

logger.debug('debug message')
logger.info('info message')
logger.warning('warning message')
logger.error('error message')
logger.critical('critical message')

all.log file output

2017-05-13 16:12:40,612 - DEBUG - debug message
2017-05-13 16:12:40,612 - INFO - info message
2017-05-13 16:12:40,612 - WARNING - warning message
2017-05-13 16:12:40,612 - ERROR - error message
2017-05-13 16:12:40,613 - CRITICAL - critical message

error.log file output

2017-05-13 16:12:40,612 - ERROR - log.py[:81] - error message
2017-05-13 16:12:40,613 - CRITICAL - log.py[:82] - critical message

6. Several ways to configure logging

As a developer, we can configure logging in the following three ways:

1) Explicitly create the loggers, handlers and formatters in Python code and call their configuration methods;
2) Create a logging configuration file, then read it with the fileConfig() function;
3) Create a dict containing the configuration information, then pass it to the dictConfig() function.
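As a sketch of way 3), the logger name 'myapp' and the format string below are illustrative choices, but the dict schema (version, formatters, handlers, loggers) is the one dictConfig() expects:

```python
import logging
import logging.config

# Configuration expressed as a dict instead of imperative code.
LOGGING_CONFIG = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'simple': {'format': '%(asctime)s - %(levelname)s - %(message)s'},
    },
    'handlers': {
        'console': {
            'class': 'logging.StreamHandler',
            'level': 'DEBUG',
            'formatter': 'simple',
        },
    },
    'loggers': {
        'myapp': {'level': 'DEBUG', 'handlers': ['console']},
    },
}

logging.config.dictConfig(LOGGING_CONFIG)
logger = logging.getLogger('myapp')
logger.info('configured via dictConfig')
```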

7. Add contextual information to log output

In addition to the arguments passed to the logging call, sometimes we want the log output to include extra contextual information. For example, in a network application it may be desirable to record client-specific information in the log, such as the remote client's IP address and username. There are several ways to introduce such context:

pass an extra keyword argument to the logging call;
use a LoggerAdapter to introduce context information;
use Filters to introduce context information.
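As a sketch of the first method (the field names clientip and user below are illustrative), the dict passed via the extra keyword argument has its keys merged into the LogRecord's attributes, so they can be referenced in the format string:

```python
import logging

# Format string references the custom attributes contributed via `extra`.
logging.basicConfig(
    format='%(asctime)s %(clientip)s %(user)s: %(message)s')
logger = logging.getLogger('netapp')

ctx = {'clientip': '192.168.0.1', 'user': 'alice'}
logger.warning('login attempt failed', extra=ctx)
# writes something like "<timestamp> 192.168.0.1 alice: login attempt failed"
```

Note that once the format string references these fields, every record passing through that handler must supply them, or formatting fails. A LoggerAdapter achieves the same effect without repeating extra on every call, by wrapping the logger together with a fixed context dict.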

