from logging import getLogger, getLoggerClass

def create_logger(app):
    """Creates a logger for the given application.  This logger works
    similar to a regular Python logger but changes the effective logging
    level based on the application's debug flag.  Furthermore this
    function also removes all attached handlers in case there was a
    logger with the log name before.
    """
    Logger = getLoggerClass()

    logger = getLogger(app.logger_name)
    # Just in case that was not a new logger, get rid of all the handlers
    # already attached to it.
    del logger.handlers[:]
    # DebugLogger, debug_handler and prod_handler are assumed to be
    # defined elsewhere in the module.
    logger.__class__ = DebugLogger
    logger.addHandler(debug_handler)
    logger.addHandler(prod_handler)
    # Disable propagation by default
    logger.propagate = False
    return logger
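The function above depends on module-level handlers, but its core trick — retyping an existing logger so that its effective level tracks the application's debug flag, as the docstring describes — can be sketched self-contained. The `App` class, its `debug` flag, and the `demo-app` name below are placeholder assumptions, not part of the original code:

```python
import logging

# Hypothetical stand-in for the application object: only the attributes
# the logger actually consults are modelled here.
class App:
    debug = True
    logger_name = "demo-app"

app = App()
Logger = logging.getLoggerClass()

class DebugLogger(Logger):
    # When no explicit level is set on the logger (level == 0 / NOTSET),
    # fall back to DEBUG while the app's debug flag is on.
    def getEffectiveLevel(self):
        if self.level == 0 and app.debug:
            return logging.DEBUG
        return Logger.getEffectiveLevel(self)

logger = logging.getLogger(app.logger_name)
del logger.handlers[:]          # drop handlers from any pre-existing logger
logger.__class__ = DebugLogger  # retype the existing logger instance
logger.propagate = False

print(logger.getEffectiveLevel())  # → 10 (DEBUG) while app.debug is True
```

Swapping `__class__` rather than creating a fresh logger matters because `logging.getLogger` caches instances: other modules that already hold a reference to the same named logger pick up the new behaviour too.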
pid -> the pid of the worker managing the request

app -> the id (an integer, starting from 0) of the app; it is meaningful when multiple apps are hosted in the same instance. It is -1 when no app managed the request (such as when serving static files) or when the ‘app’ concept does not apply (as with PHP or CGI scripts)

req: N/M -> N is the number of requests managed by the current worker for the specific app, M is the grand total (the sum of all requests of all workers)
Then you have REMOTE_ADDR followed by the (optional) REMOTE_USER (very similar to Apache)
vars is the number of CGI variables in the request, together with their total size (from the uwsgi protocol point of view). The size is never higher than --buffer-size (larger requests are discarded)
The time of the request follows
Then you have REQUEST_METHOD + REQUEST_URI
Then the response size and the time required for generating it
“via” is the technology used to send the response; currently it can be sendfile, routing or offloading.
The response status follows, as well as the number of response headers.
“core” is the low-level concept for the uWSGI concurrency context within a process (it can be a thread, a greenlet, a fiber, a goroutine and so on…), while the switches count is incremented whenever an app “yields” its status (this has various meanings depending on the underlying concurrency model used)
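Putting the fields together, a full request line can be parsed mechanically. The sample line and the regular expression below are illustrative sketches composed from the field order described above — the field names are my own, not an official uWSGI API:

```python
import re

# A hypothetical uWSGI request-log line following the field order
# described above: pid/app/req, REMOTE_ADDR, REMOTE_USER, vars,
# timestamp, method + URI, response size/time, status, headers, core.
LINE = ("[pid: 31175|app: 0|req: 1/1] 127.0.0.1 () "
        "{32 vars in 380 bytes} [Wed Jun 24 23:45:33 2015] "
        "GET / => generated 451 bytes in 4 msecs (HTTP/1.1 200) "
        "2 headers in 83 bytes (1 switches on core 0)")

PATTERN = re.compile(
    r"\[pid: (?P<pid>\d+)\|app: (?P<app>-?\d+)\|"
    r"req: (?P<req>\d+)/(?P<total>\d+)\] "
    r"(?P<addr>\S+) \((?P<user>[^)]*)\) "
    r"\{(?P<nvars>\d+) vars in (?P<vars_bytes>\d+) bytes\} "
    r"\[(?P<time>[^\]]+)\] "
    r"(?P<method>\S+) (?P<uri>\S+) => generated (?P<rsize>\d+) bytes "
    r"in (?P<msecs>\d+) msecs \((?P<proto>\S+) (?P<status>\d+)\) "
    r"(?P<headers>\d+) headers in (?P<hbytes>\d+) bytes "
    r"\((?P<switches>\d+) switches on core (?P<core>\d+)\)"
)

m = PATTERN.match(LINE)
fields = m.groupdict()
print(fields["method"], fields["uri"], fields["status"])  # → GET / 200
```

Note that the app id group allows a leading minus sign, since (as described above) it is -1 for requests no app managed, and the REMOTE_USER group allows an empty string, since that field is optional.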