How we use logging
When it comes to logging in your Go services, the first package to reach for is the standard library log package. You'll likely outgrow it almost immediately, however, because it can only send logs to a single io.Writer stream, os.Stderr by default.
There are other needs you may want to address:
- Structured logging that can be consumed by external services like Logstash or the ELK stack
- Splitting the log destination based on message severity (debug output, info, errors)
If you're somewhat familiar with Linux, you might assume that there are only two output streams available: standard output (stdout) and standard error (stderr). In fact, the Linux kernel defines eight different log levels (0-7):
- 0 (EMERG) - The system is unusable.
- 1 (ALERT) - Actions that must be taken care of immediately.
- 2 (CRIT) - Critical conditions.
- 3 (ERROR) - Non-critical error conditions.
- 4 (WARNING) - Warning conditions that should be taken care of.
- 5 (NOTICE) - Normal, but significant events.
- 6 (INFO) - Informational messages that require no action.
- 7 (DEBUG) - Kernel debugging messages, output by the kernel if the developer enabled debugging at compile time.
When you pipe your service output from Docker to a remote syslog instance, each line is tagged with one of these levels. This makes it easy to filter the output with queries like "show me all stderr output". Specifically, stdout is tagged as level=6 (info), and stderr as level=3 (error).
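To make that mapping concrete, here is a stdlib-only sketch; the function and the mapping table are illustrative (not part of apex/log or syslog itself), translating the severity names used throughout this post into the syslog levels above:

```go
package main

import "fmt"

// syslogLevel maps common log severity names to the numeric syslog
// levels listed above. Illustrative only; adjust the mapping to match
// your own syslog configuration.
func syslogLevel(level string) int {
	levels := map[string]int{
		"fatal": 2, // CRIT
		"error": 3, // ERROR
		"warn":  4, // WARNING
		"info":  6, // INFO
		"debug": 7, // DEBUG
	}
	if v, ok := levels[level]; ok {
		return v
	}
	return 6 // default to INFO for unknown severities
}

func main() {
	fmt.Println(syslogLevel("error")) // stderr output carries level 3
	fmt.Println(syslogLevel("info"))  // stdout output carries level 6
}
```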
We have chosen apex/log over the standard library log package to give us a structured logger that splits its output based on message severity. While neither it nor Go maps output directly onto the syslog levels above, we can easily adapt it to send log messages to stdout or stderr respectively.
We created an internal logger package, which you can import from your main.go file, just like you would a database driver:
```go
import (
	_ "yourproject.example.com/service/internal/logger"
)
```
The package itself is quite simple, as it only defines the required log options based on the environment, and the log message severity.
```go
package logger

import (
	"os"
	"strings"

	"github.com/apex/log"
	"github.com/apex/log/handlers/cli"
)

// Logger is a custom split stdout/stderr cli logger
type Logger struct {
	Stdout log.Handler
	Stderr log.Handler
}

// HandleLog splits log output between stdout and stderr
func (l *Logger) HandleLog(e *log.Entry) error {
	// Send error/fatal to stderr
	if e.Level == log.ErrorLevel || e.Level == log.FatalLevel {
		return l.Stderr.HandleLog(e)
	}
	// Everything else (debug, info, warn) to stdout
	return l.Stdout.HandleLog(e)
}

func init() {
	// Enable debug output outside of production
	if !strings.HasPrefix(os.Getenv("ENVIRONMENT"), "prod") {
		log.SetLevel(log.DebugLevel)
	}
	log.SetHandler(&Logger{
		Stdout: cli.New(os.Stdout),
		Stderr: cli.New(os.Stderr),
	})
}
```
We define a custom logger with two outputs. We send the ErrorLevel and FatalLevel messages to os.Stderr, and everything else to os.Stdout. This is enough for us to differentiate between log output of an informational nature (e.g. access logs, debug output, …) and error output that should be actionable, mainly errors that occur during any request or gRPC call.
Finally, the apex/log package by default drops all messages below log.InfoLevel. This means that if you have debug output, you have to specifically enable it with log.SetLevel(). In the example above, we set it to log.DebugLevel if we are not running in the production environment, by reading the ENVIRONMENT variable.
When using docker logs, you can redirect stdout to /dev/null and inspect only the errors, or use an external utility like stderred to colorize stderr lines in red. And since we send this log output to syslog/logstash/ELK, we can filter the output there by querying for "level: 3" (stderr).
Additionally, the apex/log package already provides several usable handlers which you can import. Our example imports the cli handler, which just formats the log messages for command line output, but you could just as well send the log messages to any of the implemented services, or even implement your own.
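apex/log handlers implement a single-method interface, HandleLog(*log.Entry) error. As a stdlib-only sketch of implementing your own, with a simplified Entry type standing in for apex/log's, a key=value handler could look like this:

```go
package main

import (
	"fmt"
	"io"
	"os"
	"sort"
)

// Entry is a simplified stand-in for apex/log's *log.Entry.
type Entry struct {
	Level   string
	Message string
	Fields  map[string]interface{}
}

// LogfmtHandler writes entries as logfmt-style key=value lines,
// mirroring the shape of an apex/log custom handler.
type LogfmtHandler struct {
	Out io.Writer
}

// formatEntry renders a single entry as a logfmt-style line,
// with fields sorted for deterministic output.
func formatEntry(e *Entry) string {
	keys := make([]string, 0, len(e.Fields))
	for k := range e.Fields {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	line := fmt.Sprintf("level=%s msg=%q", e.Level, e.Message)
	for _, k := range keys {
		line += fmt.Sprintf(" %s=%v", k, e.Fields[k])
	}
	return line
}

// HandleLog writes one formatted entry, matching the shape of
// apex/log's Handler interface.
func (h *LogfmtHandler) HandleLog(e *Entry) error {
	_, err := fmt.Fprintln(h.Out, formatEntry(e))
	return err
}

func main() {
	h := &LogfmtHandler{Out: os.Stdout}
	h.HandleLog(&Entry{
		Level:   "info",
		Message: "upload done",
		Fields:  map[string]interface{}{"filename": "a.jpg"},
	})
}
```

With apex/log itself, the only difference is that HandleLog receives a *log.Entry, and you register the handler with log.SetHandler().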
Using apex/log from code is simple: just import apex/log and write:
```go
if err != nil {
	// we can't act on a pubsub failure, just log it
	log.WithError(err).Infof("couldn't send notification to channel %s", channel)
}
```
In this particular case, we take an error which we can't handle and pass it to the logger with the info severity. We could have used Errorf, but given the context it doesn't make sense: the pub/sub data is ephemeral and messages get lost, as the client doesn't guarantee delivery. The state itself is external to the pub/sub transport, so it will eventually be consistent.
```go
body, err := ex.Restore([]byte(m.MessageBody))
if err != nil {
	log.WithError(err).Errorf("error restoring image data for message %d", m.MessageID)
	pb.MessageBody = ""
	return pb
}
pb.MessageBody = string(body)
```
Unlike the previous context, in this particular case we log an error. We have enough information here to inspect the particular problematic case any time after the error occurred, as the data comes from a database, and isn’t likely to change.
We resort to log.Debug* rarely, as it mainly aids in testing or inspection of non-production code. For example, one of our services logs a pretty cryptic amount of debug output:
```
social/types.go: log.Debugf("body pre sanitize: %d", len(body))
social/types.go: log.Debugf("body post sanitize: %d", len(body))
social/types.go: log.Debugf("body post extract: %d", len(bodyBytes))
social/server_user.go: log.Debug("request without user data")
social/server_user.go: log.Debugf("request from user: %d", user.ID)
social/server_post_create.go: log.Debugf("incoming body length: %d", len(r.Post.ContentBody))
social/server_post_create.go: log.Debugf("final body length: %d", len(post.ContentBody))
```
And finally, with regard to structured output: whenever there is a larger scope where we need to take care of logging, we create a dedicated logging context at the start:
```go
logctx := log.WithField("filename", filename)
```
This way, any following call on logctx will have this field attached to the log message. A larger snippet in use is as follows:
```go
logctx := log.WithField("filename", filename)

metaFilename := fmt.Sprintf("%s.json", filename)
metaFile, err := ioutil.ReadFile(metaFilename)
if err != nil {
	logctx.WithError(err).Infof("error reading meta file")
	continue
}

meta := imageMeta{}
meta.focusMeta = make(map[string]string)
if err := json.Unmarshal(metaFile, &meta); err != nil {
	logctx.WithError(err).Infof("error decoding metadata")
	continue
}

im, err := loadImage(filename)
if err != nil {
	logctx.WithError(err).Infof("error loading image")
	continue
}
```
And whenever the scope is smaller, like in this example from one of our task scheduler projects, we attach a larger set of fields using log.WithFields():
```go
err = jobCheck(out, err)
entry := log.WithFields(log.Fields{
	"last":     job.Generated,
	"link":     job.Link,
	"file":     path.Base(filename),
	"duration": fmt.Sprintf("%.4fs", duration.Seconds()),
})
if err != nil {
	entry.WithError(err).Errorf("%s: error", prefix)
} else {
	entry.Debugf("%s: ok", prefix)
}
```
In this particular case, we route successful output to the Debug log level, so we can have a relatively silent production environment, while development allows us to inspect individual job details in non-error situations. We care about job duration in development, but generally don’t log this information in production as we have other metrics that track performance of our services.
And finally, we resort to the great fatih/faillint utility, with which we actively discourage the usage of the standard library log package in favor of apex/log, and effectively disable hand-crafted output via the fmt.Print* functions:
```
log=github.com/apex/log
fmt.{Print,Printf,Println}
```
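Assuming faillint is installed in your CI image, these specs could be passed to its -paths flag; the exact CI step is up to you, this is just a sketch:

```shell
# Fail the build if stdlib log or fmt.Print* are used anywhere;
# the log= spec also suggests github.com/apex/log as the replacement.
faillint -paths "log=github.com/apex/log,fmt.{Print,Printf,Println}" ./...
```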
Overall, this allows us to be sure that any output is going through the desired apex/log package, and preemptively breaks CI builds that may use either the stdlib log package or print output using the fmt package.