We have written a lightweight tool called honeytail. Honeytail will tail your existing log files, parse the content, and send it up to Honeycomb.
If you already have structured data in an existing log, this is the easiest way to get that data into Honeycomb.
The quality of your dataset within Honeycomb depends entirely on the quality of the data going into the log file. In addition to including as much detail about each event as you can, it is best to include some host-level information to give each event context, for example the name of the host on which the log lives.
Honeytail is designed to run as a daemon so that it can continuously consume new content as it appears in the log files as well as detect when a log file rotates. It must be configured with your Team Write Key and the name of the Dataset to which you want to write data. You specify one of the available parser modules depending on how your log data is structured. Once running, honeytail will take care of uploading all the data in your log file and picking up new data as it comes in.
Honeytail is open source—we encourage auditing the software you will run on your servers. We also happily consider pull requests with new log format parsers and other improvements.
Download and install the latest honeytail by running:
For Debian and Ubuntu (.deb):

```shell
wget -q https://honeycomb.io/download/honeytail/linux/honeytail_1.378_amd64.deb && \
  echo '197401aa6877f09139d6927e2dbc038ec8ae91243e1b522d8131bc4bd66c4e78  honeytail_1.378_amd64.deb' | sha256sum -c && \
  sudo dpkg -i honeytail_1.378_amd64.deb
```

For RPM-based distributions (.rpm):

```shell
wget -q https://honeycomb.io/download/honeytail/linux/honeytail-1.378-1.x86_64.rpm && \
  echo 'df4dbd1c57cd30b7c7ca3d4d95606e77f1455153e5e0574cf8c30f12105b8290  honeytail-1.378-1.x86_64.rpm' | sha256sum -c && \
  sudo rpm -i honeytail-1.378-1.x86_64.rpm
```

As a standalone binary:

```shell
wget -q -O honeytail https://honeycomb.io/download/honeytail/linux/1.378 && \
  echo '2bdbab65fd8f96715617661a806bc312731059986a8f7f40d6bd68c31b467532  honeytail' | sha256sum -c && \
  chmod 755 ./honeytail
```
The packages install honeytail, its config file /etc/honeytail/honeytail.conf, and some start scripts. The binary is just honeytail, available if you need it in an unpackaged form or for ad-hoc use.
You should modify the config file, uncommenting and setting:

- ParserName to the appropriate one of: json, nginx, mongo, mysql, arangodb
- WriteKey to your Write Key, available from the account page
- LogFiles to the path for the log file you want to ingest, or - for stdin
- Dataset to the name of the dataset you wish to create with this log file

The docs pages for JSON, NGINX, MongoDB, and MySQL have more detail on additional options to set for each parser. The other available options are all described in the config file and below.
Launch honeytail by hand with honeytail -c /etc/honeytail/honeytail.conf or using the standard sudo initctl start honeytail (upstart) or sudo systemctl start honeytail (systemd) commands.
honeytail will automatically start back up after rebooting your system. To disable this, put the word manual in /etc/init/honeytail.override (upstart) or run systemctl disable honeytail (systemd).
If you have a number of old log files that you’d like to load into Honeycomb once, use the --backfill flag for honeytail.
Note: honeytail does not unzip log files, so you’ll need to do this before backfilling.
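For example, if your rotated logs were compressed with gzip (the path below is illustrative; adjust it for your setup), you could decompress them in place before running the backfill:

```shell
# Rotated logs are often compressed as myapp.log.1.gz, myapp.log.2.gz, etc.
# Decompress them so honeytail can read them.
gunzip /var/log/app/myapp.log.*.gz
```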
Here’s an example honeytail invocation to pull in multiple existing logs as well as as much of the current log as possible.
```shell
honeytail --writekey=YOUR_WRITE_KEY --dataset='My App' --parser=json \
  --file=/var/log/app/myapp.log.* --file=/var/log/app/myapp.log \
  --backfill
```
Let’s break down the various parts of this command.
- --parser=json: For the purposes of this example, all logs are already JSON formatted. Take a look at the timestamp section of the JSON connector to make sure your historical logs have their times interpreted correctly.
- --file=/var/log/app/myapp.log.*: honeytail understands file globs and will ingest all of the matching files in series.
- --file=/var/log/app/myapp.log: Specify --file (or its short form, -f) as many times as necessary to include additional files that don’t match a glob. This ingests as much of the current file as exists.
- --backfill: This flag tells honeytail to read the specified files in their entirety, stop when finished reading, and respond to rate-limited responses (HTTP 429) by slowing down the rate at which it sends events.

honeytail will read all the content in all the old logs and then stop. When it finishes, you’re ready to send new log lines. By default, honeytail keeps track of its progress through a file and, if interrupted, picks back up where it left off. When you later launch honeytail pointing at the main app log, it will find the state file it created while reading the backlog and start where it left off.
Here’s the second honeytail invocation, where it will tail the current log file and send in recent entries:
```shell
honeytail --writekey=YOUR_WRITE_KEY --parser=json --dataset='My App' --file=/var/log/app/myapp.log
```
Note: We enforce a rate limit in order to protect our servers from abuse. This can be raised on a case-by-case basis; please contact us to lift your limit.
Below, find some general debugging tips when trying to send data to Honeycomb. As always, we’re happy to help with any additional problems you might have!
“Datasets” are created when we first begin receiving data under a new “Dataset Name” (used/specified by all of our SDKs and agents).
If you don’t see an expected dataset yet, our servers most likely haven’t yet received anything from you.
To figure out why, the simplest step is to add a --debug flag to your honeytail call.
This will output information about whether lines are being parsed, are failing to send to our servers, or whether honeytail is receiving any input at all.
Another useful thing to try is adding --status_interval=1 to your flags, which will output a line like the one below every second (newlines added for legibility):

```
INFO[0002] Summary of sent events avg_duration=295.783µs
  count_per_status=map[400:10]
  errors=map[]
  fastest=259.689µs
  response_bodies=map[request body is too large:10]
  slowest=348.297µs
  total=10
```
The total here is the number of events sent to Honeycomb; the rest are stats characterizing how those events were sent and received.
(A total=0 value would clue us into the fact that honeytail just isn’t sending any events at all.)
In the line above, we can see that events were in fact being sent, but were invalid and rejected by the server (ten HTTP 400 responses, with request body is too large errors).
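If you want to script a health check around this output, the total counter can be pulled out of a status line with standard text tools. This is just one way to do it, using the line format shown above:

```shell
# Extract the total=N counter from a honeytail status line.
line='INFO[0002] Summary of sent events avg_duration=295.783µs count_per_status=map[400:10] total=10'
total=$(printf '%s\n' "$line" | grep -o 'total=[0-9]*' | cut -d= -f2)
echo "$total"   # prints 10
```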
When using honeytail, the --dataset (-d for short) argument will determine the name of the dataset created on Honeycomb’s servers.
If you’re writing into an existing dataset, the quickest way to check for new data is the SAMPLES link in the dataset header:

Clicking SAMPLES ⬇ will trigger a small screen to pop down from the header, containing the ten events most recently received for that dataset.
If you don’t see your new events appear, try the --debug or --status_interval=1 flags described above (change 1 to 5 to see the summary every 5 seconds).
honeytail doesn’t seem to be progressing on my log file

Are you trying to send data from an existing file? honeytail’s default behavior is to watch files and process newly-appended data. If you’re attempting to send data from an existing file, make sure to use the --backfill flag; this makes honeytail begin reading the file from the beginning and exit when finished.
Our JSON parser makes a best-effort attempt to parse and understand timestamps in your JSON logs. Take a look at the Timestamp parsing section of the JSON docs to see timestamp formats understood by default.
If you suspect your timestamp format is unconventional, or the time field is keyed by an unconventional field name,
providing --json.timefield and --json.format arguments will nudge honeytail in the right direction.
Let’s say you have an incredible volume of log content and your website gets hit frequently enough that you will still get excellent data quality even if you only look at 1/20th of the traffic. honeytail can sample the log file, sending only one of every 20 lines. It does so randomly, so you won’t see exactly every 20th line being sent; instead, each line has a 5% chance of being sent.
When these log lines reach Honeycomb, they will include metadata indicating that each one represents 20 similar lines, so all your graphs will show accurate total counts.
```shell
honeytail --writekey=YOUR_WRITE_KEY --dataset='Webtier' --parser=nginx --file=/var/log/nginx/access.log \
  --samplerate 20 --nginx.conf /etc/nginx/nginx.conf --nginx.format main
```
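To convince yourself of the arithmetic: keeping each line independently with probability 1/20 and then weighting every kept event by 20 recovers the original totals in expectation. A quick simulation, unrelated to honeytail itself:

```shell
# Simulate 1-in-20 sampling over 10,000 lines: each line has a 5%
# chance of being kept, independently of its neighbors.
kept=$(seq 10000 | awk 'BEGIN { srand() } rand() < 1/20' | wc -l)
# kept will be close to 500; weighting each kept event by 20
# estimates the original 10,000 lines.
echo "$kept"
```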
It’s not unusual for a log to omit interesting information like the name of the machine on which the process is running. After all, you’re on that machine, right? Why would you add the hostname? Log transports like rsyslog will prepend logs with the hostname sending them, but if you’re sending logs from each host, this data may not exist. Honeytail lets you add in extra fields to each event sent up to Honeycomb with the --add_field flag.
For this example, let’s assume that you have nginx running as a web server in both your production and staging environments. Your shell sets $ENV to the environment (prod or staging). Here is how to run honeytail to consume your nginx log and insert the hostname and environment along with each log line:
```shell
honeytail --writekey=YOUR_WRITE_KEY --dataset='Webtier' --parser=nginx --file=/var/log/nginx/access.log \
  --nginx.conf /etc/nginx/nginx.conf --nginx.format main \
  --add_field hostname=$(hostname) --add_field env=$ENV
```
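Note that it is the shell, not honeytail, that expands $(hostname) and $ENV before the process starts; honeytail simply receives literal key=value pairs. You can see this with echo (the hostname in the comment is made up):

```shell
# The shell substitutes the values first; honeytail sees plain strings.
ENV=staging
echo --add_field hostname=$(hostname) --add_field env=$ENV
# prints something like: --add_field hostname=web-01 --add_field env=staging
```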
Sometimes you will have fields in your log file that you don’t want to send to Honeycomb, or that you want to obscure before letting them leave your servers. For this example, let’s say that your log contains a large text field with the contents of an email, large enough that you don’t want it sent up to Honeycomb. The log also contains some sensitive information like a person’s birthday. You want to be able to ask questions about the most common birthdays, but you don’t want to expose the actual birthdays outside your infrastructure.
Honeytail has two flags that will help you accomplish these goals. --drop_field will remove a field before sending the event to Honeycomb and --scrub_field will subject the value of a field to a SHA256 hash before sending it along. You will still be able to do inclusion and frequency analysis on the hashed fields (as there will be a 1-1 mapping of value to hashed value) but the actual value will be obscured.
Here is your honeytail invocation:
```shell
honeytail --writekey=YOUR_WRITE_KEY --dataset='My App' --parser=json --file=/var/log/app/myapp.log \
  --drop_field email_content --scrub_field birthday
```
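Frequency analysis still works on scrubbed fields because SHA-256 is deterministic: equal inputs always produce equal digests. You can see this with the standard sha256sum tool (the birthday value here is made up, and honeytail’s scrubbing may differ in details such as encoding):

```shell
# Two events sharing a birthday still share a scrubbed value,
# because the same input always yields the same 64-character digest.
printf '%s' '1990-01-01' | sha256sum
printf '%s' '1990-01-01' | sha256sum
# Both commands print the identical digest.
```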
honeytail Config

The honeytail binary supports reading its config from a config file as well as command line arguments.
To get started, if you’ve already been using a few command line arguments, add an additional flag: --write_current_config.
This will write your current config to STDOUT so you can use it as a starting point.
```shell
$ honeytail -p mysql -k YOUR_WRITE_KEY -d YOUR_DATASET -f ./mysql-slow.log --write_current_config
[Required Options]
; Parser module to use. Use --list to list available options.
ParserName = mysql

; Team write key
WriteKey = YOUR_WRITE_KEY

; Log file(s) to parse. Use '-' for STDIN, use this flag multiple times to tail multiple files, or use a glob (/path/to/foo-*.log)
LogFiles = ./mysql-slow.log

; Name of the dataset
Dataset = YOUR_DATASET
```
This can be particularly useful for versioning or productionizing honeytail use—or for providing additional configuration when using advanced honeytail features like scrubbing sensitive fields or parsing custom URL structures.
Once the config file is saved, simply run honeytail with a -c argument in lieu of all of the other flags:
```shell
$ honeytail -p mysql -k YOUR_WRITE_KEY -d YOUR_DATASET -f ./mysql-slow.log \
    --scrub_field=field_name_1 --scrub_field=field_name_2 \
    --write_current_config > ./scrubbed_mysql.conf
$ honeytail -c ./scrubbed_mysql.conf
```
honeytail can break URLs up into their component parts, storing extra information in additional columns. This behavior is turned on by default for the request field on nginx datasets, but can become more useful with a little bit of guidance from you.
There are several flags that adjust the behavior of honeytail as it breaks apart URLs.
When using the nginx parser, honeytail looks for a field named request. When using a different parser (such as the JSON parser), you should specify the name of the field that contains the URL with the --request_shape flag.
Using this flag creates a few generated fields. For example, a request field containing a value like:

```
GET /alpha/beta/gamma?foo=1&bar=2 HTTP/1.1
```

will produce nginx events for Honeycomb that look like:
| field name | value | description |
|---|---|---|
| request | GET /alpha/beta/gamma?foo=1&bar=2 HTTP/1.1 | the full original request |
| request_method | GET | the HTTP method, if it exists |
| request_protocol_version | HTTP/1.1 | the HTTP version string |
| request_uri | /alpha/beta/gamma?foo=1&bar=2 | the unmodified URL (not including the method or version) |
| request_path | /alpha/beta/gamma | just the path portion of the URL |
| request_query | foo=1&bar=2 | just the query string portion of the URL |
| request_shape | /alpha/beta/gamma?foo=?&bar=? | a normalized version of the URL |
| request_pathshape | /alpha/beta/gamma | a normalized version of the path portion of the URL |
| request_queryshape | foo=?&bar=? | a normalized version of the query portion of the URL |
(The generated fields will all be prefixed by the field name specified by --request_shape, in the above example request. Use the --shape_prefix flag to prepend an additional string to these generated field names.)
If the URL field contains just the URL, the request_method and request_protocol_version fields will be omitted.
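As a rough sketch (not honeytail’s actual implementation), the query-shape normalization above amounts to replacing every query-string value with a literal ?:

```shell
# Replace each value in the query string with '?', mimicking request_shape.
echo '/alpha/beta/gamma?foo=1&bar=2' | sed -E 's/=[^&]*/=?/g'
# prints: /alpha/beta/gamma?foo=?&bar=?
```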
The path portion of the URL (from the beginning / up to the ? that separates the path from the query) can be grouped by common patterns, as is common for REST interfaces.
For example, given URL fragments like:

```
/books/978-0812536362
/books/978-9995788940
```
We can break the fragments into a field containing the generic endpoint (/books/:isbn) and a separate field for the ISBN number itself by specifying a --request_pattern flag:
```shell
# ... other honeytail arguments elided
honeytail ... \
  --parser=nginx --request_pattern=/books/:isbn
```
This will produce, among other fields:
| request_path | request_shape | request_path_isbn | (other fields) |
|---|---|---|---|
| /books/978-0812536362 | /books/:isbn | 978-0812536362 | … |
| /books/978-9995788940 | /books/:isbn | 978-9995788940 | … |
You can specify multiple --request_pattern flags and they’ll be considered in order; the first one to match a URL will be used. Patterns should represent the entire path portion of the URL: include a “*” at the end to match arbitrary additional segments.
For example, if we have a wider variety of URL fragments, like:

```
/books/978-0812536362
/books/978-3161484100/borrow
/books/978-9995788940
/books/978-9995788940/borrow
```
We can provide our additional --request_pattern flags and track a wider variety of request_shapes:
```shell
# ... other honeytail arguments elided
honeytail ... \
  --parser=nginx --request_pattern=/books/:isbn/borrow --request_pattern=/books/:isbn
```
We’ll see our request_path_isbn populated as before, as the :isbn parameter is respected in both patterns:
| request_path | request_shape | request_path_isbn | (other fields) |
|---|---|---|---|
| /books/978-0812536362 | /books/:isbn | 978-0812536362 | … |
| /books/978-3161484100/borrow | /books/:isbn/borrow | 978-3161484100 | … |
| /books/978-9995788940 | /books/:isbn | 978-9995788940 | … |
| /books/978-9995788940/borrow | /books/:isbn/borrow | 978-9995788940 | … |
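The ordering rule matters here: had /books/:isbn been listed first, it would also match the /borrow URLs and the more specific pattern would never apply. A hypothetical shell sketch of the same first-match-wins dispatch:

```shell
# First-match-wins, as with repeated --request_pattern flags:
# the more specific pattern must come before the more general one.
shape() {
  case "$1" in
    /books/*/borrow) echo '/books/:isbn/borrow' ;;
    /books/*)        echo '/books/:isbn' ;;
  esac
}
shape /books/978-3161484100/borrow   # prints /books/:isbn/borrow
shape /books/978-9995788940          # prints /books/:isbn
```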
A URL’s query string can be broken apart similarly, with the --request_query_keys flag, with generated fields named like <field>_query_<keyname>.
If, on top of our previous examples, our URL fragments had query strings like:
/books/978-0812536362?borrower_id=23597
Providing --request_query_keys=borrower_id would return us a Honeycomb event with a request_query_borrower_id field with a value of 23597.
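As an illustration (again, not honeytail’s implementation), extracting that key from the query string by hand:

```shell
# Pull the borrower_id value out of the query string.
echo '/books/978-0812536362?borrower_id=23597' \
  | sed -E 's/.*[?&]borrower_id=([^&]*).*/\1/'
# prints: 23597
```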
If you would like to automatically create a field for every key in the query string, you can use the flag --request_parse_query=all. This will automatically create a new field <field>_query_<key> for every query parameter encountered in the query string. For any publicly accessible web server, it is likely that this will quickly create many useless columns because of all the random traffic on the internet.
For more detail and examples see our urlshaper package on GitHub.