Guidance on implementing timestamps in your events

When designing application or service logs, quality timestamps are essential to avoid situations where it is unclear when an event actually occurred. Unfortunately, many devices and software services produce poor-quality (incomplete and/or confusingly formatted) timestamps by default.

For timestamps that are both human-readable and precise enough to specify (or extract) an exact “moment in time”, the ISO 8601 format is recommended. (Also reference RFC 3339.)

In addition, it is best to place this timestamp at the very beginning of the event. (If events are in a structured format such as JSON or CSV, it is preferable, but not necessary, to place the timestamp in a field near the beginning of the event.)

Below are two examples of ISO 8601 timestamps with an explicit offset; the second includes fractional seconds.

2020-07-16T12:23:00-05:00  <the rest of your event data> 
or

2020-07-16T12:23:00.089-05:00  <the rest of your event data> 
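If you control the code that writes the event, producing such a timestamp takes only a line or two in most languages. Here is a minimal Python sketch (standard library only; the America/Chicago zone is just an illustrative choice):

   from datetime import datetime, timezone
   from zoneinfo import ZoneInfo  # standard library in Python 3.9+

   # Local time with an explicit offset, as in the second example above
   local_ts = datetime.now(ZoneInfo("America/Chicago")).isoformat(timespec="milliseconds")
   print(local_ts, "<the rest of your event data>")
   # e.g. 2020-07-16T12:23:00.089-05:00 <the rest of your event data>

   # UTC (zero-offset) variant, often expected for infrastructure logs
   utc_ts = datetime.now(timezone.utc).isoformat(timespec="seconds")
   print(utc_ts, "<the rest of your event data>")
   # e.g. 2020-07-16T17:23:00+00:00 <the rest of your event data>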

The key is to provide a full representation of date and time, from year down to the second (or sub-second), along with an explicit time-zone offset, so that an exact “moment in time” can be evaluated. Some logs provide timestamps without a date, or with a date but no year. Many do not specify a time zone or offset.

An explicit time-zone offset is essential, and it is preferred over a textual time-zone representation (such as “CDT”, “CST”, or “US/Central”): the former provides clarity, while the latter can leave the actual “moment in time” ambiguous, in large part because of daylight saving time. Consider Indiana and Arizona, and consider what happens when daylight saving time begins and ends.
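To make that ambiguity concrete, here is a small Python sketch (standard library only). When daylight saving time ended on 2020-11-01, the wall-clock time 01:30 “US/Central” occurred twice; only an explicit offset tells the two moments apart:

   from datetime import datetime
   from zoneinfo import ZoneInfo

   chicago = ZoneInfo("America/Chicago")

   # The same wall-clock time, before and after the clocks "fall back"
   first = datetime(2020, 11, 1, 1, 30, tzinfo=chicago)           # fold=0: still CDT
   second = datetime(2020, 11, 1, 1, 30, fold=1, tzinfo=chicago)  # fold=1: now CST

   print(first.isoformat())   # 2020-11-01T01:30:00-05:00
   print(second.isoformat())  # 2020-11-01T01:30:00-06:00

   # One hour apart, yet "01:30 US/Central" describes both moments
   print(second.timestamp() - first.timestamp())  # 3600.0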

There is some debate about whether to use a UTC-based (zero-offset) timestamp or a local-time timestamp (with an explicit offset). Our guidance comes down to the context and use of your logs. For “hardware” (think IoT) and computer/network infrastructure, it may be best (and expected) to use UTC timestamps.

For applications serving business processes that may span cities, states, or time zones, it can be helpful to use a local-time timestamp (with offset) and even to add context beyond the offset. (For more on why/when to use UTC or not, or when to add more context to your timestamp, read https://engineering.q42.nl/why-always-use-utc-is-bad-advice/ )

For example, the following adds a bit more context by appending a time-zone name. (This matters because different time zones can share the same offset.)

2020-07-16T12:23:00-05:00 US/Central
or

2020-07-16T12:23:00-05:00 America/Chicago
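As a sketch, one way to emit that form in Python (the helper name stamp and the default zone are just for illustration):

   from datetime import datetime
   from zoneinfo import ZoneInfo

   def stamp(zone_name: str = "America/Chicago") -> str:
       # ISO 8601 local time with explicit offset, plus the IANA zone name
       now = datetime.now(ZoneInfo(zone_name))
       return f"{now.isoformat(timespec='seconds')} {zone_name}"

   print(stamp())  # e.g. 2020-07-16T12:23:00-05:00 America/Chicago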

As an alternative to the ISO 8601 format with offset: if you use a human-friendly date-time timestamp (especially one that doesn’t lend itself well to “moment in time” clarity), a good option is to provide an epoch-time timestamp in addition to the human-friendly one. (The epoch-time value for the date above would be 1594920180. Epoch-time timestamps always represent time with no offset (UTC/GMT), so they provide “moment in time” clarity and are inexpensive to parse during timestamp recognition.)
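In Python, for instance, the round trip between the ISO 8601 example above and its epoch-time value looks like this (standard library only):

   from datetime import datetime, timezone

   # ISO 8601 with explicit offset -> epoch seconds
   dt = datetime.fromisoformat("2020-07-16T12:23:00-05:00")
   print(int(dt.timestamp()))  # 1594920180

   # Epoch seconds -> an unambiguous, zero-offset (UTC) datetime
   print(datetime.fromtimestamp(1594920180, tz=timezone.utc).isoformat())
   # 2020-07-16T17:23:00+00:00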

Here’s a sample record from a Duo (two-factor authentication) log entry to demonstrate. It provides a human-friendly, local-time timestamp, “ctime” (in this case, with no time-zone information); an ISO 8601 timestamp, “isotimestamp” (in this case, in UTC, i.e., with a +00:00 offset); and an epoch-time timestamp, “timestamp”, so that one can “triangulate” for confidence in the “moment in time” even if the event were viewed out of context.

   ctime: Wed Jul 22 09:59:59 2020
   email:
   event_type: authentication
   eventtype: authentication
   factor: yubikey_passcode
   host: api-cd3ecedb
   isotimestamp: 2020-07-22T14:59:59.990997+00:00
   ood_software: null
   reason: valid_passcode
   result: success
   timestamp: 1595429999
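Here is a short Python sketch of that triangulation. (We assume ctime is US Central local time; the field itself carries no zone information, which is exactly the problem this article is about.)

   from datetime import datetime, timezone
   from zoneinfo import ZoneInfo

   iso = datetime.fromisoformat("2020-07-22T14:59:59.990997+00:00")
   epoch = datetime.fromtimestamp(1595429999, tz=timezone.utc)
   ctime = datetime.strptime("Wed Jul 22 09:59:59 2020", "%a %b %d %H:%M:%S %Y")
   ctime = ctime.replace(tzinfo=ZoneInfo("America/Chicago"))  # assumption: US Central

   # All three fields name the same moment, to whole-second precision
   assert int(iso.timestamp()) == int(epoch.timestamp()) == int(ctime.timestamp())
   print(int(ctime.timestamp()))  # 1595429999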

You might be asking, “But what if I can’t control my timestamp?”

There are many reasons to leave well-established, standard logging formats alone. Splunk is “pre-trained” for many established log sources, and it is designed to make all kinds of inferences and assumptions – to make a best guess and assign an internal-to-Splunk time value to every event. Splunk also has lots of “dials and knobs” for tweaking timestamp recognition; a sketch of one such configuration follows below. When we work with you to assess ingestion of your data, we will do our best to configure Splunk to interpret your timestamp values well. Our intent with this article, however, is to help you understand both why and how to make your timestamps as explicit and “independent of context and assumptions” as possible.
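For example, here is a hypothetical Splunk props.conf stanza for a source whose events begin with the ISO 8601 timestamps shown earlier. (The sourcetype name is made up, and the exact TIME_FORMAT string must be validated against your actual events and Splunk version.)

   [my_custom_app]
   # The timestamp sits at the very start of the event
   TIME_PREFIX = ^
   # Matches e.g. 2020-07-16T12:23:00-05:00
   TIME_FORMAT = %Y-%m-%dT%H:%M:%S%:z
   # Don't scan far past the start of the event for a timestamp
   MAX_TIMESTAMP_LOOKAHEAD = 30
   # Fallback zone, applied only when an event carries no offset of its own
   TZ = America/Chicago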

Related articles

Cybersecurity, Logging Practices for Application Developers