Log File Intelligence - log4net meets Splunk
At DrDoctor we are slowly adopting Splunk as our central reporting repository. We already have most of our application-specific events going into it, and we are seeing some great benefits.
In this post I’m going to show the various steps I went through to get our log4net files ingested in a useful format. Monitoring a file is easy; extracting useful fields from it is sometimes a challenge, especially with log files.
Setting the format string
The first step was to change the format string in the log4net.config file. The main aim here was to make my life easier for when the log files go into Splunk. Prefixing each log4net token with a name means that I can write some simple, but very reliable, regexes in Splunk to turn these into fields.
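A simplified sketch of the idea (the token names here are illustrative rather than our exact format string):

```xml
<layout type="log4net.Layout.PatternLayout">
  <!-- Each value is prefixed with a name so Splunk can pull it out with a simple regex.
       !release! and !version! are literal placeholders that get replaced at deploy time. -->
  <conversionPattern value="Date=%date Level=%level Logger=%logger Release=!release! Version=!version! Message=%message%newline" />
</layout>
```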
Those who are familiar with the log4net configuration options will notice two tokens in the format string above that log4net doesn’t provide: !release! and !version!. These are two very useful values to capture alongside our error messages, as we can then start to track when new types of errors are discovered or introduced.
I’m using a custom PowerShell script in our deployment system, Octopus Deploy, to set these values during the deployment phase: the release number reflects the Octopus Deploy release and the version number reflects the build number from TeamCity. Arguably we probably don’t need both, but I’m not entirely sure what I’m going to need yet, so I’m going to stick with both for now.
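As a rough sketch of that deployment step (the variable names and config path are illustrative, not the real script):

```powershell
# Runs as an Octopus Deploy PowerShell step after the package has been extracted.
# "MyApp.BuildNumber" is a hypothetical project variable carrying the TeamCity build number.
$configPath = Join-Path $OctopusParameters["Octopus.Action.Package.InstallationDirectoryPath"] "log4net.config"
$release    = $OctopusParameters["Octopus.Release.Number"]
$version    = $OctopusParameters["MyApp.BuildNumber"]

# Swap the !release! and !version! placeholders in the format string for real values.
(Get-Content $configPath) -replace '!release!', $release -replace '!version!', $version |
    Set-Content $configPath
```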
Setting up Splunk
In our environment, and I would assume most others, we are using the Splunk Universal Forwarder to send data to Splunk. The first step, then, is to add a new entry to the inputs.conf file to keep an eye on our logs directory.
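A typical monitor stanza looks something like this (the path, index, and sourcetype names are assumptions for illustration):

```ini
# inputs.conf on the Universal Forwarder
[monitor://D:\Logs\MyApp\*.log]
sourcetype = log4net
index = main
disabled = false
```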
Well, that was easy. Now that the Universal Forwarder is tracking the log files directory, we should start seeing log entries appearing in Splunk.
This is a good start, but it would be more useful if we could see a breakdown of the various entries. This is the point where we need to extract the various fields from the raw events.
To do this we need to make use of Splunk field extractions. To extract more fields, scroll down and click the "Extract New Fields" link.
Then click "I prefer to enter the regex myself".
All the extractions follow a similar pattern; the log level is a good one to start with.
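Assuming the format string writes the level as Level=INFO, Level=ERROR and so on (as in the sketch above), the extraction regex looks something like this:

```
Level=(?<Level>[A-Z]+)
```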
Enter that into the regex input, then click the Preview button. In the sample events you will see all the different logging levels highlighted, and you will also notice a new tab called Level appear.
Here is the complete list, one extraction per named token.
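With the token names from the format string sketch above (adjust these to match your own), the set of regexes would look something like this:

```
Level=(?<Level>[A-Z]+)
Logger=(?<Logger>\S+)
Release=(?<Release>\S+)
Version=(?<Version>\S+)
Message=(?<Message>.+)
```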
Go through the steps above for each one.
Log Intelligence
Now we can start doing some fancy queries.
Example one: number of errors by Release and host
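Something along these lines, where the index and sourcetype names are assumptions:

```
index=main sourcetype=log4net Level=ERROR
| stats count by Release, host
```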
Example two: number of errors over time
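Again assuming the same index and sourcetype:

```
index=main sourcetype=log4net Level=ERROR
| timechart count
```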
Example three: number of errors by application
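There is no explicit application field in the format string sketch, so this version falls back on the log file path (source) as a stand-in for the application:

```
index=main sourcetype=log4net Level=ERROR
| stats count by source
```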
Next steps
There are many possibilities; here are a couple of ideas:
- Build a dashboard from the various queries above
- Create some Splunk alerts that fire when the number of errors crosses a threshold