
AI & Data Analytics | Application Log Data in Context

Amena Siddiqi

Application log messages are a crucial source of diagnostic information for application teams, but Splunk and other log analytics solutions are an expensive and inefficient way to capture and process this data. Integrating log analysis fully into application performance monitoring (APM) means you can search across log data, see which methods generated each message in the context of the transaction call stack and, of course, identify every business transaction and user that was impacted.

Application context is key for log analytics

When log data is gathered through log scraping or other methods by dedicated log management tools, it arrives without the context of the code that generated it or the business transaction it belongs to. To troubleshoot errors or exception handling with this log data, performance investigators typically have to juggle three or four different tools, and may even have to open log files manually. They have to visually scan and time-correlate a log message against an APM tool monitoring HTTP requests just to work out which transaction it relates to.

Unlike traditional log analytics tools, which collect log messages after the application has written them to disk with standard app libraries, SteelCentral AppInternals intercepts the log message in memory before it is written to a file. With AppInternals, there is no need to specially configure Splunk or another log collector to capture these messages.
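To make the idea of in-memory interception concrete, here is a minimal sketch in Python. AppInternals' actual mechanism is proprietary; this only illustrates the general technique of hooking the logging pipeline inside the process, so messages are captured before any file handler writes them to disk. The `InMemoryInterceptor` class is hypothetical, not an AppInternals API.

```python
import logging

class InMemoryInterceptor(logging.Handler):
    """Hypothetical agent handler: captures log records in memory,
    before any FileHandler writes them out, so no scraping is needed."""

    def __init__(self):
        super().__init__()
        self.captured = []  # messages held in memory, pre-disk

    def emit(self, record):
        # The record is intercepted here, inside the process,
        # at the moment the application logs it.
        self.captured.append(self.format(record))

interceptor = InMemoryInterceptor()
interceptor.setFormatter(logging.Formatter("%(levelname)s %(message)s"))
logging.getLogger().addHandler(interceptor)

# Any log call anywhere in the application is now seen by the agent.
logging.getLogger().warning("checkout failed for cart 42")
print(interceptor.captured[0])
```

The same pattern (a handler or appender registered with the standard logging framework) is how in-process agents generally avoid the collect-from-disk round trip that file-based log shippers require.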


In addition, AppInternals presents log data in the context of the end-user transaction, so you can clearly see who was affected, when, and what they were trying to do. Because it captures all the parameters in a web request, if the problem is, for example, with a shopping cart “checkout” transaction, AppInternals will not only identify the “checkout” transaction but will also capture the contents of the shopping cart and provide a detailed breakdown of performance for every “checkout” request, whether it succeeded or not.
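Attaching log messages to the transaction that produced them can be sketched in plain Python as well. The `TxnFilter` class and the `checkout-1001` transaction ID below are illustrative assumptions, not AppInternals APIs; the sketch just shows the general technique of stamping every log record with the current business transaction via a context variable set at request entry.

```python
import contextvars
import io
import logging

# Set at the start of each request; read by the log filter below.
current_txn = contextvars.ContextVar("current_txn", default="-")

class TxnFilter(logging.Filter):
    """Hypothetical filter that tags each record with its transaction."""
    def filter(self, record):
        record.txn = current_txn.get()  # attach the transaction id
        return True

stream = io.StringIO()  # stand-in for the agent's transaction store
handler = logging.StreamHandler(stream)
handler.setFormatter(logging.Formatter("[txn=%(txn)s] %(message)s"))
handler.addFilter(TxnFilter())

logger = logging.getLogger("shop")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

current_txn.set("checkout-1001")  # request entry point sets the context
logger.info("payment declined")   # record now carries its transaction
print(stream.getvalue())
```

With every record carrying a transaction ID, answering “which users and which checkouts did this error affect?” becomes a lookup rather than a manual time correlation across tools.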

“99% of our log data is rubbish”

Here’s a true story from one of our on-site engineers working with a new client. Before implementing AppInternals, the developers and architects at the client site were using ELK to look at web, application, and database logs. The application estate was generating over 30 GB of logs per instance per day, and over 200 GB per day on each database and web node. The environment was shared with other application teams, and there were literally terabytes of logs flying across the network from all angles. The result was low data retention, making it impossible to go back in time to perform forensic analysis: they were lucky to get a day’s worth at best! They told us, “99% of our logs are rubbish and have nothing to do with what we are looking for or need. It’s typical that the one percent of useful data we are looking for has just rolled out.”

With SteelCentral AppInternals they were able to intercept the same log message that Kibana was ingesting, with the additional benefit that the messages were attached to the actual end user transaction itself. And the best part? The AppInternals data, being transaction-driven, focused only on the useful “one percent.”

Live, log and prosper

Application logs generate a significant volume of data. Based on what we’ve seen at our customer sites, they can easily represent well over half of all log data collected. Since typical log analytics tools license based on the log volume processed, this can become very expensive very quickly.

With AppInternals, application log analysis and storage is part of the core APM functionality that comes at a fixed license cost, independent of the amount of data processed. AppInternals collects all transactions along with their user metadata and application log messages, and indexes them in its big data store. Taking a customer example as a benchmark, we were able to store about 40 days’ worth of raw transactional data on a 4 TB disk as a result of our efficient storage data structure. This makes for a much more cost-effective approach to application performance analysis.
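As a back-of-the-envelope check on the benchmark above, the 4 TB / 40-day figure implies an effective ingest rate you can compute directly (this is simple arithmetic on the numbers quoted in the text, not additional measured data):

```python
# Figures from the customer benchmark quoted above.
disk_tb = 4
retention_days = 40

# Effective indexed volume per day, in GB (1 TB = 1024 GB here).
daily_gb = disk_tb * 1024 / retention_days
print(f"~{daily_gb:.0f} GB/day")  # ~102 GB/day
```

Running the same arithmetic against your own daily log volume and disk budget is a quick way to compare retention between a transaction-indexed store and a raw log pipeline.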

If you’re using ELK or Splunk for log analysis related to APM, ask yourself:

  • How much disk are you using?
  • How far back in time is your data retention?
  • What is the percentage of log lines that actually matter?
  • How much network overhead is this causing?

Conclusion

SteelCentral AppInternals captures all the important details of every single user-driven transaction and a whole lot more. And the icing on the cake is, we capture the exact same log message that you would expect from a log analysis tool, except our log entries are rich in value (not noise) and are attached to the transaction itself.

If you missed the other blogs in this series on AI and Data Analytics, you can find them below.


To learn more about the AppInternals product, watch our on-demand webinar on “Deep Dive on User-Centric APM, Big Data and AI” or try it out for yourself in our sandbox.

