AI and Data Analytics | What’s More Valuable Than Big Data?
“The most valuable commodity I know of is information.”
– Gordon Gekko (Wall Street)
It’s no secret that data analytics and artificial intelligence (AI) are the hottest commodities of the Age of Big Data. Data scientists have consistently topped lists of the fastest-growing jobs for the last few years, and interest in artificial intelligence is at fever pitch. IDC has estimated that by next year, 40% of digital transformation initiatives will use AI services. And while there has been many a discussion on whether more data beats better algorithms, coaxing information out of data with statistics and machine learning remains the key to realizing greater business and IT efficiencies, and to beating the competition.
At Riverbed, we’ve taken a unique “big data” approach to application performance monitoring (APM). Where other vendors have shied away from the magnitude of the big data problem in APM, opting instead for triggered or sampling-based approaches, Riverbed has taken the lead in developing this technology. If this intrigues you, you can learn more on this topic in this white paper.
Show me the (analytics) gold!
Of course, having complete data sets for application performance management is great, but isn’t it only as valuable as the information you can get out of it? Precisely!
What follows are some examples of the artificial intelligence and data visualizations we’ve built into SteelCentral APM to help you extract critical information from this treasure trove of data.
These intuitive analytics tools will help you:
- Quickly find and fix code-level issues that other APM solutions will miss
- Focus dev efforts on optimizing code that has the biggest overall impact
The emergence of AIOps: finding the needle in the haystack
One of the ways data scientists extract information from data is pattern recognition. When all is well in the world of your application, transaction response times should be more or less randomly distributed around some optimal mean. When patterns show up in transaction data, this usually points to an underlying systemic issue, such as a resource allocation problem or calls to a slow-running service or code component. Similarly, a recurring pattern in usage or user behavior can help identify a bot or other external factor impacting application performance.
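To make this concrete, here is a minimal sketch of one way to test for non-random structure in a response-time series: lag-1 autocorrelation sits near zero for healthy jitter around a mean and drifts far from zero (positive or negative) when a recurring pattern appears. The data and thresholds are purely illustrative, not drawn from any Riverbed product.

```python
import statistics

def lag1_autocorrelation(samples):
    """Lag-1 autocorrelation of a response-time series.

    Near 0 suggests random scatter around the mean (healthy);
    far from 0 suggests a recurring pattern worth investigating.
    """
    mean = statistics.fmean(samples)
    num = sum((a - mean) * (b - mean) for a, b in zip(samples, samples[1:]))
    den = sum((x - mean) ** 2 for x in samples)
    return num / den if den else 0.0

# Random-looking jitter around ~100 ms: autocorrelation near zero.
healthy = [102, 99, 103, 101, 98, 97, 100, 102, 99, 99]

# A slow downstream call hit on every other request: a strong
# alternating pattern, so the autocorrelation is far from zero.
patterned = [100, 250, 100, 250, 100, 250, 100, 250, 100, 250]

print(lag1_autocorrelation(healthy))    # close to 0
print(lag1_autocorrelation(patterned))  # close to -1
```

A real pattern detector would look at many lags and account for seasonality, but the principle is the same: healthy systems look like noise, and structure is a signal.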
Artificial intelligence can help you surface issues beyond the usual suspects that you may not have thought to look for. Riverbed AppInternals has built-in machine learning algorithms for clustering and correlations that can help you find groups of related transactions and performance metrics. These powerful machine learning techniques not only identify issues automatically, they also often point directly to the root cause of the issue (such as the specific SQL statement in the example below).
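The grouping idea can be illustrated with a short sketch: rank transactions by a shared component, such as the SQL statement they execute, and see whether the slow ones cluster around one statement. The records and field names here are hypothetical, and real clustering would use proper machine learning (e.g., k-means), not this simple grouping.

```python
from collections import defaultdict
from statistics import fmean

# Hypothetical transaction records; the schema is illustrative,
# not AppInternals' actual data model.
transactions = [
    {"name": "/checkout", "sql": "SELECT * FROM orders", "ms": 840},
    {"name": "/checkout", "sql": "SELECT * FROM orders", "ms": 910},
    {"name": "/search",   "sql": "SELECT id FROM items", "ms": 45},
    {"name": "/search",   "sql": "SELECT id FROM items", "ms": 52},
    {"name": "/cart",     "sql": "SELECT * FROM orders", "ms": 790},
]

def slowest_shared_component(records, key):
    """Group transactions by a shared component, ranked by mean latency."""
    groups = defaultdict(list)
    for r in records:
        groups[r[key]].append(r["ms"])
    return sorted(((fmean(v), k) for k, v in groups.items()), reverse=True)

# All the slow transactions share one SQL statement: a likely root cause.
ranking = slowest_shared_component(transactions, "sql")
print(ranking[0][1])  # → SELECT * FROM orders
```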
Another way to find things that are not behaving as they should is automatic anomaly detection. An APM solution can usually establish a normal performance baseline, ideally fine-tuned to specific transaction types, and proactively alert you to anything exhibiting abnormal behavior before end-user SLAs are breached. With a full and complete record of every single executed transaction, Riverbed APM can not only surface anomalies but also always have the drill-down data on hand for troubleshooting the very first time a problem occurs. This is especially crucial for intermittent or transient problems, which are frequent culprits in distributed, containerized cloud and microservices environments, where state changes occur on short time scales and the same user transaction can follow different paths through the infrastructure.
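A toy version of baseline-based anomaly detection might flag any response time more than a few standard deviations above a learned baseline. This is a simplified static-threshold sketch under assumed data; a production APM baseline would adapt per transaction type and over time.

```python
from statistics import fmean, stdev

def is_anomalous(baseline_ms, observed_ms, threshold=3.0):
    """Flag a response time more than `threshold` standard deviations
    above the baseline mean. A deliberately simple sketch: real
    baselining would be per transaction type and time-aware."""
    mu, sigma = fmean(baseline_ms), stdev(baseline_ms)
    return (observed_ms - mu) / sigma > threshold if sigma else False

# Hypothetical baseline of recent response times (ms) for one transaction.
baseline = [98, 102, 95, 105, 100, 97, 103, 99, 101, 100]

print(is_anomalous(baseline, 104))  # normal jitter → False
print(is_anomalous(baseline, 450))  # well outside baseline → True
```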
The big picture: extracting business and app intelligence
With the right visualization and complete data, AppInternals can immediately surface code components with global versus local impact.
Performance Graph is one such innovative visualization: it immediately shows the business transactions that are consuming the most processing time overall (on the left-hand side) and connects them to the piece of code contributing the most to that processing time (on the right-hand side).
Prism is another innovative visualization. Prism shows you the proportion of one type of transaction compared to another. One way to use prism analysis is to define a multi-step business process, such as a shopping cart workflow, to get a longitudinal view of the traditional marketing funnel. You can then see immediately if any step (such as “choose payment method”) is dropping off compared to the other steps, quickly identifying a potential problem before it impacts business results. There are many other creative applications of this visual which we’ll explore in future blogs.
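The drop-off analysis described above can be sketched as a step-to-step conversion calculation over an ordered funnel. The step names and counts below are made up for illustration; they are not output from Prism.

```python
def funnel_dropoff(step_counts):
    """Step-to-step conversion rates for an ordered business process.

    Takes (step_name, count) pairs in funnel order and returns, for each
    step after the first, the fraction of the previous step's volume
    that reached it."""
    return [(name, cur / prev)
            for (_, prev), (name, cur) in zip(step_counts, step_counts[1:])]

# Hypothetical shopping-cart workflow volumes.
steps = [
    ("view cart", 1000),
    ("choose payment method", 400),  # sharp drop-off at this step
    ("confirm order", 380),
]

for name, rate in funnel_dropoff(steps):
    print(f"{name}: {rate:.0%}")
```

A step converting at 40% while its neighbors convert above 90% stands out immediately, which is exactly the kind of comparison a proportional visualization makes obvious at a glance.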
Finally, using intuitive queries applied to transaction records with full user data, method, and SQL detail, you’re only two clicks from an answer to virtually any performance question. Perform free-form analysis using simple “and”/“or” queries assisted by auto-complete. AppInternals indexes metadata across billions of transactions, allowing you to quickly find the critical transaction or information you’re looking for, generate reports, compare historical performance, or drill down for further analysis.
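As a rough illustration of how and/or filtering over indexed transaction records might work (this toy query grammar is an assumption for the sketch, not AppInternals' actual syntax):

```python
def matches(record, query):
    """Evaluate a tiny and/or query against a transaction record.

    A query is a list of OR-alternatives; each alternative is a dict of
    field equality tests combined with AND. This grammar is a toy
    stand-in for a real indexed query language."""
    return any(all(record.get(field) == value for field, value in group.items())
               for group in query)

# Hypothetical transaction records.
txns = [
    {"user": "alice", "endpoint": "/checkout", "status": 500},
    {"user": "bob",   "endpoint": "/search",   "status": 200},
]

# (user == "alice" AND status == 500) OR (endpoint == "/search")
query = [{"user": "alice", "status": 500}, {"endpoint": "/search"}]

print([t["user"] for t in txns if matches(t, query)])  # → ['alice', 'bob']
```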
Over the next few weeks, I’ll be diving deeper into each of these areas to explore the different ways our customers are using these artificial intelligence and data visualization capabilities to identify the application optimization projects that have global impact across the application and the biggest overall impact on the business.
Application log data in context
Log analysis is another hot area for analytics. But Splunk or other big data analytics solutions are an expensive way for DevOps teams to get at application log data, such as errors or exception handling. In addition, this data is collected out of context of the end user experience for the business transaction it relates to. By contrast, integrating log analysis fully into APM means you can search across logs, see which methods they were generated by in the context of the transaction call stack and, of course, identify every business transaction and user that was impacted. Pretty cool, huh?
Stay tuned for future blogs that explore each of these analytics features in more detail!
With SteelCentral AppInternals, we’re employing cutting-edge technologies such as patented data compression and auto-tuning to capture and store every single application transaction with full call stack and metadata detail, all with negligible impact on the application, even for large enterprise-scale environments that see billions of transactions a day! To learn more about the AppInternals product, watch this on-demand webinar on “Deep Dive on User-Centric APM, Big Data and AI” or try it out for yourself in our instant access sandbox.