Comparing Four End User Experience Monitoring Methods
It’s easy to see why End User Experience Monitoring has become so prominent. Gartner’s 2016 Magic Quadrant for APM lists Digital Experience Monitoring as the first of three functional dimensions of Application Performance Monitoring. This makes sense. After all, developers and IT Operations teams need to consider how application parameters like availability, latency, response time, and usability appear to the end user. And they must determine how many end users are affected when they troubleshoot application problems.
So, it’s no surprise that every Application Performance Management vendor now touts End User Experience Monitoring as part of its solution.
The differences in approaches to End User Experience Monitoring
The result is a confusing End User Experience Monitoring market. Understand the differences between these four approaches to EUEM to ensure you choose a product that addresses your needs.
1. Synthetic monitoring
Synthetic monitoring runs a script that simulates users’ interactions with key applications. IT programs the scripts to run from various locations at regular intervals. For this reason, some refer to this method as “robotic testing.” Synthetic monitoring products proactively identify major execution or availability issues that could affect user experience.
This approach determines application baselines and identifies availability issues, even for applications that are not used around the clock. It also works well for applications that access third party services using APIs.
Creating and maintaining the scripts on which synthetic monitoring relies can be time-consuming. More importantly, synthetic monitoring only emulates user experience. It does not measure actual end user experience.
So, while synthetic monitoring can identify application performance issues in general, it cannot identify or help resolve any particular end user’s complaint. This limitation presents a problem for the service desk: when a user calls in with a problem, synthetic monitoring tells the agent nothing about what that user was actually doing or experiencing.
3. Real User Monitoring
Real User Monitoring (RUM) relies on network-based packet capture for End User Experience Monitoring. This method collects response time and error metrics that affect end user experience from transactions on the wire, whether HTTP/HTTPS or lower-level protocols such as TCP. Unlike synthetic monitoring, RUM collects metrics that reflect actual (or real) end user experience. Hence the name.
To use this approach, IT must identify the optimum points in the network at which to aggregate and filter traffic for analysis. Although hardware-based approaches become more expensive as network speeds increase, the packet aggregation and brokering equipment can also be used for security and network management.
While RUM solutions collect data that relate to end user experience, they do not provide visibility into the actual screen render time within the browser or application. A web or network request traverses the wire in a millisecond. But it can take 10 seconds or more for the screen to render if there is heavy client-side processing or a large volume of data.
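The gap between wire time and render time can be made concrete with navigation-timing-style data. The Python sketch below is purely illustrative: the field names loosely follow the W3C Navigation Timing model, and the values are invented to match the scenario above (a fast network transfer followed by seconds of client-side rendering).

```python
# Hypothetical navigation-timing sample, in milliseconds since navigation
# start. Field names loosely follow the W3C Navigation Timing model;
# the values are invented for illustration.
timing = {
    "request_start": 5,
    "response_end": 120,   # last byte arrives on the wire
    "dom_complete": 9800,  # heavy client-side rendering finishes much later
}

wire_ms = timing["response_end"] - timing["request_start"]
render_ms = timing["dom_complete"] - timing["response_end"]
print(f"network time: {wire_ms} ms, client-side processing: {render_ms} ms")
```

A RUM tool watching the wire sees only the first number; the seconds of client-side processing that dominate what the user actually perceives are invisible to it.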
4. Device-based End User Experience Monitoring
Device Performance Monitoring (DPM) solutions address part of what’s required for End User Experience Monitoring. DPM products use lightweight agents to monitor the health and performance of end users’ PCs, laptops, and virtual desktops. They track operating system metrics like resource utilization and health. Some DPM products can also inventory installed applications and detect app crashes. These metrics certainly relate to end user experience, but they don’t provide any visibility into how end users are actually experiencing the applications they use.
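As an illustration only, a lightweight agent’s sampling step might look like the following Python sketch, built on stdlib calls and assuming a Unix host (the function name and choice of metrics are hypothetical, not any DPM product’s API):

```python
# Hypothetical device-health sample, as a lightweight DPM agent might take
# on each polling interval. Unix-only: os.getloadavg is not on Windows.
import os
import shutil


def sample_device_health() -> dict:
    """Collect a few device-health metrics from the local machine."""
    load1, _load5, _load15 = os.getloadavg()  # CPU load averages
    disk = shutil.disk_usage("/")             # root filesystem usage
    return {
        "cpu_load_1m": load1,
        "disk_used_pct": round(100 * disk.used / disk.total, 1),
    }
```

An agent would take samples like this on a schedule and ship them to a central service; note that nothing here says anything about what the user sees on screen, which is exactly the gap described above.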
Device-based End User Experience Monitoring products like SteelCentral Aternity take the next step, by doing all of this and more.
Aternity monitors the performance of applications as they render on the screen of the user’s device. Further, Aternity monitors the performance of business activities performed by the end user. These are company-defined user interactions with applications in the context of a business process, such as “look up a patient record,” “process a claim,” or “check inventory.” Aternity automatically generates baselines for what constitutes acceptable performance for these activities and alerts when performance deviates from baseline. Unlike DPM products, Aternity presents a true picture of end user experience by correlating three streams of data: device health and performance, application performance as seen by the end user, and user behavior.