
Virtualizing your Data Collection

Virtualization has come a long way in the last forty-plus years. From humble beginnings on mainframes and minicomputers such as the IBM CP-40, through the early days of desktop virtualization with tools like Quarterdeck's DESQview, to today's entirely virtualized data centers and companies like VMware, whose customers have deployed more than 50 million virtual machines, things have changed drastically. As more virtualized servers are deployed across the organization, the ability to monitor the traffic on those servers becomes even more critical.

While the need to monitor virtual environments is not much different from the need to monitor physical environments, the tools available to do so can differ widely. When you had multiple physical servers, each responsible for a single task, it was easy to monitor the communications between them: SPANs, TAPs, or flow data from the links between those servers provided all the information you needed. With virtualization, you can have two, five, ten, or more servers running on a single physical appliance. How much of the communication between those servers happens entirely inside the physical host? Being able to monitor what is happening within that virtual host can be critically important for troubleshooting, monitoring, and capacity planning.

Getting useful information out of your virtualized environment and feeding it into a single monitoring solution that can aggregate data from both your virtual and physical infrastructure is important. Being able to get that data back out with useful analysis and metrics is even more critical.

The Riverbed SteelCentral application-aware NPM suite introduces new functionality that greatly expands the scope of your virtual machine monitoring, providing visibility further down into the virtualization stack. Instead of hoping the data you need is sent outside a physical host, or relying on the high-level metrics provided by flow data from virtual switches, the combination of SteelCentral NetShark Virtual Edition and SteelCentral Flow Gateway (either virtual or physical) lets you gather data from more places while still monitoring it all from a central location.

Deploying NetShark-V on your virtualization servers generates detailed metrics and metadata (called SteelFlow-Net) on the conversations happening within a virtualized host. This information can then be sent to a Flow Gateway, where it is aggregated, de-duplicated, compressed, and securely forwarded to a SteelCentral NetProfiler for detailed analysis.
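The de-duplication step can be pictured with a short sketch. The snippet below is an illustrative model, not Riverbed code: the `FlowRecord` class, the `aggregate_flows` function, and the five-tuple key are assumptions made for the example. It collapses flow records that describe the same conversation, so a flow reported by two observation points (say, two NetShark-V instances that both saw it) is not counted twice before being forwarded upstream.

```python
from dataclasses import dataclass

# Hypothetical flow record: a five-tuple plus counters, loosely
# modeled on the fields found in NetFlow-style exports.
@dataclass
class FlowRecord:
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int
    protocol: str
    bytes: int
    packets: int

def aggregate_flows(records):
    """Collapse records that describe the same conversation.

    Keying on the five-tuple lets overlapping reports of one flow
    merge into a single entry; keeping the larger counter values
    avoids double-counting traffic seen at multiple points.
    """
    merged = {}
    for r in records:
        key = (r.src_ip, r.dst_ip, r.src_port, r.dst_port, r.protocol)
        if key in merged:
            m = merged[key]
            m.bytes = max(m.bytes, r.bytes)
            m.packets = max(m.packets, r.packets)
        else:
            merged[key] = FlowRecord(*key, r.bytes, r.packets)
    return list(merged.values())

# Two sources report the same HTTPS flow; a third flow is unique.
reports = [
    FlowRecord("10.0.0.5", "10.0.0.9", 49152, 443, "tcp", 1200, 10),
    FlowRecord("10.0.0.5", "10.0.0.9", 49152, 443, "tcp", 800, 6),
    FlowRecord("10.0.0.7", "10.0.0.9", 49153, 80, "tcp", 500, 4),
]
summary = aggregate_flows(reports)  # two records remain
```

A real gateway would also merge bidirectional halves of a conversation and compress the result before sending it on; this sketch shows only the keying-and-merge idea.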

Figure 1: Screenshot of the Flow Gateway overview page showing various data sources

By deploying NetShark-V on your virtualization servers, you get the benefits of extremely deep analysis, including Layer 7 application identification, performance and end-user experience metrics, and packet capture and analysis, combined with centralized reporting, without the downsides of relying on the less detailed views provided by other flow data. Sending that SteelFlow-Net data to a local Flow Gateway for pre-processing saves bandwidth while raising the number of sources from which packet data can be collected from a previous limit of 100 devices into the thousands.
