Elsewhere on the internet this week, I was involved in a discussion about the value of WAN optimization and whether it's needed if you have "enough bandwidth." Through the discussion it became apparent that the issues are more subtle than expected, and I thought the discussion might benefit from a more detailed investigation with experiments. First I'll summarize the line of thought I was trying to communicate, then I'll go into a "proof by example."
At any given point, communication across a network is bottlenecked by something. It may be bandwidth, but if it's not, it will be the latency due to application-level protocol chattiness, the latency effects on TCP, or TCP flow dynamics (mostly at high transfer rates), assuming it's not bottlenecked by the server or client, of course! Now, in the Cascade product line we have tools to help you suss out which it is, but let's ignore them for now. You can view the bottlenecks like the flow in a river:
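To make the "something is always the bottleneck" idea concrete, here is a minimal back-of-envelope sketch (not a Riverbed tool, and all numbers are hypothetical) that estimates which of three limits governs a transfer: raw bandwidth, protocol chattiness (round trips times RTT), or the TCP window cap:

```python
# Rough model of what bottlenecks a transfer. All values are hypothetical.
# Transfer time is governed by whichever limit is slowest:
#   - raw bandwidth:       payload_bits / link_rate
#   - chattiness:          round_trips * RTT
#   - TCP window:          payload_bytes / (window / RTT), since a single
#                          connection's throughput is capped near window/RTT.

def transfer_time(payload_bytes, link_bps, rtt_s, round_trips, window_bytes):
    limits = {
        "bandwidth": payload_bytes * 8 / link_bps,
        "chattiness": round_trips * rtt_s,
        "tcp_window": payload_bytes / (window_bytes / rtt_s),
    }
    bottleneck = max(limits, key=limits.get)
    return limits[bottleneck], bottleneck

# A 10 MB file over a 100 Mb/s link with 100 ms RTT, a chatty protocol
# making 200 round trips, and a 64 KB TCP window:
seconds, why = transfer_time(10e6, 100e6, 0.100, 200, 64 * 1024)
# Here the link itself would only need 0.8 s, but chattiness costs 20 s,
# so buying a fatter link changes nothing until the round trips go away.
```

The point the model makes is the same as the river diagram: remove one constraint and the next-slowest one immediately takes over.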
Bottlenecks to transfer data, excerpted from Private Cloud Computing: Consolidation, Virtualization, and Service-Oriented Infrastructure with permission of the authors. See the book for a more detailed explanation of some of these concepts.
Now, suppose you were first bottlenecked by Limited Bandwidth. You dynamite that curve in the stream, and it flows free, but now one of the other constraints takes over. In the real world, say you did this by buying a big fat link, so you're in the position the poster in the other thread raised, roughly restated as: what is the value of WAN optimization if you have plenty of bandwidth? The answer, of course, is to break down the other barriers.
To prove this out, I set up a network with a client and server separated by a relatively fat pipe with a lot of latency and two Riverbed Steelhead appliances, and did a simple Microsoft Windows file copy across it. With no optimization, it was slow, as expected: around 10 seconds. I then enabled only compression on the Steelheads and, as desired, saw a minimal improvement, only 4% faster. This showed that bandwidth wasn't the problem, as the question requires. For the second test, I disabled all bandwidth savings but enabled application-level optimization and TCP optimization. The transfer was immediately 2.25x faster. But clearly we were now being held back by something else, and suspecting it was bandwidth, I enabled deduplication and compression; the warm optimized transfer was an additional 2x faster.
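Why does a fat pipe with high latency behave this way? One contributor is that a single TCP connection's throughput is capped at roughly window / RTT, no matter how fast the link is. A quick sketch of that arithmetic (the window and RTT values are illustrative, not the ones from my lab setup):

```python
# A single TCP connection cannot sustain more than ~window/RTT of throughput,
# regardless of link capacity. Illustrative numbers, not lab measurements.

def tcp_max_throughput_bps(window_bytes, rtt_s):
    """Upper bound on one TCP connection's throughput in bits per second."""
    return window_bytes * 8 / rtt_s

# A classic 64 KB window over a 100 ms round-trip WAN:
cap_bps = tcp_max_throughput_bps(64 * 1024, 0.100)
# That works out to about 5.2 Mb/s, so on a 100 Mb/s (or 1 Gb/s) link the
# connection leaves almost all of the bandwidth idle. This is the kind of
# latency-side barrier that TCP optimization attacks, as opposed to the
# bandwidth-side barriers that compression and deduplication attack.
```

This is why, in the experiment, compression alone barely helped while TCP and application-level optimization immediately did.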
What you can conclude from this is that the test was initially not constrained by bandwidth; after removing much of the application/TCP latency-related slowdown, the transfer was faster, but it suddenly became constrained by bandwidth. Enabling the bandwidth-streamlining features then overcame that as well, netting out a 5.2x speedup for the test. This is common, of course, which is why optimizing both is the default Steelhead configuration. Hopefully this answered the question, and it was fun to get my hands dirty in the lab again. Make sense?
©2015 Riverbed Technology. All rights reserved. Riverbed and any Riverbed product or service name or logo used herein are trademarks of Riverbed Technology. All other trademarks used herein belong to their respective owners. The trademarks and logos displayed herein may not be used without the prior written consent of Riverbed Technology or their respective owners.