
Financial Services and the Sharks: Swimming Together


Financial exchange companies live and die by their ability to deliver the right information to the right places at the right time. Monitoring that delivery involves tasks such as:

  • Ensuring customer orders are delivered and confirmations returned
  • Identifying unidirectional packet delivery jitter, i.e., looking for congestion or packet buffering along the packet delivery route
  • Market data gap detection, i.e., looking for holes in the delivery of market data to customers

Ensuring customer orders are delivered and confirmations returned

As customers place orders, those orders travel from the customer through to an exchange to be executed. If an order does not reach the exchange, or is delayed even by a short amount of time, the exchange's ability to fill the order at the quoted price may be compromised.

There are two specific types of analysis you can do here:

  1. TCP-level order delivery – confirming that the order packet(s) were delivered and, if they were not, whether a retransmission arrived in time to execute the order
  2. TCP-level confirmation delivery – once an order has been successfully executed, confirming that the confirmation packet was delivered or determining whether we are still waiting for a retransmission

Both of these analyses look at the same kind of data: ensuring that TCP packets are delivered from the sender to the recipient and, when they are not, finding the retransmitted packet(s). The longer the delay between the original packet and the retransmission, the greater the chance that an order will miss its window or will not be properly confirmed – potentially causing a replacement order to be placed incorrectly.

When monitoring packet delivery, timestamp granularity is extremely important. If your timestamps have a granularity of only one second, two packets that actually arrived a few milliseconds apart may appear to have arrived at the same instant – or a full second apart, depending on where they fall relative to the clock tick. That loss of precision makes the analysis far less accurate, and the ability to identify out-of-order or retransmitted packets drops off drastically. With millisecond or better granularity, it becomes much easier to generate accurate and useful analysis of the data.
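The granularity effect above can be sketched in a few lines of Python. This is a hypothetical illustration, not NetShark's implementation: packets are simple `(seq, timestamp)` pairs, and the `granularity` parameter truncates timestamps to mimic a capture device with a coarse clock.

```python
# Hypothetical sketch: how coarse timestamps distort retransmit-delay
# measurements. Packet records are (seq, timestamp_seconds) tuples;
# the names and the sample capture are illustrative.

def retransmit_delays(packets, granularity=0.0):
    """Return the delay between the first and repeated sighting of each seq.

    If granularity > 0, timestamps are truncated to that resolution,
    mimicking a capture device with a coarse clock.
    """
    first_seen = {}
    delays = []
    for seq, ts in packets:
        if granularity:
            ts = ts - (ts % granularity)   # truncate to clock resolution
        if seq in first_seen:
            # Same sequence number seen again: a retransmission.
            delays.append(round(ts - first_seen[seq], 6))
        else:
            first_seen[seq] = ts
    return delays

# Original packet at t=10.0000 s, retransmission 3 ms later.
capture = [(1000, 10.0000), (1000, 10.0030)]

print(retransmit_delays(capture))                   # fine-grained clock: [0.003]
print(retransmit_delays(capture, granularity=1.0))  # one-second clock: [0.0]
```

With a one-second clock, the 3 ms retransmit delay vanishes entirely – exactly the loss of accuracy described above.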

Identifying unidirectional packet delivery jitter

For certain types of data, consistent delivery of network packets is extremely important. The most obvious example is unified communications (UC): if there is a large amount of variation in the time it takes packets to travel from host to host, a call or conference can sound choppy and be difficult to understand. While UC may be the most commonly cited case, the delivery of packets in trading environments, such as a stock exchange, is another place where packet jitter can be a significant problem.

By looking at the difference between the timestamp recorded when a packet was captured on the network and the timestamp the financial protocol places in the packet (such as a FIX UTCTimestamp field like SendingTime), we can learn some interesting things. If everything is working the way it should, the difference between the two timestamps should be identical from packet to packet: if there is a 23ms delay for packet number one, then the delay between timestamps for every other packet should be 23ms as well.

As long as the difference between the timestamps is consistent, the packets are being delivered in the proper order and the incoming data is reliable. If the difference varies from packet to packet, or even one or two packets are outliers, it indicates a problem with buffering or congestion: somewhere along the path, a device is becoming overwhelmed and buffering some of those packets, delaying their delivery. If you see this happening frequently, it points to a potential capacity problem on your network.
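The constant-delta check above can be sketched as follows. This is an illustrative example, not NetShark's implementation: the dictionary keys (`sent_ms` for the protocol timestamp, `capture_ms` for the capture timestamp) and the sample feed are made up.

```python
# Hypothetical sketch of the jitter check described above: compare each
# packet's capture timestamp with the timestamp the sender embedded in
# the payload, and flag deviations from the baseline delta.

def delay_outliers(packets, tolerance_ms=1.0):
    """Flag packets whose (capture - sent) delta drifts from the baseline.

    A constant delta means consistent delivery; deviations indicate
    buffering or congestion somewhere along the path.
    """
    baseline = packets[0]["capture_ms"] - packets[0]["sent_ms"]
    outliers = []
    for i, p in enumerate(packets):
        delta = p["capture_ms"] - p["sent_ms"]
        if abs(delta - baseline) > tolerance_ms:
            outliers.append((i, delta - baseline))  # (index, extra delay in ms)
    return outliers

feed = [
    {"sent_ms": 0,  "capture_ms": 23},   # baseline delay: 23 ms
    {"sent_ms": 10, "capture_ms": 33},   # still 23 ms -- healthy
    {"sent_ms": 20, "capture_ms": 51},   # 31 ms -- 8 ms of extra buffering
]
print(delay_outliers(feed))  # [(2, 8)]
```

A frequent stream of such outliers would be the capacity warning described above.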

Riverbed SteelCentral NetShark is able to analyze the traffic on the network and look inside the packet contents to pull out the proper timestamp information. Even though different organizations may use slight variations on a protocol (e.g., two different exchanges may implement FIX slightly differently), the header information is consistent, so these small variations are not an impediment to this sort of analysis.

Market data gap detection

There are many producers and resellers of market data. These providers install large pipes to absorb data from the various sources and then resell that information to the consumers who use it in their businesses. The benefits of this approach include the ability to order exactly the data you want for the users who need it, and the savings from not having to aggregate the data yourself.

When providers send market data to their customers there are a variety of ways that data can be transmitted including:

  • UDP multicast transmission
  • TCP unicast transmission

Each of these approaches has benefits and costs. The most efficient approach from the provider's standpoint is UDP multicast packet transmission. A single multicast stream is sent, and those who subscribe to the data feed receive the data and process it. This approach is relatively easy to implement and relatively inexpensive from both a network and an operational-management standpoint: a single stream uses little bandwidth and does not require extensive per-customer management.

Using a TCP-based unicast approach is the next best solution. While not as efficient as UDP multicast, TCP is a more reliable protocol, so the data is guaranteed to reach the client. Unfortunately, there are additional configuration and management costs, and if a provider has many customers all using TCP unicast streams, the bandwidth costs can rise rapidly.

Based on this, why would a provider ever use a TCP unicast transmission stream as opposed to UDP multicast? The answer comes down to the provider's ability to guarantee delivery of the data to its customers. If the UDP multicast stream is working smoothly and no packets are being dropped, everything is great: the customer gets what they have paid for and everyone is happy. Unfortunately, as soon as packets start getting lost, the customer is missing data, and the value of the delivery drops off drastically, leading to unhappy customers and unhappy providers.
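The bandwidth trade-off is easy to see with a back-of-the-envelope calculation. The feed rate and customer count below are made-up illustrative numbers, not figures from any real exchange:

```python
# Back-of-the-envelope sketch of why unicast egress costs grow with the
# customer count while multicast stays flat. All numbers are illustrative.

FEED_MBPS = 50  # hypothetical market-data feed rate, in Mbps

def egress_mbps(customers, multicast=True):
    """Provider-side bandwidth: one stream for multicast, N copies for unicast."""
    return FEED_MBPS if multicast else FEED_MBPS * customers

print(egress_mbps(200, multicast=True))   # 50     -- one shared stream
print(egress_mbps(200, multicast=False))  # 10000  -- 200 separate copies
```

Serving 200 customers over unicast multiplies the provider's egress two hundredfold, which is why providers prefer multicast whenever the network delivers it cleanly.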

You can analyze the market feed data and identify issues by looking for gaps in the information contained within the packets. Both the data provider and the customer have an interest in finding these gaps. When a customer is not able to receive its stream reliably over UDP, the customer may request (or require) that the provider supply a TCP unicast stream. As more and more customers have issues, the cost of providing additional TCP unicast streams rises and greatly diminishes profits. From the customer's perspective, gaps mean they are not only failing to get a service they paid for, but may also be missing crucial information that affects their ability to do business and their bottom line.

SteelCentral NetShark provides the ability to do real-time market data gap analysis by analyzing the market data protocols (such as ITCH) and looking for breaks in the message sequences. Because a single packet can contain multiple messages, it is important to look at the internal header information so you know exactly how many messages were lost, in addition to the number of packets lost. Even though many providers compress the actual market data (quite successfully, with up to 90% reduction), the headers themselves are not compressed, so NetShark is still able to analyze the packets.
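The message-level gap detection described above can be sketched as follows. This is an illustrative example, not NetShark's implementation or the real ITCH wire format: each packet is modeled as a starting sequence number plus a message count, which is enough to count lost messages rather than just lost packets.

```python
# Hypothetical sketch of sequence-gap detection. Market-data feeds carry a
# sequence number and a message count per packet; the field names below
# are illustrative, not the actual protocol layout.

def find_gaps(packets):
    """Return (first_missing_seq, messages_lost) for each break in the feed.

    Each packet covers messages [seq, seq + count); because one packet can
    carry many messages, gaps are counted in messages, not packets.
    """
    gaps = []
    expected = None
    for pkt in packets:
        if expected is not None and pkt["seq"] > expected:
            gaps.append((expected, pkt["seq"] - expected))
        expected = pkt["seq"] + pkt["count"]
    return gaps

feed = [
    {"seq": 1,  "count": 5},   # messages 1-5
    {"seq": 6,  "count": 3},   # messages 6-8 -- contiguous, no gap
    {"seq": 14, "count": 2},   # messages 9-13 are missing
]
print(find_gaps(feed))  # [(9, 5)]
```

Here a single missing packet cost five messages – the distinction the paragraph above draws between packet loss and message loss.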

While being able to generate a graph or table showing gaps in market data is useful, even more useful is the ability to generate an alert when such an event occurs. Fortunately, this can be accomplished by taking advantage of NetShark's and Packet Analyzer's ability to create watches. Watches look for specific occurrences (such as a gap in market data) and can alert an administrator or manager that a problem has occurred.