Layers and latency: Configuring your network for best performance


New networking generations, and the frequent revolutions in broadband performance, are always stated in terms of bandwidth. But raw throughput is not the whole story of network performance. The hidden factor is latency, and it’s a significant one. Not all applications are sensitive to it: some will be just fine if the bandwidth they require takes a moment longer to start flowing. Other applications, however, will be drastically affected, and may not function adequately at all. In this feature, we look at applications that tolerate latency, those that don’t, and which aspects of your network affect this important part of its performance.

One of the most obvious examples of an application that is not very tolerant of latency is Voice over IP, although streaming video and other streaming media are also susceptible. Voice over IP is a particularly telling example because the actual bandwidth requirements of streaming audio are not that great. A few hundred kilobits per second will give you near-CD-quality audio, and even that isn’t needed for voice calls. But a delay between sending and receiving will be perceived as poor service, even if the stream itself is continuous. It breaks the natural flow of conversation and makes voice communications awkward.
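
To put rough numbers on this, the short Python sketch below estimates the bandwidth of a single voice call; the codec, packetization interval and overhead figures are common assumptions rather than measurements from any particular network. The capacity needed is tiny, which is exactly why latency, not throughput, is usually the problem for voice.

# Rough back-of-the-envelope figures for one voice call using a 64 kbit/s codec.
codec_bitrate_kbps = 64          # assumed payload bit rate (G.711-style)
packet_interval_ms = 20          # typical packetization interval
ip_udp_rtp_overhead_bytes = 40   # 20 IP + 8 UDP + 12 RTP headers per packet

payload_bytes = codec_bitrate_kbps * 1000 / 8 * packet_interval_ms / 1000
packets_per_second = 1000 / packet_interval_ms
total_kbps = (payload_bytes + ip_udp_rtp_overhead_bytes) * 8 * packets_per_second / 1000

print(f"Bandwidth per call: ~{total_kbps:.0f} kbit/s")   # roughly 80 kbit/s

The bottleneck for voice is not capacity but time: the commonly cited guideline is to keep one-way delay below roughly 150ms, or callers start talking over each other.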

Another, less obvious example is a shared Microsoft Access database, for example one behind a stock control or customer relationship management system. Because the database is simply a file on a network share, a single transaction can generate a thousand directory lookups and even more file requests. A small amount of latency delays every single one of these requests, and can increase the time a transaction takes by orders of magnitude. A client-server SQL database isn’t as susceptible, since queries are processed on the server and only the results cross the network. In general, Web-oriented software is designed to cope with latency, as Internet connections have much greater latency than a local area network, due to the physical distances involved.
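
A rough calculation makes the amplification clear. The request count and per-request delays below are illustrative assumptions, but they show how a chatty, file-based database turns a small latency into a long wait:

# Illustrative only: per-request latency multiplied across a chatty transaction.
requests_per_transaction = 1000   # directory and file requests, per the example above
lan_latency_ms = 0.2              # assumed round trip on a switched LAN
wan_latency_ms = 30               # assumed round trip over a modest WAN or VPN link

for name, rtt in [("LAN", lan_latency_ms), ("WAN", wan_latency_ms)]:
    total_s = requests_per_transaction * rtt / 1000
    print(f"{name}: {total_s:.1f} s spent waiting on the network alone")
# LAN: 0.2 s, WAN: 30.0 s -- the same transaction, two orders of magnitude slower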

However, Web latency had a restrictive impact on what was possible over the Internet. For the first decade of the Web, the system was really only workable for delivering page-based media, although streaming audio and video began to be viable as broadband arrived. The revolution came when Web scripting code appeared that could make requests for data in the background, masking the latency of fetching it. Flash performed this task first, followed by Asynchronous JavaScript and XML (AJAX). The latter became the primary mechanism behind the application-like experience of the modern Web, ushering in the Web 2.0 era.

But the Web can still be detrimentally affected by latency. When Google switched from providing 10 search results to 30, this increased the average page load time from 0.4 seconds to 0.9 seconds, and purportedly reduced traffic and ad revenue by 20 percent. Although this wasn’t the fault of the network itself, it shows the potential real loss of revenue that a seemingly small amount of latency can cause. Similarly, Amazon’s experiments have shown that a mere 100ms increase in load time can decrease sales by one percent.

A very direct connection can be made between latency and financial results when it comes to stock trading. Stock prices change in real time, and if a particular stock is being traded heavily and latency is too high, its price may have moved in the time between a trader issuing a buy or sell order and the transaction actually taking place. This has been a factor in stock exchanges since the advent of the telegraph, when the proximity of the telegraph station to the stock exchange building could make a significant financial difference. Now, though, a few hundred milliseconds can be the determining factor, rather than a 10-minute walk.

There are applications where latency will not be something users notice or care about. A file download, for example, won’t have latency problems. If a user is pulling megabytes per second over their Internet connection, and the flow pauses momentarily before carrying on at the same speed, performance will still be rated as fast. The only issue is when an extended delay causes the download to fail entirely, but no application would survive that kind of outage anyway. Similarly, a minor delay in an HTTP Web page request might not have as much impact as the Google and Amazon examples above, and it certainly won’t make the page as unusable as a break in a VoIP call.

So what causes some networks to have more latency than others? There are generally considered to be four factors in network performance: bandwidth, latency, jitter, and loss. Bandwidth is the amount of data that arrives in a given time period, while latency is the time it takes a specific packet to arrive after it has been sent. Jitter is variation in latency from packet to packet, and loss is when some packets never arrive at all.
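
As a minimal illustration of how three of these measures relate, the sketch below derives latency, one simple expression of jitter, and loss from a list of per-packet round-trip times; the sample values are made up, and None stands for a packet that never arrived:

# Hypothetical ping results in milliseconds; None means the packet was lost.
samples_ms = [21.0, 23.5, 19.8, None, 22.1, 58.0, 20.4]

received = [s for s in samples_ms if s is not None]
latency_ms = sum(received) / len(received)                                   # average latency
jitter_ms = sum(abs(a - b) for a, b in zip(received, received[1:])) / (len(received) - 1)
loss_pct = 100 * (len(samples_ms) - len(received)) / len(samples_ms)

print(f"latency ~{latency_ms:.1f} ms, jitter ~{jitter_ms:.1f} ms, loss {loss_pct:.0f}%")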

A network is made up of two fundamental elements: the wires, and the interfaces connecting them. While wiring can affect the maximum throughput available, and a signal takes longer to travel down a very long cable, it is the interfaces that cause most of the latency. The wiring is merely a passive conduit, with a uniform impact on all traffic; the interface must take each packet of data and redirect it along another wire according to the destination address it carries. This process, which is analogous to a postal sorting office, takes time, as anyone who has waited all day for their letters to arrive will agree.
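
The cable’s own contribution is easy to estimate. Assuming, as a common rule of thumb, that signals propagate at roughly two-thirds the speed of light in copper or fibre, a short calculation shows why the wiring only matters over long distances (the distances below are illustrative):

# Propagation delay in the medium itself, assuming ~0.67c signal speed.
speed_m_per_s = 2e8   # roughly two-thirds the speed of light

for label, metres in [("100 m office run", 100), ("transatlantic fibre", 5_600_000)]:
    delay_ms = metres / speed_m_per_s * 1000
    print(f"{label}: {delay_ms:.4f} ms one way")
# 100 m: 0.0005 ms -- negligible; 5,600 km: 28 ms -- unavoidable physics

Inside a building the wire is effectively instantaneous; across an ocean, physics alone accounts for tens of milliseconds before any interface has touched a packet.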

So one way to reduce latency is to ensure that all the interfaces are as fast as possible. A switch, bridge or router is rated according to the speed of its backplane, usually quoted in Gbit/s. The number of packets per second this equates to depends on the size of the packets. Even if all your devices are connected to the network at Gigabit Ethernet line speed, a slow switch could mean that only a few devices can operate at this speed before the switch itself becomes a bottleneck, introducing both latency and bandwidth limitations.
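
To see why frame size matters, a quick calculation converts a Gigabit Ethernet line rate into packets per second for a few common frame sizes (overheads such as preamble and inter-frame gap are ignored for simplicity):

# How many frames per second a single Gigabit port can carry, by frame size.
line_rate_bps = 1_000_000_000   # Gigabit Ethernet

for frame_bytes in (64, 512, 1500):
    pps = line_rate_bps / (frame_bytes * 8)
    print(f"{frame_bytes:>5}-byte frames: ~{pps:,.0f} packets/s per port")
# 64-byte frames demand roughly 23x the forwarding rate of 1500-byte frames

A switch that comfortably forwards large frames at line rate may fall behind, queueing packets and adding latency, when the traffic is dominated by small ones.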

Another key element here is the type of switch used to move data between connections. A store-and-forward switch waits for an entire frame to arrive before sending it on to its destination. A cut-through switch examines just the first few bytes of a frame to find the destination, then forwards it immediately. In theory, cut-through adds considerably less latency than store-and-forward. But it doesn’t check a frame’s validity, so it will forward a corrupt frame that a store-and-forward switch would drop, which has implications for data integrity, although the receiving host will still discard the bad frame on arrival. Most switches use store-and-forward, but sophisticated cut-through switches can be preferable when extremely low latency is a necessity.
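
The difference can be approximated as serialization delay: a store-and-forward switch has to clock the whole frame in before it can send anything on, while a cut-through switch only waits for the header. The rough per-hop comparison below, at Gigabit speed, ignores lookup and queueing time:

# Approximate per-hop forwarding delay: whole frame vs. header only.
line_rate_bps = 1_000_000_000
header_bytes = 14                # Ethernet header read by a cut-through switch

for frame_bytes in (64, 1500):
    store_fwd_us = frame_bytes * 8 / line_rate_bps * 1e6
    cut_through_us = header_bytes * 8 / line_rate_bps * 1e6
    print(f"{frame_bytes}-byte frame: store-and-forward ~{store_fwd_us:.2f} us, "
          f"cut-through ~{cut_through_us:.2f} us per hop")

A few microseconds per hop is irrelevant for Web browsing, which is one reason store-and-forward dominates, but it adds up in environments such as high-frequency trading.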

Networks are usually conceptualized as having seven layers, with the physical switching just described happening at layer two and the network layer, responsible for packet forwarding and routing between segments, sitting at layer three. However, it is the layer above that, layer four, where the protocols run that most affect the experience of latency. The Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) operate at this level. While Web pages are served over TCP, streaming media is often carried over UDP, because the protocol doesn’t require prior communication to set up a connection before data can flow, so fewer round trips are needed.
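
At the socket level the difference looks roughly like the sketch below; the addresses and ports are placeholders from the documentation range, so it won’t reach a real server, but it shows that a UDP datagram carries data immediately while TCP spends at least one round trip on connection setup first.

import socket

# UDP: no connection setup, the very first packet already carries data
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"audio frame", ("192.0.2.10", 5004))

# TCP: connect() costs a full round trip (the handshake) before sendall() can do anything
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp.connect(("192.0.2.10", 80))
tcp.sendall(b"GET / HTTP/1.1\r\nHost: example\r\n\r\n")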

As with pretty much every other aspect of computing, mobile has rewritten the networking rulebook again. A local area network may only experience noticeable latency issues once it reaches a certain size or becomes a WAN, but Wi-Fi and 3G have significantly higher latency even under optimal conditions. An HSPA 3G connection may have a latency of 100ms, while 4G LTE can reduce this to less than 20ms; cutting latency was one of the aims of the new standard, alongside the obvious headline boost in throughput. Wi-Fi sits somewhere between 3G and a wired network when it comes to latency, but interference means jitter is much higher than on Ethernet, although 5GHz 802.11n or 802.11ac has reduced latency compared to the 2.4GHz variety.

However, while a network with reduced latency across the board would fix these issues universally, that will probably not be cost-effective, or even possible, particularly with wireless connections. Instead, it will be necessary to implement a quality of service (QoS) regime. This can often be done on a switch or router: different levels of service are defined, and these are then applied to protocols and applications.

The QoS regime examines packets to see which protocol or application they belong to, and then applies the appropriate rule, giving each packet lesser or greater priority. For example, voice communications and email can be given higher priority than Web browsing, so VoIP runs smoothly and messages get through promptly, even while employees are checking Facebook.
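
Classification and prioritization are usually configured on the switch or router itself, and the exact commands vary by vendor, but applications can also mark their own traffic so the network can recognize it. The sketch below, with a placeholder address, tags a UDP socket’s packets with the DSCP “Expedited Forwarding” class (46), commonly used for voice; whether the marking is honoured depends on the QoS policy of the switches and routers along the path.

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
dscp_ef = 46   # Expedited Forwarding, the per-hop behaviour typically used for voice
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, dscp_ef << 2)  # DSCP occupies the top 6 bits of the TOS byte
sock.sendto(b"voice payload", ("192.0.2.20", 5004))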

Different applications are more or less susceptible to latency, and there are numerous factors affecting this. Choosing an appropriate type of network, and selecting appropriately fast switching and routing gear, are only some of the ways to cater for latency-sensitive applications. Configuring quality of service, so that latency-sensitive applications receive the attention they need while applications that are less fussy are downplayed, will mean the users of your network get the service they want and need.

This article originally appeared on ITProPortal.com.
