Ethernet WAN: The new private cloud accelerator

The cloud has been one of the enablers of the Web 2.0 revolution.

When web pages became more like applications around the turn of the millennium, keeping data remotely rather than locally became a much more viable option, at least in terms of the ease with which it could be accessed.

But another element was required – ubiquitous availability. Fortunately, virtualization was becoming increasingly sophisticated around the same time, and this provided the server flexibility for dependable cloud hosting.

But one final piece of the puzzle still regularly causes problems. For seamless use of cloud-based services, reliable connectivity is a necessity as well. In this feature, we look at how an Ethernet WAN can solve this issue, particularly for private cloud applications.

Cloud formations

A large proportion of the tasks we perform, both for business and personal activities, now rely on some form of cloud technology. However, companies are increasingly looking to the private cloud, or hybrid public-private cloud solutions.

The need for customization, greater security, and compliance requirements all make the private cloud a more attractive option than the public variety. A report from Technology Business Research (TBR) recorded that whilst the public cloud has been growing at an already phenomenal 20 percent year-on-year, a trend set to continue for some years to come, the private cloud is expected to grow even faster, at 40 to 50 percent.

The private cloud market has burgeoned from $8 billion (£5.2 billion) in 2010 to $32 billion (£20.8 billion) in 2013, and it is expected to reach $69 billion (£45 billion) by 2018. According to TBR, 70 percent of businesses with 1,000+ employees are looking at managed private cloud services, whilst the remainder will be going down the self-built route. The idea of keeping mission-critical data and applications in a ubiquitously available location has enormous potential for business productivity.

But with any private cloud, whether entirely private or hybrid, it is absolutely key that the locations hosting the various elements of the provision are connected with the fastest, lowest-latency connections available.

In the past, the Wide Area Network (WAN) has been based on technologies that are significantly different to the Local Area Network (LAN), and also significantly slower. Frame relay, ATM, and leased lines have been the main traditional connection options for the WAN.

But none comes anywhere near the performance of the LAN, and achieving even vaguely comparable speeds is likely to be prohibitively expensive for all but the largest enterprises.

ATM running over SDH/SONET can operate at up to 38.5Gbits/sec, but connections of that kind are in the realm of telecommunications providers only. An individual company’s WAN connection on a legacy system is likely to be a few hundred megabits per second at most, and probably much less.

The varying needs of cloud services

The poor performance of a traditional WAN has major implications. Some private cloud services are more susceptible to a slow connection than others, and in different ways.

Cloud storage, for example, won’t be particularly sensitive to latency. Users won’t be too upset if a file transfer takes a few milliseconds longer to start than it did the last time they accessed the service.

However, bandwidth could be a different matter, particularly if the files stored are large. If files take too long to download, users will tend to keep local copies on multiple machines, which can then get out of synchronization, negating the value of the cloud storage service, because users will no longer have ubiquitous access to the latest versions of the files. They may end up using the wrong version, which could entail extra expense to remedy.
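
As a rough illustration of why bandwidth rather than latency dominates the cloud storage experience, the short Python sketch below estimates the time to transfer a large file over links of different speeds. The file size, latency, and link rates are purely illustrative assumptions.

```python
# Rough illustration: for a large file, transfer time is dominated by the
# bandwidth term, not the latency term. All figures are assumed for
# illustration only.

FILE_SIZE_MB = 500   # assumed file size in megabytes
LATENCY_MS = 20      # assumed connection latency in milliseconds

def transfer_time_seconds(file_size_mb: float, link_mbps: float,
                          latency_ms: float) -> float:
    """Simplified model: setup latency plus file size divided by throughput."""
    size_megabits = file_size_mb * 8
    return (latency_ms / 1000.0) + (size_megabits / link_mbps)

for link_mbps in (10, 100, 1000):   # assumed WAN link speeds, in Mbit/s
    t = transfer_time_seconds(FILE_SIZE_MB, link_mbps, LATENCY_MS)
    print(f"{link_mbps:>5} Mbit/s link: ~{t:.1f} s to transfer {FILE_SIZE_MB} MB")
```

Under these assumptions the 20ms of latency is negligible next to the minutes saved by a faster link, which is why large-file storage workloads reward bandwidth above all else.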

Audiovisual applications, on the other hand, can require both significant bandwidth and low levels of latency. Video conferencing will particularly necessitate both when more than a couple of people are involved simultaneously.

Since every participant will be sending a stream as well as receiving everyone else’s, the potential for slowdown soon adds up. Video is incredibly hungry for bandwidth: in 2014, 78 percent of Internet traffic was video, and Cisco Systems Inc. predicts this will rise to 84 percent by 2018.
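
To see how quickly multi-party video mounts up, the sketch below estimates per-participant bandwidth for a conference in which each person sends one stream and receives a stream from everyone else. The per-stream bitrate is an assumed figure for illustration only.

```python
# Illustrative estimate of the bandwidth a video conference demands when each
# participant sends one stream and receives a stream from every other
# participant. The per-stream bitrate is an assumed figure, not from the text.

STREAM_MBPS = 1.5   # assumed bitrate of a single video stream, in Mbit/s

def per_participant_mbps(participants: int,
                         stream_mbps: float = STREAM_MBPS) -> float:
    """Bandwidth per participant: one stream sent plus one received per peer."""
    sent = stream_mbps
    received = (participants - 1) * stream_mbps
    return sent + received

for n in (2, 4, 8):
    print(f"{n} participants: ~{per_participant_mbps(n):.1f} Mbit/s per participant")
```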

Although this is primarily fuelled by consumption of video for entertainment, its use in business is increasing too. A survey by Wainhouse Research showed that 94 percent of businesses found video conferencing increased productivity, with a similar number saying it improved the impact of discussions, facilitated decision-making, and reduced travel costs.

So video is set to be an increasing burden on the WAN connection. Although real-time voice applications are nowhere near as bandwidth-hungry as video, they are even more susceptible to latency. Just a few drop-outs can render a conversation unintelligible.

Other software as a service (SaaS) applications are also likely to be very latency-sensitive, even when their raw need for data throughput is lower. Although AJAX-style programming can allow much of the software to run locally with minimal network calls, files will still need to be saved regularly, and delays here could have a major impact on how responsive a cloud-based application feels to the user.

On the other hand, a cloud-based virtual desktop will have a greater need for both bandwidth and low latency whenever the user performs activities that change the screen contents, such as scrolling through a web page, but much less when little is happening in the display window. So requirements can be very spiky, and this needs to be taken into account.

Of course, the issue is not just what kind of cloud services a single user is calling upon, but the overall usage behavior of the entire network. A blend of different services will be in use, and a decision will therefore need to be made as to whether the WAN connection caters for average requirements, a worst-case scenario of heavy utilization, or somewhere in between.

Monitoring of usage will be a necessity to see how often the top spikes occur and how they affect the various applications in use.

If times of saturated bandwidth are likely to have a negative effect on mission-critical cloud-based applications, the WAN will need to be specified for this top end of requirements, and it would be beneficial to be able to vary this dynamically to meet regular peak periods.
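
One simple way to turn that monitoring data into a provisioning decision is to compare average utilization with a high percentile of the samples. The sketch below does this for a list of hypothetical five-minute utilization readings; both the readings and the 95th-percentile rule of thumb are illustrative assumptions.

```python
# Sketch: decide between average-based and peak-based WAN sizing from
# monitoring samples. The sample values and the 95th-percentile rule are
# illustrative assumptions, not part of any standard.

def percentile(samples, pct):
    """Nearest-rank percentile of a list of numbers."""
    ordered = sorted(samples)
    rank = max(0, min(len(ordered) - 1, round(pct / 100 * len(ordered)) - 1))
    return ordered[rank]

# Hypothetical five-minute utilization samples, in Mbit/s.
samples = [40, 55, 60, 48, 52, 180, 200, 65, 58, 47, 190, 62]

average = sum(samples) / len(samples)
p95 = percentile(samples, 95)

print(f"average load:    {average:.0f} Mbit/s")
print(f"95th percentile: {p95:.0f} Mbit/s")

# If mission-critical applications suffer during the peaks, provision
# (or dynamically scale) towards the 95th percentile rather than the average.
```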

The Ethernet WAN advantage

Whatever the usage pattern of your WAN, the more bandwidth and the lower the latency you can get, the better.

This is where Ethernet has been having a significant impact. Not only can it supply bandwidth more cheaply than the traditional technologies, but also potentially with lower latency thanks to the ability to keep the Ethernet protocol intact from end to end across the network.

The incumbent WAN technologies have different packet structures and frame sizes compared to Ethernet, requiring translation that adds latency. The interfaces between Ethernet and WAN technologies like SDH/SONET or traditional leased lines are also significantly more expensive than those for an Ethernet WAN, since the latter can use generic Ethernet components that are cheap thanks to mass production.

An Ethernet WAN has other advantages, too. The Carrier Ethernet 2.0 standard has brought with it technologies that are particularly beneficial for the private cloud. One of the key methods for getting the most out of the connection speed available is to shape traffic using priorities for different data types.

Carrier Ethernet 2.0 introduced support for multiple classes of service (Multi-CoS), which makes this possible. This capability also extends across different providers’ infrastructures when they are connected via the E-Access service type.

This service type was introduced, again with Carrier Ethernet 2.0, to allow a service to span the infrastructure of multiple providers, making it possible to guarantee a level of service as the connection traverses multiple networks.
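
As a simple illustration of how traffic might be shaped using multiple classes of service, the sketch below maps broad private cloud traffic types to priority levels and bandwidth shares. The class names, priorities, and percentages are illustrative assumptions, not values taken from the Carrier Ethernet 2.0 specification.

```python
# Illustrative mapping of private cloud traffic types to classes of service.
# The priority values (higher = more important) and bandwidth shares are
# assumed figures, not taken from the Carrier Ethernet 2.0 specification.

COS_POLICY = {
    # class name          priority      share of committed bandwidth
    "voice":             {"priority": 5, "bandwidth_share": 0.10},
    "video_conference":  {"priority": 4, "bandwidth_share": 0.30},
    "virtual_desktop":   {"priority": 3, "bandwidth_share": 0.20},
    "saas_traffic":      {"priority": 2, "bandwidth_share": 0.20},
    "cloud_storage":     {"priority": 1, "bandwidth_share": 0.20},
}

def classify(application: str) -> dict:
    """Return the CoS entry for an application, defaulting to cloud storage."""
    return COS_POLICY.get(application, COS_POLICY["cloud_storage"])

if __name__ == "__main__":
    for app in ("voice", "cloud_storage", "unknown_app"):
        cos = classify(app)
        print(f"{app}: priority {cos['priority']}, "
              f"{cos['bandwidth_share']:.0%} of committed bandwidth")
```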

In tandem with this, Carrier Ethernet 2.0 has expanded the ability to monitor and manage the connection. This includes fault detection and correction, as well as performance monitoring.

It’s also possible to increase service levels dynamically, when more bandwidth is required to meet the needs of the users and applications being accessed over the connection. This may not even necessitate a change in hardware, with the cabling and interfaces already able to handle the extra data flow.

The service framework for Carrier Ethernet 2.0 specifically allows for this dynamic allocation both up and down in an automated fashion.

The need for WAN acceleration

Now that so many companies are turning to the private cloud for a more dynamic and flexible provision of services, there is an increasing need for faster, lower-latency WAN connections.

Legacy technologies are not able to provide the low-cost bandwidth that is necessary for these services to achieve their full potential across the range of company sizes that are hoping to take advantage of this technology.

For private cloud adoption to achieve the growth that is being predicted, an alternative technology is necessary, and an Ethernet WAN is the most likely candidate.

It can provide the bandwidth and cost characteristics necessary to accelerate the capabilities of private cloud services.

This article was originally published on IT Pro Portal.
