Cloud architectures enable rapid service delivery with cost-efficient use of resources.
There is no doubt that cloud is the future, and service providers need to transform themselves accordingly. A cloud architecture is, in essence, a design for software applications in which the underlying computing infrastructure is used only when it is needed, enabling rapid service delivery and cost-efficient use of resources.
Why Does Efficient Use of Resources Matter?
Efficient cloud applications draw the necessary resources on demand to perform a specific job and relinquish them once the job is done; they should be able to scale elastically with resource needs. The value to the service provider lies in breaking away from the old model of capacity planning on expensive, purpose-built hardware appliances, which constricts rapid software roll-outs, and reaching the nirvana of rapid innovation and always-available services, with web-scale and mobile-centric user experiences.
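The acquire-on-demand, release-when-done pattern can be sketched as a small reconciliation loop. This is an illustrative model only: `provision` and `release` are hypothetical stand-ins for a real cloud provider's instance-management API, and the sizing rule is an assumption for the example.

```python
# Sketch of an elastic worker pool: acquire capacity when demand rises,
# relinquish it when the job is done. The provision/release calls are
# hypothetical placeholders for a real cloud API.

import math

def workers_needed(queue_depth: int, jobs_per_worker: int = 10) -> int:
    """Scale out proportionally to demand; scale to zero when idle."""
    return math.ceil(queue_depth / jobs_per_worker) if queue_depth > 0 else 0

class ElasticPool:
    def __init__(self):
        self.workers = 0  # currently provisioned capacity

    def reconcile(self, queue_depth: int) -> int:
        """Acquire or relinquish workers so capacity tracks demand."""
        target = workers_needed(queue_depth)
        if target > self.workers:
            self.provision(target - self.workers)
        elif target < self.workers:
            self.release(self.workers - target)
        return self.workers

    def provision(self, n: int):
        self.workers += n  # stand-in for a "create instances" call

    def release(self, n: int):
        self.workers -= n  # stand-in for a "terminate instances" call

pool = ElasticPool()
pool.reconcile(45)  # burst of demand scales the pool out
pool.reconcile(0)   # job done: all capacity is relinquished
```

Contrast this with the old appliance model, where the pool size is fixed at the peak forecast and idle capacity is paid for around the clock.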
Because such an application is already software based, moving to the cloud is a natural evolution: computing, networking, and storage resources can be provisioned and released elastically in an on-demand, self-service manner. These principles allow software deployment on private cloud infrastructures such as VMware vSphere or OpenStack, or on public cloud infrastructures such as Amazon Web Services (AWS), Google Cloud, or Microsoft Azure.
Why Should UC Service Providers Care?
They should care because businesses today demand rapid innovation in high-scale, virtualization-centric solutions at competitive prices. Service providers can capture cost savings by using the underlying infrastructure efficiently and can introduce services rapidly, the direction in which the rest of their competitive landscape is already moving. The old model, in which enterprises waited weeks or months for a new feature to roll out, is being replaced by a timescale of days or even minutes.
Compare the typical roll-out of a new UC feature with cloud-based companies that deploy hundreds of times a day. Frequent deployments mean mistakes can be corrected quickly and a change in service realized in minutes. This is unheard of in typical service provider roll-out practices, but it would be welcomed by businesses. The key aspects to balance against speed are safety, visibility, fault isolation and fault tolerance, automated recovery, and scale.
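One common way to get fault isolation and automated recovery is the circuit-breaker pattern. The sketch below is a minimal illustration of the idea, not a specific library: after repeated failures it fails fast to isolate the faulty component, then automatically probes again after a cool-down.

```python
# Minimal circuit-breaker sketch: isolate a failing downstream component
# so errors do not cascade, and retry automatically after a cool-down.
# Thresholds and names here are illustrative assumptions.

import time

class CircuitBreaker:
    def __init__(self, max_failures: int = 3, reset_after: float = 30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed (healthy)

    def call(self, fn, *args):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # cool-down elapsed: probe the service again
        try:
            result = fn(*args)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()  # trip: isolate the fault
            raise
        self.failures = 0  # a success closes the circuit again
        return result
```

Failing fast while the circuit is open keeps a single slow or broken component from tying up callers across the rest of the system.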
An application should also be decomposed into independently deployable components, so that service providers can experiment by rolling out new services to a small percentage of their customer base, gather feedback, and make informed business decisions. Smaller components prevent a failure in one component from cascading across the system, and the system should have the resiliency and fault tolerance to detect a failure and recover from it automatically. These aspects are enabled by a microservices-based, cloud-native architecture, a shift away from monolithic application architectures.
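Rolling out to a small percentage of customers is often done with hash-based bucketing, so each customer lands consistently in the same group across requests. The version names and percentage below are hypothetical, a sketch of the routing decision rather than any particular platform's feature.

```python
# Illustrative percentage-based canary rollout: route a stable slice of
# customers to a new service version so feedback can be gathered before
# a full roll-out. Version labels and the 5% default are assumptions.

import hashlib

def in_canary(customer_id: str, percent: float) -> bool:
    """Deterministically place `percent`% of customers in the canary group."""
    digest = hashlib.sha256(customer_id.encode()).hexdigest()
    bucket = int(digest, 16) % 100  # stable bucket in 0..99 per customer
    return bucket < percent

def route(customer_id: str, canary_percent: float = 5.0) -> str:
    """Pick a service version for this customer's request."""
    return "v2-canary" if in_canary(customer_id, canary_percent) else "v1-stable"
```

Because the bucket is derived from the customer ID rather than chosen at random per request, raising the percentage only ever adds customers to the canary group; nobody flips back and forth between versions mid-experiment.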
Enabling scalability to meet increasing demand is another driver of this transformation. In the past, demand was handled by scaling vertically with larger servers, and capacity planning was typically based on peak-usage forecasting, which was expensive and slowed down new roll-outs, enhancements, and upgrades.
Virtualizing large servers into smaller virtual servers helps alleviate their poor utilization. And as the unit of application deployment moves away from monolithic applications, another major shift is under way: from virtual servers to containers.
This article was originally published on Broadsoft.com.