The world today is more connected than it has ever been. With the rapid expansion in access to computers and the internet and the explosion of the mobile platform, network infrastructures are under more pressure than ever before. In recent months, we’ve seen calls for the telecoms industry in the UK to be shaken up entirely, with the proposed splitting of BT from its OpenReach division set to address consumers’ apparently insatiable demand for higher availability, speed, and connectivity.
As a result, service providers must continually enhance their networks to keep pace with the persistent growth in demand for capacity. To support these initiatives, many repeatedly go to market in search of new networking technologies and associated next-generation management tools, and must assess whether these can be applied to the benefit of their networks.
At a basic level, service providers can, of course, opt to meet any elevated capacity demands through the addition of cabling and active equipment. However, consumers are becoming increasingly mobile, as demands for ‘always-on’ connectivity and ever-greater speeds continue to grow. We’re also starting to see the number of connected services in the home grow. By putting mobile at the heart of the connected home, the individual will be linked through their network to the likes of utility providers, energy giants, car services companies, insurance companies and white goods manufacturers, and will gain access to better services, such as optimised lighting and heating, as a result. While this will inevitably and dramatically overhaul the consumer experience in the home, it will also place increasing pressure on the reliability of the infrastructure sitting behind the services.
As a result, networks need to become more dynamic as increasing numbers of subscribers move across the infrastructure. Adding capacity at one node is, therefore, a pointless tactic, because it only addresses the specific demands of its attached users; the additional capacity will be lost as and when users move to an adjacent cell.
Therefore, service providers are faced with a dilemma: either add capacity to all the impacted nodes to meet anticipated network demand, or ‘move’ bandwidth around the network. The former pushes service providers to create ‘over-capacity’, where each node is provisioned with more capacity than it needs and the surplus is spread across the network. The latter option, on the other hand, is a complex process that demands a comprehensive understanding of recent network capacity trends across both timeframes and locations.
Additional issues can also arise from network-intensive businesses and more mature domestic consumers, both of whom are increasingly moving their data and IT services into the cloud. This requires service providers to manage such data much more efficiently, to ensure their networking priorities can meet agreed service level agreements (SLAs) and that they can provide high customer satisfaction to their demanding user community.
One mooted solution, the ‘self-optimisation’ of networks, can help manage network capacity, but it tends to be reactive rather than proactive. It can also create an environment within a complex network where capacity is in a state of constant flux, as the network attempts to manage real-time demands by interacting with other active equipment.
By proactively understanding how the demands placed on network capacity shift by timeframe and location, service providers can dramatically improve their ability to manage capacity efficiently and to expand it where actual demand requires, rather than provisioning for more generic capacity needs.
This is where a potentially revolutionary methodology comes in – that of Software Defined Networking (SDN), whereby the network can be dynamically and automatically configured to respond to changing conditions and demands across the network.
The initial steps towards SDN
There have already been some successful deployments of SDN in the technology’s early stages, for example in data centres and for certain aspects of service providers’ business routing. The approach revolves around the concept of a routing table where specific services can be routed based on pre-defined parameters – for instance, an organisation’s cloud-based Customer Relationship Management (CRM) service could be routed over a faster network route during normal office hours (a period of higher demand), with the route then switched to a standard network outside those hours.
This benefits the organization because they can be safe in the knowledge that their mission-critical systems can operate at high speeds at peak times, and benefits the service provider as they’re able to offer a premium service at a higher cost. In this situation, the service provider is providing one service with two or more possible routing table entries, and the option to switch the entries between one another on the relevant time parameter (i.e. between peak and off-peak).
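The time-switched routing table described above can be sketched in a few lines. This is a minimal illustration only – the service name, route labels and peak window are all hypothetical, not drawn from any real SDN controller API:

```python
from datetime import time

# Hypothetical routing table: one service, two possible routing-table
# entries, switched between one another on a time-of-day parameter.
ROUTES = {
    "crm": {
        "peak": "premium-low-latency-path",   # normal office hours
        "off_peak": "standard-path",          # evenings and weekends
    }
}

# Assumed peak window (illustrative only).
PEAK_START, PEAK_END = time(8, 0), time(18, 0)

def select_route(service: str, now: time) -> str:
    """Return the active routing-table entry for a service at a given time."""
    period = "peak" if PEAK_START <= now < PEAK_END else "off_peak"
    return ROUTES[service][period]

print(select_route("crm", time(10, 30)))  # premium-low-latency-path
print(select_route("crm", time(22, 0)))   # standard-path
```

In a real deployment the switch would be pushed to the network by a controller rather than computed per lookup, but the core idea – one service, two entries, a time parameter deciding between them – is the same.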
In these initial stages, both situations are easy to manage, as there is a limited and readily controlled set of network parameters. The whole concept starts to become more complex, however, as we begin to switch routes for a large number of services within a larger and more complex network topology.
The role of analytics
Given the complexity SDN can cause, many service providers have begun to use network analytics to model and accurately compare the capacity of their network with the amount used at node level within specific timeframes. The resultant analytics models are subsequently used as the framework for any SDN implementation, because they provide service providers and app developers with the insight and ability to develop alternative routing models which match ever-changing network demands. In the future, there’s no doubt that there will be a role for the integration of real-time analytics and SDN, because it will allow the majority of network optimisation to be performed in real-time. Of course, this would have to include a pre-determined set of boundaries for such optimisation in order to prevent over-compensation – for example, in the event of an outage or a network burst.
Increased capacity and enhanced control
The introduction of SDN into service providers’ networks will, if appropriately planned, provide them with unheard-of levels of flexible capacity. But as with any new technology and methodology, this comes with a caveat – service providers can only achieve this flexibility by fully understanding and modelling network capacity, and subsequently developing appropriate SDN scenarios to ensure the ongoing integrity of capacity.
Not only will SDN give service providers the opportunity to refine their networks; it will also allow them to prepare network scenarios in anticipation of both scheduled events (such as major sporting or entertainment events) and unscheduled ones, with specific network configurations in place to respond to each situation.
Overall, SDN is going to dramatically change the way we manage our networks, especially as they become more complex, and we will see SDN deployments continue to evolve in accordance with the evolution of their underlying networks. In the future it will be used in ways we can’t even imagine just yet, as the SDN tools mature and organisations, especially those in the cloud, adapt applications to take advantage of SDN.