Edge Computing Infrastructure: The 5 Most Important Factors

When paramedics reach the scene of an accident or emergency, resources and equipment can be very limited. Paramedics work to stabilize the patient as best they can for rapid transport to the hospital, while the ambulance itself acts as little more than an expensive taxi, carrying little of the equipment needed for in-depth medical diagnosis or treatment. With the centralization of hospitals and increasing traffic congestion, the average time to reach the hospital is increasing, costing precious time and lives. But what if that ambulance were connected to the edge?

What if a small device, connected to a mobile phone, could deliver remote diagnostic tests such as ultrasound, saving valuable time and allowing the hospital to make the necessary preparations and have rapid treatment ready on arrival? What if a specialist could provide over-the-shoulder support remotely, guiding the medics as they steer the probe and interpreting the images then and there, in real time? Such a use case demands network and compute characteristics (low latency and high bandwidth) that cannot be achieved with a central or regional cloud.

This is exactly the conversation we had a few years ago with a medical device company that wanted to improve the capabilities of ambulances. This kind of use case is not only exciting, it is also life-saving. And it’s just one use case in the 5G-enabled enterprise market that’s projected to be worth up to $700 billion by 2030.

Welcome to the innovative, emerging world of edge computing

Cloud and edge: the technology that enables 5G to deliver on its promises

By leveraging cloud capabilities, edge computing brings computing power and storage closer to where the data is generated and consumed. Whether that means deploying on enterprise premises or in the mobile network depends on the application requirements, but – like the real estate market – it’s all about location, location, location. It’s also a complete reversal of the trend towards centralization we’ve seen in recent years, driven by the need to reduce costs and stay in control.

To be clear, edge computing is not an entirely new concept. Distributed cloud and other similar technologies are already being used by players, including major media streaming providers around the world. But with the arrival of 5G comes a whole new level of network features, and in turn, a whole new world of opportunity. And without bringing consumption and processing capabilities closer together, the full promise of 5G for customers and consumers simply cannot be realized. As we often say, “Without edge computing, 5G is just faster 4G.”

So how can CSPs deliver the end-to-end capabilities, bring the network and edge together, and position themselves to make the most of these early opportunities? For starters, here are five key interdependent areas to consider when it comes to defining and deploying your edge computing solutions.

Infrastructure

When it comes to the edge infrastructure layer, we’re talking about where the compute, storage and application hosting are located: bringing the cloud to the edge. Unlike other network infrastructure, edge infrastructure is not about bringing in a set of servers or static machines on which to install your application. It’s about introducing a way to manage things – similar to how you would manage cloud capabilities today, but at the edge, in a distributed environment.

Because reliability will be critical for edge applications, the infrastructure needs to be footprint-flexible, efficient and automated. Depending on the application requirements, the infrastructure can be on-premises or in the CSP’s network, where the telco workloads and third-party (3PP) OTT applications are hosted with limited local management. To support diverse applications, it is essential that the infrastructure also supports multi-cloud and hybrid cloud.
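
To make “footprint-flexible” a little more concrete, here is a minimal sketch in Python of how an inventory of edge sites with different footprints and cloud types might be described and queried. The class and field names are purely illustrative assumptions, not an Ericsson product model:

```python
from dataclasses import dataclass

@dataclass
class EdgeSite:
    """Illustrative descriptor for a single edge location."""
    name: str
    deployment: str          # "on-premises" or "csp-network"
    cpu_cores: int           # available compute at the site
    storage_gb: int          # available storage at the site
    cloud_provider: str      # e.g. "private", "public-a", "public-b"

def sites_supporting(inventory, min_cores, providers):
    """Filter sites that meet a compute floor and an allowed cloud list."""
    return [s for s in inventory
            if s.cpu_cores >= min_cores and s.cloud_provider in providers]

inventory = [
    EdgeSite("hospital-dc", "on-premises", 16, 500, "private"),
    EdgeSite("metro-pop-1", "csp-network", 64, 2000, "public-a"),
    EdgeSite("metro-pop-2", "csp-network", 32, 1000, "public-b"),
]

# Hybrid/multi-cloud query: any site with 32+ cores on either public cloud.
print(sites_supporting(inventory, 32, {"public-a", "public-b"}))
```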

Orchestration

Edge orchestration comes down to resource distribution and configuration. You can’t just have hundreds of places in one country with edge workloads and have all applications deployed in all locations at all times – it would be resource-intensive and far too expensive. Since each edge site is naturally smaller than a centralized location, the edge is a resource-constrained environment. This makes it vital to map the topology, taking into account the capabilities of all the different sites in the network, identify the best location for an application and continuously monitor it for optimal use.

We call this “Smart Workload Placement” – using algorithms to weigh up how to deliver the best capabilities where they are needed most, and find that sweet spot where the cost of deploying an application in a multi-cloud infrastructure is offset by the benefits it would deliver. Dynamic resource allocation and ensuring that data flows and information get to the right place are critical to the effective functioning of applications at the edge, especially in a multi-cloud environment.
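
As an illustration of that weighing-up, here is a minimal sketch of a placement score, assuming two illustrative metrics (latency to users and compute cost) and arbitrary weights. It shows the shape of the trade-off rather than Ericsson’s actual Smart Workload Placement algorithm:

```python
def placement_score(site, app, latency_weight=0.5, cost_weight=0.5):
    """Score a candidate site for an application: lower is better.

    Combines estimated latency to the app's users with the cost of
    running the app at that site. Weights and metrics are illustrative.
    """
    if site["free_cores"] < app["cores"]:
        return float("inf")   # site cannot host the workload at all
    latency_ms = site["latency_to_users_ms"]
    cost_per_hour = site["cost_per_core_hour"] * app["cores"]
    return latency_weight * latency_ms + cost_weight * cost_per_hour

def best_site(sites, app):
    """Pick the site with the lowest combined latency/cost score."""
    return min(sites, key=lambda s: placement_score(s, app))

sites = [
    {"name": "central-dc", "free_cores": 500,
     "latency_to_users_ms": 40, "cost_per_core_hour": 0.02},
    {"name": "edge-metro", "free_cores": 24,
     "latency_to_users_ms": 5, "cost_per_core_hour": 0.08},
]
app = {"name": "remote-ultrasound", "cores": 8}

print(best_site(sites, app)["name"])   # the low-latency edge site wins here
```

In a real deployment the score would be recomputed continuously as site capacity and demand change, which is the “continuously monitor it for optimal use” part described above.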

Learn more about enterprise service orchestration and how it can transform technology.

User plane

Control and user plane separation (CUPS) was introduced in 4G and has become more advanced with the advent of 5G and its packet core functions. While the control plane deals with access, mobility and session management functions and can be centralized, the user plane function is essentially the gateway between the network and the application: the connection point where the network meets the internet.

The user plane is therefore a key function to distribute to the edge. If you bring an application to a particular location, you need that gateway close by, with the network instructed to route that application’s data to it. To do this successfully, operators need a highly flexible user plane function that can scale to meet the demands of an application and can be deployed on site as a plug-and-play solution.
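
To illustrate the idea of keeping the gateway close to the application, here is a very simple sketch, assuming invented site names and coordinates (this is not a 3GPP procedure or an Ericsson API), of picking the user plane closest to where an application is deployed:

```python
import math

# Illustrative coordinates for user-plane gateways; names are made up.
user_planes = {
    "upf-central": (59.33, 18.07),   # central data centre
    "upf-edge-1":  (57.71, 11.97),   # edge site hosting the application
}

def distance(a, b):
    """Rough planar distance between two lat/lon points; fine for a sketch."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def nearest_user_plane(app_location):
    """Pick the user-plane gateway closest to where the app is deployed,
    so traffic leaves the network right next to the application."""
    return min(user_planes,
               key=lambda name: distance(user_planes[name], app_location))

app_site = (57.70, 11.98)            # application deployed at an edge location
print(nearest_user_plane(app_site))  # -> "upf-edge-1"
```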

Learn more about the role of the edge user plane and how Ericsson’s Local Packet Gateway fits into it.

Traffic routing

Traffic routing is an important area, as this is where the network itself plays a role. While infrastructure and orchestration focus on the application hosting and environment, traffic routing brings in the information and awareness that resides within the network and that CSPs already have: the user’s location and what the user is trying to consume. For example, if a user sends a request to consume data through a video streaming application, that information sits within the CSP network – you see the user’s IP session request being routed from the user’s location to the user plane function near where the streaming service’s server is located. At the edge, however, where not all traffic is always routed through the operator’s network, there are several options available for routing the user’s IP session to the edge.

We can either bring all the traffic to the edge and then decide where it goes, or we can just bring some of the traffic to the edge and manage the rest more centrally. Three main mechanisms emerge as the relevant technology for edge traffic routing: distributed anchor, session breakout and multiple session. So how do CSPs decide which technology is best for them?

Ultimately it will depend on the application and intended use. Being a very simple mechanism that can be delivered on top of existing 4G and 5G networks, distributed anchor would be a wise choice for many who want to deploy on their existing networks. Session breakout is a more complicated option, requiring the development of complex features across multiple products, and is very specific to 5G – the mechanism doesn’t exist in 4G at all. Multiple sessions, which show promise but are not yet established technology due to dependence on the device ecosystem, are also likely to be specific to 5G. By then, however, it is likely that 5G will be firmly established as a dominant technology.

Ultimately, we want to achieve this separation of traffic. For those who have not yet invested deeply in session breakout technology, distributed anchor can be deployed straight away, with the option to evolve towards the multi-session mechanism in the future and bypass session breakout completely. But you should thoroughly understand the costs and benefits of each technology before making a decision, or talk to an expert partner who can help you.
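
The sketch below captures that decision in a few illustrative rules, based only on the considerations above (whether 5G is available, whether devices support multiple sessions, and whether investment in session breakout makes sense). The rules are assumptions for illustration, not a formal recommendation:

```python
def choose_routing_mechanism(has_5g, devices_support_multi_session,
                             can_fund_breakout_features):
    """Rough decision sketch reflecting the trade-offs discussed above.

    Distributed anchor works on existing 4G and 5G networks; session
    breakout and multiple sessions are 5G-specific, and multiple sessions
    also depends on device ecosystem support. The rules are illustrative.
    """
    if not has_5g:
        return "distributed anchor"          # the only option on 4G
    if devices_support_multi_session:
        return "multiple session"            # skip breakout where devices allow
    if can_fund_breakout_features:
        return "session breakout"            # complex, 5G-only investment
    return "distributed anchor"              # simple default on 5G as well

print(choose_routing_mechanism(has_5g=True,
                               devices_support_multi_session=False,
                               can_fund_breakout_features=False))
```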

Edge exposure

There are two angles when it comes to edge exposure: exposure to the edge and exposure at the edge. Exposure to the edge involves exposing assets such as edge discovery information and UE IP to network identity translation information – essential information for systems that need to find where edge sites are located and how to connect to them.

Exposure at the edge is about exposing capabilities at the edge to the applications located there. These may include location information, quality of service information, or user equipment information. By exposing these capabilities locally, applications don’t have to go back to a central location to access them in low-latency scenarios. It is also important to note that this information must be presented in a format that the network can identify and that applications can translate into a usable, consumable form.
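
As a rough illustration of what consuming an exposed capability could look like, here is a sketch that queries a hypothetical exposure endpoint for quality-of-service information. The URL, path and field names are invented for the example and are not the Ericsson Exposure Server API:

```python
import json
from urllib import request

# Hypothetical base URL for an exposure function at the edge; not a real endpoint.
EXPOSURE_BASE = "https://edge-exposure.example.com/v1"

def get_qos_info(ue_ip: str) -> dict:
    """Ask the (hypothetical) exposure API for quality-of-service information
    about a given device, identified here by its UE IP address."""
    url = f"{EXPOSURE_BASE}/qos?ueIp={ue_ip}"
    with request.urlopen(url) as resp:       # would fail without a real server
        return json.loads(resp.read())

# Example usage (only meaningful against a real exposure endpoint):
# qos = get_qos_info("10.0.0.42")
# print(qos.get("latencyBudgetMs"), qos.get("guaranteedBitrateKbps"))
```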

Ericsson’s Edge Exposure Server – part of the existing Ericsson Cloud Core Exposure Server offering

With 25 percent of all emerging 5G use cases expected to rely on edge within the next year, the business potential for CSPs moving early to take advantage of this emerging area is huge, particularly in enterprise use cases for industries such as gaming, manufacturing, healthcare and more. The questions are simple: What does your multi-cloud edge deployment strategy look like? What role will you play in this new ecosystem? And who will you choose to help you on that journey?

Read more

Learn more about edge computing, successful implementation strategies and Ericsson’s related offerings.

See what Erik Ekudden, CTO of Ericsson, and Randeep Sekhon, CTO of Bharti Airtel, had to say about cloud innovation and the opportunities of edge in India and around the world in this CTO Focus blog post.

Find out how Ericsson drives openness to ecosystem innovation.
