3 Areas Where the Challenges of Edge Computing Could Lie

How much computing power should we put on the edge of the network?

In the past, when networks weren’t expected to be so smart, it wasn’t even a question. The answer was: none. But now that it is often practical to put substantial computing equipment at the extremes of the network, the right answer is not always so obvious.

The arguments in favor are simple. When packets travel shorter distances, response times are faster. With computing, storage, and networking deployed at the edge, network latency no longer slows every round trip between users and resources, and users and applications see better response times.

At the same time, as more work is done at the edge, the need for bandwidth between remote sites and central data centers or the cloud decreases: less bandwidth, lower cost.

But despite all the promise, some issues cannot be engineered away, and sometimes other factors make a traditional architecture the better choice. Here are some of those considerations, divided into three categories: cost, complexity, and legal concerns.

Cost

Many local devices can cost more

The edge computing model replaces one large central cluster with many local machines. Sometimes there is no change in cost, because the local devices reduce the central load by an equal amount: one edge machine replaces one instance in the central cluster.

Often, though, the model creates new redundancy that increases costs, with storage a prime example. Instead of one central copy of each file, the edge network may maintain a separate copy at each edge node. If your edge mesh is small, a few extra copies can be welcome for the redundancy they add. But when you have 200 or more edge nodes, your storage costs can be 200 times greater. The effect can be limited by storing data only on the nodes each user actively works with, but the multiplication problem never disappears entirely. At some point, the cost of duplication starts to dominate the overall bill.
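
As a rough illustration, a back-of-the-envelope model makes the multiplication effect easy to see. The node counts, dataset size, and per-GB price below are hypothetical, not figures from the article:

```python
# Back-of-the-envelope storage-cost model for edge replication.
# All figures (node count, dataset size, price) are hypothetical.

DATASET_GB = 500            # size of the shared dataset
PRICE_PER_GB_MONTH = 0.10   # assumed storage price, USD per GB per month

def monthly_storage_cost(num_copies: int) -> float:
    """Cost of keeping `num_copies` full copies of the dataset."""
    return num_copies * DATASET_GB * PRICE_PER_GB_MONTH

central_only = monthly_storage_cost(1)     # one central copy
full_edge = monthly_storage_cost(200)      # a copy at every one of 200 edge nodes
selective = monthly_storage_cost(1 + 20)   # central copy plus ~20 nodes users actually touch

print(f"central only:          ${central_only:,.2f}/month")    # $50.00
print(f"200 edge copies:       ${full_edge:,.2f}/month")       # $10,000.00
print(f"selective replication: ${selective:,.2f}/month")       # $1,050.00
```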

Duplication also complicates the software that keeps the copies in sync, and it often increases bandwidth use. The model works well for static content, when the local machines behave like a content delivery network and do little real work. But the more computing is added to the mix, the higher the cost of keeping all of those copies synchronized.

Duplication increases bandwidth fees, too. If n copies are maintained at the edges, those n copies can increase bandwidth costs by a factor of n. Ideally, the edge nodes act like smart caches that reduce overall bandwidth. But many architectures are not so disciplined, and replication ends up pushing multiple copies across the network, driving up bandwidth charges along the way.
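
One way to check whether your edge nodes really behave like caches is to model the traffic. The sketch below, with made-up request counts, object sizes, and hit rates, compares an idealized cache against naive replication that pushes every update to every node:

```python
# Rough traffic model: smart cache vs. naive replication.
# Request counts, object size, update rate, and hit rate are assumptions.

N_NODES = 200          # edge nodes holding a replica
REQUESTS = 1_000_000   # requests per day across all nodes
OBJECT_MB = 2          # average object size
HIT_RATE = 0.9         # fraction of requests served from the edge cache

# Ideal cache: only misses travel back to the central site.
cache_backhaul_mb = REQUESTS * (1 - HIT_RATE) * OBJECT_MB

# Naive replication: every updated object is pushed to all n nodes.
UPDATES = 50_000       # objects changed per day
replication_backhaul_mb = UPDATES * OBJECT_MB * N_NODES

print(f"cache backhaul:       {cache_backhaul_mb / 1024:,.0f} GB/day")
print(f"replication backhaul: {replication_backhaul_mb / 1024:,.0f} GB/day")
```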

In other words, the more edge computing becomes true computing rather than caching, the more likely costs are to rise.

Complexity

Timing issues can be thorny

Depending on the workload, synchronizing databases across multiple edge sites can become an issue. Many applications, such as Internet of Things monitoring or keeping a single user’s notes, don’t need to work hard at synchronization because they generate little contention.

Basic tasks like these are ideal for edge computing. But once users start competing for shared global resources, deployments become more difficult. Google, for example, puts atomic clocks in its data centers around the world and uses them to order complex transactions in its Spanner database. Even if your organization’s needs don’t match Google’s, solving this synchronization problem will require an additional layer of infrastructure and expertise.
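
To make the problem concrete, here is a deliberately simplified sketch of last-write-wins merging between two edge replicas using node-local timestamps. It is nothing like Spanner’s TrueTime; it only shows why clock skew between sites matters: if the clocks disagree, the wrong write can win.

```python
# Minimal last-write-wins merge between two edge replicas.
# Illustrative sketch only; Spanner relies on tightly bounded clocks
# (TrueTime) precisely because naive timestamps from skewed clocks
# can pick the wrong winner.

from dataclasses import dataclass

@dataclass
class VersionedValue:
    value: str
    timestamp: float  # wall-clock time at the node that wrote it

def merge(a: VersionedValue, b: VersionedValue) -> VersionedValue:
    """Keep whichever write carries the later timestamp."""
    return a if a.timestamp >= b.timestamp else b

# Node A's clock runs fast; node B wrote *after* A in real time,
# but A's skewed timestamp still wins the merge.
write_at_a = VersionedValue("price=10", timestamp=1_700_000_002.0)
write_at_b = VersionedValue("price=12", timestamp=1_700_000_001.0)

print(merge(write_at_a, write_at_b).value)  # "price=10" -- the earlier real-world write
```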

Mobile users present challenges

When it comes to edge computing, some users are harder to serve than others, and mobile users can present the biggest problems. As they move from place to place, they may connect to a different edge node, again raising synchronization issues. Even employees who work at home may change location from time to time, because “work from home” often really means “work from anywhere.”

Every time this happens, the applications have to shift focus, and the edge nodes have to resynchronize. If any user state is cached in the old edge node, it must be moved and re-stored in the new one. The time and bandwidth this takes eat into the expected cost and performance benefits.
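
The handoff itself is easy to describe but not free. The hypothetical sketch below moves a user’s cached session state from the old edge node to the new one and counts the bytes that cross the network each time the user roams; the node classes and session payload are invented for illustration:

```python
# Hypothetical sketch of migrating cached user state between edge nodes.
# The node objects and session payloads are invented for illustration.

import json

class EdgeNode:
    def __init__(self, name: str):
        self.name = name
        self.sessions: dict[str, dict] = {}  # user_id -> cached state

def migrate(user_id: str, old: EdgeNode, new: EdgeNode) -> int:
    """Move a user's state from `old` to `new`; return bytes transferred."""
    state = old.sessions.pop(user_id, {})
    payload = json.dumps(state).encode()
    new.sessions[user_id] = state          # re-stored at the new node
    return len(payload)                    # bandwidth spent on the handoff

office = EdgeNode("office-pop")
home = EdgeNode("home-pop")
office.sessions["u123"] = {"cart": ["sku-1", "sku-2"], "open_docs": 3}

moved = migrate("u123", office, home)
print(f"handoff cost: {moved} bytes")  # every roam repeats this cost
```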

Business intelligence requirements

Even with data processing at the edge, much of the data eventually has to travel to a central server, where it can be used to generate daily, weekly, or monthly reports, for example. If that creates periods when peak bandwidth is required, it can eat into the savings expected from an edge deployment’s reduced bandwidth needs. Keep this in mind when calculating the cost benefits.
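
One common mitigation, sketched below with invented volumes, is to aggregate at the edge and ship only summaries to the central reporting server, so the roll-up doesn’t recreate the bandwidth peak the edge was supposed to remove:

```python
# Illustrative roll-up: ship hourly summaries instead of raw event records.
# Event volumes and record sizes are assumed for the sake of the example.

RAW_EVENTS_PER_NODE = 5_000_000   # events collected per node per day
RAW_EVENT_BYTES = 200             # size of one raw record
SUMMARY_ROWS_PER_NODE = 24        # one aggregated row per hour
SUMMARY_ROW_BYTES = 1_000
N_NODES = 200

raw_upload_gb = N_NODES * RAW_EVENTS_PER_NODE * RAW_EVENT_BYTES / 1e9
summary_upload_gb = N_NODES * SUMMARY_ROWS_PER_NODE * SUMMARY_ROW_BYTES / 1e9

print(f"raw events to the central server: {raw_upload_gb:,.1f} GB/day")
print(f"hourly summaries only:            {summary_upload_gb:,.3f} GB/day")
```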

Legal

Tax issues

Some states charge sales tax on online purchases, and others don’t. Some have indirect taxes that apply only within that state. In many cases, the applicable taxes depend on the physical location of the machines where the computing takes place. Edge computing, because an organization deploys it across many jurisdictions, can add confusion about which laws apply. This is the kind of line-of-business complexity that online retailers need to assess before committing to an edge deployment.

Data residency regulations

Users’ locations and data are subject to data protection laws.

Some countries follow the General Data Protection Regulation (GDPR); others have their own laws. There are also regulations like HIPAA that specifically address how medical records are handled. This means organizations have to analyze the set of rules that applies to each edge node and work out how to meet them, especially if the users and the servers are in different jurisdictions. Sometimes the best answer is to place edge nodes in the same jurisdiction as the users, while paying attention to users who migrate.
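
In practice this often comes down to routing each user to an edge node in an allowed jurisdiction. The sketch below is a hypothetical illustration of that policy check; the node inventory and residency rules are made up:

```python
# Hypothetical jurisdiction-aware node selection.
# Node inventory and residency rules are invented for illustration only.

EDGE_NODES = {
    "fra-1": "EU",
    "par-1": "EU",
    "nyc-1": "US",
    "sgp-1": "SG",
}

# Which node jurisdictions may hold data for users in a given region.
RESIDENCY_RULES = {
    "EU": {"EU"},          # e.g., keep GDPR-covered data inside the EU
    "US": {"US", "EU"},
    "SG": {"SG"},
}

def pick_node(user_region: str) -> str:
    """Return the first edge node whose jurisdiction is allowed for this user."""
    allowed = RESIDENCY_RULES.get(user_region, set())
    for node, jurisdiction in EDGE_NODES.items():
        if jurisdiction in allowed:
            return node
    raise LookupError(f"no compliant edge node for region {user_region!r}")

print(pick_node("EU"))  # "fra-1"
print(pick_node("SG"))  # "sgp-1"
```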


Copyright © 2022 IDG Communications, Inc.
