Considerations for Deploying AI at the Edge

There are a number of factors businesses should consider to ensure an optimized edge computing strategy and deployment.

The growth of edge computing has been a hot topic in many industries. The value of smart infrastructure can mean improvements to overall operational efficiency, safety, and even the bottom line. However, not all workloads need to be, or even should be, deployed at the edge.

Enterprises use a combination of edge computing and cloud computing when developing and deploying AI applications. AI training typically takes place in the cloud or in data centers. When customers evaluate where to deploy AI applications for inference, they weigh factors such as latency, bandwidth, and security requirements.

Edge computing is tailored for real-time, always-on solutions that have low latency requirements. Always-on solutions are sensors or other pieces of infrastructure that are constantly working or monitoring their environments. Examples of “always-on” solutions include networked video cameras for loss prevention, medical imaging in ERs for surgery support, or assembly line inspection in factories. Many customers have already embraced edge computing for AI applications. Read about how they are creating smarter, safer spaces.

As customers evaluate their AI strategy and where edge computing makes sense for their business, there are a number of factors they must consider to ensure an optimized deployment. These considerations include latency, scalability, remote management, security, and resilience.

Lower Latency

Most are familiar with the idea of bringing compute to the data rather than the other way around. This has a number of advantages, and one of the most important is latency. Instead of losing time sending data back to the data center or the cloud, businesses can process data in real time and act on insights instantly.

For example, retailers deploying AI loss prevention solutions cannot wait seconds or longer for a response to come back from their system. They need instant alerts when abnormal behavior occurs so that it can be flagged and dealt with immediately. Similarly, AI solutions designed for warehouses that detect when a worker is in the wrong place or not wearing the appropriate safety equipment need an instantaneous response.
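
To make the low-latency point concrete, here is a minimal, runnable sketch of an always-on edge inference loop. The capture_frame() and detect() functions and the 0.9 alert threshold are illustrative stand-ins, not any particular product's API; the point is that the alert fires in the same loop that processes the frame, with no cloud round trip.

```python
import random
import time

ALERT_THRESHOLD = 0.9  # hypothetical confidence cutoff for raising an alert

def capture_frame():
    """Stand-in for grabbing a frame from a networked camera."""
    return b"raw-frame-bytes"

def detect(frame):
    """Stand-in for a locally deployed detection model.

    A real system would run an optimized model on the edge device;
    a random score keeps this sketch runnable anywhere.
    """
    return random.random()

for _ in range(100):  # bounded loop so the demo terminates
    start = time.perf_counter()
    frame = capture_frame()
    score = detect(frame)  # inference happens on the device itself
    elapsed_ms = (time.perf_counter() - start) * 1000
    if score >= ALERT_THRESHOLD:
        # The alert is raised locally, so no time is lost on a network hop.
        print(f"ALERT: abnormal event (score={score:.2f}, {elapsed_ms:.2f} ms)")
```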

Scalability

Creating smart spaces often means that companies are looking to run AI at tens to thousands of locations. A major challenge at this scale is limited cloud bandwidth for moving and processing data; bandwidth caps often lead to higher costs and constrain how much infrastructure can be deployed. With edge computing, data collection and processing happen on the local network, so bandwidth is tied to the local area network (LAN) and can scale far more readily.
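
A quick back-of-the-envelope calculation shows why processing at the edge changes the bandwidth picture. The camera count and bit rates below are assumptions for illustration only, not measurements from any deployment:

```python
# Compare cloud bandwidth needed to stream raw video versus sending only
# the metadata produced by edge inference.
NUM_CAMERAS = 1000
VIDEO_MBPS_PER_CAMERA = 4.0      # assumed ~4 Mbps per 1080p stream
METADATA_KBPS_PER_CAMERA = 10.0  # assumed ~10 Kbps of detection events

raw_total_gbps = NUM_CAMERAS * VIDEO_MBPS_PER_CAMERA / 1000
meta_total_mbps = NUM_CAMERAS * METADATA_KBPS_PER_CAMERA / 1000

print(f"Streaming raw video to the cloud: {raw_total_gbps:.1f} Gbps")
print(f"Sending only edge-generated metadata: {meta_total_mbps:.1f} Mbps")
print(f"Reduction factor: {raw_total_gbps * 1000 / meta_total_mbps:.0f}x")
```

Under these assumptions, keeping inference local cuts the cloud-bound traffic from 4 Gbps to 10 Mbps, a 400x reduction, which is what makes scaling to thousands of sites tractable.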

Remote Management

When scaling to hundreds of remote locations, one of the key considerations organizations must address is how to manage all of their edge systems. While some companies have tried relying on skilled employees or contractors to provision and maintain edge locations, most quickly realize they need to centralize management in order to scale. Edge management solutions make it easy for IT to provision edge systems, deploy AI software, and manage ongoing updates.
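
As a rough illustration of centralized management, here is a hedged sketch of a staged rollout across a fleet of edge sites. The Site class and push_update() function are hypothetical stand-ins for whatever fleet-management API an organization actually uses; the pattern shown is updating a small canary group before touching the rest of the fleet.

```python
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    app_version: str
    healthy: bool = True

def push_update(site: Site, version: str) -> bool:
    """Stand-in for remotely deploying a new AI application version."""
    site.app_version = version
    return site.healthy  # a real check would verify the app came up cleanly

def staged_rollout(sites, version, canary_count=2):
    """Update a small canary group first, then the rest of the fleet."""
    canaries, rest = sites[:canary_count], sites[canary_count:]
    if not all(push_update(s, version) for s in canaries):
        print("Canary sites unhealthy; halting rollout")
        return
    for site in rest:
        push_update(site, version)
    print(f"Rolled out {version} to {len(sites)} sites")

fleet = [Site(f"store-{i:03d}", "1.4.0") for i in range(10)]
staged_rollout(fleet, "1.5.0")
```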

Security

The security model for edge computing is almost entirely turned on its head compared to traditional security policies in the data center. While organizations can generally maintain physical control over their data center systems, edge systems are almost always physically accessible to many more people. This requires both physical hardware security processes and software security solutions that protect the data and applications on these edge devices.
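
One piece of the software side is protecting data at rest on a device anyone might physically reach. Below is a minimal sketch using the widely used third-party cryptography package (pip install cryptography). In a real deployment the key would be provisioned into a TPM or secure element rather than generated next to the data, as the comments note.

```python
from cryptography.fernet import Fernet

# Assumption: the key is provisioned out of band (e.g., held in a TPM or
# hardware security module); generating it inline here is for demo only.
key = Fernet.generate_key()
cipher = Fernet(key)

frame = b"raw camera frame or inference result"
encrypted = cipher.encrypt(frame)      # what actually lands on local disk
restored = cipher.decrypt(encrypted)   # only possible with the provisioned key

assert restored == frame
print(f"Stored {len(encrypted)} encrypted bytes at rest")
```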

Resilience

Edge systems are generally run with few or no skilled IT professionals on hand to assist in the event of a crash. Thus, edge management requires resilient infrastructure that can remediate issues on its own, as well as secure remote access when human intervention is required. Migrating applications when a system fails and restarting those applications automatically are core features of many cloud-native management solutions, making ongoing management of edge computing environments seamless.
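
To illustrate the self-remediation idea, here is a toy watchdog sketch that restarts the application whenever it exits. In practice a cloud-native orchestrator's restart policy would handle this rather than a hand-rolled loop, but the sketch shows the behavior an unattended edge site needs. The workload command and restart limit are placeholders.

```python
import subprocess
import sys
import time

# Stand-in workload: a short-lived process that "crashes" (exits) after 5 s.
APP_CMD = [sys.executable, "-c", "import time; time.sleep(5)"]
MAX_RESTARTS = 3  # bounded for the demo; a real watchdog would run forever

proc = subprocess.Popen(APP_CMD)
restarts = 0
while restarts < MAX_RESTARTS:
    time.sleep(1)  # health-check interval
    if proc.poll() is not None:  # process has exited or crashed
        restarts += 1
        print(f"App exited (code {proc.returncode}); automatic restart #{restarts}")
        proc = subprocess.Popen(APP_CMD)
proc.terminate()
```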

There are numerous benefits driving customers to move their compute systems closer to their data sources in remote locations. Customers need to consider how edge computing differs from the data center and cloud computing they have traditionally relied on, and invest in new ways of provisioning, securing, and managing these environments.

Learn more about the Top Considerations for Deploying AI at the Edge by downloading the free white paper. 
