
Breaking Data Silos by Integrating MLOps Platforms and Edge Solutions

A new approach to data

The convergence of AI and IoT has shifted the center of gravity for data away from the cloud and to the edge of the network. In retail stores, factories, fulfillment centers, and other distributed locations, thousands of sensors are collecting petabytes of data that power insights for innovative AI use cases. Because the most valuable insights are generated at the edge, organizations have quickly adopted new technologies and processes to better capitalize on this new center of gravity.

One of the major technologies adopted is edge computing, the process of bringing the computing power for an application to the same physical location where sensors are collecting information. When this computing method is used to power AI applications at the edge, it’s referred to as edge AI.

To ensure that edge locations harvesting valuable insights do not become isolated data silos, organizations are increasingly integrating their edge computing solutions into the existing workflows they use to develop, test, and optimize applications. A seamless path from development to deployment gives teams strong visibility into how applications behave in production while also letting them capitalize on the data and insights those applications collect at edge locations.

This process will only become more important as AI models are quickly and constantly retrained and iterated on based on new data collected at edge locations.

Machine learning operations and edge AI

Machine learning operations (MLOps) is a system of processes to streamline the development, deployment, monitoring, and ongoing management of machine learning models. It allows organizations to quickly scale the development process for applications and enables rapid iterations between data science and IT teams. MLOps platforms organize that philosophy into a set of tools that can be used cross-functionally in an organization to speed up the rate of innovation. 

Graphic illustrating the four phases of the data science lifecycle
Figure 1. The four phases of the data science lifecycle: develop, deploy, monitor, and manage
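The four phases above can be sketched as plain Python. This is a hypothetical illustration of the lifecycle only; the function and class names are invented for this example and do not correspond to any real platform API.

```python
# Hypothetical sketch of the data science lifecycle phases:
# develop, deploy, monitor, manage. All names are illustrative.
from dataclasses import dataclass


@dataclass
class Model:
    version: int
    accuracy: float = 0.0


def develop(data: list) -> Model:
    # Phase 1: train and validate a model on collected data (stubbed here).
    return Model(version=1, accuracy=0.90)


def deploy(model: Model, registry: dict) -> None:
    # Phase 2: publish the model so deployment targets can pull it.
    registry[model.version] = model


def monitor(model: Model, new_data: list) -> float:
    # Phase 3: measure live performance against fresh data
    # (stubbed as a simple decay with data volume).
    return model.accuracy - 0.01 * len(new_data)


def manage(model: Model, live_accuracy: float, threshold: float = 0.85) -> bool:
    # Phase 4: decide whether observed drift warrants a retrain.
    return live_accuracy < threshold
```

In practice each phase loops back into development, which is the iteration cycle the rest of this post describes.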

Integrating MLOps platforms and edge computing solutions allows for a seamless and rapid workflow for data scientists and IT teams to collaboratively develop and deploy applications in production environments. With a complete workflow, teams can significantly increase the rate of innovation as they constantly iterate, test, deploy, and retrain based on insights and information collected at edge sites. And for organizations diligently working to capitalize on the new data paradigm, innovation is paramount.

Integrating Domino Data Lab and NVIDIA Fleet Command

The Domino Data Lab Enterprise MLOps Platform and NVIDIA Fleet Command are now integrated to provide data scientists and IT teams with a consistent, simplified flow from model development to deployment.

Domino Data Lab provides an enterprise MLOps platform that powers model-driven businesses, accelerating the development and deployment of data science work while increasing collaboration and governance. It allows data scientists to experiment, research, test, and validate AI models before deploying them into production.

NVIDIA Fleet Command is a managed platform for container orchestration that streamlines provisioning and deployment of systems and AI applications at the edge. It simplifies the management of distributed computing environments with the scale and resiliency of the cloud, turning every site into a secure, intelligent location.

From development to deployment

The integration with NVIDIA Fleet Command provides Domino Data Lab users an easy avenue to deploy models they are working on to edge locations. The integration bridges the gap between the data scientist team developing applications and IT teams deploying them, allowing both teams access to the entire application lifecycle.  

“The integration with NVIDIA Fleet Command is the last piece in the puzzle to give data scientists access to the complete workflow for developing and deploying AI applications to the edge,” says Thomas Robinson, VP of Strategic Partnerships and Corporate Development at Domino Data Lab. “Full visibility into production deployments is critical for teams to take advantage of the data and insights generated at the edge, ultimately producing better applications faster.”

Data scientists can use the Domino Enterprise MLOps Platform to quickly iterate on their models. Through the same interface, they can load new models onto Fleet Command, making them available to deploy to any connected location. Once deployed, administrators have remote access to the applications for monitoring and troubleshooting, providing critical feedback that can be used in the next iteration of the model.
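The handoff described above can be pictured as two steps: publishing a model artifact, then rolling it out to connected locations. The sketch below is purely hypothetical; these function names are not real Domino or Fleet Command APIs and only illustrate the flow.

```python
# Hypothetical publish-then-deploy flow. No real platform API is shown;
# the catalog dict stands in for a model registry and the returned dict
# for per-location deployment status.


def publish_model(artifact_path: str, catalog: dict, name: str, version: str) -> str:
    """Register a trained model artifact so it becomes deployable."""
    key = f"{name}:{version}"
    catalog[key] = artifact_path
    return key


def deploy_to_locations(model_key: str, catalog: dict, locations: list) -> dict:
    """Roll a published model out to every connected edge location."""
    if model_key not in catalog:
        raise KeyError(f"model {model_key} has not been published")
    return {loc: {"model": model_key, "status": "running"} for loc in locations}
```

The point of the integration is that both steps happen from one interface, so the data science and IT sides see the same artifact and the same deployment state.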

Graphic demonstrating the development-to-deployment workflow between the Domino Data Lab Enterprise MLOps Platform and NVIDIA Fleet Command.
Figure 2. Development-to-deployment workflow between the Domino Data Lab Enterprise MLOps Platform and NVIDIA Fleet Command

A data scientist working on a quality inspection application for a beverage manufacturing plant is one example of this integration used in production environments. The application is used to visually catch dents and defects on cans to prevent them from reaching consumers. The challenge is that the packaging on the cans changes frequently as new designs are tested, seasonal products are released, and event-based packages go to market.

The application needs to learn new designs quickly and frequently while still maintaining precise levels of accuracy, which demands a high rate of innovation to keep up with the packaging changes. To achieve this, the data scientist uses the Domino Data Lab Enterprise MLOps Platform and NVIDIA Fleet Command to create a fast, seamless flow from development and iteration to deployment and monitoring. As a result, they can easily deploy new models with limited disruption in service as products change, and model monitoring ensures they catch any issues with the quality or predictive power of their models.
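The monitoring side of this example can be reduced to a tiny drift check: when live precision on recent inspections falls below a threshold (for instance, after new packaging ships), a retrain with freshly collected edge images is triggered. All names below are hypothetical and only sketch the idea.

```python
# Illustrative drift check for the can-inspection example.
# Each result is a (predicted_defect, actual_defect) pair from a recent can.


def precision(results: list) -> float:
    """Fraction of predicted defects that were actual defects."""
    predicted = [r for r in results if r[0]]
    if not predicted:
        return 1.0  # no defect predictions, nothing to be wrong about
    return sum(1 for p, a in predicted if a) / len(predicted)


def needs_retrain(recent: list, threshold: float = 0.95) -> bool:
    """Flag a retrain when live precision drops below the threshold."""
    return precision(recent) < threshold
```

In a real deployment the threshold, the window of recent results, and the retrain trigger would all be tuned to the line's tolerance for false rejects.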

Watch an end-to-end demo of model development, deployment, and monitoring in the oil and gas space using Domino and NVIDIA Fleet Command. 

Get started with Domino on NVIDIA Fleet Command

Deploying applications on NVIDIA Fleet Command is currently available to Domino users. The Domino Enterprise MLOps Platform is also accessible on NVIDIA LaunchPad, which provides free short-term access to a catalog of hands-on labs. Quickly test AI initiatives and get practical experience with scaling data science workloads.

Learn more and get started.
