If you’re wondering how an AI server is different from an AI workstation, you’re not the only one. Assuming strictly AI use cases with minimal graphics workload, obvious differences can be minimal to none. You can technically use one as the other. However, the results from each will be radically different depending on the workload each is asked to perform. For this reason, it’s important to clearly understand the differences between AI servers and AI workstations.
Setting AI aside for a moment, servers in general tend to be networked and are available as a shared resource that runs services accessed across the network. Workstations are generally intended to execute the requests of a specific user, application, or use case.
Can a workstation act as a server, or a server as a workstation? The answer is “yes,” but ignoring the design purpose of the workstation or server does not usually make sense. For example, both workstations and servers can support multithreaded workloads, but if a server can support 20x more threads than a workstation (all else being equal), the server will be better suited for applications that create many threads for a processor to simultaneously crunch.
Servers are optimized to scale in their role as a network resource to clients. Workstations are usually not optimized for massive scale, sharing, parallelism, and network capabilities.
Specific differences: Servers and workstations for AI
Servers often run an OS designed for the server use case, while workstations run an OS intended for workstation use cases. For example, Microsoft Windows 10 targets desktop and individual use, whereas Microsoft Windows Server runs on dedicated servers that provide shared network services.
The principle is the same for AI servers and workstations. The majority of AI workstations used for machine learning, deep learning, and AI development are Linux-based. The same is true for AI servers. Because the intended use of workstations and servers is different, servers can be equipped with processor clusters, larger CPU and GPU memory resources, more processing cores, and greater multithreading and network capabilities.
Note that because of the extreme demands placed on servers as a shared resource, there is generally an associated greater demand on storage capacity, flash storage performance, and network infrastructure.
The GPU: An essential ingredient
The GPU has become an essential element in modern AI workstations and AI servers. Unlike CPUs, GPUs have the ability to increase the throughput of data and number of concurrent calculations within an application.
GPUs were originally designed to accelerate graphics rendering. Because GPUs can simultaneously process many pieces of data, they have found new modern uses in machine learning, video editing, autonomous driving, and more.
Although AI workloads can be run on CPUs, the time-to-results with a GPU may be 10x to 100x faster. The complexity of deep learning in natural language processing, recommender engines, and image classification, for example, benefits greatly from GPU acceleration.
Performance is needed for initial training of machine learning and deep learning models. Performance is also mandatory when real-time response (as for conversational AI) is running in inference mode.
Enterprise use
It’s important that AI servers and workstations work seamlessly together within an enterprise and with the cloud. And each has a place within an enterprise organization.
AI servers
In the case of AI servers, large models are more efficiently trained on GPU-enabled servers and server clusters. They can also be efficiently trained using GPU-enabled cloud instances, especially for massive datasets and models that require extreme resolution. AI servers are often tasked to operate as dedicated AI inferencing platforms for a variety of AI applications.
AI workstations
Individual data scientists, data engineers, and AI researchers often use a personal AI or data science workstation in the process of building and maintaining AI applications. This tends to include data preparation, model design, and preliminary model training. GPU-accelerated workstations make it possible to build complete model prototypes using an appropriate subset of a large dataset. This is often done in hours to a day or two.
Certified hardware compatibility along with seamless compatibility across AI tools is very important. NVIDIA-Certified Workstations and Servers provide tested enterprise seamlessness and robustness across certified platforms.
In the old days of 10 Mbps Ethernet, long before Time-Sensitive Networking became a thing, state-of-the-art shared networks basically required that packets would collide. For the primitive technology of the time, this was eminently practical… computationally preferable to any solution that would require carefully managed access to the medium.
After mangling each other’s data, two competing stations would each wait a random amount of time (wasting even more time) before trying to transmit again. This was deemed acceptable because the minimum-size frame was 64 bytes (512 bits), and a reasonable estimate of how long such a frame would occupy the wire follows from the network speed: at 10 million bits per second, each bit takes about 0.1 microseconds, so 512 bits occupy the wire for at least 51.2 microseconds.
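As a quick back-of-the-envelope check of that arithmetic (a minimal Python sketch, not part of the original discussion; the listed link speeds are the only assumption), you can compute the bit time and minimum frame time for a few generations of Ethernet:

# Minimum Ethernet frame: 64 bytes = 512 bits
MIN_FRAME_BITS = 64 * 8

for label, bits_per_second in [("10 Mbps", 10e6), ("1 Gbps", 1e9), ("400 Gbps", 400e9)]:
    bit_time_us = 1e6 / bits_per_second            # microseconds per bit
    frame_time_us = MIN_FRAME_BITS * bit_time_us   # time the minimum frame occupies the wire
    print(f"{label}: bit time = {bit_time_us:.6g} us, minimum frame = {frame_time_us:.6g} us")

At 10 Mbps this prints the familiar 51.2 microseconds; at 400 Gbps the same frame occupies the wire for roughly a nanosecond, which is why timing budgets keep shrinking.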
Ethernet technology has evolved from 10 Mbps in the early 1980s to 400 Gbps today, with 800 Gbps and 1.6 Tbps planned (Figure 1).
It should be clear that wanting your networks to go faster is an ongoing trend! As such, any application that must manage events across those networks requires a well-synchronized, commonly understood, network-spanning sense of time, at time resolutions that get progressively narrower as networks become faster.
This is why the IEEE has been investigating how to support time-sensitive network applications since at least 2008, initially for audio and video applications but now for a much richer set of much more important applications.
Three use cases for time-sensitive networking
The requirements for precise and accurate timing extend beyond the physical and data link layers to certain applications that are highly dependent on predictable, reliable service from the network. These new and emerging applications leverage the availability of a precise, accurate, and high-resolution understanding of time.
5G, 6G, and beyond
Starting with the 5G family of protocols from 3GPP, some applications such as IoT or IIoT do not necessarily require extremely high bandwidth. They do require tight control over access to the wireless medium to achieve predictable access with low latency and low jitter. This is achieved by the delivery of precise, accurate, and high-resolution time to all participating stations.
In this time-domain style of access, each station asks for and is granted permission to use the medium; a network scheduler then informs the station of when and for how long it may transmit.
5G and future networks deliver this accurate, precise, and high-resolution time to all participating stations to enable this kind of new high-value application. Previously, the most desirable attribute of new networks was speed. These new applications actually need control rather than speed.
Successfully enabling these applications requires that participating stations have the same understanding of the time in absolute terms so that they do not either start transmitting too soon, or too late, or for the wrong amount of time.
If a station were to transmit too soon or for too long, it may interfere with another station. If it were to begin transmitting too late, it may waste some of its precious opportunity to use the medium, transmitting for less time than it was granted.
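To make the failure mode concrete, here is a small, purely illustrative Python sketch (the grant times, guard band, and clock offsets are invented for the example): two stations hold adjacent transmit grants, and a station whose clock lags the shared network time spills into its neighbor’s window once the lag exceeds the guard band.

# Hypothetical scheduler grants, in microseconds of shared network time.
GRANT_A = (0.0, 100.0)    # station A may transmit from t=0 to t=100
GRANT_B = (101.0, 201.0)  # station B's grant follows after a 1 us guard band

def actual_window(grant, clock_lag_us):
    # A station whose clock lags network time starts and stops late by the lag.
    start, end = grant
    return (start + clock_lag_us, end + clock_lag_us)

def overlaps(w1, w2):
    return w1[0] < w2[1] and w2[0] < w1[1]

for lag in (0.5, 5.0):
    a = actual_window(GRANT_A, lag)
    print(f"A lags by {lag} us -> collides with B's grant: {overlaps(a, GRANT_B)}")

With a 0.5-microsecond lag the guard band absorbs the error; with a 5-microsecond lag station A is still transmitting inside station B’s window.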
I should state that 5G clearly isn’t Ethernet, but Ethernet technology is how the 5G radio access network is tied together, through backhaul networks extending out from the metropolitan area data centers. The time-critical portion of the network extends from this Ethernet backhaul domain, both into the data centers and out into the radio access network.
What kinds of applications need this level of precision?
Consider telemetry first: a missed reading can be implicitly recovered just by waiting for the next one. A meter reading might be generated once every 30 minutes, for example, so extreme timing precision is not the limiting requirement there.
What about robots that must have their position understood with submillisecond resolution? Missing a few position reports could lead to damaging the robot, damaging nearby or connected equipment, damaging the materials that the robot is handling, or even the death of a nearby human.
You might think this has nothing to do with 5G, as it’s clearly a manufacturing use case. This is a situation where 5G might be a better solution because the Precision Time Protocol (PTP) is built into the protocol stack from the get-go.
PTP (IEEE 1588-2008) is the foundation of a suite of protocols and profiles that enable highly accurate time to be synchronized across networked devices to high precision and at high resolution.
Time-sensitive networking technology enables 5G (or subsequent) networks to serve thousands or tens of thousands of nodes. It delivers an ever-shifting mix of high-speed, predictable latency, or low-jitter services, according to the demands of the connected devices.
Yes, these might be average users with mobile phones, industrial robots, or medical instruments. The key thing is that with time-sensitive networking built in, the network can satisfy a variety of use cases as long as bandwidth (and time) is available.
PTP implementations in products containing NVIDIA Cumulus Linux 5.0 and higher regularly deliver deep submillisecond (even submicrosecond) precision, supporting the diverse requirements of 5G applications.
Media and entertainment
The majority of the video content in the television industry currently exists in serial digital interface (SDI) format. However, the industry is transitioning to an Internet Protocol (IP) model.
In the media and entertainment industry, there are several scenarios to consider, such as studio work (combining multiple camera feeds and overlays, for example), video production, video broadcast (from a single point to many viewers), and multiscreen delivery.
Time synchronization is critical for these types of activities.
In the media and broadcast world, consistent time synchronization is of the utmost importance to provide the best viewing experience and to prevent issues with frame alignment, lip sync, and audio/video synchronization.
In the baseband world, reference black or genlock was used to keep camera and other video source frames in sync and to avoid introducing nasty artifacts when switching from one source to another.
However, with IP adoption and, more specifically, SMPTE-2110 (or SMPTE-2022-6 with AES67), a different way to provide timing was needed. Along came PTP, also referred to as IEEE 1588 (PTP v2).
PTP is fully network-based and can travel over the same data network connections that are already being used to transmit and receive essence streams. Various profiles, such as SMPTE 2059-2 and AES67, provide a standardized set of configurations and rules that meet the requirements for the different types of packet networks.
Spectrum fully supports PTP 1588 under SMPTE 2059-2 and other profiles.
Automotive applications
New generations of car-area networks (CANs) have evolved from shared/bus architectures toward architectures that you might find in 5G radio-access networks (RANs) or in IT environments: switched topologies.
When switches are involved, there is an opportunity for packet loss or delay variability, arising from contention or buffering, which limits or eliminates predictable access to the network that might be needed for various applications in the automobile.
Self-driving cars must regularly, at a fairly high frequency, process video and other sensor inputs to determine a safe path forward for the vehicle. The guiding intelligence in the vehicle depends on regularly accessing its sensors, so the network must be able to guarantee that access to the sensors is frequent enough to support the inputs to the algorithms that must interpret them.
For instance, the steering wheel and brakes are reading friction, engaging antilock and antislip functions, and trading off regenerative energy capture compared to friction braking. The video inputs, and possibly radar and lidar (light detection and ranging), are constantly scanning the road ahead. They enable the interpretation algorithms to determine if new obstacles have become visible that would require steering, braking, or stopping the vehicle.
All this is happening while the vehicle’s navigation subsystem uses GPS to receive and align coarse position data against a map. That coarse position is combined with the visual inputs from the cameras to establish accurate positioning over time, to determine the maximum legally allowed speed, and to weigh legal limits against local conditions to arrive at a safe speed.
These varied sensors and associated independent subsystems must be able to deliver their inputs to the main processors and their self-driving algorithms on a predictable-latency/low-jitter basis, while the network is also supporting non-latency-critical applications. The correct, predictable operation of this overall system is life-critical for the passengers (and for pedestrians!).
Beyond the sensors and software that support the safe operation of the vehicle, other applications running on the CAN are still important to the passengers, while clearly not life-critical:
Operating the ventilation or climate-control system to maintain a desirable temperature at each seat (including air motion, seat heating or cooling, and so on)
Delivering multiple streams of audio or video content to various passengers
Gaming with other passengers or passengers in nearby vehicles
Important mundane maintenance activities like measuring the inflation pressure of the tires, level of battery charge, braking efficiency (which could indicate excessive wear), and so on
Other low-frequency yet also time-critical sensor inputs provide necessary inputs to the vehicle’s self-diagnostics that determine when it should take itself back to the maintenance depot for service, or just to recharge its batteries.
The requirement for all these diverse applications to share the same physical network in the vehicle (to operate over the same CAN) is the reason why PTP is required.
Engineers will design the CAN to have sufficient instantaneous bandwidth to support the worst-case demand from all critical devices (such that contention is either rare or impossible), while dynamically permitting all devices to request access in the amounts and with the latency bounds that each requires, which can change over time. Pun intended.
In a world of autonomous vehicles, PTP is the key to enabling in-car technology, supporting the safe operation of vehicles while delivering rich entertainment and comfort.
Conclusion
You’ve seen three examples of applications where control over access to the network is as important as raw speed. In each case, the application defines the requirements for precise/accurate/high-resolution timing, but the network uses common mechanisms to deliver the required service.
As networks continue to get faster, the time resolution needed to discriminate events scales as the reciprocal of the bandwidth: doubling the link speed halves the time each bit occupies the wire.
Powerful PTP implementations, such as that in NVIDIA Cumulus Linux 5.0-powered devices, embody scalable protocol mechanisms that will adapt to the faster networks of the future. They will deliver timing accuracy and precision that adjusts to the increasing speeds of these networks.
Future applications can expect to continue to receive the predictable time-dependent services that they need. This will be true even though the networks continue to become more capable of supporting more users, at faster speeds, with even finer-grained time resolution.
Artificial intelligence (AI) is becoming pervasive in the enterprise. Speech recognition, recommenders, and fraud detection are just a few applications among hundreds being driven by AI and deep learning (DL).
To support these AI applications, businesses look toward optimizing AI servers and performance networks. Unfortunately, storage infrastructure requirements are often overlooked in the development of enterprise AI. Yet for the successful adoption of AI, it is vital to consider a comprehensive storage deployment strategy that considers AI growth, future proofing, and interoperability.
This post highlights important factors that enterprises should consider when planning data storage infrastructure for AI applications to maximize business results. I discuss cloud versus on-premises storage solutions, as well as the need for higher-performance storage within GPU-enabled virtual machines (VMs).
Why AI storage decisions are needed for enterprise deployment
The popular phrase, “You can pay me now—or pay me later” implies that it’s best to think about the future when making current decisions. Too often, storage solutions for supporting an AI or DL app only meet the immediate needs of the app without full consideration of the future cost and flexibility.
Spending money today to future-proof your AI environment from a storage standpoint can be more cost-effective in the long run. Decision-makers must ask themselves:
Can my AI storage infrastructure adapt to a cloud or hybrid model?
Will choosing object, block, or file storage limit flexibility in future enterprise deployments?
Is it possible to use lower-cost storage tiers or a hybrid model for archiving, or for datasets that do not require expensive, fast storage?
The impact of enterprise storage decisions on AI deployment is not always obvious without a direct A/B comparison. Wrong decisions today can result in lower performance and the inability to efficiently scale-out business operations in the future.
Main considerations when planning AI storage infrastructure
Following are a variety of factors to consider when deploying and planning storage. Table 1 shows an overview of data center, budget, interoperability, and storage type considerations.
| Data center | Budget | Interoperability | Storage type |
| --- | --- | --- | --- |
| DPU | Existing vs. new | Cloud and data center | Object/Block/File |
| Network | All Flash/HDD/Hybrid | VM environments | Flash/HDD/Hybrid |
Table 1. Storage considerations for IT when deploying AI solutions on GPU-accelerated AI applications
AI performance and the GPU
Before evaluating storage performance, consider that a key element of AI performance is having high-performance enterprise GPUs to accelerate training for machine-learning, DL, and inferencing apps.
Many data center servers do not have GPUs to accelerate AI apps, so it’s best to evaluate GPU resources first when assessing performance.
Large datasets do not always fit within GPU memory. This is important because GPUs deliver less performance when the complete data set does not fit within GPU memory. In such cases, data is swapped to and from GPU memory, thus impacting performance. Model training takes longer, and inference performance can be impacted.
Certain apps, such as fraud detection, may have extreme real-time requirements that are affected when GPU memory is waiting for data.
Storage considerations
Storage is always an important consideration. Existing storage solutions may not work well when deploying a new AI app.
It may be that you now require the speed of NVMe flash storage or direct GPU memory access for desired performance. However, you may not know what tomorrow’s storage expectations will be, as demands for AI data from storage increase over time. For certain applications, there is almost no such thing as too much storage performance, especially in the case of real-time use cases such as pre-transaction fraud detection.
There is no “one-size-fits-all” storage solution for AI-driven apps.
Performance is only one storage consideration. Another is scale-out ability. Training data is growing. Inferencing data is growing. Storage must be able to scale in both capacity and performance—and across multiple storage nodes in many cases. Simply put, a storage device that meets your needs today may not always scale for tomorrow’s challenges.
The bottom line: as training and inference workloads grow, capacity and performance must also grow. IT should consider only scalable storage solutions with enough throughput to keep GPUs busy and deliver the best AI performance.
Data center considerations
The data processing unit (DPU) is a recent addition to infrastructure technology that takes data center and AI storage to a completely new level.
Although not a storage product, the DPU redefines data center storage. It is designed to integrate storage, processing, and networks such that whole data centers act as a computer for enterprises.
It’s important to understand DPU functionality when planning and deploying storage, as the DPU offloads storage services from data center processors and storage devices. For many storage products, a DPU-interconnected data center enables more efficient scale-out.
With a DPU, remote storage can perform as if it were directly attached to the AI server. The DPU helps enable scalable software-defined storage, in addition to accelerating networking and cybersecurity.
Budget considerations
Cost remains a critical factor. While deploying the highest throughput and lowest latency storage is desirable, it is not always necessary depending on the AI app.
To extend your storage budget further, IT must understand the storage performance requirements of each AI app (bandwidth, IOPS, and latency).
For example, if an AI app has a large dataset but minimal performance requirements, traditional hard disk drives (HDD) may be sufficient while lowering storage costs substantially. This is especially true when the “hot” data of the dataset fits wholly within GPU memory.
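As a rough, hypothetical illustration of that sizing exercise (every number below is an assumption, not a measurement or vendor guidance), a back-of-the-envelope estimate of sustained read bandwidth can indicate whether HDD, hybrid, or all-flash storage is a sensible starting point:

# Estimate the sustained read bandwidth needed to keep a training job fed.
samples_per_second = 2000     # training samples consumed per second (assumed)
avg_sample_size_mb = 0.15     # average size of one sample in MB (assumed)

required_mb_per_s = samples_per_second * avg_sample_size_mb
print(f"Sustained read bandwidth needed: ~{required_mb_per_s:.0f} MB/s")
# A few hundred MB/s is within reach of HDD or hybrid arrays; requirements in
# the multi-GB/s range usually point toward all-flash or NVMe storage.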
Another cost-saving option is to use hybrid storage that uses flash as a cache to accelerate performance while lowering storage costs for infrequently accessed data residing on HDDs. There are hybrid flash/HDD storage products that perform nearly as well as all-flash, so exploring hybrid storage options can make a lot of sense for apps that don’t have extreme performance requirements.
Older, archived, and infrequently used data and datasets may still have future value, but keeping them on expensive primary storage is not cost-effective.
HDDs can still make a lot of financial sense, especially if data can be seamlessly accessed when needed. A two-tiered cloud and on-premises storage solution can also make financial sense depending on the size and frequency of access. There are many of these solutions on the market.
Interoperability factors
Evaluating cloud and data center interoperability from a storage perspective is important. Even within VM-driven data centers, there are interoperability factors to evaluate.
Cloud and data center considerations
Will the AI app run on-premises, in the cloud, or both? Even if the app can be run in either place, there is no guarantee that the performance of the app won’t change with location. For example, there may be performance problems if the class of storage used in the cloud differs from the storage class used on-premises. Storage class must be considered.
Assume that a job retraining a large recommender model completes within a required eight-hour window using data center GPU-enabled servers that use high-performance flash storage. Moving the same application to the cloud with equivalent GPU horsepower may cause training to complete in 24 hours, well outside the required eight-hour window. Why?
Some AI apps require a certain class of storage (fast flash, large storage cache, DMA storage access, storage class memory (SCM) read performance, and so on) that is not always available through cloud services.
The point is that certain AI applications will yield similar results regardless of data center or cloud storage choices. Other applications can be storage-sensitive.
Just because an app is containerized and orchestrated by Kubernetes in the cloud, it does not guarantee similar data center results. When viewed in this way, containers do not always provide cross–data center and cloud interoperability when performance is considered. For effective data center and cloud interoperability, ensure that storage choices in both domains yield good results.
VM considerations
Today, most data center servers do not have GPUs to accelerate AI and creative workloads. Tomorrow, the data center landscape may look quite different. Businesses are being forced to use AI to be competitive, whether conversational AI, fraud detection, recommender systems, video analytics, or a host of other use cases.
GPUs are common on workstations, but the acceleration provided by GPU workstations cannot easily be shared within an organization.
The paradigm shift that enterprises must prepare for is the sharing of server-based, GPU-enabled resources within VM environments. The availability of solutions such as NVIDIA AI Enterprise enables sharing GPU-enabled VMs with anyone in the enterprise.
Put simply, it is now possible for anyone in an enterprise to easily run power-hungry AI apps within a VM in the vSphere environment.
So what does this mean for VM storage? Storage for GPU-enabled VMs must address the shared performance requirement of both the AI apps and users of the shared VM. This implies higher storage performance for a given VM than would be required in an unshared environment.
It also means that physical storage allocated for such VMs will likely need to be more scalable in capacity and performance. For a heavily shared VM, it can make sense to use dedicated all-flash storage-class memory (SCM) arrays connected to the GPU-enabled servers through RDMA over Converged Ethernet (RoCE) for the highest performance and scale-out.
Storage type
An in-depth discussion on the choice of object, block, or file storage for AI apps goes beyond the scope of this post. That said, I mention it here because it’s an important consideration but not always a straightforward decision.
Object storage
If a desired app requires object storage, for example, the required storage type is obvious. Some AI apps take advantage of object metadata while also benefiting from the effectively unlimited scale of a flat-address-space object storage architecture. AI analytics can take advantage of rich object metadata to enable precise data categorization and organization, making data more useful and easier to manage and understand.
Block storage
Although block storage is supported in the cloud, truly massive cloud datasets tend to be object-based. Block storage can yield higher performance for structured data and transactional applications.
Block storage lacks metadata information, which prevents the use of block storage for any app that is designed to provide benefit from metadata. Many traditional enterprise apps were built on a block storage foundation, but the advent of object storage in the cloud has caused many modern applications to be designed specifically for native cloud deployment using object storage.
File storage
When an AI app accesses data across common file protocols, the obvious storage choice will be file-based. For example, AI-driven image recognition and categorization engines may require access to file-based images.
Deployment options can vary from dedicated file servers to NAS heads built on top of an object or block storage architecture. NAS heads can export NFS or SMB file protocols for file access to an underlying block or object storage architecture. This can provide a high level of flexibility and future-proofing with block or object storage used as a common foundation for file storage access by AI and data center network clients.
Storage type decisions for AI must be based on a good understanding of what is needed today as well as a longer-term AI deployment strategy. Fully evaluate the pros and cons of each storage type. There is frequently no one-size-fits-all answer, and there will also be cases where all three storage types (object, block, and file) make sense.
Key takeaways on enterprise storage decision making
There is no single approach to addressing storage requirements for AI solutions. However, here are a few core principles by which wise AI storage decisions can be made:
Any storage choice for AI solutions may be pointless if training and inference are not GPU-accelerated.
Prepare for the possibility of needing IT resources and related storage that is well beyond current estimates.
Don’t assume that existing storage is “good enough” for new or expanded AI solutions. Storage with higher cost, performance, and scalability may actually be more effective and efficient, over time, compared to existing storage.
Always consider interoperability with the cloud as on-premises storage options may not be available with your cloud provider.
Strategic IT planning should consider the infrastructure and storage benefits of DPUs.
As you plan for AI in your enterprise, don’t put storage at the bottom of the list. The impact of storage on your AI success may be greater than you think.
The acceleration of digital transformation within data centers and the associated application proliferation is exposing new attack surfaces to potential security threats. These new attacks typically bypass the well-established perimeter security controls such as traditional and web application firewalls, making detection and remediation of cybersecurity threats more challenging.
Defending against these threats is becoming more challenging due to modern applications not being built entirely within a single data center—whether physical, virtual, or in the cloud. Today’s applications often span multiple servers in public clouds, CDN networks, edge platforms, and as-a-service components for which the location is not even known.
On top of this, each service or microservice may have multiple instances for scale-out purposes, straining the ability of traditional network security functions to isolate them from the outside world to protect them.
Finally, the number of data sources and locations is large and growing both because of the distributed nature of modern applications and the effects of scale-out architecture. There is no longer a single gate in the data center, such as an ingress gateway or firewall, that can observe and secure all data traffic.
The consequence of these changes is a much larger volume of data that must be collected to provide a holistic view of the application and to detect advanced threats. The number of data sources that must be monitored, and their diversity in terms of data types, is also growing, making effective cybersecurity data collection extremely challenging.
Detection requires a large amount of contextual information that can be correlated in near real time to determine the advanced threat activity in progress.
F5 is researching techniques to augment well-established security measures for web, application, firewall, and fraud mitigation. Detecting such advanced threats requires contextual analysis of several of these data points through large-scale telemetry with near real-time analysis, which in turn calls for machine learning (ML) and AI algorithms.
ML and AI are used to detect anomalous activity in and around applications, as well as cloud environments, to tackle the risks upfront. This is where the NVIDIA BlueField-2 data processing unit (DPU) real-time telemetry and NVIDIA GPU-powered Morpheus cybersecurity framework come into play.
NVIDIA Morpheus provides an open application framework that enables cybersecurity developers to create optimized AI pipelines for filtering, processing, and classifying large volumes of real-time data. Morpheus offers pretrained AI models that provide powerful tools to simplify workflows and help detect and mitigate security threats.
Cybersecurity poses unique requirements for AI/ML processing
From a solution perspective, a robust telemetry collection strategy is a must, and the telemetry pipeline must meet specific requirements:
A secure—encrypted and authenticated—means of transmitting data to a centralized data collector.
The ability to ingest telemetry with support for all the commonly used data paradigms:
Asynchronously occurring security-relevant events
Application logs
Statistics and status-related metrics
Entity-specific trace records
A well-defined vocabulary that can map the data collected from diverse data sources into a canonical consumable representation
Finally, all this must be done in a highly scalable way, agnostic to the source location, which may be from a data center, the edge, a CDN, a client device, or even out-of-band metadata, such as threat intelligence feeds.
NVIDIA Morpheus-optimized AI pipelines
With a unique history and expertise in building networking software capable of harnessing the benefits of hardware, F5 is one of the first to join the NVIDIA Morpheus Early Access program.
F5 is leveraging Morpheus, which couples BlueField DPUs with NVIDIA certified EGX servers, to provide a powerful solution to detect and eliminate security threats.
Morpheus allows F5 to accelerate access to embedded analytics and provide security across the cloud and emerging edge from their Shape Enterprise Defense application. The joint solution brings a new level of security to data centers and enables dynamic protection, real-time telemetry, and an adaptive defense for detecting and remediating cybersecurity threats.
The new ‘Level Up with NVIDIA’ webinar series offers creators and developers the opportunity to learn more about the NVIDIA RTX platform, interact with NVIDIA experts, and ask questions about game integrations.
Kicking off in early August, the series features one 60-minute webinar each month, with the first half dedicated to NVIDIA experts discussing the session’s topic and the remaining time dedicated to Q&A.
We’ll focus on the NVIDIA RTX platform within popular game engines, explore which NVIDIA technologies and SDKs are available in Unreal Engine 5 and Unity, and show how you can successfully leverage the latest tools in your games.
Join us for the first webinar in the series on August 10 at 10 AM, Pacific time, with NVIDIA experts Richard Cowgill and Zach Lo discussing RTX in Unreal Engine 5.
Learn about NVIDIA technologies integrated into Unreal Engine, get insights into available ray tracing technologies, and see how you can get the most out of NVIDIA technologies across all game engines.
Imagine that you have trained your model with PyTorch, TensorFlow, or the framework of your choice, are satisfied with its accuracy, and are considering deploying it as a service. There are two important objectives to consider: maximizing model performance and building the infrastructure needed to deploy it as a service. This post discusses both objectives.
You can squeeze better performance out of a model by accelerating it across three stack levels:
Hardware acceleration
Software acceleration
Algorithmic or network acceleration.
NVIDIA GPUs are the leading choice for hardware acceleration among deep learning practitioners, and their merit is widely discussed in the industry.
The conversation about GPU software acceleration typically revolves around libraries like cuDNN, NCCL, TensorRT, and other CUDA-X libraries.
Algorithmic or network acceleration revolves around the use of techniques like quantization and knowledge distillation that modify the network itself; how applicable they are depends heavily on your models.
This need for acceleration is driven primarily by business concerns, like reducing costs or improving the end-user experience by reducing latency, and by tactical considerations, like deploying models on edge devices with fewer compute resources.
Serving deep learning models
After the models are accelerated, the next step is to build a serving solution to deploy your model, which comes with its own unique set of challenges. This is a nonexhaustive list:
Will the service work on different hardware platforms?
Will it handle other models that I have to deploy simultaneously?
Will the service be robust?
How do I reduce latency?
Models are trained with different frameworks and tech stacks; how do I cater to this?
How do I scale?
These are all valid questions and addressing each of them presents a challenge.
Solution overview
This post discusses using NVIDIA TensorRT, its framework integrations for PyTorch and TensorFlow, NVIDIA Triton Inference Server, and NVIDIA GPUs to accelerate and deploy your models.
NVIDIA TensorRT
NVIDIA TensorRT is an SDK for high-performance deep learning inference. It includes a deep learning inference optimizer and runtime that delivers low latency and high throughput for deep learning inference applications.
With its framework integrations for PyTorch and TensorFlow, you can speed up inference by up to 6x with just one line of code.
NVIDIA Triton Inference Server
NVIDIA Triton Inference Server is an open-source inference-serving software that provides a single standardized inference platform. It can support running inference on models from multiple frameworks on any GPU or CPU-based infrastructure in the data center, cloud, embedded devices, or virtualized environments.
Figure 1 shows the steps that you must go through.
Before you start following along, be ready with your trained model.
Step 1: Optimize the models. You can do this with either TensorRT or its framework integrations. If you choose TensorRT, you can use the trtexec command line interface. For the framework integrations with TensorFlow or PyTorch, you can use the one-line API.
Step 2: Build a model repository. Spinning up an NVIDIA Triton Inference Server requires a model repository. This repository contains the models to serve, a configuration file that specifies the details, and any required metadata.
Step 3: Spin up the server.
Step 4: Finally, we provide simple and robust HTTP and gRPC APIs that you can use to query the server!
Throughout this post, use the Docker containers from NGC. You may need to create an account and get the API key to access these containers. Now, here are the details!
Accelerating models with TensorRT
TensorRT accelerates models through graph optimization and quantization. You can access these benefits in any of the following ways:
trtexec CLI tool
TensorRT Python/C++ API
Torch-TensorRT (integration with PyTorch)
TensorFlow-TensorRT (integration with TensorFlow)
While TensorRT natively enables greater customization in graph optimizations, the framework integrations provide ease of use for developers new to the ecosystem. Because the route a user adopts depends on the specific needs of their network, we would like to lay out all the options. For more information, see Speeding Up Deep Learning Inference Using NVIDIA TensorRT (Updated).
For TensorRT, there are several ways to build a TensorRT engine. For this post, use the trtexec CLI tool. If you want a script to export a pretrained model to follow along, use the export_resnet_to_onnx.py example. For more information, see the TensorRT documentation.
docker run -it --gpus all -v /path/to/this/folder:/trt_optimize nvcr.io/nvidia/tensorrt:<xx.xx>-py3

trtexec --onnx=resnet50.onnx \
    --saveEngine=resnet50.engine \
    --explicitBatch \
    --useCudaGraph
To use FP16, add --fp16 in the command. Before proceeding to the next step, you must know the names of your network’s input and output layers, which is required while defining the config for the NVIDIA Triton model repository. One easy way is to use polygraphy, which comes packaged with the TensorRT container.
polygraphy inspect model resnet50.engine --mode=basic
For Torch-TensorRT, pull the NVIDIA PyTorch container, which has both TensorRT and Torch-TensorRT installed. To follow along, use the torch_trt_resnet50.py sample. For more examples, visit the Torch-TensorRT GitHub repo.
# <xx.xx> is the yy.mm publishing tag for NVIDIA's PyTorch
# container; for example, 21.12
docker run -it --gpus all -v /path/to/this/folder:/resnet50_eg nvcr.io/nvidia/pytorch:<xx.xx>-py3
python torch_trt_resnet50.py
To expand on the specifics, you are essentially using Torch-TensorRT to compile your PyTorch model with TensorRT. Behind the scenes, your model gets converted to a TorchScript module, and then TensorRT-supported ops undergo optimizations. For more information, see the Torch-TensorRT documentation.
import torch
import torch_tensorrt

model = torch.hub.load('pytorch/vision:v0.10.0', 'resnet50', pretrained=True).eval().to("cuda")

# Compile with Torch-TensorRT
trt_model = torch_tensorrt.compile(
    model,
    inputs=[torch_tensorrt.Input((1, 3, 224, 224))],
    enabled_precisions={torch_tensorrt.dtype.float32},  # Runs with FP32; can use FP16
)

# Save the compiled TorchScript module
torch.jit.save(trt_model, "model.pt")
For TensorFlow-TensorRT, the process is pretty much the same. First, pull the NVIDIA TensorFlow container, which comes with TensorRT and TensorFlow-TensorRT. We made a short script, tf_trt_resnet50.py, as an example. For more examples, see the TensorFlow TensorRT GitHub repo.
# <xx.xx> is the yy.mm publishing tag for the NVIDIA TensorFlow
# container; for example, 21.12
docker run -it --gpus all -v /path/to/this/folder:/resnet50_eg nvcr.io/nvidia/tensorflow:<xx.xx>-tf2-py3
python tf_trt_resnet50.py
Again, you are essentially using TensorFlow-TensorRT to compile your TensorFlow model with TensorRT. Behind the scenes, your model gets segmented into subgraphs containing operations supported by TensorRT, which then undergo optimizations. For more information, see the TensorFlow-TensorRT documentation.
from tensorflow.keras.applications import ResNet50
from tensorflow.python.compiler.tensorrt import trt_convert as trt

# Load model
model = ResNet50(weights='imagenet')
model.save('resnet50_saved_model')

# Optimize with TF-TRT
converter = trt.TrtGraphConverterV2(input_saved_model_dir='resnet50_saved_model')
converter.convert()

# Save the model
converter.save(output_saved_model_dir='resnet50_saved_model_TFTRT_FP32')
Now that you have optimized your model with TensorRT, you can proceed to the next step, setting up NVIDIA Triton.
Setting up NVIDIA Triton Inference Server
NVIDIA Triton Inference Server is built to simplify the deployment of a model or a collection of models at scale in a production environment. To achieve ease of use and provide flexibility, using NVIDIA Triton revolves around building a model repository that houses the models, configuration files for deploying those models, and other necessary metadata.
Look at the simplest case. Figure 4 has four key points. The config.pbtxt file (a) is the previously mentioned configuration file that contains, well, configuration information for the model.
There are several key points to note in this configuration file:
Name: This field defines the model’s name and must be unique within the model repository.
Platform (c): This field defines the type of the model: is it a TensorRT engine, a PyTorch model, or something else?
Input and Output (d): These fields are required because NVIDIA Triton needs metadata about the model: the names of your network’s input and output layers and their shapes. In the case of TorchScript, because input and output layer names are absent, use the naming convention input__0 (and, likewise, output__0 for the output). The datatype is set to FP32, and the input format is specified as (channel, height, width), that is, 3, 224, 224.
There are minor differences between the TensorRT, Torch-TensorRT, and TensorFlow-TensorRT workflows in this step, which boil down to specifying the platform and changing the names of the input and output layers. We made sample config files for all three (TensorRT, Torch-TensorRT, and TensorFlow-TensorRT). Lastly, you add the trained model (b).
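For orientation, a minimal model repository for the Torch-TensorRT case might look like the following sketch. The directory name, version folder, and dims are illustrative assumptions; compare them against the sample config files mentioned above before relying on them.

model_repository/
  resnet50_torch/
    config.pbtxt
    1/
      model.pt

# config.pbtxt (illustrative sketch)
name: "resnet50_torch"
platform: "pytorch_libtorch"
max_batch_size: 0
input [
  {
    name: "input__0"
    data_type: TYPE_FP32
    dims: [1, 3, 224, 224]
  }
]
output [
  {
    name: "output__0"
    data_type: TYPE_FP32
    dims: [1, 1000]
  }
]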
Now that the model repository has been built, you spin up the server. For this, all you must do is pull the container and specify the location of your model repository. For more Information about scaling this solution with Kubernetes, see Deploying NVIDIA Triton at Scale with MIG and Kubernetes.
With your server up and running, you can finally build a client to fulfill inference requests!
Setting up NVIDIA Triton Client
The final step in the pipeline is to query the NVIDIA Triton Inference Server. You can send inference requests to the server through an HTTP or a gRPC request. Before diving into the specifics, install the required dependencies and download a sample image.
In this post, use Torchvision to transform a raw image into a format that suits the ResNet-50 model. It isn’t strictly needed for a client. We have a much more comprehensive image client and a plethora of varied clients premade for standard use cases available in the triton-inference-server/client GitHub repo. However, for this explanation, we go over a much simpler, bare-bones client to demonstrate the core of the API.
Okay, now you are ready to look at an HTTP client (Figure 5). First, download the client script. Then, pass the image and specify the names of the input and output layers of the model. These names should be consistent with the specifications defined in the config file that you built while making the model repository.
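As a sketch of what such a bare-bones client can look like in Python (it assumes the tritonclient package is installed with pip install tritonclient[http], the server is reachable at localhost:8000, and the hypothetical model name resnet50_torch plus the input__0/output__0 layer names from the earlier config sketch):

import numpy as np
import tritonclient.http as httpclient

# Connect to the locally running Triton server (default HTTP port 8000).
client = httpclient.InferenceServerClient(url="localhost:8000")

# A placeholder preprocessed image batch; in practice this is the image
# transformed with Torchvision as described above.
batch = np.random.rand(1, 3, 224, 224).astype(np.float32)

# Describe the input and the requested output by the layer names in config.pbtxt.
infer_input = httpclient.InferInput("input__0", list(batch.shape), "FP32")
infer_input.set_data_from_numpy(batch)
infer_output = httpclient.InferRequestedOutput("output__0")

response = client.infer(
    model_name="resnet50_torch", inputs=[infer_input], outputs=[infer_output]
)
logits = response.as_numpy("output__0")
print("Predicted class index:", int(logits.argmax()))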
These code examples discuss the specifics of the Torch-TensorRT models. The only differences among different models (when building a client) would be the input and output layer names. We have built NVIDIA Triton clients with Python, C++, Go, Java, and JavaScript. For more examples, see the triton-inference-server/client GitHub repo.
Conclusion
This post covered an end-to-end pipeline for inference where you first optimized trained models to maximize inference performance using TensorRT, Torch-TensorRT, and TensorFlow-TensorRT. You then proceeded to model serving by setting up and querying an NVIDIA Triton Inference Server. All the software, including TensorRT, Torch-TensorRT, TensorFlow-TensorRT, and Triton discussed in this tutorial, are available today to download as a Docker container from NGC.
Linear regression is one of the simplest machine learning models out there. It is often the starting point not only for learning about data science but also for building quick and…
Linear regression is one of the simplest machine learning models out there. It is often the starting point not only for learning about data science but also for building quick and simple minimum viable products (MVPs), which then serve as benchmarks for more complex algorithms.
In general, linear regression fits a line (in two dimensions) or a hyperplane (in three and more dimensions) that best describes the linear relationship between the features and the target value. The algorithm also assumes that the probability distributions of the features are well-behaved; for example, they follow the Gaussian distribution.
Outliers are values that are located far outside of the expected distribution. They cause the distributions of the features to be less well-behaved. As a consequence, the model can be skewed towards the outlier values, which, as I’ve already established, are far away from the central mass of observations. Naturally, this leads to the linear regression finding a worse and more biased fit with inferior predictive performance.
It is important to remember that the outliers can be found both in the features and the target variable, and all the scenarios can worsen the performance of the model.
There are many possible approaches to dealing with outliers: removing them from the observations, treating them (capping the extreme observations at a reasonable value, for example), or using algorithms that are well-suited for dealing with such values on their own. This post focuses on these robust methods.
Setup
I use fairly standard libraries: numpy, pandas, scikit-learn. All the models I work with here are imported from the linear_model module of scikit-learn.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn import datasets
from sklearn.linear_model import (LinearRegression, HuberRegressor,
RANSACRegressor, TheilSenRegressor)
Data
Given that the goal is to show how different robust algorithms deal with outliers, the first step is to create a tailor-made dataset to show clearly the differences in the behavior. To do so, use the functionalities available in scikit-learn.
Start with creating a dataset of 500 observations, with one informative feature. With only one feature and the target, you can plot the data together with the models’ fits. Also, specify the noise (standard deviation applied to the output) and create a list containing the coefficient of the underlying linear model; that is, what the coefficient would be if the linear regression model were fit to the generated data. In this example, the value of the coefficient is 64.6. Extract those coefficients for all the models and use them to compare how well they fit the data.
Next, replace the first 25 observations (5% of the observations) with outliers, far outside of the mass of generated observations. Bear in mind that the coefficient stored earlier comes from the data without outliers. Including them makes a difference.
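A sketch of how that dataset could be generated (it relies on the imports from the Setup section; the noise level, random seed, and exact outlier placement are my assumptions, so the recovered coefficient will differ slightly from the 64.6 quoted above):

# 500 observations, one informative feature, plus the true coefficient of the
# underlying linear model.
N_SAMPLES = 500
X, y, true_coef = datasets.make_regression(
    n_samples=N_SAMPLES,
    n_features=1,
    n_informative=1,
    noise=20,          # standard deviation applied to the output (assumed)
    coef=True,
    random_state=42,
)
coef_list = [["original_coef", float(np.squeeze(true_coef))]]

# Replace the first 25 observations (5%) with outliers far from the main mass.
N_OUTLIERS = 25
rng = np.random.default_rng(42)
X[:N_OUTLIERS] = rng.uniform(low=-4.0, high=-3.0, size=(N_OUTLIERS, 1))
y[:N_OUTLIERS] = rng.uniform(low=y.max(), high=1.5 * y.max(), size=N_OUTLIERS)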
Start with the good old linear regression model, which is likely highly influenced by the presence of the outliers. Fit the model to the data using the following example:
lr = LinearRegression().fit(X, y)
coef_list.append(["linear_regression", lr.coef_[0]])
Then prepare an object to use for plotting the fits of the models. The plotline_X object is a 2D array containing evenly spaced values within the interval dictated by the generated data set. Use this object for getting the fitted values for the models. It must be a 2D array, given it is the expected input of the models in scikit-learn. Then create a fit_df DataFrame in which to store the fitted values, created by fitting the models to the evenly spaced values.
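A sketch of that scaffolding (the grid spacing is an arbitrary choice):

# Evenly spaced values across the feature range, reshaped to 2D because that is
# the input shape scikit-learn estimators expect.
plotline_X = np.arange(X.min(), X.max(), 0.2).reshape(-1, 1)

# Store the fitted values of each model, starting with plain linear regression.
fit_df = pd.DataFrame(
    index=plotline_X.flatten(),
    data={"linear_regression": lr.predict(plotline_X)},
)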
Having prepared the DataFrame, plot the fit of the linear regression model to the data with outliers.
fig, ax = plt.subplots()
fit_df.plot(ax=ax)
plt.scatter(X, y, c="k")
plt.title("Linear regression on data with outliers");
Figure 2 shows the significant impact that outliers have on the linear regression model.
The benchmark model has been obtained using linear regression. Now it is time to move toward robust regression algorithms.
Huber regression
Huber regression is an example of a robust regression algorithm that assigns less weight to observations identified as outliers. To do so, it uses the Huber loss in the optimization routine. Here’s a better look at what is actually happening in this model.
Huber regression minimizes the following loss function:
$$\min_{w,\sigma}\;\sum_{i=1}^{n}\left(\sigma + H_{\epsilon}\!\left(\frac{X_i w - y_i}{\sigma}\right)\sigma\right) + \alpha\lVert w\rVert_2^2$$

where $\sigma$ denotes the standard deviation, $X_i$ represents the set of features, $y_i$ is the regression’s target variable, $w$ is a vector of the estimated coefficients, and $\alpha$ is the regularization parameter. The formula also indicates that outliers are treated differently from the regular observations according to the Huber loss:

$$H_{\epsilon}(z) = \begin{cases} z^2 & \text{if } |z| < \epsilon \\ 2\epsilon|z| - \epsilon^2 & \text{otherwise} \end{cases}$$

The Huber loss identifies outliers by considering the residuals, denoted by $z$. If the observation is considered to be regular (because the absolute value of the residual is smaller than some threshold $\epsilon$), then apply the squared loss function. Otherwise, the observation is considered to be an outlier and you apply the absolute loss. Having said that, Huber loss is basically a combination of the squared and absolute loss functions.
An inquisitive reader might notice that the first equation is similar to Ridge regression, that is, including the L2 regularization. The difference between Huber regression and Ridge regression lies in the treatment of outliers.
You might recognize this approach to loss functions from analyzing the differences between two of the popular regression evaluation metrics: mean squared error (MSE) and mean absolute error (MAE). Similar to what the Huber loss implies, I recommend using MAE when you are dealing with outliers, as it does not penalize those observations as heavily as the squared loss does.
Connected to the previous point is the fact that optimizing the squared loss results in an unbiased estimator around the mean, while the absolute difference leads to an unbiased estimator around the median. The median is much more robust to outliers than the mean, so expect this to provide a less biased estimate.
Use the default value of 1.35 for $\epsilon$, which determines the regression’s sensitivity to outliers. Huber (2004) shows that when the errors follow a normal distribution with $\sigma = 1$ and $\epsilon = 1.35$, an efficiency of 95% is achieved relative to the OLS regression.
For your own use cases, I recommend tuning the hyperparameters alpha and epsilon, using a method such as grid search.
Fit the Huber regression to the data using the following example:
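A sketch of that fitting step, following the same pattern as the linear regression above (epsilon left at its 1.35 default):

huber = HuberRegressor(epsilon=1.35).fit(X, y)
fit_df["huber_regression"] = huber.predict(plotline_X)
coef_list.append(["huber_regression", huber.coef_[0]])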
Figure 3 presents the fitted model’s best fit line.
RANSAC regression
Random sample consensus (RANSAC) regression is a non-deterministic algorithm that tries to separate the training data into inliers (which may be subject to noise) and outliers. Then, it estimates the final model only using the inliers.
RANSAC is an iterative algorithm in which each iteration consists of the following steps:
Select a random subset from the initial data set.
Fit a model to the selected random subset. By default, that model is a linear regression model; however, you can change it to other regression models.
Use the estimated model to calculate the residuals for all the data points in the initial data set. All observations with absolute residuals smaller than or equal to the selected threshold are considered inliers and create the so-called consensus set. By default, the threshold is defined as the median absolute deviation (MAD) of the target values.
The fitted model is saved as the best one if sufficiently many points have been classified as part of the consensus set. If the current estimated model has the same number of inliers as the current best one, it is only considered to be better if it has a better score.
The steps are performed iteratively either a maximum number of times or until a special stop criterion is met. Those criteria can be set using three dedicated hyperparameters. As I mentioned earlier, the final model is estimated using all inlier samples.
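A sketch of fitting the RANSAC regression with scikit-learn defaults (the random_state is an arbitrary choice for reproducibility):

ransac = RANSACRegressor(random_state=42).fit(X, y)
fit_df["ransac_regression"] = ransac.predict(plotline_X)
# The final model trained on the identified inliers lives in estimator_.
coef_list.append(["ransac_regression", ransac.estimator_.coef_[0]])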
As you can see, the procedure for recovering the coefficient is a bit more complex, as it’s first necessary to access the final estimator of the model (the one trained using all the identified inliers) using estimator_. As it is a LinearRegression object, proceed to recover the coefficient as you did earlier. Then, plot the fit of the RANSAC regression (Figure 4).
With RANSAC regression, you can also inspect the observations that the model considered to be inliers and outliers. First, check how many outliers the model identified in total, and then how many of the manually introduced outliers are among them. The first 25 observations of the training data are all the outliers that have been introduced.
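One way to run that check, using the inlier_mask_ attribute of the fitted model (a sketch; the numbers printed below come from the original post’s run):

inlier_mask = ransac.inlier_mask_
outlier_mask = ~inlier_mask

print(f"Total outliers: {outlier_mask.sum()}")
print(f"Outliers you added yourself: {outlier_mask[:N_OUTLIERS].sum()} / {N_OUTLIERS}")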
Total outliers: 51
Outliers you added yourself: 25 / 25
Roughly 10% of data was identified as outliers and all the observations introduced were correctly classified as outliers. It’s then possible to quickly visualize the inliers compared to outliers to see the remaining 26 observations flagged as outliers.
Figure 5 shows that the observations located farthest from the hypothetical best-fit line of the original data are considered outliers.
Theil-Sen regression
The last of the robust regression algorithms available in scikit-learn is the Theil-Sen regression. It is a non-parametric regression method, which means that it makes no assumption about the underlying data distribution. In short, it involves fitting multiple regression models on subsets of the training data and then aggregating the coefficients at the last step.
Here’s how the algorithm works. First, it calculates the least square solutions (slopes and intercepts) on subsets of size p (hyperparameter n_subsamples) created from all the observations in the training set X. If you calculate the intercept (it is optional), then the following condition must be satisfied p >= n_features + 1. The final slope of the line (and possibly the intercept) is defined as the (spatial) median of all the least square solutions.
A possible downside of the algorithm is its computational complexity, as it can consider a total number of least square solutions equal to n_samples choose n_subsamples, where n_samples is the number of observations in X. Given that this number can quickly explode in size, there are a few things that can be done:
Use the algorithm only for small problems in terms of the number of samples and features. However, for obvious reasons, this might not always be feasible.
Tune the n_subsamples hyperparameter. A lower value leads to higher robustness to outliers at the cost of lower efficiency, while a higher value leads to lower robustness and higher efficiency.
Use the max_subpopulation hyperparameter. If the total value of n_samples choose n_subsamples is larger than max_subpopulation, the algorithm only considers a stochastic subpopulation of a given maximal size. Naturally, using only a random subset of all the possible combinations leads to the algorithm losing some of its mathematical properties.
Also, be aware that the estimator’s robustness decreases quickly with the dimensionality of the problem. To see how that works out in practice, estimate the Theil-Sen regression using the following example:
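A sketch of that estimation step, again with a reproducibility seed as the only non-default setting:

theilsen = TheilSenRegressor(random_state=42).fit(X, y)
fit_df["theilsen_regression"] = theilsen.predict(plotline_X)
coef_list.append(["theilsen_regression", theilsen.coef_[0]])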
So far, three robust regression algorithms have been fitted to the data containing outliers and the individual best fit lines have been identified. Now it is time for a comparison.
Start with the visual inspection of Figure 7. To avoid showing too many lines, the fit line of the original data is not plotted. However, it is quite easy to imagine what it looks like, given the direction of the majority of the data points. Clearly, the RANSAC and Theil-Sen regressions have resulted in the most accurate best fit lines.
To be more precise, look at the estimated coefficients. Table 1 shows that the RANSAC regression results in the fit closest to the one of the original data. It is also interesting to see how big of an impact the 5% of outliers had on the regular linear regression’s fit.
| model | coefficient |
| --- | --- |
| original_coef | 64.59 |
| linear_regression | 8.77 |
| huber_regression | 37.52 |
| ransac_regression | 62.85 |
| theilsen_regression | 59.49 |
Table 1. The comparison of the coefficients of the different models fitted to the data with outliers
You might ask which robust regression algorithm is best. As is often the case, the answer is, “It depends.” Here are some guidelines that might help you find the right model for your specific problem:
In general, robust fitting in a high-dimensional setting is difficult.
In contrast to Theil-Sen and RANSAC, Huber regression is not trying to completely filter out the outliers. Instead, it lessens their effect on the fit.
Huber regression should be faster than RANSAC and Theil-Sen, as the latter fit on smaller subsets of the data.
Theil-Sen and RANSAC are unlikely to be as robust as the Huber regression using the default hyperparameters.
RANSAC is faster than Theil-Sen and it scales better with the number of samples.
RANSAC should deal better with large outliers in the y-direction, which is the most common scenario.
Taking all the preceding information into consideration, you might also empirically experiment with all three robust regression algorithms and see which one fits your data best.
You can find the code used in this post in my /erykml GitHub repo. I look forward to hearing from you in the comments.