
GTC 21: Top 5 Data Center Networking Sessions

Attend GTC to learn more about breakthroughs in data center and cloud networking, from optimizing modern workloads to programming data center infrastructure.

NVIDIA GTC starts on April 12 with a special focus this year on breakthroughs in data center and cloud networking, including optimizing modern workloads and programming data center infrastructure. Join us to explore advanced technologies and strategies for maximizing your data center networking performance and improving ROI.

  1. Program Data Center Infrastructure Acceleration with the release of DOCA and the latest DPU software

    NVIDIA is releasing the first version of DOCA, a set of libraries, SDKs, and tools for programming the NVIDIA BlueField data processing unit (DPU), alongside version 3.6 of the DPU software. Together, they enable new infrastructure acceleration and management features in BlueField and simplify programming and application integration. DPU developers can offload and accelerate networking, virtualization, security, and storage features, including VirtIO for NFV/VNFs, NVMe SNAP for storage virtualization, regular expression matching for malware detection, and deep packet inspection for sophisticated routing, firewall, and load-balancing applications.

    Ariel Kit, Director of Product Marketing for Networking, NVIDIA
    Ami Badani, Vice President of Marketing, NVIDIA

  2. How to Optimize Modern Workload Efficiency over Next-Generation Hybrid Cloud Solution Architecture

    Modern hybrid cloud solutions are shifting to software-defined networking and software-defined storage. Combined with traditional server virtualization, these put a heavy load on the CPU, which must process increasingly demanding storage, networking, and security infrastructure services that compete with revenue-generating workloads for CPU, memory, and I/O resources. This drives the need for a new, more efficient data center architecture that’s easy to deploy, operate, and scale. Learn how VMware’s Project Monterey running over NVIDIA’s BlueField-2 DPU enables IT teams to deploy hybrid cloud clusters that deliver on these goals while meeting organizational business objectives in the most efficient way.

    Sudhanshu (Suds) Jain, Director of Product Management, VMware
    Motti Beck, Senior Director, Enterprise Market Development, NVIDIA

  3. Turbocharge Red Hat OpenShift Container Platform with High Performance and Efficient Networking

    Cloud-native applications based on Kubernetes, containers, and microservices are rapidly growing in popularity. These modern workloads are distributed, data-intensive, and latency-sensitive by design, which creates the need for fast, highly efficient networking to deliver a predictable and consistent user experience and performance. Learn how NVIDIA Mellanox Networking turbocharges Red Hat’s OpenShift cloud platform with hardware-accelerated, software-defined, cloud-native networking. NVIDIA and Red Hat work together to boost the performance and efficiency of modern cloud infrastructure, delivering a better experience to enterprises and cloud operators alike.

    Erez Cohen, Mellanox VP CloudX Program, NVIDIA
    Marc Curry, Senior Principal Product Manager, OpenShift, Cloud Platform BU, Red Hat

  4. NVIDIA DGX Ethernet Fabric Design and Automated Deployment

    For a DGX POD to deliver the highest levels of AI performance, it needs a network configured with the bandwidth, latency, and lossless characteristics necessary to feed its GPUs and attached high-speed storage. We’ll describe the requirements and design for an all-Ethernet DGX deployment, and demo how to automate and validate the deployment of NVIDIA DGX servers and NVIDIA networking.

    Pete Lumbis, Director Technical Marketing and Documentation, NVIDIA

  5. Apache Spark Acceleration over VMware’s Tanzu with NVIDIA’s GPU and Networking Solutions

    Apache Spark is an open-source project that has achieved wide popularity in the analytics space. It’s used for well-known big data and machine learning workloads such as streaming, processing a wide array of datasets, and ETL, to name a few. Kubernetes is now a native option for the Spark resource manager, and by packaging a Spark application as a container, you keep its dependencies bundled with the application as a single entity; a minimal sketch of this approach follows the speaker list below.

    Boris Kovalev, Staff Solutions Architect, NVIDIA
    Mohan Potheri, Staff Solutions Architect, VMware
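
The session description above mentions Spark’s native Kubernetes support and the benefits of packaging a Spark application as a container. The following is a minimal, illustrative sketch (not taken from the session) of launching a PySpark job with Kubernetes as the resource manager; the master URL, namespace, container image, and storage paths are placeholder assumptions.

```python
# Minimal sketch: running Spark with Kubernetes as the native resource manager.
# All names below (master URL, image, namespace, bucket paths) are placeholders.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("etl-on-k8s")
    # Point the driver at the Kubernetes API server instead of YARN or standalone.
    .master("k8s://https://kubernetes.default.svc:443")
    # Container image that packages the Spark runtime and application dependencies.
    .config("spark.kubernetes.container.image", "registry.example.com/spark-py:3.1.1")
    .config("spark.kubernetes.namespace", "spark-jobs")
    # Executors are scheduled as pods in the cluster.
    .config("spark.executor.instances", "4")
    .getOrCreate()
)

# A trivial ETL step: read a dataset, filter it, and write the result back out.
events = spark.read.parquet("s3a://example-bucket/events/")
events.filter(events["status"] == "ok").write.mode("overwrite").parquet(
    "s3a://example-bucket/events-clean/"
)

spark.stop()
```

The same job could also be launched with spark-submit in cluster mode; the point of the sketch is simply that the container image carries the application and its dependencies together, which is the packaging benefit the session highlights.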

Visit the GTC website to register for GTC (free) and to learn more about our Data Center Networking track.
