Categories
Offsites

CodeReading – 4. Python Code Style

Code Reading is a series that looks inside well-written projects such as frameworks, libraries, and toolkits. It examines each project's architecture, design philosophy, and code style, taking a broad, high-level look rather than going through every detail one by one.
This post is not about a specific project, but about something closely tied to code itself: the code style of the Python language.

Series.


The Importance of Code Style

What does code style mean, and why do we need it?

Code style means agreeing on the form of the code, such as naming, line length, and indentation, for the sake of readability. This practice is focused on collaboration in particular; in other words, it is a rule or guide used when working as a team rather than as an individual.

Just as we are reading code in this Code Reading series, code is read not only by machines but also by people. The code you see most often is probably the code of the colleagues you work with. What happens if that code is written in each person's individual style? It depends on the style, but it will generally take longer to interpret. If the team has agreed on a shared code style, however, you can understand the logic much faster and carry out activities such as adding new features or reviewing code more easily.

Even with these clear advantages, there is one thing to keep in mind when adopting a code style. There are many tools that support code styling, and every company has its own style guide. Someone may prefer one style over another, but there is no single "correct" style; it is closer to an essay question than a multiple-choice one. That is why the most important thing is to discuss it within the team, define a style that fits the team with everyone's agreement, and then keep aligning with it.

So, shall we take a closer look at Python code style?


Source: https://www.reddit.com/r/programming/comments/p1j1c/tabs_vs_spaces_vs_both/

PEP8

The most fundamental code style reference for Python is PEP 8. PEPs (Python Enhancement Proposals) are officially maintained improvement proposals, and among them PEP 8 is the one titled "Style Guide for Python Code".

It lays out guidelines such as the following. Here is a brief overview of a few of them.

  • Indentation
# Correct:
# Aligned with opening delimiter.
foo = long_function_name(var_one, var_two,
                         var_three, var_four)

# Add 4 spaces (an extra level of indentation) to distinguish arguments from the rest.
def long_function_name(
        var_one, var_two, var_three,
        var_four):
    print(var_one)

# -------------------------
# Wrong:
# Arguments on first line forbidden when not using vertical alignment.
foo = long_function_name(var_one, var_two,
    var_three, var_four)

# Further indentation required as indentation is not distinguishable.
def long_function_name(
    var_one, var_two, var_three,
    var_four):
    print(var_one)

For indentation, PEP 8 guides you to make a function's arguments clearly distinguishable from the rest of the code.

  • Tabs vs Spaces

PEP 8 says you should use spaces, and it forbids mixing the two! (This connects to the image at the beginning of this post.)
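
Mixing them is not only a style problem. CPython itself rejects indentation that mixes tabs and spaces ambiguously and reports a TabError. The small sketch below (the source string is just an illustration) shows this behavior:

# The first body line is indented with a tab, the second with spaces.
source = "def f():\n\tx = 1\n        return x\n"

try:
    compile(source, "<example>", "exec")
except SyntaxError as exc:  # CPython reports this case as a TabError
    print(type(exc).__name__, "-", exc)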

  • Recommendations

It also recommends relying on the fact that an empty sequence is falsy, rather than checking its length, as shown below.

# Correct:
if not seq:
if seq:

# Wrong:
if len(seq):
if not len(seq):

Because it is the most fundamental style guide, I recommend reading it in full at least once. Most of its contents are still in use today. However, development environments have changed a lot since PEP 8 was first proposed: instead of one small monitor, it is now common to develop with several monitors of various sizes. So opinions are divided on the maximum line length. PEP 8 proposes limiting all lines to 79 characters, but it is also common to see longer limits configured.

Tools

Code style tools support PEP 8 by default and let you adjust the settings to your liking. There are many of them, but here I will introduce the two most widely used these days: black and isort.

black

The Uncompromising Code Formatter

black is a relatively recent (2018) open-source code formatter that reorganizes code to match a fixed set of rules. Its defining characteristic is being "uncompromising": the parts it refuses to compromise on are exactly the areas PEP 8 does not cover, where every person's style differs. That is why black's official documentation describes its style as a strict subset of PEP 8.

Let's take a quick look at the code style black enforces.

  • Maximum line length: 88 (10% longer than 80)
  • Line-wrapping style
# [1] in:
ImportantClass.important_method(exc, limit, lookup_lines, capture_locals, extra_argument)

# [1] out:
ImportantClass.important_method(
    exc, limit, lookup_lines, capture_locals, extra_argument
)

# [2] in:
def very_important_function(template: str, *variables, file: os.PathLike, engine: str, header: bool = True, debug: bool = False):
    """Applies `variables` to the `template` and writes to `file`."""
    with open(file, 'w') as f:
        ...

# [2] out:
def very_important_function(
    template: str,
    *variables,
    file: os.PathLike,
    engine: str,
    header: bool = True,
    debug: bool = False,
):
    """Applies `variables` to the `template` and writes to `file`."""
    with open(file, "w") as f:
        ...

# [3] in:
if some_long_rule1 \
  and some_long_rule2:
    ...

# [3] out:
if (
    some_long_rule1
    and some_long_rule2
):
    ...

Depending on the line length, the call is either indented as a whole [1] or, when there are many parameters, each parameter is placed on its own new line [2]. black also says it does not use the backslash (\) for line continuation, because backslashes cause issues such as parsing errors.

  • Strings: double quotes (") are preferred over single quotes (')

Docstrings also use double quotes, and an empty string written with double quotes ("") leaves no room for confusion.

  • Call chains
def example(session):
    result = (
        session.query(models.Customer.id)
        .filter(
            models.Customer.account_id == account_id,
            models.Customer.email == email_address,
        )
        .order_by(models.Customer.id.asc())
        .all()
    )

Finally, black also formats the call chain style used in many languages. Personally, I have seen this pattern a lot in Java and Scala, so it feels familiar, and I prefer it because it makes the logic easy to follow.

There are many other topics as well, so I recommend taking a look at the documentation!

Since the style black enforces covers exactly those ambiguous areas, not everyone will be happy with every choice it makes. Still, I think it is a good guide and a good starting point for a collaborative code style.
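
If you just want to see what black would do to a snippet, you can also call it from Python rather than through the usual black <path> command. This is only a quick sketch assuming black's documented format_str/FileMode API; check the documentation for the version you install:

import black

source = "def greet(name,polite=True):\n    return {'msg':'hi '+name if polite else name}\n"

# format_str applies black's style (double quotes, spacing, 88-character lines)
# to a string of source code without touching any files.
print(black.format_str(source, mode=black.FileMode()))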

isort

isort is a code formatter that manages imports. The example below should give you an immediate feel for what it does.

# Before isort:
from my_lib import Object

import os

from my_lib import Object3

from my_lib import Object2

import sys

from third_party import lib15, lib1, lib2, lib3, lib4, lib5, lib6, lib7, lib8, lib9, lib10, lib11, lib12, lib13, lib14

import sys

from __future__ import absolute_import

from third_party import lib3

print("Hey")
print("yo")

# ----------------------------------------------------------------------
# After isort:
from __future__ import absolute_import

import os
import sys

from third_party import (lib1, lib2, lib3, lib4, lib5, lib6, lib7, lib8,
                         lib9, lib10, lib11, lib12, lib13, lib14, lib15)

from my_lib import Object, Object2, Object3

print("Hey")
print("yo")

As the example above shows, isort organizes imported packages into the following four categories:

  • __future__ : the module imported to use Python 3 features in Python 2
  • built-in modules : Python's standard built-in modules such as os, sys, and collections
  • third party : external libraries installed via pip (e.g. requests, pytorch)
  • my library : packages you wrote yourself

The big difference from black is the variety of options isort provides. For the example above, where several modules are imported from third_party, you can choose among options such as listing them all on one line or putting each one on its own line.

# 0 - Grid
from third_party import (lib1, lib2, lib3,
                         lib4, lib5, ...)

# 1 - Vertical
from third_party import (lib1,
                         lib2,
                         lib3,
                         lib4,
                         lib5,
                         ...)

...
# 3 - Vertical Hanging Indent (Black Style)
from third_party import (
    lib1,
    lib2,
    lib3,
    lib4,
)

...

There are 11 options for this, so you can configure whichever one you prefer. If you use isort together with black as introduced above, setting option 3 keeps the two styles consistent. isort has been developed to be compatible with other code formatter libraries, so with a simple configuration you can use both together.
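
As a rough sketch of what that looks like (assuming isort 5's Python API, where isort.code accepts configuration keywords such as profile):

import isort

messy = "from third_party import lib2, lib1\nimport os\n"

# profile="black" applies the settings that keep isort compatible with black,
# including multi-line output mode 3 (vertical hanging indent) for long imports.
print(isort.code(messy, profile="black"))
# import os
#
# from third_party import lib1, lib2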

In Closing

In this post, I covered why we should align on a code style, starting from the Python style guide PEP 8 and moving on to tools that are easy to adopt. As I said in the introduction, code style is about collaboration more than anything else. So I think the process of adopting it matters even more than the style itself, because it is about working better "together".

References

Categories
Misc

Questions about certificate exam

Is visualizing data a necessity for solving the problems?

Is there an age requirement?

Does the exam normally take the full 5 hours?

submitted by /u/Sad_Combination9971

Categories
Misc

Questions about Python for apple ARM64 (M1), Anaconda and Miniforge

I (dumb) thought Anaconda would automatically choose the proper version for the type of chip; turns out I have been using the Intel-based Python all this time instead of the native version.

/Users/x/opt/anaconda3/envs/ml/bin/python: Mach-O 64-bit executable x86_64 

It should say:

... Mach-O 64-bit executable arm64 
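
For reference, the same check can be done from inside Python with the standard-library platform module: a native build reports arm64, while an Intel build running under Rosetta reports x86_64.

import platform

print(platform.machine())   # 'arm64' on a native Apple Silicon build, 'x86_64' on an Intel build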

Now, is it normal to use the Intel-based version if it hasn't been updated to native silicon yet? I guess I'm just missing out on M1 performance?

From my understanding, the only way to download the native Python 3.9X versions is through homebrew / miniforge: TLDR. Is it worth updating, especially considering I'm starting to use TensorFlow, etc. for ML? Also, what happens to all my current envs and packages I have in (the current, Intel-based) Anaconda if I switch to Miniforge? Do they all translate, or what happens to the packages not optimised for arm64?

Or is the performance not life changing at all and I should save myself headaches and keep using TensorFlow 2.0.0 on the intel-based anaconda / python 3.6?

submitted by /u/capital-man

Categories
Misc

Accelerating Blender Python Using CUDA


Simulated or synthetic data generation is an important emerging trend in the development of AI tools. Classically, these datasets can be used to address low-data problems or edge-case scenarios that might not be present in available real-world datasets.

Emerging applications for synthetic data include establishing model performance levels, quantifying the domain of applicability, and next-generation systems engineering, where AI models and sensors are designed in tandem.

Figure 1. Synthetic SAR render of a ship: phase map (left), compressed image (right).

Blender is a common and compelling tool for generating these datasets. It is free to use and open source but, just as important, it is fully extensible through a powerful Python API. This feature of Blender has made it an attractive option for visual image rendering. As a result, it has been used extensively for this purpose, with 18+ rendering engine options to choose from.

Rendering engines integrated into Blender, such as Cycles, often come with tightly integrated GPU support, including state-of-the-art NVIDIA RTX support. However, if high performance levels are required outside of a visual rendering engine, such as the render of a synthetic SAR image, the Python environment can be too sluggish for practical applications. One option to accelerate this code is to use the popular Numba package, which compiles portions of the Python code to native machine code. This still leaves room for improvement, however, particularly when it comes to the adoption of leading GPU architectures for scientific computing.

GPU capabilities for scientific computing can be available from directly within Blender, allowing for simple unified tools that leverage Blender’s powerful geometry creation capabilities as well as cutting-edge computing environments. As of the recent changes in Blender 2.83+, this can be done using CuPy, a GPU-accelerated Python library devoted to array calculations, directly from within a Python script.

In line with these ideas, the following tutorial compares two different ways of accelerating matrix multiplication. The first approach uses Python’s Numba compiler while the second approach uses the NVIDIA GPU-compute API, CUDA. Implementation of these approaches can be found in the rleonard1224/matmul GitHub repo, along with a Dockerfile that sets up an anaconda environment from which CUDA-accelerated Blender Python scripts can be run.

Matrix multiplication algorithms

As a precursor to discussing the different approaches used to accelerate matrix multiplication, we briefly review matrix multiplication itself.

For the product of two matrices $A \cdot B$ to be well defined, the number of columns of $A$ must be equal to the number of rows of $B$.

  • $A$ then is a matrix with $m$ rows and $n$ columns, that is, an $m \times n$ matrix.
  • $B$ is an $n \times p$ matrix.
  • The product $C = A \cdot B$ results in an $m \times p$ matrix.

If the first element in each row and each column of $C$, $A$, and $B$ is indexed with the number one (that is, one-based indexing), then the element in the i-th row and j-th column of $C$, $C[i,j]$, is determined by the following formula:

$$C[i,j] = \sum_{r=1}^{n} A[i,r] \cdot B[r,j]$$
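
As a quick sanity check of this formula, here is a small NumPy sketch (added for illustration) in which the triple loop reproduces NumPy's built-in matrix product:

import numpy as np

A = np.arange(6).reshape(2, 3)    # a 2x3 matrix
B = np.arange(12).reshape(3, 4)   # a 3x4 matrix

# C[i, j] = sum over r of A[i, r] * B[r, j]
C = np.zeros((2, 4))
for i in range(2):
    for j in range(4):
        for r in range(3):
            C[i, j] += A[i, r] * B[r, j]

assert np.array_equal(C, A @ B)   # matches NumPy's built-in matrix product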

Numba acceleration

The Numba compiler can be applied to a function in a Python script by using the numba.jit decorator. Because it compiles the decorated function to native machine code before it runs, the numba.jit decorator significantly reduces the run time of loops in Python code. A direct implementation of matrix multiplication requires nested for loops, so decorating such a function with numba.jit speeds it up considerably. The matmulnumba.py Python script implements matrix multiplication and uses the numba.jit decorator.
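
The script in the repository is the reference implementation; the following is only a rough sketch of the same pattern (names and sizes are illustrative), showing how the numba.jit decorator is applied to a nested-loop matrix multiplication:

import numpy as np
from numba import jit

@jit(nopython=True)          # compile the function to native code on first call
def matmul(A, B):
    m, n = A.shape
    p = B.shape[1]
    C = np.zeros((m, p))
    for i in range(m):
        for j in range(p):
            acc = 0.0
            for r in range(n):
                acc += A[i, r] * B[r, j]
            C[i, j] = acc
    return C

A = np.ones((200, 200))
B = np.ones((200, 200))
print(matmul(A, B)[0, 0])    # 200.0; the first call includes compilation time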

CUDA acceleration

Before we discuss an approach to accelerate matrix multiplication using CUDA, we should broadly outline the parallel structure of a CUDA kernel launch. All parallel processes within a kernel launch belong to a grid. A grid is composed of an array of blocks and each block is composed of an array of threads. The threads within a grid compose the fundamental parallel processes launched by a CUDA kernel. Figure 2 outlines a sample parallel structure of this kind.

Figure 2. Parallel structure of a CUDA kernel grid composed of a 2×2 array of blocks. Each block is composed of a 2×2 array of threads.

Now that this summary of the parallel structure of a CUDA kernel launch is spelled out, the approach used to parallelize matrix multiplication in the matmulcuda.py Python script can be described as follows. 

Suppose the following matrix product is to be calculated by a CUDA kernel grid composed of a 2D array of blocks, with each block composed of a 1D array of threads:

  • matrix product $C = A \cdot B$
    • $A$ an $m \times n$ matrix
    • $B$ an $n \times p$ matrix
    • $C$ an $m \times p$ matrix

Also, further assume the following:

  • The number of blocks in the x-dimension of the grid ($\textrm{nblocksx}$) is greater than or equal to $m$ ($\textrm{nblocksx} \geq m$).
  • The number of blocks in the y-dimension of the grid ($\textrm{nblocksy}$) is greater than or equal to $p$ ($\textrm{nblocksy} \geq p$).
  • The number of threads in each block ($\textrm{nthreads}$) is greater than or equal to $n$ ($\textrm{nthreads} \geq n$).

The elements of the matrix product $C = A \cdot B$ can be calculated in parallel by assigning to each block the calculation of an element of $C$, $C[i,j]$.

You can obtain further parallel enhancement by assigning, to each thread of the block to which $C[i,j]$ was assigned, the calculation of one of the $n$ products whose sum equals $C[i,j]$.

To avoid a race condition, the summing of these $n$ products and the assignment of the result to $C[i,j]$ can be handled using the CUDA atomicAdd function. The atomicAdd function signature consists of a pointer as the first input and a numerical value as the second input. It adds the numerical value to the value pointed to by the first input and stores the sum back in the location pointed to by the first input.

Assume that the elements of $C$ are initialized to zero and that $\textrm{tid}(i,j)$ denotes the thread index of a thread belonging to the block with grid indices $[i,j]$. The preceding parallel arrangement can be summarized by the following equation:

$$C[i,j] = \textrm{atomicAdd}(C[i,j],\; A[i,\textrm{tid}(i,j)] \cdot B[\textrm{tid}(i,j), j])$$

Figure 3 summarizes this parallel arrangement for the multiplication of two sample $2 \times 2$ matrices.

Figure 3. A way to parallelize the multiplication of two 2×2 matrices. Each block is assigned one element of the product of the two matrices, and the threads in a thread block parallelize the calculation of the products necessary to determine the value of the matrix element to which the block is assigned.
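
The matmulcuda.py script in the repository is the reference implementation. The following is only a condensed sketch of the same idea using CuPy's RawKernel interface, which the post notes is usable from Blender 2.83+; it uses float32 so that atomicAdd works on any CUDA-capable GPU, and the names and sizes are illustrative:

import cupy as cp

kernel_src = r'''
extern "C" __global__
void matmul_atomic(const float* A, const float* B, float* C,
                   int m, int n, int p)
{
    int i = blockIdx.x;    // each block (i, j) owns one element C[i, j]
    int j = blockIdx.y;
    int r = threadIdx.x;   // each thread adds one product A[i, r] * B[r, j]
    if (i < m && j < p && r < n) {
        atomicAdd(&C[i * p + j], A[i * n + r] * B[r * p + j]);
    }
}
'''

matmul_atomic = cp.RawKernel(kernel_src, "matmul_atomic")

m, n, p = 4, 3, 5
A = cp.ones((m, n), dtype=cp.float32)
B = cp.ones((n, p), dtype=cp.float32)
C = cp.zeros((m, p), dtype=cp.float32)

# Grid of m x p blocks (one per output element), n threads per block.
matmul_atomic((m, p), (n,), (A, B, C, cp.int32(m), cp.int32(n), cp.int32(p)))
print(C)   # every element should equal n (here, 3.0)

Launching one block per output element with one thread per partial product mirrors the parallel arrangement described above; a production kernel would typically tile the work differently, but this keeps the mapping explicit.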

Speedups

Figure 4 displays the speedups of CUDA-accelerated matrix multiplication relative to Numba-accelerated matrix multiplication for matrices of varying sizes. In this figure, speedups are plotted for the product of two $N \times N$ matrices with all elements of both matrices equal to one. $N$ ranges from one hundred to one thousand in increments of one hundred.

Figure 4. Speedup of CUDA-accelerated matrix multiplication relative to Numba-accelerated matrix multiplication for the multiplication of two N×N matrices.

Future work

Given Blender’s role as a computer graphics tool, one relevant area of application suitable for CUDA acceleration relates to solving the visibility problem through ray tracing. The visibility problem can be broadly summarized as follows:  A camera exists at some point in space and is looking at a mesh composed of, for instance, triangular elements. The goal of the visibility problem is to determine which mesh elements are visible to the camera and which are instead occluded by other mesh elements.

Ray tracing can be used to solve the visibility problem. Suppose the mesh whose visibility you are trying to determine is composed of $N$ mesh elements. In that case, $N$ rays can be generated whose origin is the camera in the scene and whose endpoints are located at the centers of the $N$ mesh elements.

Each ray has an endpoint at a different mesh element. If a ray reaches its endpoint without being occluded by another mesh element, then the endpoint mesh element is visible from the camera. Figure 5 shows this procedure.

Figure 5. Two rays emanating from a camera toward the faces in the scene; one face is visible while the other is occluded.

The nature of the use of ray tracing to solve the visibility problem makes it an $\mathcal{O}(N^{2})$ problem when implemented as a direct calculation. Fortunately, NVIDIA has developed a ray tracing library, named NVIDIA OptiX, which uses GPU parallelism to achieve significant speedups. Use of NVIDIA OptiX from within a Blender Python environment would then offer tangible benefits.
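
The post does not prescribe a particular intersection routine, but as a rough CPU-side sketch of the brute-force $\mathcal{O}(N^{2})$ test described above, here is a small NumPy example using the well-known Moller-Trumbore ray-triangle intersection (the toy mesh and all names are illustrative):

import numpy as np

def ray_hits_triangle(origin, direction, tri, eps=1e-9):
    """Moller-Trumbore test; returns the hit distance t along the ray, or None."""
    v0, v1, v2 = tri
    e1, e2 = v1 - v0, v2 - v0
    pvec = np.cross(direction, e2)
    det = np.dot(e1, pvec)
    if abs(det) < eps:                 # ray parallel to the triangle's plane
        return None
    inv_det = 1.0 / det
    tvec = origin - v0
    u = np.dot(tvec, pvec) * inv_det
    if u < 0.0 or u > 1.0:
        return None
    qvec = np.cross(tvec, e1)
    v = np.dot(direction, qvec) * inv_det
    if v < 0.0 or u + v > 1.0:
        return None
    t = np.dot(e2, qvec) * inv_det
    return t if t > eps else None

def visible_triangles(camera, triangles):
    """Brute force: one ray per triangle, tested against every triangle (O(N^2))."""
    visible = []
    for idx, tri in enumerate(triangles):
        target = tri.mean(axis=0)      # ray endpoint at the triangle's center
        direction = target - camera    # so the target sits at t = 1 along the ray
        occluded = any(
            other != idx
            and (t := ray_hits_triangle(camera, direction, triangles[other])) is not None
            and t < 1.0 - 1e-6
            for other in range(len(triangles))
        )
        if not occluded:
            visible.append(idx)
    return visible

camera = np.array([0.0, 0.0, 5.0])
near = np.array([[-1, -1, 1], [1, -1, 1], [0, 1, 1]], dtype=float)  # closer to the camera
far = np.array([[-1, -1, 0], [1, -1, 0], [0, 1, 0]], dtype=float)   # hidden behind it
print(visible_triangles(camera, np.stack([near, far])))             # [0]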

Summary

This post described two different approaches for how to accelerate matrix multiplication. The first approach used the Numba compiler to decrease the overhead associated with loops in Python code. The second approach used CUDA to parallelize matrix multiplication. A speed comparison demonstrated the effectiveness of CUDA in accelerating matrix multiplication.

Because the CUDA-accelerated code described earlier can be run as a Blender Python script, any number of algorithms can be accelerated using CUDA from within a Blender Python environment. That greatly increases the effectiveness of Blender Python as a scientific computing tool.

If you have questions or comments, please comment below or contact us at info@rendered.ai.

Categories
Misc

Adding MIG, Preinstalled Drivers, and More to NVIDIA GPU Operator

Learn about the latest GPU Operator releases, which include support for Multi-Instance GPU, preinstalled NVIDIA drivers, Red Hat OpenShift 4.7, and more.

Reliably provisioning servers with GPUs in Kubernetes can quickly become complex as multiple components must be installed and managed to use GPUs. The GPU Operator, based on the Operator Framework, simplifies the initial deployment and management of GPU servers. NVIDIA, Red Hat, and others in the community have collaborated on creating the GPU Operator.

To provision GPU worker nodes in a Kubernetes cluster, the following NVIDIA software components are required:

  • NVIDIA Driver
  • NVIDIA Container Toolkit
  • Kubernetes device plugin
  • Monitoring

These components should be provisioned before GPU resources are available to the cluster and managed during the cluster operation.

The GPU Operator simplifies both the initial deployment and management of the components by containerizing all components. It uses standard Kubernetes APIs for automating and managing these components, including versioning and upgrades. The GPU Operator is fully open source. It is available on NGC and as part of the NVIDIA EGX Stack and Red Hat OpenShift.

The latest GPU Operator releases, 1.6 and 1.7, include several new features:

  • Support for automatic configuration of MIG geometry with NVIDIA Ampere Architecture products
  • Support for preinstalled NVIDIA drivers and the NVIDIA Container Toolkit
  • Updated support for Red Hat OpenShift 4.7
  • Updated GPU Driver version to include support for NVIDIA A40, A30, and A10
  • Support for RuntimeClasses with Containerd

Multi-Instance GPU support

Multi-Instance GPU (MIG) expands the performance and value of each NVIDIA A100 Tensor Core GPU. MIG can partition the A100 or A30 GPU into as many as seven instances (A100) or four instances (A30), each fully isolated with their own high-bandwidth memory, cache, and compute cores.

Without MIG, different jobs running on the same GPU, such as different AI inference requests, compete for the same resources, such as memory bandwidth. With MIG, jobs run simultaneously on different instances, each with dedicated resources for compute, memory, and memory bandwidth. This results in predictable performance with quality of service and maximum GPU utilization. Because simultaneous jobs can operate, MIG is ideal for edge computing use cases.

GPU Operator 1.7 added a new component called NVIDIA MIG Manager for Kubernetes, which runs as a DaemonSet and manages MIG mode and MIG configuration changes on each node. You can apply MIG configuration on the node by adding a label that indicates the predefined configuration name to be applied. After applying MIG configuration, GPU Operator automatically validates that MIG changes are applied as expected. For more information, see GPU Operator with MIG.

Figure 1. MIG Manager for Kubernetes manages MIG configuration for GPU Operator

Preinstalled drivers and Container Toolkit

GPU Operator 1.7 now supports selectively installing NVIDIA Driver and Container Toolkit (container config) components. This new feature provides great flexibility for environments where the driver or nvidia-docker2 packages are preinstalled. These environments can now use GPU Operator for simplified management of other software components like Device Plugin, GPU Feature Discovery Plugin, DCGM Exporter for monitoring, or MIG Manager for Kubernetes.

Install command with only the drivers preinstalled:

 helm install --wait --generate-name \
      nvidia/gpu-operator \
      --set driver.enabled=false

Install command with both drivers and nvidia-docker2 preinstalled:

 helm install --wait --generate-name \
      nvidia/gpu-operator \
      --set driver.enabled=false \
      --set toolkit.enabled=false

Added support for Red Hat OpenShift

We continue our line of support for Red Hat OpenShift:

  • GPU Operator 1.6 and 1.7 include support for the latest Red Hat OpenShift 4.7 version.
  • GPU Operator 1.5 supports Red Hat OpenShift 4.6.
  • GPU Operator 1.4 and 1.3 support Red Hat OpenShift 4.5 and 4.4, respectively.

GPU Operator is an OpenShift certified operator. Through the OpenShift web console, you can install and start using the GPU Operator with only a few mouse clicks. Being a certified operator makes it significantly easier for you to use NVIDIA GPUs with Red Hat OpenShift.

GPU Driver support for NVIDIA A40, A30, and A10

We updated the GPU Driver version to include support for NVIDIA A40, A30, and A10.

NVIDIA A40

The NVIDIA A40 delivers the data center-based solution that designers, engineers, artists, and scientists need for meeting today’s challenges. Built on the NVIDIA Ampere Architecture, the A40 combines the latest generation RT Cores, Tensor Cores, and CUDA Cores. It has 48 GB of graphics memory for unprecedented graphics, rendering, compute, and AI performance. From powerful virtual workstations accessible from anywhere, to dedicated render and compute nodes, the A40 is built to tackle the most demanding visual computing workloads from the data center.

For more information, see NVIDIA A40.

NVIDIA A30

The NVIDIA A30 Tensor Core GPU is the most versatile mainstream compute GPU for AI inference and enterprise workloads. Tensor Cores with MIG combine with fast memory bandwidth in a low 165W power envelope, all in a PCIe form factor ideal for mainstream servers.

Built for AI inference at scale, A30 can also rapidly retrain AI models with TF32 as well as accelerate HPC applications using FP64 Tensor Cores. The combination of the NVIDIA Ampere Architecture Tensor Cores and MIG delivers speedups securely across diverse workloads, all powered by a versatile GPU enabling an elastic data center. The versatile A30 compute capabilities deliver maximum value for mainstream enterprises.

For more information, see NVIDIA A30.

NVIDIA A10

The NVIDIA A10 Tensor Core GPU is the ideal GPU for mainstream media and graphics with AI. Second-generation RT Cores and third-generation Tensor Cores enrich graphics and video applications with powerful AI. NVIDIA A10 delivers a single-wide, full-height, full-length PCIe form factor and a 150W power envelope for dense servers.

Built for graphics, media, and cloud gaming applications with powerful AI capabilities, the NVIDIA A10 Tensor Core GPU can deliver rich media experiences. It delivers up to 4k for cloud gaming, with 2.5x the graphics and over 3x the inference performance compared to the NVIDIA T4 Tensor Core GPU.

For more information, see NVIDIA A10.

RuntimeClass support with Containerd

RuntimeClass provides you with the flexibility of choosing the container runtime configuration per Pod and then applying the default runtime configuration for all Pods on each node. With this support, you can specify the specific runtime configuration for Pods running GPU-accelerated workloads and choose other runtimes for generic workloads.

GPU Operator v1.7.0 now supports automatic creation of the nvidia RuntimeClass when containerd is selected as the default runtime during installation. You can explicitly specify this RuntimeClass name when running applications consuming GPUs.

 apiVersion: node.k8s.io/v1beta1
 handler: nvidia
 kind: RuntimeClass
 metadata:
  labels:
    app.kubernetes.io/component: gpu-operator
  name: nvidia 

Summary

To start using NVIDIA GPU Operator today, see the following resources:

Categories
Misc

Looking Behind the Curtain of EVPN Traffic Flows


Is EVPN magic? As Arthur C Clarke said, any sufficiently advanced technology is indistinguishable from magic. On that premise, moving from a traditional layer 2 environment to VXLAN driven by EVPN has much of that same hocus-pocus feeling.

To help demystify the sorcery, I aim to help users new to EVPN understand how EVPN works and how the control plane converges. In this post, I focus on basic layer 2 (L2) building blocks then work my way up to layer 3 (L3) connectivity and the control plane.

I use the reference topology as the cable plan and foundation to build your understanding of the traffic flow. The infrastructure tries to demystify a symmetric-mode EVPN environment using distributed gateways. All configurations are standardized using the production-ready automation and linked in the publicly available cumulus_ansible_modules GitLab repo.

To follow along, build your own Cumulus in the Cloud and deploy the following playbook:

~$ git clone https://gitlab.com/cumulus-consulting/goldenturtle/cumulus_ansible_modules.git
  
 Cloning into 'cumulus_ansible_modules'...
 remote: Enumerating objects: 822, done.
 remote: Counting objects: 100% (822/822), done.
 remote: Compressing objects: 100% (374/374), done.
 remote: Total 4777 (delta 416), reused 714 (delta 340), pack-reused 3955
 Receiving objects: 100% (4777/4777), 4.64 MiB | 22.64 MiB/s, done.
 Resolving deltas: 100% (2121/2121), done.
  
 ~$
 ~$ cd cumulus_ansible_modules/
 ~/cumulus_ansible_modules$ ansible-playbook -i inventories/evpn_symmetric/host playbooks/deploy.yml 

EVPN message types

Like any good protocol, EVPN has a robust process for exchanging information with its peers:  message types. If you already know OSPF and the LSA messages, you can think of EVPN message types as similar. Each EVPN message type can carry a different kind of information about the EVPN traffic flow.

There are five main message types. In this post, I focus on the most commonly used ones for now: type 2 routes carrying MAC information and type 2 routes carrying MAC/IP information.

Digging into EVPN message types: Type 2

The easiest EVPN messages to understand are type 2. As mentioned earlier, type 2 routes contain MAC and MAC/IP mappings. To start off, inspect a type 2 entry at work. To do that, you can verify basic connectivity from leaf01 to server01.

First, look at the bridge table to make sure that the MAC address of the switch has the correct mapping to the correct port for the server.

Get the Server01 MAC address:

cumulus@server01:~$ ip address show
 ...
 5: uplink:  mtu 9000 qdisc noqueue state UP group default qlen 1000
      link/ether 44:38:39:00:00:32 brd ff:ff:ff:ff:ff:ff
      inet 10.1.10.101/24 scope global uplink
      valid_lft forever preferred_lft forever
      inet6 fe80::4638:39ff:fe00:32/64 scope link
      valid_lft forever preferred_lft forever 

Look at Leaf01’s bridge table to make sure the MAC address is mapped to the port that you expect. Cross reference it with LLDP:

  
 cumulus@leaf01:mgmt:~$ net show bridge macs

 VLAN      Master  Interface  MAC                TunnelDest  State      Flags   LastSeen
 --------  ------  ---------  -----------------  ----------  ---------  ------  --------
 ...
 10        bridge  bond1      44:38:39:00:00:32
 ...

 cumulus@leaf01:mgmt:~$ net show lldp

 LocalPort  Speed  Mode        RemoteHost           RemotePort
 ---------  -----  ----------  -------------------  -----------------
 swp1       1G     BondMember  server01.simulation  44:38:39:00:00:32
 swp2       1G     BondMember  server02             44:38:39:00:00:34
 swp3       1G     BondMember  server03             44:38:39:00:00:36
 swp49      1G     BondMember  leaf02               swp49
 swp50      1G     BondMember  leaf02               swp50
 swp51      1G     Default     spine01              swp1
 swp52      1G     Default     spine02              swp1
 swp53      1G     Default     spine03              swp1
 swp54      1G     Default     spine04              swp1

Checking the ARP table, you can validate that the MAC and IP addresses are mapped correctly.

 cumulus@leaf01:mgmt:~$ net show neighbor
 Neighbor                   MAC                Interface      AF    STATE
 -------------------------  -----------------  -------------  ----  ---------
 ...
 10.1.10.101                44:38:39:00:00:32  vlan10         IPv4  REACHABLE
 ...

Now that you’ve checked the basics, start looking at how this gets pulled into EVPN. Validate the local VNIs that are configured:

cumulus@leaf01:mgmt:~$ net show evpn vni
 VNI        Type VxLAN IF              # MACs   # ARPs   # Remote VTEPs  Tenant VRF
 20         L2   vni20                 9     2     1               RED
 30         L2   vni30                 10    2     1               BLUE
 10         L2   vni10                 11    4     1               RED
 4001       L3   vniRED                2     2     n/a             RED
 4002       L3   vniBLUE               1     1     n/a             BLUE 

Because you validated that server01 is mapped to vlan10 as per the bridge mac table, you now check if the IP neighbor entries are being pulled into the EVPN cache. This cache describes the information that is being exchanged with the other EVPN speakers in the environment.

cumulus@leaf01:mgmt:~$ net show evpn arp-cache vni 10
 Number of ARPs (local and remote) known for this VNI: 4
 Flags: I=local-inactive, P=peer-active, X=peer-proxy
 Neighbor              Type   Flags State    MAC             Remote ES/VTEP              Seq #'s
 ...
 10.1.10.101           local      active   44:38:39:00:00:32                             0/0
 10.1.10.104                 remote             active   44:38:39:00:00:3e 10.0.1.34                  

Here's what you've got so far. The L2 connectivity works correctly, as the L2 bridge table and L3 neighbor table are populated locally on leaf01. Next, you verified that the MAC and IP information is being properly pulled into EVPN through the EVPN ARP cache.

Using this information, you can check the RD and RT mapping so that you can learn more about the full VNI advertisement.

An RD is a route distinguisher. It’s used to disambiguate EVPN routes in different VNIs, as they may have the same MAC or IP address.

The RTs are route targets. They are used to describe the VPN membership for the route, specifically which VRFs are exporting and importing the different routes in the infrastructure.

 cumulus@leaf01:mgmt:~$ net show bgp l2vpn evpn vni
 Advertise Gateway Macip: Disabled
 Advertise SVI Macip: Disabled
 Advertise All VNI flag: Enabled
 BUM flooding: Head-end replication
 Number of L2 VNIs: 3
 Number of L3 VNIs: 2
 Flags: * - Kernel
   VNI      Type RD                    Import RT                  Export RT                  Tenant VRF
 * 20       L2   10.10.10.1:2          65101:20                   65101:20              RED
 * 30       L2   10.10.10.1:4          65101:30                   65101:30              BLUE
 * 10       L2   10.10.10.1:3          65101:10                   65101:10              RED
 * 4001     L3   10.10.10.1:5          65101:4001                 65101:4001            RED
 * 4002     L3   10.10.10.1:6          65101:4002                 65101:4002            BLUE 

Because the local L2 VNI 10 has RD 10.10.10.1:3, the RD is essentially an identifier for all routes that are exchanged by this node for that VNI. When looking elsewhere in the fabric, you use that information to see all the routes advertised by leaf01.

 cumulus@leaf01:mgmt:~$ net show bgp l2vpn evpn route rd 10.10.10.1:3
 EVPN type-1 prefix: [1]:[ESI]:[EthTag]:[IPlen]:[VTEP-IP]
 EVPN type-2 prefix: [2]:[EthTag]:[MAClen]:[MAC]
 EVPN type-3 prefix: [3]:[EthTag]:[IPlen]:[OrigIP]
 EVPN type-4 prefix: [4]:[ESI]:[IPlen]:[OrigIP]
 EVPN type-5 prefix: [5]:[EthTag]:[IPlen]:[IP]         
                
 BGP routing table entry for 10.10.10.1:3:UNK prefix
 Paths: (1 available, best #1)
   Advertised to non peer-group peers:
   leaf02(peerlink.4094) spine01(swp51) spine02(swp52) spine03(swp53) spine04(swp54)
   Route [2]:[0]:[48]:[44:38:39:00:00:32] VNI 10/4001
   Local
      10.0.1.12 from 0.0.0.0 (10.10.10.1)
      Origin IGP, weight 32768, valid, sourced, local, bestpath-from-AS Local, best (First path received)
      Extended Community: ET:8 RT:65101:10 RT:65101:4001 Rmac:44:38:39:be:ef:aa
      Last update: Tue May 18 11:41:45 2021
 BGP routing table entry for 10.10.10.1:3:UNK prefix
 Paths: (1 available, best #1)
   Advertised to non peer-group peers:
   leaf02(peerlink.4094) spine01(swp51) spine02(swp52) spine03(swp53) spine04(swp54)
   Route [2]:[0]:[48]:[44:38:39:00:00:32]:[32]:[10.1.10.101] VNI 10/4001
   Local
      10.0.1.12 from 0.0.0.0 (10.10.10.1)
      Origin IGP, weight 32768, valid, sourced, local, bestpath-from-AS Local, best (First path received)
      Extended Community: ET:8 RT:65101:10 RT:65101:4001 Rmac:44:38:39:be:ef:aa
      Last update: Tue May 18 11:44:38 2021
  
 ....
  
 Displayed 8 prefixes (8 paths) with this RD 

Here’s an important piece of information. There are two different forms that a type 2 route can take. In this case, you’re sending each of the two types:

  • Type 2 MAC Route: It only includes a 48-bit MAC entry. This entry is pulled in directly from the bridge table and only has L2 information in it. Anytime a MAC address is learned in the bridge table, that MAC address is pulled into EVPN as a type 2 MAC route.
               
  • Type 2 MAC/IP Route: These entries are pulled into EVPN from the ARP table. Reading this entry, the first section includes MAC address and the second one is a mapping for the IP address and mask. The mask for the IP address is a /32. As this is pulled from the ARP table, all EVPN routes are pulled in as host routes.
 BGP routing table entry for 10.10.10.1:3:UNK prefix
 ...
   Route [2]:[0]:[48]:[44:38:39:00:00:32] VNI 10/4001
 …
  
 BGP routing table entry for 10.10.10.1:3:UNK prefix
 ...
   Route [2]:[0]:[48]:[44:38:39:00:00:32]:[32]:[10.1.10.101] VNI 10/4001
 ... 

Using this information, you can validate that this /32 host route for server01 is in the routing table of leaf03 as a pure L3 route, pointing out to the L3VNI.

 cumulus@leaf01:mgmt:~$ net show route vrf RED
 show ip route vrf RED
 ======================
 Codes: K - kernel route, C - connected, S - static, R - RIP,
      O - OSPF, I - IS-IS, B - BGP, E - EIGRP, N - NHRP,
      T - Table, v - VNC, V - VNC-Direct, A - Babel, D - SHARP,
      F - PBR, f - OpenFabric,
      > - selected route, * - FIB route, q - queued, r - rejected, b - backup
      t - trapped, o - offload failure
  
 VRF RED:
 K>* 0.0.0.0/0 [255/8192] unreachable (ICMP unreachable), 00:18:17
 C * 10.1.10.0/24 [0/1024] is directly connected, vlan10-v0, 00:18:17
 C>* 10.1.10.0/24 is directly connected, vlan10, 00:18:17
 B>* 10.1.10.104/32 [20/0] via 10.0.1.34, vlan4001 onlink, weight 1, 00:18:05
 C * 10.1.20.0/24 [0/1024] is directly connected, vlan20-v0, 00:18:17
 C>* 10.1.20.0/24 is directly connected, vlan20, 00:18:17
 B>* 10.1.30.0/24 [20/0] via 10.0.1.255, vlan4001 onlink, weight 1, 00:18:04 

Spend some time dissecting this output. The neighbor entry in Leaf01 for Server01 has made it all the way to Leaf03 as a /32 host route where the next hop is leaf01 but through the L3VNI.

To validate that the connection between the L2 VNI and the L3 VNI are accomplished successfully, examine the L3 VNI:

 cumulus@leaf01:mgmt:~$ net show evpn vni 4001
 VNI: 4001
   Type: L3
   Tenant VRF: RED
   Local Vtep Ip: 10.0.1.12
   Vxlan-Intf: vniRED
   SVI-If: vlan4001
   State: Up
   VNI Filter: none
   System MAC: 44:38:39:be:ef:aa
   Router MAC: 44:38:39:be:ef:aa
   L2 VNIs: 10 20          

In this output, the L3 VNI 4001 is mapped to VRF RED, which matches what you saw in the output of net show evpn vni. Using this, you can also see that L2 VNI 10 is mapped to L3 VNI 4001 through VLAN 4001. All the outputs you're seeing line up to indicate that you have a fully working EVPN type 2 VXLAN infrastructure.

Summary

There you have it. From start to finish, you saw how EVPN works for Type 2–based routes. Specifically, I discussed the different EVPN message types and how control planes converge in an L2 extension environment. It’s not witchcraft, just good technology.

For more information about extending the EVPN control plane demystification and tackling the traffic flows around Type 5 messages and VXLAN routing, see [LINK]. If you haven’t already, I highly recommend trying this out for yourself with NVIDIA Cumulus in the Cloud. If you’d like to take a deeper dive, we’ve put together a hub of EVPN content, from whitepapers to videos.

Categories
Misc

Using VXLAN Routing with EVPN Through Asymmetric or Symmetric Models

This post compares the asymmetric and symmetric EVPN routing models using EVPN as the control plane. It explains the architectural differences and maps them to specific NOS CLI output for educational purposes.

We all know and love EVPN as a control plane for VXLAN tunnels over a layer 3 infrastructure. EVPN enables you to deploy VXLAN tunnels without controllers. Plus, it offers a range of other benefits, such as reduction of data center traffic through ARP suppression, quick convergence during mobility, one routing protocol for both underlay and overlay, and the inherent ability to support multitenancy.

So EVPN for VXLAN for all your layer 2 needs, right? Well, it’s a little more complicated than that. You might also have to communicate between VXLANs and between a VXLAN tunnel and the outside world, so VXLAN routing must also be enabled in the network, which I cover in this post.

VXLAN routing can be performed with one of two architectures:

  • Centralized routing performs all the VXLAN routing on one or two centralized routers, which can cause additional east-west traffic in the data center.
  • Distributed routing provides the VXLAN routing closest to the hosts on the directly connected leaf switches, which simplifies the traffic flow.

This is where VXLAN routing with EVPN comes in. BGP EVPN is used to communicate the VXLAN layer 3 routing information to the leaves.

Using the distributed architecture, the IETF defines two models to accomplish intersubnet routing with EVPN: asymmetric integrated routing and bridging (IRB) and symmetric IRB. Some vendors offer a symmetric model and others offer an asymmetric model.

At NVIDIA networking, we believe that you control your own network. Both models have value, depending on how your network is set up and who might have built your legacy network systems. We offer both solutions so that you can choose whichever method is right for your network.

Difference between asymmetric and symmetric models

The main difference between the asymmetric IRB model and symmetric IRB model is how and where the routing lookups are done. This results in differences concerning which VNI the packet travels on through the infrastructure. Because of these differences, there are variations in how they must be configured on the switch and how they are deployed in your network.

Asymmetric model

The asymmetric model enables routing and bridging on the VXLAN tunnel ingress, but only bridging on the egress. This results in bidirectional VXLAN traffic traveling on different VNIs in each direction (always the destination VNI) across the routed infrastructure.

Figure 1. Asymmetric VXLAN traffic flow: traffic is routed on encapsulation and switched on decapsulation.

Consider the example from earlier. Host A wants to communicate with Host B, which is located on a different VLAN and a different rack, thus reachable through a different VNI.

  • As Host B is on a different subnet from Host A, Host A sends the frame to its default gateway, which is Leaf01. This is generally an Anycast Gateway.
  • Leaf01 recognizes that the destination MAC address is itself, looks up the routing table and routes the packet to the Green VNI while still on Leaf01.
  • Leaf01 then tunnels the frame in the Green VNI to Leaf02.
  • Leaf02 removes the VXLAN header from the frame, and bridges the frame to Host B.
  • Likewise, the return traffic would behave similarly.
  • Host B sends a frame to Leaf02.
  • Leaf02 recognizes its own destination MAC address and routes the packet to the Orange VNI on Leaf02.
  • The packet is tunneled within the Orange VNI to Leaf01.
  • Leaf01 removes the VXLAN header from the frame and bridges it to Host A.

With the asymmetric model, all the required source and destination VNIs (for example, orange and green) must be present on each leaf, even if that leaf doesn’t have a host in that VLAN in its rack. This may increase the number of IP/MAC addresses that the leaf must hold, which results in somewhat limited scale. However, in many instances, all VNIs in the network are configured on all leaves anyway to allow VM mobility and to simplify configuration of the whole network. In this case, the asymmetric model is desirable.

While it is not hugely scalable, deployment with the asymmetric model is a simple solution, as no additional VNIs or VLANs must be configured. Additionally, fewer routing hops occur to communicate between VXLANs, which results in lower latency.

Where multitenancy is required, each set of VLANs can also be placed into separate VRFs and routed between the VLANs within a VRF.

Symmetric model

The symmetric model routes and bridges on both the ingress and the egress leaves. This results in bidirectional traffic being able to travel on the same VNI, hence the symmetric name.

However, a new specialty transit VNI is used for all routed VXLAN traffic, called the L3VNI. All traffic that must be routed is routed onto the L3VNI, tunneled across the layer 3 infrastructure, routed off the L3VNI to the appropriate VLAN, and ultimately bridged to the destination.

Figure 2. Symmetric VXLAN traffic flow: traffic is routed on encapsulation and routed on decapsulation.

Now consider the scenario with a symmetric model (Figure 2). Host A on VLAN A must communicate with Host B on VLAN B.

  • Because the destination is a different subnet from Host A, Host A sends the frame to its default gateway, which is Leaf01.
  • Leaf01 recognizes that the destination MAC address is itself and uses the routing table to route the packet to the L3VNI and the next hop Leaf02.
  • The VXLAN-encapsulated packet has the egress leaf’s MAC as the destination MAC address and this L3VNI as the VNI.
  • Leaf02 performs VXLAN decapsulation and recognizes that the destination MAC address is itself and routes the packet on to the destination VLAN, to reach the destination host.
  • The return traffic is routed similarly over the same L3VNI.

With symmetric model, the leaf switches only need to host the VLANs and the corresponding VNIs that are located on its rack, as well as the L3VNI and its associated VLAN. This is because the ingress leaf switch doesn’t need to know the destination VNI.

The ability to host only the local VNIs (plus one extra) helps with scale. However, the configuration is more complex as an extra VXLAN tunnel and VLAN in your network are required. The data plane traffic is also more complex as an extra routing hop occurs and could cause extra latency.

Multitenancy requires one L3VNI per VRF, and all switches participating in that VRF must be configured with the same L3VNI. The L3VNI is used by the egress leaf to identify the VRF in which to route the packet.

Which IRB model is right?

The hardest part of choosing an IRB model is knowing the difference between symmetric and asymmetric methods. Now that you know the difference, you can make an informed decision regarding the best option for your network.

Generally, if you configure all VLANs, subnets, or VNIs on all leaves anyway (for mobility or ease of configuration), the asymmetric model is for you. It’s simpler to configure and doesn’t require extra VNIs to troubleshoot. It may even have slightly less latency.

The asymmetric model also works well if your data center can be broken down into Pods with VLANs and subnets contained in a Pod. Each leaf within the Pod is configured with all VLANs and subnets or VNIs in that local Pod. Other Pods and external networks are reachable through EVPN external routes. EVPN external routing with the asymmetric model is supported in Cumulus Linux 3.6 release, using the L3VNI for external routing only.

If your VLANs, subnets, or VNIs are widely dispersed or provisioned on the fly, choose the symmetric model. The symmetric model supports reachability to external networks with Cumulus Linux 3.5.

NVIDIA believes that you own and control your network, not a proprietary vendor, so we provide both solutions and enable you to choose.

Categories
Misc

Top 5 Ray Tracing Sessions for Graphics Developers from GTC 21

Industry luminaries joined us to introduce the fundamentals of real-time ray tracing, and how current developers such as Autodesk, Dassault, Chaos and ESI have integrated ray traced technologies into their most popular apps.

Engineers, product developers and designers around the world attended GTC to experience the latest NVIDIA solutions that are accelerating interactive rendering and simulation workflows in real time.

We showcased a wide variety of NVIDIA-powered ray tracing technologies and features that provide more realistic visualizations for artists and designers worldwide. Industry luminaries joined us at GTC to introduce the fundamentals of real-time ray tracing and how current developers such as Autodesk, Dassault, Chaos and ESI have integrated ray traced technologies into their most popular applications.

All of these GTC sessions are now available through NVIDIA On-Demand, so learn more about ray tracing and catch up on the latest advancements in professional content creation, from real-time ray traced shadows to real-time denoising.

The developer resources listed below are exclusively available to NVIDIA Developer Program members. Join today for free to get access to the tools and training necessary to build on NVIDIA's technology platform.

On-Demand Sessions

Ray Tracing in One Weekend
Pete Shirley gets you started on the fundamentals of ray tracing.

Incorporating Real-Time Ray Tracing in Autodesk’s Next-Generation Viewport System
Learn how Autodesk radically improves the quality and performance of their viewport experience by leveraging DXR and Vulkan Ray Tracing.

Real-Time Ray-Traced Effects for CAD: A Developer Story
Hear from Dassault Systèmes on how real-time ray traced shadows enhances the design review workflow for CATIA CAD users.

From Production Rendering with V-Ray GPU to Real-Time Ray Tracing with Chaos Vantage
Get an exclusive peek on the latest advancements in V-Ray and Chaos Vantage.

Not Just for Games: Applying NVIDIA Real-Time Denoisers in Advanced Immersive Virtual Prototyping
See how ESI group is computing physically correct, high-quality ambient occlusion and soft shadows for the most complex CAD models.

Check out all the ray tracing sessions from GTC, now available for free on NVIDIA On-Demand.

Categories
Misc

New on NGC: NVIDIA Maxine, NVIDIA TLT 3.0, Clara Train SDK 4.0, PyTorch Lightning and Vyasa Layar

The NVIDIA NGC catalog is a hub of highly performant software containers, pre-trained models, industry-specific SDKs, and Helm charts that help you simplify and accelerate your end-to-end workflows.

The NVIDIA NGC catalog is a hub of GPU-optimized deep learning, machine learning, and HPC applications. With highly performant software containers, pre-trained models, industry-specific SDKs, and Helm charts, you can simplify and accelerate your end-to-end workflows. 

The NVIDIA NGC team works closely with our internal and external partners to update the content in the catalog on a regular basis. Below are some of the highlights: 

NVIDIA Maxine 

NVIDIA Maxine is a GPU-accelerated SDK with state-of-the-art AI features for developers to build virtual collaboration and content creation solutions, including video conferencing and streaming applications. You can add any of Maxine’s AI effects – Video, Audio, and Augmented Reality – into your existing application or develop a new pipeline from scratch.

Maxine’s Video Effects SDK and Audio Effects SDK are now available through the Maxine collection on the NGC catalog that includes a container for each SDK:

  • Video Effects SDK container enables video quality enhancement such as super resolution, reducing compression artifacts and video degradation caused by low light conditions or lower-quality cameras.
  • Audio Effects SDK container removes reverberations due to talking in low sound absorption spaces and reduces over 25 different unwanted background noise profiles such as keyboard typing, mouse-clicking, and fan noise.

Clara Train SDK 4.0 

Clara Train v4.0 is now powered by MONAI, a domain-specialized open-source PyTorch framework, accelerating deep learning in Healthcare imaging. 

The latest version also expands into Digital Pathology and introduces homomorphic encryption for server side aggregation in federated learning.

Transfer Learning Toolkit (TLT)

The NVIDIA Transfer Learning Toolkit (TLT) is the AI toolkit that abstracts away AI/DL framework complexity and leverages high-quality pre-trained models to enable you to build production-quality models faster, with only a fraction of the data otherwise required. 

Version 3.0 of TLT is now available for computer vision and conversational AI use cases. Get started today by exploring the TLT collections for: 

Deep Learning and Inference 

Our most popular deep learning frameworks for training and inference have also been updated to the latest 21.02 version.

Partner Software

  • PyTorch Lightning, developed by Grid.AI, allows you to leverage multiple GPUs and state-of-the-art training features such as 16-bit precision, early stopping, logging, pruning and quantization, while enabling faster iteration and reproducibility for your AI research. 
  • Vyasa’s suite of biomedical analytics allows users to derive insights from analytical modules including question answering, named entity recognition, PDF table extraction and image classification, irrespective of where that data resides.
Categories
Misc

Experience the Latest Breakthroughs in Game Development with NVIDIA at GDC

The Game Developer Conference (GDC) is here, and NVIDIA will be showcasing how our latest technologies are driving the future of game development and graphics. Check out our list of sessions now.

The Game Developer Conference (GDC) is here, and NVIDIA will be showcasing how our latest technologies are driving the future of  game development and graphics.

From NVIDIA Deep Learning Super Sampling (DLSS) to RTX Global Illumination (RTXGI), our latest tools and technologies are helping game developers create realistic and stunning virtual worlds for gamers. Attendees will also get an exclusive look at how NVIDIA Omniverse, the open platform for virtual collaboration and simulation, is helping developers accelerate production workflows. 

And don’t miss our sessions at GDC:

Collaborative Game Development with NVIDIA Omniverse

Get an inside look at all the collaboration tools available in Omniverse. Explore the platform’s ability to connect popular tools and applications, including Epic Games’ Unreal Engine 4, Autodesk Maya and 3ds Max, and Substance by Adobe. 

NvRTX Artist Guide

Learn more about the NVIDIA RTX Unreal Engine Branch (NvRTX), and discover technologies such as RTXDI, RTXGI, new denoisers like Relax, ray-traced volumetrics and tools like the BVH viewer. See demonstrations on how complex settings like a jungle or museum can be built with NvRTX, and get a better understanding of how AAA real-time ray tracing visuals are created.

NVIDIA DLSS Overview & Game Integrations

This session will cover the technology that makes DLSS possible. Learn how to integrate DLSS into a new game engine. Graphic programmers, technical artists and technical directors are encouraged to join this session so they can learn more about the engine requirements for DLSS and pick up general DLSS debugging tools.

The Technology Behind NvRTX

Game developers can dive into the NvRTX family of branches, and learn how to bring enhanced ray tracing support to Unreal Engine 4. This session will cover several challenges developers can encounter when working to deploy ray tracing in a game environment. Join us and explore how NVIDIA has crafted solutions for these challenges within the context of a curated branch of UE4.

DevTools for Harnessing Ray Tracing in Games

Students, technical artists, programmers and developers can experience ray tracing at interactive framerates with DXR and Vulkan Ray Tracing. Join this session to check out some of the available tools and features that developers can use to take advantage of NVIDIA GPUs and improve the graphics in games.

Enter for a Chance to Win Some Gems

Attendees can win a limited-edition hard copy of Ray Tracing Gems II, the follow up to 2019’s Ray Tracing Gems.

Ray Tracing Gems II brings the community of rendering experts back together to share their knowledge. The book covers everything in ray tracing and rendering, from basic concepts geared toward beginners to full ray tracing deployment in shipping AAA games.

Learn more about the sweepstakes and enter for a chance to win.

Register for GDC today and join us to get the latest on NVIDIA technology in the gaming industry.