Categories
Misc

How Digitec Galaxus trains and serves millions of personalized newsletters per week with TFX

submitted by /u/nbortolotti
Categories
Misc

Webinar: Learn How NVIDIA DriveWorks Gets to the Point with Lidar Sensor Processing

With NVIDIA DriveWorks SDK, autonomous vehicles can bring their understanding of the world to a new dimension.

The SDK enables autonomous vehicle developers to easily process three-dimensional lidar data and apply it to specific tasks, such as perception or localization. You can learn how to implement this critical toolkit in our expert-led webinar, Point Cloud Processing on DriveWorks, Aug. 25.

Lidar sensors enhance an autonomous vehicle’s sensing capabilities, detecting the depth of surrounding objects that may not be picked up by camera or radar.

They do so by bouncing invisible lasers off the vehicle’s surrounding environment, building a 3D image based on the time it takes for those lasers to return. However, processing and extracting contextual meaning from lidar data quickly and efficiently is not as straightforward.

Lidar point cloud processing must be performed in real-time and in tight coordination with other sensing modalities to deliver the full benefits of enhanced perception — a difficult feat to accomplish when working with third-party open source modules.

A Streamlined Solution

With DriveWorks, efficient and accelerated lidar point cloud processing can be performed right out of the gate.

The SDK provides middleware functions that are fundamental to autonomous vehicle development. These consist of the sensor abstraction layer (SAL) and sensor plugins, data recorder, vehicle I/O support and a deep neural network framework. It’s modular, open, and designed to be compliant with automotive industry software standards.

These development tools include a point cloud processing module, which works with the SAL and sensor plugin framework to provide a solid basis for developers to implement a lidar-based perception pipeline with little effort and quick results. 

The module is CUDA-accelerated and straightforward to implement. It’s the same toolkit the NVIDIA autonomous driving team uses to develop our own self-driving systems, making it purpose-built for production solutions rather than purely research and development.

Register now to learn more from NVIDIA experts about the DriveWorks point cloud processing module and how to use it in your autonomous vehicle development process.

Categories
Misc

convert a .pkl file to .pb ? StyleGan2-ada TF model

Hey all, I’m trying to take a trained model and move it to a local deployment environment (OpenFrameworks with the ofxTensorFlow2 library), but the library only takes models in .pb format. Is there a way to convert the model from .pkl to .pb? It is a TF model, so I feel like maybe it isn’t so hard, but I have no idea how.

This is the colab I’m working from: https://colab.research.google.com/github/dvschultz/ml-art-colabs/blob/master/Stylegan2_ada_Custom_Training.ipynb
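
One hedged approach (a sketch only, assuming the StyleGAN2-ada TF repository is on the path and a TensorFlow 1.x environment, as that colab uses; the output op name below is a placeholder you would need to look up by inspecting the loaded graph) is to unpickle the networks and freeze the generator’s graph to a .pb file:

# Sketch only: assumes dnnlib/tflib from the StyleGAN2-ada TF repo are importable
# and TensorFlow 1.x is installed. "Gs/images_out" is a placeholder output op name;
# inspect the loaded generator's graph to find the real one.
import pickle
import tensorflow as tf
import dnnlib.tflib as tflib

tflib.init_tf()
with open("network-snapshot.pkl", "rb") as f:
    _G, _D, Gs = pickle.load(f)          # Gs is the averaged generator network

sess = tf.get_default_session()
output_node = "Gs/images_out"            # placeholder: replace with the real op name
frozen = tf.graph_util.convert_variables_to_constants(
    sess, sess.graph.as_graph_def(), [output_node])
with tf.gfile.GFile("stylegan2_ada.pb", "wb") as f:
    f.write(frozen.SerializeToString())

If ofxTensorFlow2 actually expects a SavedModel directory rather than a single frozen graph, tf.saved_model.simple_save on the same session is the TF 1.x route; it is worth checking which format the library loads.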

submitted by /u/diditforthevideocard

Categories
Misc

Nvidia Releases CUDA Python

submitted by /u/lindaarden
Categories
Misc

Unlocking Operational Consistency with the NVIDIA User Experience CLI Object Model

Cumulus Linux 4.4 introduces a new CLI, NVUE, that is more than just a CLI. NVUE provides a complete object model for Linux, unlocking incredible operational potential.

Cumulus Linux 4.4 is the first release with the NVIDIA User Experience (NVUE), a brand new CLI for Cumulus Linux. Being excited about a new networking CLI sounds a bit like being excited about your new 56k modem. What makes NVUE special isn’t just that it’s a new CLI; it’s the principles it was built on. At its core, NVUE provides a full object model of Cumulus Linux, enabling advanced programmability, extensibility, and usability.

What is an object model?

Object models aren’t exactly the kind of thing network engineers think about daily. I didn’t know what an object model was before I got involved in helping the team design NVUE.

An object model defines the components of a system and their relationships to each other. For example, an interface is an object, with components like an IP address or an MTU setting. What matters is not just that an object model exists, but the thought put into how the relationships between objects and components fit together.
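
As a toy illustration (plain Python, not NVUE code), an interface object whose components are an MTU setting and a list of IP addresses might look like this:

# Toy example only -- not NVUE's actual model -- of an "interface" object
# with an MTU and IP addresses as its components.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Interface:
    name: str
    mtu: int = 9216
    ip_addresses: List[str] = field(default_factory=list)

swp1 = Interface(name="swp1", ip_addresses=["10.1.1.1/24"])
print(swp1)   # Interface(name='swp1', mtu=9216, ip_addresses=['10.1.1.1/24'])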

An interface and IP address are an easy example, but what about something more complicated? Think about a “bond” interface, also called a port-channel. Is the bond a top-level interface, like an Ethernet port, with other Ethernet interfaces as its children? Or is bond membership just another property of the Ethernet interface?

Figure 1. Ethernet interfaces and bonds are at the same level, with circular relationships between them.

Figure 2. A bond is a property of an interface, like the MTU or IP address (a hierarchical relationship).

These relationships get complicated fast. Failing to think them through creates a poor user experience, where you may have to define the same setting multiple times to achieve an end goal, or where the configuration becomes inconsistent. An imaginary network CLI might have you define any route inside a VRF under a VRF object, but any route in the global routing table at the top level, as in the following example:

ip vrf red
  ip route 10.1.1.0/24 via 169.254.1.1
!
ip route 192.168.1.0/24 via 172.16.1.1

This is a trivial example, but the way a route is defined now varies depending on where you are in the system.

What do you get with an object model?

With an understanding of what an object model is, the next question is, “Why should you care?” An object model makes building new ways to interact with the system extremely easy. Every interface talks to an API that represents the object model. The first interface is, of course, the CLI, but anything can now be an interface to the system: REST, gRPC, or even RFC 1149 Avian Carriers.

Figure 3. CLI, REST, gRPC, Terraform, or even RFC 1149 carrier pigeons can all interface with the same NVUE API; CLI and REST interfaces are available in Cumulus Linux 4.4.

Because all the interfaces use the same object model, you get consistent results regardless of how you interact with the system. The CLI and REST API use the same methods to configure a BGP peer, so there is never a chance of seeing different behavior based on which interface you use. And because the object model is the same no matter how you interact with it, going from playing with the CLI to building full automation is an evolution, not a completely new process.
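
As a purely illustrative sketch of that idea in Python, a REST call’s JSON body can mirror the same object model the CLI walks. The host, port, credentials, and endpoint path below are assumptions, and NVUE’s revision-and-apply workflow is omitted; consult the NVUE REST documentation for the real details:

# Illustrative only: base URL, credentials, and path are assumptions, and the
# revision/apply workflow is omitted. The JSON body mirrors the object model
# (interface -> swp1 -> link -> mtu), the same path the CLI expresses as
# "nv set interface swp1 link mtu 9216".
import requests

BASE = "https://switch.example.com:8765/nvue_v1"   # assumed NVUE REST base URL
AUTH = ("cumulus", "password")                     # assumed credentials

resp = requests.patch(
    f"{BASE}/interface/swp1",
    json={"link": {"mtu": 9216}},
    auth=AUTH,
    verify=False,   # lab switch with a self-signed certificate
)
resp.raise_for_status()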

REST and CLI are expected for any network device today. Where can we think beyond this? An object model can be directly imported into a programming language like Python or Java. This enables you to use true programming concepts to build configurations for one device or an entire fabric of devices. You can enforce inputs, values, and relationships like never before. The following code example shows what an NVUE Python interface might look like:

from nvue import Switch

spine01 = Switch()
x = 1
while x <= len(spine01.interfaces):
    # Hypothetical API: bring port x up and assign an IP whose subnet matches
    # the port number, so port 3 gets 10.1.3.1/24
    spine01.interfaces[x].state = "up"
    spine01.interfaces[x].ip = f"10.1.{x}.1/24"
    x = x + 1

In this example, I load the nvue library and create a new Switch object called spine01. I have the object tell me how many interfaces exist on the system with len(spine01.interfaces). For each interface, I put it in the up state and assign an IP address with the subnet value matching the interface number. For example, port 3 would have an IP address of 10.1.3.1/24.

This doesn’t exist yet, but it is absolutely within the realm of possibility because an object model exists. Unlike other networking vendors’ systems, where the model is determined by the CLI, here the CLI is based on the model. The object model is a standalone element that can be imported into programming languages, APIs, or any other system.

Try it out

One of the most valuable pieces of Cumulus Linux is the ability to try all our features and functions virtually. You can use NVIDIA Air to start using NVUE today and see what you think of the future of network CLIs and programmability.

Categories
Offsites

SoundStream: An End-to-End Neural Audio Codec

Audio codecs are used to efficiently compress audio to reduce either storage requirements or network bandwidth. Ideally, audio codecs should be transparent to the end user, so that the decoded audio is perceptually indistinguishable from the original and the encoding/decoding process does not introduce perceivable latency.

Over the past few years, different audio codecs have been successfully developed to meet these requirements, including Opus and Enhanced Voice Services (EVS). Opus is a versatile speech and audio codec, supporting bitrates from 6 kbps (kilobits per second) to 510 kbps, which has been widely deployed across applications ranging from video conferencing platforms, like Google Meet, to streaming services, like YouTube. EVS is the latest codec developed by the 3GPP standardization body targeting mobile telephony. Like Opus, it is a versatile codec operating at multiple bitrates, 5.9 kbps to 128 kbps. The quality of the reconstructed audio using either of these codecs is excellent at medium-to-low bitrates (12–20 kbps), but it degrades sharply when operating at very low bitrates (⪅3 kbps). While these codecs leverage expert knowledge of human perception as well as carefully engineered signal processing pipelines to maximize the efficiency of the compression algorithms, there has been recent interest in replacing these handcrafted pipelines by machine learning approaches that learn to encode audio in a data-driven manner.

Earlier this year, we released Lyra, a neural audio codec for low-bitrate speech. In “SoundStream: an End-to-End Neural Audio Codec”, we introduce a novel neural audio codec that extends those efforts by providing higher-quality audio and expanding to encode different sound types, including clean speech, noisy and reverberant speech, music, and environmental sounds. SoundStream is the first neural network codec to work on speech and music, while being able to run in real-time on a smartphone CPU. It is able to deliver state-of-the-art quality over a broad range of bitrates with a single trained model, which represents a significant advance in learnable codecs.

Learning an Audio Codec from Data
The main technical ingredient of SoundStream is a neural network, consisting of an encoder, decoder and quantizer, all of which are trained end-to-end. The encoder converts the input audio stream into a coded signal, which is compressed using the quantizer and then converted back to audio using the decoder. SoundStream leverages state-of-the-art solutions in the field of neural audio synthesis to deliver audio at high perceptual quality, by training a discriminator that computes a combination of adversarial and reconstruction loss functions that induce the reconstructed audio to sound like the uncompressed original input. Once trained, the encoder and decoder can be run on separate clients to efficiently transmit high-quality audio over a network.
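
As a schematic sketch only (the real SoundStream losses combine multi-scale spectral reconstruction terms with a learned discriminator network, which this toy NumPy version does not attempt to reproduce), the combination of reconstruction and adversarial terms looks roughly like this:

# Schematic toy code: a placeholder discriminator and a squared-error
# reconstruction term, combined the way the text describes.
import numpy as np

def discriminator(x):
    """Placeholder 'realness' score per example."""
    return np.tanh(x.sum(axis=-1))

def generator_loss(x, x_hat, recon_weight=1.0, adv_weight=1.0):
    recon = np.mean((x - x_hat) ** 2)       # reconstruction: stay close to the input audio
    adv = -np.mean(discriminator(x_hat))    # adversarial: make reconstructions score as "real"
    return recon_weight * recon + adv_weight * adv

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 16000))              # toy batch of short waveforms
x_hat = x + 0.01 * rng.standard_normal(x.shape)  # pretend reconstructions
print(generator_loss(x, x_hat))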

SoundStream training and inference. During training, the encoder, quantizer and decoder parameters are optimized using a combination of reconstruction and adversarial losses, computed by a discriminator, which is trained to distinguish between the original input audio and the reconstructed audio. During inference, the encoder and quantizer on a transmitter client send the compressed bitstream to a receiver client that can then decode the audio signal.

Learning a Scalable Codec with Residual Vector Quantization
The encoder of SoundStream produces vectors that can take an indefinite number of values. In order to transmit them to the receiver using a limited number of bits, it is necessary to replace them by close vectors from a finite set (called a codebook), a process known as vector quantization. This approach works well at bitrates around 1 kbps or lower, but quickly reaches its limits when using higher bitrates. For example, even at a bitrate as low as 3 kbps, and assuming the encoder produces 100 vectors per second, one would need to store a codebook with more than 1 billion vectors, which is infeasible in practice.
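
As a quick sanity check of that figure, using only the numbers quoted above:

# Back-of-the-envelope check of the codebook size for plain vector quantization.
bitrate_bps = 3000                                    # 3 kbps
vectors_per_second = 100
bits_per_vector = bitrate_bps // vectors_per_second   # 30 bits to index each vector
codebook_size = 2 ** bits_per_vector                  # one codeword per 30-bit index
print(bits_per_vector, codebook_size)                 # 30, 1073741824 (~1 billion)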

In SoundStream, we address this issue by proposing a new residual vector quantizer (RVQ), consisting of several layers (up to 80 in our experiments). The first layer quantizes the code vectors with moderate resolution, and each of the following layers processes the residual error from the previous one. By splitting the quantization process in several layers, the codebook size can be reduced drastically. As an example, with 100 vectors per second at 3 kbps, and using 5 quantizer layers, the codebook size goes from 1 billion to 320. Moreover, we can easily increase or decrease the bitrate by adding or removing quantizer layers, respectively.
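
To make the mechanics concrete, here is a minimal NumPy sketch of residual vector quantization. The codebooks are random placeholders (in SoundStream they are learned end to end), but the structure shows why 5 layers of 64 entries each store only 5 × 64 = 320 vectors while still spending 5 × 6 = 30 bits per input vector:

# Minimal RVQ sketch with random placeholder codebooks.
import numpy as np

rng = np.random.default_rng(0)
num_layers, codebook_size, dim = 5, 64, 8                  # 5 x 64 = 320 stored vectors
codebooks = rng.standard_normal((num_layers, codebook_size, dim))

def rvq_encode(x, codebooks):
    """Quantize layer by layer; each layer encodes the residual left by the previous one."""
    residual = x.copy()
    indices = []
    for cb in codebooks:
        idx = int(np.argmin(np.linalg.norm(cb - residual, axis=1)))  # nearest codeword
        indices.append(idx)
        residual = residual - cb[idx]
    return indices

def rvq_decode(indices, codebooks):
    """Reconstruct by summing the selected codeword from every layer."""
    return sum(cb[i] for cb, i in zip(codebooks, indices))

x = rng.standard_normal(dim)
codes = rvq_encode(x, codebooks)           # 5 indices, 6 bits each = 30 bits total
x_hat = rvq_decode(codes, codebooks)
print(codes, float(np.linalg.norm(x - x_hat)))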

Because network conditions can vary while transmitting audio, ideally a codec should be “scalable” so that it can change its bitrate from low to high depending on the state of the network. While most traditional codecs are scalable, previous learnable codecs need to be trained and deployed specifically for each bitrate.

To circumvent this limitation, we leverage the fact that the number of quantization layers in SoundStream controls the bitrate, and propose a new method called “quantizer dropout”. During training, we randomly drop some quantization layers to simulate a varying bitrate. This pushes the decoder to perform well at any bitrate of the incoming audio stream, and thus helps SoundStream to become “scalable” so that a single trained model can operate at any bitrate, performing as well as models trained specifically for these bitrates.
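
A minimal sketch of that idea, reusing the layer and frame-rate figures from the example above (the sampling scheme here is an assumption; the exact training recipe may differ):

# Quantizer dropout sketch: sample how many RVQ layers stay active each training
# step, so a single model is exposed to every bitrate it must serve.
import numpy as np

rng = np.random.default_rng(0)
num_layers, bits_per_layer, vectors_per_second = 5, 6, 100   # figures from the 3 kbps example
for step in range(3):
    n_q = rng.integers(1, num_layers + 1)                    # layers kept this step
    bitrate_kbps = n_q * bits_per_layer * vectors_per_second / 1000
    print(f"step {step}: {n_q} quantizer layers -> {bitrate_kbps:.1f} kbps")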

Comparison of SoundStream models (higher is better) that are trained at 18 kbps with quantizer dropout (bitrate scalable), without quantizer dropout (not bitrate scalable) and evaluated with a variable number of quantizers, or trained and evaluated at a fixed bitrate (bitrate specific). The bitrate-scalable model (a single model for all bitrates) does not lose any quality when compared to bitrate-specific models (a different model for each bitrate), thanks to quantizer dropout.

A State-of-the-Art Audio Codec
SoundStream at 3 kbps outperforms Opus at 12 kbps and approaches the quality of EVS at 9.6 kbps, while using 3.2x–4x fewer bits. This means that encoding audio with SoundStream can provide a similar quality while using a significantly lower amount of bandwidth. Moreover, at the same bitrate, SoundStream outperforms the current version of Lyra, which is based on an autoregressive network. Unlike Lyra, which is already deployed and optimized for production usage, SoundStream is still at an experimental stage. In the future, Lyra will incorporate the components of SoundStream to provide both higher audio quality and reduced complexity.

SoundStream at 3kbps vs. state-of-the-art codecs. MUSHRA score is an indication of subjective quality (the higher the better).

The demonstration of SoundStream’s performance compared to Opus, EVS, and the original Lyra codec is presented in these audio examples, a selection of which is provided below.

Speech examples: Reference, Lyra (3 kbps), Opus (6 kbps), EVS (5.9 kbps), SoundStream (3 kbps)

Music examples: Reference, Lyra (3 kbps), Opus (6 kbps), EVS (5.9 kbps), SoundStream (3 kbps)

Joint Audio Compression and Enhancement
In traditional audio processing pipelines, compression and enhancement (the removal of background noise) are typically performed by different modules. For example, it is possible to apply an audio enhancement algorithm at the transmitter side, before audio is compressed, or at the receiver side, after audio is decoded. In such a setup, each processing step contributes to the end-to-end latency. Conversely, we design SoundStream in such a way that compression and enhancement can be carried out jointly by the same model, without increasing the overall latency. In the following examples, we show that it is possible to combine compression with background noise suppression, by activating and deactivating denoising dynamically (no denoising for 5 seconds, denoising for 5 seconds, no denoising for 5 seconds, etc.).

Examples: original noisy audio and denoised output (denoising toggled on and off every 5 seconds).

Conclusion
Efficient compression is necessary whenever one needs to transmit audio, whether when streaming a video, or during a conference call. SoundStream is an important step towards improving machine learning-driven audio codecs. It outperforms state-of-the-art codecs, such as Opus and EVS, can enhance audio on demand, and requires deployment of only a single scalable model, rather than many.

SoundStream will be released as a part of the next, improved version of Lyra. By integrating SoundStream with Lyra, developers can leverage the existing Lyra APIs and tools for their work, providing both flexibility and better sound quality. We will also release it as a separate TensorFlow model for experimentation.

Acknowledgments
The work described here was authored by Neil Zeghidour, Alejandro Luebs, Ahmed Omran, Jan Skoglund and Marco Tagliasacchi. We are grateful for all discussions and feedback on this work that we received from our colleagues at Google.

Categories
Misc

Explore the Latest in Omniverse Create: From Material Browsers to the Animation Sequencer

NVIDIA Omniverse Create 2021.3 is now available in open beta, delivering a new set of features for Omniverse artists, designers, developers, and engineers to enhance graphics and content creation workflows.

We sat down with Frank DeLise, Senior Director of Product Management for Omniverse, to get a tour of some of the exciting new features. Get an overview through the clips or view the entirety of the livestream here.

A Beginner’s Look at Omniverse

Let’s start with a quick overview of the Omniverse Platform.


Introduction to Omniverse Create

NVIDIA Omniverse Create is an app that allows users to assemble, light, simulate, and render large-scale scenes. It is built using NVIDIA Omniverse Kit, and its scene description and in-memory model are based on Pixar’s Universal Scene Description (USD).

Omniverse Create can be used on its own or as a companion application alongside popular content creation tools in a connected, collaborative workflow. Omniverse Connectors, or plug-ins to applications, can provide real-time, synchronized feedback. Having an extra viewport with physically accurate path tracing and physics simulation greatly enhances any creative and design workflow.

Zero Gravity Mode, powered by PhysX 5

Frank shows us Zero Gravity, a physics-based manipulation tool built to make scene composition intuitive for creators. With physics interactions based on NVIDIA PhysX 5, users can now nudge, slide, bump, and push objects into position with no interpenetration. Zero Gravity makes objects behave as solids, so precise positioning, scattering, and grouping of objects is a breeze.

Features to Simplify Workflows
Next on our tour is a trio of features: 

  • Browser Extension: A new set of windows was added for easy browsing of assets, textures, materials, samples, and more.
  • Paint Scattering: Users can select assets and scatter them randomly with a paint brush. The ability to flood-fill areas with percentage ratios makes it easy to create lifelike environments with realistic variety.
  • Quick Search: Users can now search for anything within Create, including connected libraries, functions, and tools, simply by typing its name. Quick Search also uses skills to provide contextual suggestions, like suggesting an HDRI map after you place a dome light. It’s a highly extensible system and can be enhanced through AI integration.

Sun Study Simulations

Omniverse users can further explore lighting options with the Sun Study extension, which offers a quick way to review a model with accurate sunlight. When the Sun Study Timeline is invoked, it will appear on the bottom of the viewport and allow the user to “scrub” or “play” through a given day/night cycle. It even includes dynamic skies with animated clouds for added realism.

Animation and Sequencer

Animation gets a massive push forward with the addition of a sequencer and key framer.

The new sequencer enables users to assemble animations through clips, easily cut from one camera to another, apply motion data to characters, and add a soundtrack or sound effects. 

The key framer extension provides a user-friendly way of adding keyframes and animations to prims in your scenes.

UsdShade Graphic Editor for Material Definition Language (MDL)

New with Create 2021.3 is the UsdShade graph editor for Material Definition Language (MDL) materials. Provided with the Material Graph is a comprehensive list of MDL BSDFs and functions. Materials and functions are represented as drag and droppable nodes in the Material Graph Node List. Now, you can easily create custom materials by connecting shading nodes together and storing them in USD.

OpenVDB Support, Accelerated by NanoVDB

Support for OpenVDB volumes has also been added, making use of NanoVDB for acceleration. This feature helps artists visualize volumetric data created with applications like SideFX Houdini or Autodesk Bifrost.

What is Omniverse Create versus Omniverse View?

Lastly, Frank finishes our tour with an explanation of Omniverse View compared to Omniverse Create.

To learn more, look at the new features in Omniverse View 2021.3.

More Resources

  • Watch the full recording for coverage of additional features including installation using the launcher, USDZ and point cloud support, version control, Iray rendering, payloads, and more! 
  • You can get more details about the latest Create and View apps by reading the release notes in our online documentation.
  • Download the Omniverse Open Beta today and explore these new features!
  • Join us live on Twitch for interactive answers to your questions. 
  • Visit our forums or Discord server to discuss features or seek assistance.
  • Binge watch our tutorials for a deep dive into Omniverse Create and Omniverse View. 
Categories
Misc

Hooked on a Feeling: GFN Thursday Brings ‘NARAKA: BLADEPOINT’ to GeForce NOW

Calling all warriors. It’s a glorious week full of new games. This GFN Thursday comes with the exciting release of the new battle royale NARAKA: BLADEPOINT, as well as the Hello Neighbor franchise, as part of the 11 great games joining the GeForce NOW library this week. Plus, the newest Assassin’s Creed Valhalla DLC has …

The post Hooked on a Feeling: GFN Thursday Brings ‘NARAKA: BLADEPOINT’ to GeForce NOW appeared first on The Official NVIDIA Blog.

Categories
Misc

If I compile TensorFlow from source, will it run faster than if I install it with pip?

During the configuration before compilation, it asks what CUDA compute capability your graphics card has (if you enable CUDA). So wouldn’t that mean that if I compile it myself and select the correct capability, it will be a better fit for my graphics card than the generic tensorflow-gpu package?

submitted by /u/NotSamar

Categories
Misc

Looking for beginner regression exercise

Hey guys, I’ve just gotten started with TensorFlow regression and now I want to do some practice. Can you suggest any simple datasets for me to practice on? Are there ‘beginner’ datasets on Kaggle?

submitted by /u/nyyirs