Categories
Misc

Run RAPIDS on Microsoft Windows 10 Using WSL 2—The Windows Subsystem for Linux


This post was originally published on the RAPIDS AI Blog.

A tutorial to run your favorite Linux software, including NVIDIA CUDA, on Windows

RAPIDS is now more accessible to Windows users! This post walks you through installing RAPIDS on Windows Subsystem for Linux (WSL). WSL is a Windows 10 feature that enables users to run native Linux command-line tools directly on Windows. Using this feature does not require a dual boot environment, taking away complexity and hopefully saving you time. You’ll need access to an NVIDIA GPU with NVIDIA Pascal architecture or newer. Let’s get started right away.

Getting Started

To install RAPIDS, you’ll need to do the following:

  1. Install the latest builds from the Microsoft Insider Program.
  2. Install the NVIDIA preview driver for WSL 2.
  3. Install WSL 2.
  4. Install RAPIDS.

Steps 1–3 can be completed by following the NVIDIA CUDA on WSL guide. However, there are some gotchas. This post walks through each section and points out what to look out for. We recommend opening a tab for the guide alongside this post to make sure that you don’t miss anything. Before you start, be aware that all the steps in the guide must be carried out in order. It’s particularly important that you install a fresh version of WSL 2 only after installing the new build and driver. Also note that the CUDA Toolkit will be installed along with RAPIDS in step 4, so stop following the CUDA on WSL guide once you reach the Setting up CUDA Toolkit section.

Installing the latest builds from the Microsoft Insider program

For your program to run correctly, you need to be using Windows Build version 20145 or higher. When installing the builds, some things to note are:

  • Start off by navigating to your Windows menu. Select Settings > Update and Security > Windows Update. Make sure that you don’t have any pending Windows updates. If you do, click the update button to ensure you’re starting out without any.
  • Dev Channel (previously Fast ring): Fast ring is mentioned in the guide as the channel you should download your build from. The name of this channel is now the Dev Channel. Windows calls the process of updating and installing the latest builds ‘flighting.’ During this process, you must select the Dev Channel when choosing which updates to receive.
  • Downloading and updating requires a restart and can take up to 90 minutes. Feel free to grab a coffee while you wait ;).
  • After you’ve restarted your computer, check your build version by running winver via the Windows Run command. It can be a little tricky to identify the right number. Here’s what you should look for after a successful installation (BUILD 20145 or higher):
Figure 1: The build version is now OS Build 21296, which is sufficient to run WSL 2.

Once you’ve confirmed your build, move on to step 2.

Installing NVIDIA drivers

Next, you’ll need to install an NVIDIA Driver. Keep the following in mind:

  • Select the driver based on the type of NVIDIA GPU in your system. To verify your GPU type, look for the NVIDIA Control Panel in your Start menu; the name should appear there. See the CUDA on Windows Subsystem for Linux (WSL) public preview for more information.
  • Once the download is complete, install the driver using the executable. We strongly recommend choosing the default location for saving it.
  • To check that the driver installed successfully, run the command nvidia-smi in PowerShell. It should output a table with information about your GPU and the driver. You’ll notice the driver version is the same as the one you downloaded.
Figure 2: The NVIDIA driver has been correctly installed, version 465.21.

(Your table might be much shorter and not show any GPU processes. As long as you can see a table and no visible errors, your install should have been successful!) If your driver is successfully installed, let’s jump to step 3. If nothing appears, check if you’ve missed any of the steps and if your build version is correct.

Installing WSL 2

Next, you’ll install WSL 2 with a Linux distribution of your choice using the docs here. Make sure that the distribution you choose is supported by RAPIDS. You can confirm this here. The rest of this post describes the installation of WSL 2 with Ubuntu 18.04. These steps should work similarly with other supported distributions.

There are two ways you can install your RAPIDS-supporting Linux distribution with WSL 2 on Windows 10. The instructions listed in the Windows guide can seem overwhelming, so we’ve distilled them down to the most important parts here:

Using the command line

  • Open your command line and ensure you’re running it as Administrator.
  • Find out which Linux distributions are available and support WSL by typing in the command wsl --list --online.
  • To install a distribution, use the command wsl --install -d <DistributionName>.
  • For Ubuntu 18.04 this command translates to wsl --install -d Ubuntu-18.04 (note the capital U). This should download and install your Linux distribution.
  • Your selected distribution should either immediately open or appear in your Windows Start menu.
  • If this is not true for you, double-check that your Linux distribution and WSL install was successful by running wsl.exe --list. If no distribution appears, navigate to “Programs” in your Control Panel and confirm that the “Windows Hypervisor Platform” and “Windows Subsystem for Linux” boxes are checked, as in the image below. Once confirmed, reboot your computer and try running the install again (possibly twice). Ideally, the WSL terminal should pop up right after the installation.
Figure 3: If your WSL terminal doesn’t open right away, make sure the Windows Hypervisor Platform and Windows Subsystem for Linux features are checked on your system as well.
  • When opening your WSL terminal for the first time, you will be prompted to set up a default (non-root) user. Ensure that you do not skip this step, as you will need this user and its sudo privileges to install packages later.
  • Once you’ve set the default user, proceed to reboot your machine. When you return, you’ll be all set for step 4.

Through the Microsoft Store

  • If you already know which distribution you would like to use, you can download and install it directly from the Microsoft Store on your machine.
  • You’ll need to set the default user and do a reboot in this case as well.

Once you’ve completed this step, you’re ready to install RAPIDS, which brings the CUDA Toolkit along with it, and you’re almost done!

Install RAPIDS

  • If you don’t have it already, start by installing and activating Miniconda in your WSL terminal. We’ll be using the conda command to install the packages we need in this step.
  • You can install RAPIDS with a single conda command. Copy the command for your preferred RAPIDS release from the release selector on the RAPIDS Getting Started page, paste it into your WSL terminal, and you’re all set.

To test your installation, start up the RAPIDS conda environment. You can do this by:

  • Typing conda info --envs, which will let you know the name of the installed RAPIDS environment, and then activating it with conda activate followed by that name.
  • Note: cuDF is supported only on Linux and with Python versions 3.7 and later.
  • Finally, import any RAPIDS library or start a Jupyter notebook.
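
If everything went well, a quick smoke test from Python should confirm that the GPU is reachable from inside WSL. Here is a minimal sketch, assuming the RAPIDS environment is activated and cuDF is installed; the DataFrame contents are purely illustrative:

    # Run inside the activated RAPIDS conda environment in your WSL terminal.
    import cudf

    gdf = cudf.DataFrame({"a": [1, 2, 3, 4], "b": [10.0, 20.0, 30.0, 40.0]})
    print(gdf.describe())          # summary statistics computed on the GPU
    print(gdf.groupby("a").sum())  # a trivial groupby to exercise the GPU backend

If the import succeeds and both operations return results without CUDA errors, your install is working.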

Hopefully, your installation was successful. RAPIDS is open-source, so if you managed to get this far and would like to contribute, take another look at the contributing guide of any of our libraries or join the RAPIDS Slack channel to find out more.

Categories
Misc

Sherd Alert: GPU-Accelerated Deep Learning Sorts Pottery Fragments as Well as Expert Archeologists

A pair of researchers at Northern Arizona University used GPU-based deep-learning algorithms to categorize sherds — tiny fragments of ancient pottery — as well as, or better than, four expert archaeologists. The technique, outlined in a paper published in the June issue of The Journal of Archaeological Science by Leszek Pawlowicz and Christian Downum focused …


Categories
Misc

NVIDIA Studio Goes 3D: Real-Time Ray Tracing and AI Accelerate Adobe’s New Substance 3D Collection of Design Applications

The NVIDIA Studio ecosystem continues to deliver time-saving features and visual improvements to top creative applications. Today, Adobe announced a significant update to their 3D lineup, with new and improved tools available in the Adobe Substance 3D Collection: new versions of Substance 3D Painter, Designer and Sampler, as well as the new application Substance 3D …


Categories
Misc

As Fast as One Can Gogh: Turn Sketches Into Stunning Landscapes with NVIDIA Canvas

Turning doodles into stunning landscapes — there’s an app for that. The NVIDIA Canvas app, now available as a free beta, brings the real-time painting tool GauGAN to anyone with an NVIDIA RTX GPU. Developed by the NVIDIA Research team, GauGAN has wowed creative communities at trade shows around the world by using deep learning …


Categories
Misc

Intro to Deep Learning project in TensorFlow 2.x and Python – free course from Udemy

Submitted by /u/Ordinary_Craft
Categories
Misc

Concatenating 3 multivariate sequences as an input to 1 model?

I’ve been trying to figure it out for about a week now but I keep getting ‘Data cardinality is ambiguous’. I’m creating a sequential model for each multivariate sequence, then concatenating the .output from each of those models as the input to a Keras model. I’m also feeding the inputs in as a list of each .input from each model.

Even when I make the last layer of each sequence’s model a dense layer with the same number of units, the cardinality error still complains about concatenating different sequence lengths.

Any ideas or working code appreciated

Submitted by /u/Techguy13
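
One common way around the ‘Data cardinality is ambiguous’ error described above is to build a single multi-input model with the Keras functional API: give each sequence its own Input (all inputs must share the same number of samples), summarize each branch to a fixed-size vector, and concatenate those vectors before the output head. Below is a minimal sketch of that idea; the shapes, layer sizes, and names are hypothetical, not taken from the original post:

    import numpy as np
    from tensorflow.keras import layers, Model

    n_samples = 256                       # every input must have the same number of samples
    shapes = [(30, 4), (50, 6), (20, 3)]  # (timesteps, features) per sequence -- hypothetical

    inputs, branches = [], []
    for t, f in shapes:
        inp = layers.Input(shape=(t, f))
        x = layers.LSTM(32)(inp)          # each branch reduces its sequence to a (batch, 32) vector
        inputs.append(inp)
        branches.append(x)

    merged = layers.concatenate(branches)  # safe: all branches now have identical shape
    out = layers.Dense(1)(layers.Dense(64, activation="relu")(merged))
    model = Model(inputs=inputs, outputs=out)
    model.compile(optimizer="adam", loss="mse")

    # Feed one array per input; the first dimension (samples) must match across the list.
    X = [np.random.rand(n_samples, t, f).astype("float32") for t, f in shapes]
    y = np.random.rand(n_samples, 1).astype("float32")
    model.fit(X, y, epochs=2, batch_size=32)

The cardinality message usually means the arrays in the input list have different numbers of samples, so it is worth checking len(x) for each array before calling fit.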

Categories
Misc

Metropolis Spotlight: Nota Is Transforming Traffic Management Systems With AI

Nota, an NVIDIA Metropolis partner, is using AI to make roadways safer and more efficient with NVIDIA’s edge GPUs and deep learning SDKs.

Nota developed a real-time traffic control solution that uses image recognition technology to identify traffic volume and queues, analyze congestion, and optimize traffic signal controls at intersections. 

Using off-the-shelf DeepStream SDK features, such as line crossing and setting a region of interest, Nota significantly improved how accurately it could examine traffic situations. Nota deployed the solution at a busy intersection in Pyeongtaek, South Korea to analyze traffic flow and control traffic lights in real time. Nota was able to improve traffic flow by 25% during regular hours, and by more than 300% during rush hour, saving the city traffic-congestion-related costs and reducing the time drivers spend stuck in traffic.

Read more in our solution showcase.

Categories
Misc

Metropolis Spotlight: INEX Is Revolutionizing Toll Road Systems with Real-time Video Processing

INEX Technologies, an NVIDIA Metropolis partner, designs, develops, and manufactures comprehensive hardware and software solutions for license plate recognition and vehicle identification.

The INEX RoadView solution provides automatic axle counting and vehicle classification, as well as lane-zone tracking and triggering, using LPR and RoadView cameras. RoadView video-based recognition eliminates the need for costly concrete cutting, in-ground loop maintenance, and axle-counting treadles.

NVIDIA GPUs are used to accelerate the real-time video analysis of the INEX ALPR system, which requires incredibly high accuracy along with high throughput and high frame rates. At the edge, INEX uses the NVIDIA Jetson Nano and Jetson Xavier NX platforms and the embedded software stack.

Under the hood  

The INEX video pipeline is based on the NVIDIA DeepStream SDK, which helps achieve highly optimized throughput and makes it simpler to integrate complex classification and detection algorithms. INEX further leverages some of the world’s most powerful AI productivity tools by integrating NVIDIA pre-trained models and the NVIDIA Transfer Learning Toolkit into its development workflow, reducing development time by a stunning 60%. And by going end-to-end with the full stack of NVIDIA hardware and software and deploying on the NVIDIA Jetson edge platform, INEX reduced hardware and setup costs by 60% and lowered operating and maintenance costs by 50%.

The implications and impact for INEX are significant. Leveraging the NVIDIA platform, they can roll out world-class solutions that perform challenging real-time vehicle detection and classification, read license plates from all 50 US states, and have expanded to countries in Europe, the Far East, the Middle East, and Australia. Tolling authorities upgrading to the INEX vehicle classification and ALPR system can supercharge their toll systems quickly and easily – leveraging the latest AI technology.

Read more in our solution showcase.

Categories
Misc

NVIDIA Research: Learning Modular Scene Representations With Neural Scene Graphs

NVIDIA researchers will present their paper “Neural Scene Graph Rendering” at SIGGRAPH 2021, August 9-13, which introduces a neural scene representation inspired by traditional graphics scene graphs. 

Recent advances in neural rendering have pushed the boundaries of photorealistic rendering; take StyleGAN as an example of producing realistic images of fictional people. The next big challenge is bringing these neural techniques into digital content-creation applications, like Maya and Blender. This challenge requires a new generation of neural scene models that feature artistic control and modularity that is comparable to classical 3D meshes and material representations.

“In order to kick off these developments, we needed to step back a little bit and scale down the scene complexity,” mentions Jonathan Granskog, the first author of the paper.

This is one of the reasons why the images in the paper are reminiscent of the early years of computer graphics. However, the artistic control and the granularity of the neural elements are closer to what modern applications would require to integrate neural rendering into traditional authoring pipelines. The proposed approach allows organizing learned neural elements into an (animated) scene graph, much like in standard authoring tools.

Figure: Three frames from an animation with tangram shapes that gradually morph from one assembly into another. The twirl deformation is applied to individual pieces during the transition.

Figure: Frames from a 2D sprite animation featuring 16 alpha-masked textures that are instantiated over a static background image. The prediction attains most of the texture detail. Artifacts appear primarily where two “ground” tiles meet due to slightly softer reproduction of texture edges.

Figure: Two diffuse tori playing beach volleyball with a volumetric ball. In the right-most column, the materials of the ball and tori are swapped.

A neural element may represent, for instance, the geometry of a teapot or the appearance of porcelain. Each such scene element is stored as an abstract, high-dimensional vector with its parameters being learned from images. During the training process, the method also learns how to manipulate and render these abstract vectors. For instance, a vector representing a piece of geometry can be translated, rotated, bent, or twisted using a manipulator. Analogously, material elements can be altered by stretching the texture content, desaturating it, or changing the hue.

Since the optimizable components (vectors, manipulators, and the renderer) are very general, the approach can handle both 2D and 3D scenes without changing the methodology. The artist can compose a scene by organizing the vectors and manipulators into a scene graph. The scene graph is then collapsed into a stream of neural primitives that are translated into an RGB image using a streaming neural renderer, much like a rasterizer would turn a stream of triangles into an image.
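
To make that structure concrete, here is a toy sketch of the scene-graph idea: leaves hold learned latent vectors, internal nodes attach manipulators, and collapsing the graph yields a flat stream of manipulated vectors for a renderer. This is only an illustration of the described data flow, not the paper’s implementation; the class names, the twirl stand-in, and the renderer stub are hypothetical:

    from dataclasses import dataclass, field
    from typing import Callable, List, Optional, Sequence
    import numpy as np

    # A manipulator acts on a latent vector, e.g. a learned "translate" or "twist".
    Manipulator = Callable[[np.ndarray], np.ndarray]

    @dataclass
    class SceneNode:
        latent: Optional[np.ndarray] = None                      # leaf: learned geometry/material vector
        manipulators: List[Manipulator] = field(default_factory=list)
        children: List["SceneNode"] = field(default_factory=list)

    def collapse(node: SceneNode, inherited: Sequence[Manipulator] = ()) -> List[np.ndarray]:
        """Flatten the graph into a stream of manipulated latent vectors."""
        ops = list(inherited) + node.manipulators  # manipulators accumulate from root to leaf
        stream: List[np.ndarray] = []
        if node.latent is not None:
            v = node.latent
            for op in ops:
                v = op(v)
            stream.append(v)
        for child in node.children:
            stream.extend(collapse(child, ops))
        return stream

    # Example: a "twirl" manipulator (toy stand-in) applied to a single leaf primitive.
    twirl = lambda v: np.roll(v, 1)
    root = SceneNode(manipulators=[twirl], children=[SceneNode(latent=np.arange(4.0))])
    stream = collapse(root)   # -> [array([3., 0., 1., 2.])]
    # A streaming neural renderer would then map the primitive stream to an RGB image:
    # image = streaming_neural_renderer(stream)   # hypothetical renderer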

The analogy to the traditional scene graphs and rendering pipelines is not coincidental.

“Our goal is to eventually combine neural and classical scene primitives, and bringing the representations closer to each other is the first step on that path,” says Jan Novák, a co-author of the paper.

This will unlock the possibility of extracting scene elements from photographs using AI algorithms, combining them with classical graphics representations, and composing scenes and animations in a controlled manner.

The animations on this page illustrate the potential. The individual neural elements were learned from images of random static scenes. An artist then defined a sequence of scene graphs to produce a fluent animation consisting of the learned elements. While there is still a long way to go to achieve the high-quality visuals and scene complexity of modern applications with this approach, the paper presents a feasible path for bringing neural and classical rendering together. Once the two fully join forces, real-time photorealistic rendering could experience the next quantum leap.

Learn more: Check out the project website.

Categories
Offsites

Quantum Machine Learning and the Power of Data

Quantum computing has rapidly advanced in both theory and practice in recent years, and with it the hope for the potential impact in real applications. One key area of interest is how quantum computers might affect machine learning. We recently demonstrated experimentally that quantum computers are able to naturally solve certain problems with complex correlations between inputs that can be incredibly hard for traditional, or “classical”, computers. This suggests that learning models made on quantum computers may be dramatically more powerful for select applications, potentially boasting faster computation, better generalization on less data, or both. Hence it is of great interest to understand in what situations such a “quantum advantage” might be achieved.

The idea of quantum advantage is typically phrased in terms of computational advantages. That is, given some task with well defined inputs and outputs, can a quantum computer achieve a more accurate result than a classical machine in a comparable runtime? There are a number of algorithms for which quantum computers are suspected to have overwhelming advantages, such as Shor’s factoring algorithm for factoring products of large primes (relevant to RSA encryption) or the quantum simulation of quantum systems. However, the difficulty of solving a problem, and hence the potential advantage for a quantum computer, can be greatly impacted by the availability of data. As such, understanding when a quantum computer can help in a machine learning task depends not only on the task, but also the data available, and a complete understanding of this must include both.

In “Power of data in quantum machine learning”, published in Nature Communications, we dissect the problem of quantum advantage in machine learning to better understand when it will apply. We show how the complexity of a problem formally changes with the availability of data, and how this sometimes has the power to elevate classical learning models to be competitive with quantum algorithms. We then develop a practical method for screening when there may be a quantum advantage for a chosen set of data embeddings in the context of kernel methods. We use the insights from the screening method and learning bounds to introduce a novel method that projects select aspects of feature maps from a quantum computer back into classical space. This enables us to imbue the quantum approach with additional insights from classical machine learning that shows the best empirical separation in quantum learning advantages to date.

Computational Power of Data
The idea of quantum advantage over a classical computer is often framed in terms of computational complexity classes. Examples such as factoring large numbers and simulating quantum systems are classified as bounded-error quantum polynomial time (BQP) problems, which are those thought to be handled more easily by quantum computers than by classical systems. Problems easily solved on classical computers are called bounded-error probabilistic polynomial time (BPP) problems.

We show that learning algorithms equipped with data from a quantum process, such as a natural process like fusion or chemical reactions, form a new class of problems (which we call BPP/Samp) that can efficiently perform some tasks that traditional algorithms without data cannot, and is a subclass of the problems efficiently solvable with polynomial sized advice (P/poly). This demonstrates that for some machine learning tasks, understanding the quantum advantage requires examination of available data as well.


Geometric Test for Quantum Learning Advantage

Informed by the results that the potential for advantage changes depending on the availability of data, one may ask how a practitioner can quickly evaluate if their problem may be well suited for a quantum computer. To help with this, we developed a workflow for assessing the potential for advantage within a kernel learning framework. We examined a number of tests, the most powerful and informative of which was a novel geometric test we developed.

In quantum machine learning methods, such as quantum neural networks or quantum kernel methods, a quantum program is often divided into two parts, a quantum embedding of the data (an embedding map for the feature space using a quantum computer), and the evaluation of a function applied to the data embedding. In the context of quantum computing, quantum kernel methods make use of traditional kernel methods, but use the quantum computer to evaluate part or all of the kernel on the quantum embedding, which has a different geometry than a classical embedding. It was conjectured that a quantum advantage might arise from the quantum embedding, which might be much better suited to a particular problem than any accessible classical geometry.
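
As a purely classical illustration of this split, the sketch below trains scikit-learn’s SVC on a precomputed Gram matrix: the learner only ever sees kernel values, so in a quantum kernel method those values would instead be estimated on a quantum device from the quantum embedding. The toy RBF function standing in for the kernel here is, of course, not a quantum kernel, and the data are synthetic:

    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X_train, X_test = rng.normal(size=(80, 5)), rng.normal(size=(20, 5))
    y_train = (X_train[:, 0] > 0).astype(int)       # toy labels

    def kernel(A, B, gamma=0.5):
        # Stand-in kernel; a quantum kernel method would estimate these entries on hardware.
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    clf = SVC(kernel="precomputed").fit(kernel(X_train, X_train), y_train)
    preds = clf.predict(kernel(X_test, X_train))    # rows = test points, columns = training points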

We developed a quick and rigorous test that can be used to compare a particular quantum embedding, kernel, and data set to a range of classical kernels and assess if there is any opportunity for quantum advantage across, e.g., possible label functions such as those used for image recognition tasks. We define a geometric constant g, which quantifies the amount of data that could theoretically close the gap between the quantum and classical geometries, based on the geometric test. This is an extremely useful technique for deciding, based on data constraints, whether a quantum solution is right for the given problem.

Projected Quantum Kernel Approach
One insight revealed by the geometric test was that existing quantum kernels often suffered from a geometry that was easy to best classically because they encouraged memorization instead of understanding. This inspired us to develop a projected quantum kernel, in which the quantum embedding is projected back to a classical representation. While this representation is still hard to compute with a classical computer directly, it comes with a number of practical advantages compared with staying entirely in the quantum space.

Geometric quantity g, which quantifies the potential for quantum advantage, depicted for several embeddings, including the projected quantum kernel introduced here.

By selectively projecting back to classical space, we can retain aspects of the quantum geometry that are still hard to simulate classically, but it becomes much easier to develop distance functions, and hence kernels, that are better behaved with respect to modest changes in the input than the original quantum kernel was. In addition, the projected quantum kernel facilitates better integration with powerful non-linear kernels (like the squared exponential) that have been developed classically, which is much more challenging to do in the native quantum space.
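
For instance, if project(x) denotes the classical feature vector obtained by projecting the quantum embedding back to classical space (project is a hypothetical placeholder here, not an API from the paper), a squared-exponential kernel on those features takes only a few lines:

    import numpy as np

    def projected_sq_exp_kernel(X, Y, project, gamma=1.0):
        # project: maps a raw input to classical features derived from the quantum embedding.
        PX = np.asarray([project(x) for x in X])
        PY = np.asarray([project(y) for y in Y])
        d2 = ((PX[:, None, :] - PY[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)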

This projected quantum kernel has a number of benefits over previous approaches, including an improved ability to describe non-linear functions of the existing embedding, a reduction in the resources needed to process the kernel from quadratic to linear with the number of data points, and the ability to generalize better at larger sizes. The kernel also helps to expand the geometric quantity g, which helps ensure the greatest potential for quantum advantage.

Data Sets Exhibit Learning Advantages
The geometric test quantifies the potential advantage for all possible label functions; in practice, however, we are most often interested in specific label functions. Using learning-theoretic approaches, we also bound the generalization error for specific tasks, including those which are definitively quantum in origin. As the advantage of a quantum computer relies on its ability to use many qubits simultaneously, but previous approaches scale poorly in the number of qubits, it is important to verify the tasks at reasonably large qubit sizes (>20) to ensure that a method has the potential to scale to real problems. For our studies we verified up to 30 qubits, which was enabled by the open-source tool TensorFlow-Quantum, allowing us to scale to petaflops of compute.

Interestingly, we showed that many naturally quantum problems, even up to 30 qubits, were readily handled by classical learning methods when sufficient data were provided. Hence one conclusion is that even for some problems that look quantum, classical machine learning methods empowered by data can match the power of quantum computers. However, using the geometric construction in combination with the projected quantum kernel, we were able to construct a data set that exhibited an empirical learning advantage for a quantum model over a classical one. Thus, while it remains an open question to find such data sets in natural problems, we were able to show the existence of label functions where this can be the case. Although this problem was engineered and a quantum computational advantage would require the embeddings to be larger and more challenging, this work represents an important step in understanding the role data plays in quantum machine learning.

Prediction accuracy as a function of the number of qubits (n) for a problem engineered to maximize the potential for learning advantage in a quantum model. The data is shown for two different sizes of training data (N).

For this problem, we scaled up the number of qubits (n) and compared the prediction accuracy of the projected quantum kernel to existing kernel approaches and the best classical machine learning model in our dataset. Moreover, a key takeaway from these results is that although we showed the existence of datasets where a quantum computer has an advantage, for many quantum problems, classical learning methods were still the best approach. Understanding how data can affect a given problem is a key factor to consider when discussing quantum advantage in learning problems, unlike traditional computation problems for which that is not a consideration.

Conclusions
When considering the ability of quantum computers to aid in machine learning, we have shown that the availability of data fundamentally changes the question. In our work, we develop a practical set of tools for examining these questions, and use them to develop a new projected quantum kernel method that has a number of advantages over existing approaches. We build towards the largest numerical demonstration to date, 30 qubits, of potential learning advantages for quantum embeddings. While a complete computational advantage on a real world application remains to be seen, this work helps set the foundation for the path forward. We encourage any interested readers to check out both the paper and related TensorFlow-Quantum tutorials that make it easy to build on this work.

Acknowledgements
We would like to acknowledge our co-authors on this paper — Michael Broughton, Masoud Mohseni, Ryan Babbush, Sergio Boixo, and Hartmut Neven, as well as the entirety of the Google Quantum AI team. In addition, we acknowledge valuable help and feedback from Richard Kueng, John Platt, John Preskill, Thomas Vidick, Nathan Wiebe, Chun-Ju Wu, and Balint Pato.


¹ Current affiliation: Institute for Quantum Information and Matter and Department of Computing and Mathematical Sciences, Caltech, Pasadena, CA, USA