Categories
Misc

"kernel driver does not appear to be running on this host"

I looked the problem up but didn't find any solutions; the only threads I found were from people who wanted to use TensorFlow with a GPU. So here I post:

My situation:

I know the basics of Python, know a little bit about virtual environments, and I'm using the TensorFlow Object Detection API without a GPU on Ubuntu 18.04.

I installed the TensorFlow Object Detection API with this Anaconda guide (https://tensorflow-object-detection-api-tutorial.readthedocs.io/en/latest/), though I'm not sure if I activated the tensorflow environment (“conda activate tensorflow”) while doing this. It worked fine, and I wrote various programs with Spyder 5.2.3 using TensorFlow and object detection.

Then I made a terrible rookie mistake and updated Anaconda (and I believe conda too), because I was pretty much mindlessly copying some pip commands, and everything stopped working because of dependency chaos.

I tried to revert the update with conda revisions, but it wasn't working, so I tried deleting Anaconda with

conda install anaconda-clean

anaconda-clean --yes

rm -rf ~/anaconda3

and uninstalling TensorFlow with

pip uninstall tensorflow

I tried reinstalling the whole thing twice, but since then I get the classic error/hint for not using a GPU, plus an error message like “kernel driver does not appear to be running on this host” and UNKNOWN ERROR: 303, with some library files reported missing that are associated with CUDA, even though I don't use CUDA since I have no GPU.
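
For reference, a minimal way to check what TensorFlow actually sees on a CPU-only machine, and to stop it from probing the CUDA driver at all, is something like the following (standard TensorFlow calls, nothing specific to the Object Detection API):

import os
# Hide all CUDA devices so TensorFlow does not try to load the GPU driver
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"

import tensorflow as tf
print(tf.__version__)
# Should print an empty list on a CPU-only machine
print("GPUs visible to TensorFlow:", tf.config.list_physical_devices("GPU"))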

Does it have something to do with a virtual environment I'm not using, or did I not uninstall TensorFlow or Anaconda properly, or is it something else?

Would appreciate some help if possible.

submitted by /u/Mumm13

Categories
Misc

Can someone please tell me how to upload training data? I'm trying to find the numbers 0-9 on a page

submitted by /u/Living-Aardvark-952
Categories
Misc

First Wave of Startups Harnesses UK’s Most Powerful Supercomputer to Power Digital Biology Breakthroughs

Four NVIDIA Inception members have been selected as the first cohort of startups to access Cambridge-1, the U.K.’s most powerful supercomputer. The system will help British companies Alchemab Therapeutics, InstaDeep, Peptone and Relation Therapeutics enable breakthroughs in digital biology. Officially launched in July, Cambridge-1 — an NVIDIA DGX SuperPOD cluster powered by NVIDIA DGX A100…


Categories
Misc

NVIDIA Launches Omniverse for Developers: A Powerful and Collaborative Game Creation Environment

Enriching its game developer ecosystem, NVIDIA today announced the launch of new NVIDIA Omniverse™ features that make it easier for developers to share assets, sort asset libraries, collaborate and deploy AI to animate characters’ facial expressions in a new game development pipeline.

Categories
Misc

At GTC: NVIDIA RTX Professional Laptop GPUs Debut, New NVIDIA Studio Laptops, a Massive Omniverse Upgrade and NVIDIA Canvas Update

Digital artists and creative professionals have plenty to be excited about at NVIDIA GTC. Impressive NVIDIA Studio laptop offerings from ASUS and MSI launch with upgraded RTX GPUs, providing more options for professional content creators to elevate and expand creative possibilities. NVIDIA Omniverse gets a significant upgrade — including updates to the Omniverse Create, Machinima…


Categories
Misc

NVIDIA Omniverse Upgrade Delivers Extraordinary Benefits to 3D Content Creators

At GTC, NVIDIA announced significant updates for millions of creators using the NVIDIA Omniverse real-time 3D design collaboration platform. The announcements kicked off with updates to the Omniverse apps Create, Machinima and Showroom, with an imminent View release. Powered by GeForce RTX and NVIDIA RTX GPUs, they dramatically accelerate 3D creative workflows. New Omniverse Connections…


Categories
Misc

Jumpstarting Link-Level Simulations with NVIDIA Sionna

Sionna is a GPU-accelerated open-source library for physical layer and link-level simulations.

Even while 5G wireless networks are being installed and used worldwide, researchers in academia and industry have already started defining visions and critical technologies for 6G. Although nobody knows what 6G will be, a recurring vision is that 6G must enable the creation of digital twins and distributed machine learning (ML) applications at an unprecedented scale. 6G research requires new tools.

Figure 1. Key emerging technologies in 6G: holographic MIMO, reconfigurable intelligent surfaces, and an AI-native air interface

Some of the key technologies underpinning the 6G vision include communication at the high frequencies known as the terahertz band, where orders of magnitude more spectrum is available. Other examples include the following:

  • Reconfigurable intelligent surfaces (RIS) to control how electromagnetic waves are reflected and achieve the best coverage.
  • Integrated sensing and communications (ISAC) to turn 6G networks into sensors, which offers many exciting applications for autonomous vehicles, road safety, robotics, and logistics. 

Machine learning is expected to play a defining role for the entire 6G protocol stack, which may revolutionize how we design and standardize communication systems.

Addressing the research challenges of these revolutionary technologies requires a new generation of tools to achieve the breakthroughs that will define communications in the 6G era. Here is why:

  • Many 6G technologies require the simulation of a specific environment, such as a factory or cell site, with a spatially consistent correspondence between physical location, wireless channel impulse response, and visual input. This can currently only be achieved by either costly measurement campaigns, or by efficient simulation based on a combination of scene rendering and ray tracing.  
  • As machine learning and neural networks become increasingly important, researchers would benefit tremendously from a link-level simulator with native ML integration and automatic gradient computation.
  • 6G simulations need unprecedented modeling accuracy and scale. The full potential of ML-enhanced algorithms will only be realized through physically-based simulations that account for reality in a level of detail that has been impossible in the past. 

Introducing NVIDIA Sionna

To address these needs, NVIDIA developed Sionna, a GPU-accelerated open-source library for link-level simulations. 

Sionna enables rapid prototyping of complex communication system architectures. It’s the world’s first framework that natively enables the use of neural networks in the physical layer and eliminates the need for separate toolchains for data generation, training, and performance evaluation. 

Sionna implements a wide range of carefully tested, state-of-the-art algorithms that can be used for benchmarking and end-to-end performance evaluation. This lets you focus on your research, making it more impactful and reproducible while you spend less time implementing components outside your area of expertise. 

Sionna is written in Python and based on TensorFlow and Keras. All components are implemented as Keras layers, which lets you build sophisticated system architectures by connecting the desired layers in the same way you would build a neural network. 

Apart from a few exceptions, all components are differentiable so that gradients can be back-propagated through an entire system. This is the key enabler for system optimization and machine learning, especially the integration of neural networks. 

NVIDIA GPU acceleration provides orders-of-magnitude faster simulations and scaling to large multi-GPU setups, enabling the interactive exploration of such systems. If no GPU is available, Sionna even runs on the CPU, though more slowly.

Sionna comes with rich documentation and a wide range of tutorials that make it easy to get started. 

Figure 2. Features of Sionna’s first release: forward error correction, channel models, multiuser MIMO, and OFDM

The first release of Sionna has the following major features:  

  • 5G LDPC, 5G polar, and convolutional codes, rate-matching, CRC, interleaver, scrambler 
  • Various decoders: BP variants, SC, SCL, SCL-CRC, Viterbi 
  • QAM and custom modulation schemes 
  • 3GPP 38.901 channel models (TDL, CDL, RMa, UMa, UMi), Rayleigh, AWGN 
  • OFDM 
  • MIMO channel estimation, equalization, and precoding 

Sionna is released under the Apache 2.0 license, and we welcome contributions from external parties.

Hello, Sionna!

The following code example shows a Sionna “Hello, World!” example in which the transmission of a batch of LDPC codewords over an AWGN channel using 16QAM modulation is simulated. It shows how Sionna layers are instantiated and applied to a previously defined tensor. The coding style follows the functional API of Keras. You can open this example directly in a Jupyter notebook on Google Colaboratory.

# Sionna imports (module paths follow the Sionna documentation; adjust if your version differs)
from sionna.utils import BinarySource
from sionna.mapping import Constellation, Mapper, Demapper
from sionna.channel import AWGN
from sionna.fec.ldpc import LDPC5GEncoder, LDPC5GDecoder

batch_size = 1024
n = 1000   # codeword length
k = 500    # information bits per codeword
m = 4      # bits per symbol (16QAM)
snr = 10   # signal-to-noise ratio

c = Constellation("qam", m)                           # 16QAM constellation
b = BinarySource()([batch_size, k])                   # random information bits
u = LDPC5GEncoder(k, n)(b)                            # 5G LDPC encoding
x = Mapper(constellation=c)(u)                        # map coded bits to symbols
y = AWGN()([x, 1/snr])                                # AWGN channel
llr = Demapper("app", constellation=c)([y, 1/snr])    # compute log-likelihood ratios
b_hat = LDPC5GDecoder(LDPC5GEncoder(k, n))(llr)       # LDPC decoding
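
As a quick sanity check (not part of the original snippet), the transmitted and decoded bits can be compared with plain TensorFlow to estimate the bit error rate:

import tensorflow as tf

# Fraction of decoded bits that differ from the transmitted information bits
ber = tf.reduce_mean(tf.cast(tf.not_equal(b, b_hat), tf.float32))
print(f"Estimated BER: {ber.numpy():.4f}")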

One of the key advantages of Sionna is that components can be made trainable or replaced by neural networks. NVIDIA made Constellation trainable and replaced Demapper with a NeuralDemapper, which is just a neural network defined through Keras.

c = Constellation("qam", m, trainable=True)           # constellation points are now trainable
b = BinarySource()([batch_size, k])
u = LDPC5GEncoder(k, n)(b)
x = Mapper(constellation=c)(u)
y = AWGN()([x, 1/snr])
llr = NeuralDemapper()([y, 1/snr])                    # Keras-defined neural network demapper
b_hat = LDPC5GDecoder(LDPC5GEncoder(k, n))(llr)

What happens under the hood is that the tensor defining the constellation points has now become a trainable TensorFlow variable and can be tracked together with the weights of NeuralDemapper by the TensorFlow automatic differentiation feature. For these reasons, Sionna can be seen as a differentiable link-level simulator.
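
To make this concrete, the following is a minimal training-loop sketch, not code from the Sionna documentation. It assumes the layers from the previous snippet have been bound to named variables (binary_source, encoder, mapper, awgn, and neural_demapper, with c the trainable Constellation), and it uses binary cross-entropy between the LLRs and the coded bits as an illustrative loss:

import tensorflow as tf

optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)
bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)

def train_step():
    b = binary_source([batch_size, k])      # fresh batch of information bits
    u = encoder(b)                          # 5G LDPC codewords
    with tf.GradientTape() as tape:
        x = mapper(u)                       # symbols from the trainable constellation
        y = awgn([x, 1/snr])                # noisy channel output
        llr = neural_demapper([y, 1/snr])   # LLRs produced by the neural network
        loss = bce(u, llr)                  # compare LLRs against the coded bits
    # Gradients flow through both the demapper weights and the constellation points
    weights = neural_demapper.trainable_variables + c.trainable_variables
    grads = tape.gradient(loss, weights)
    optimizer.apply_gradients(zip(grads, weights))
    return loss

for step in range(1000):
    loss = train_step()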

Looking ahead

Soon, Sionna will allow for integrated ray tracing to replace stochastic channel models, enabling many new fields of research. Ultra-fast ray tracing is a crucial technology for digital twins of communication systems. For example, this enables the co-design of a building’s architecture and the communication infrastructure to achieve unprecedented levels of throughput and reliability. 

Figure 3. Access the power of hardware-accelerated ray tracing from within a Jupyter notebook

Sionna takes advantage of computing (NVIDIA CUDA cores), AI (NVIDIA Tensor Cores), and ray tracing cores of NVIDIA GPUs for lightning-fast simulations of 6G systems.

We hope you share our excitement about Sionna, and we look forward to hearing about your success stories!

For more information, see the following resources:

Categories
Misc

New Sensor Partners Expand Surgical, Ultrasound, and Data Acquisition Capabilities in the NVIDIA Clara Holoscan Platform

NVIDIA Clara Holoscan offers an expanded selection of third-party interface options for video capture, ultrasound research, data acquisition, and connection to legacy medical devices.

New advances in computation make it possible for medical devices to automatically detect, measure, predict, simulate, map, and guide clinical care teams. NVIDIA Clara Holoscan, the full-stack AI computing platform for medical devices, has added new sensor front-end partners for video capture, ultrasound research, data acquisition, and connection to legacy medical devices.

Clara Holoscan currently consists of developer kits with an accompanying Clara Holoscan SDK for developing AI models. Announced today at GTC, Clara Holoscan MGX, the medical-grade platform for building software-defined medical devices, will be available in 2023 for production-ready deployment.

With nine front-end partners now supported on Clara Holoscan, medical device developers can add AI capabilities that augment human interpretation, maximize efficiency, and reduce error.

Powering low-latency streaming for surgical video applications

NVIDIA has partnered with several leading video capture card manufacturers to provide the software driver support for these cards to be inserted into the PCI Express slots in the Clara AGX and Clara Holoscan Developer Kits. In addition, these capture cards will support the NVIDIA GPUDirect technology, which uses remote direct memory access to transfer video data directly from the capture card to GPU memory.

AJA Video Systems provides high-quality video I/O devices for professional video applications. The Corvid and Kona series of SDI and HDMI video capture cards are supported on Clara Developer Kits. The partnership between NVIDIA and AJA has led to the addition of Clara AGX Developer Kit support in the AJA NTV2 SDK and device drivers as of the NTV2 SDK 16.1 release. 

KAYA Instruments offers CoaXPress and CameraLink video capture cards for connecting a wide array of scientific camera solutions and electron microscopes. KAYA Instruments capture cards are supported on the Clara AGX Developer Kit with an upcoming version of the KAYA Instruments driver software.

Deltacast designs and produces a range of cost-effective video capture cards for use in the broadcast video, industrial, aerospace, and medical markets. Deltacast video interface cards support a variety of protocols including 12G-SDI and HDMI 2.0, offering reliability, low latency, and high quality.  Deltacast will support the Clara AGX Developer Kit in their upcoming VideoMaster 6.20 SDK and driver software release.

Blackmagic Design is one of the world’s leading innovators and manufacturers of creative video technology. Their DeckLink series of SDI and HDMI video capture cards support resolutions up to 8k. An upcoming release of their desktop video ecosystem, including the DeckLink driver and desktop video SDK, will support the Clara AGX Developer Kit.

YUAN High Tech offers a wide variety of video capture cards for HDMI, SDI, DVI, IP, and analog video.  Yuan has over 10 years of experience supporting the medical device industry with their video capture solutions and will support the Clara AGX Developer Kit in an upcoming release of their driver software.

Magewell produces a line of Pro Capture PCIe cards supporting SDI, HDMI, DVI, and analog video formats for reliable, high-quality video applications in broadcast, media, and medical applications. Magewell will support the Clara AGX Developer Kit in an upcoming release of their driver software.

Real-time, high-performance compute for ultrasound imaging

Ultrasound imaging is another application where real-time, high-performance compute is crucial at every stage of the pipeline. The NVIDIA Clara Holoscan SDK supports ultrasound imaging from beamforming to image reconstruction to post-processing and rendering. For developers designing the next generation of software-defined ultrasound devices, NVIDIA has partnered with two leading ultrasound research platform providers for Clara Holoscan.

Ultrasound R&D company us4us provides a range of cost-effective ultrasound front-end research platforms. When connected to the Clara AGX Developer Kit by PCI Express, these can be used to prototype a software-defined ultrasound system. Beamforming, image processing, AI image analysis, and rendering are all done on an NVIDIA GPU. 

This provides developers with maximum flexibility in developing, testing, and modifying their processing pipelines, on a platform similar to one they would deploy in a production medical device. Direct access to raw ultrasound data from up to 1024 Tx and up to 256 Rx transducer channels opens up exciting possibilities for AI algorithm development at much higher accuracy and resolution than available from processed video output. See the NVIDIA Ultrasound NGC container for more information.

Verasonics offers the world-leading Vantage ultrasound research platform, a powerful development system offering up to 256 Tx and 256 Rx channels and a long list of features, which can be flexibly configured.  The Vantage system operates with the powerful MATLAB scripting environment and connects to the Clara AGX Developer Kit using an Ethernet connection, for maximum flexibility in data connectivity.    

Supporting analog data acquisition

Finally, for applications that require analog data, analog waveform generation, or general-purpose I/O, NVIDIA is partnering with Spectrum Instrumentation.   

Spectrum Instrumentation produces a diverse range of PCI Express data acquisition cards, giving the Clara AGX Developer Kit the ability to sample and produce multiple analog signals, interact with medical devices and sensors using simple control signals, and control power relays or other system components.

This rapidly growing interface ecosystem is currently supported on the Clara AGX Developer Kit and will be supported on the future Clara Holoscan Developer Kit. With nine sensor frontends supporting a range of modalities, the Clara Holoscan ecosystem will continue to provide flexibility and speed to sensing instruments.

Access the Clara Holoscan NGC Collection for a growing collection of AI frameworks, reference applications, and AI models built for Clara Developer Kits and medical device development, including containers for streaming video, ultrasound, metagenomics, and dermatology.

Categories
Misc

Major Updates to NVIDIA AI Software Advancing Speech, Recommenders, Inference, and More Announced at NVIDIA GTC 2022

At GTC 2022, NVIDIA announced Riva 2.0, Merlin 1.0, new features to NVIDIA Triton, and more.

At GTC 2022, NVIDIA announced major updates to its suite of NVIDIA AI software that help developers build real-time speech AI applications, create high-performing recommenders at scale, optimize inference in every application, and more. Watch the keynote from CEO Jensen Huang to learn about the latest advancements from NVIDIA.


Announcing NVIDIA Riva 2.0

Today, NVIDIA announced the general availability of Riva 2.0. Riva is an accelerated speech AI SDK that provides models, tools, and fully optimized speech recognition and text-to-speech pipelines for real-time applications.

Highlights include:

  • World-class automatic speech recognition in seven languages.
  • Neural-based text-to-speech that generates high-quality, human-like voices.
  • Domain-specific customization with the TAO Toolkit and NeMo.
  • Support for running in the cloud, on premises, and on embedded platforms.

NVIDIA also announced Riva Enterprise, which gives enterprises with large-scale deployments access to speech experts at NVIDIA. Enterprises can try Riva with guided labs on ready-to-run infrastructure in NVIDIA LaunchPad.

Add this GTC session to your calendar to learn more:


Announcing NVIDIA Merlin 1.0: Hyperscale ML and DL Recommender Systems on CPU and GPU

Today, NVIDIA announced NVIDIA Merlin 1.0, an end-to-end framework designed to accelerate recommender workflows, from data preprocessing and feature transforms to training, optimization, and deployment. With this latest release of NVIDIA Merlin, data scientists and machine learning engineers can scale faster with less code. The new capabilities offer quick iteration over features and models, as well as deployment of fully trained recommender pipelines, with feature transforms, retrieval, and ranking models served as an inference microservice.

Highlights include:

  • Merlin Models, a new library for data scientists to train and deploy recommender models in less than 50 lines of code.
  • Merlin Systems, a new library for machine learning engineers to easily deploy recommender pipelines as an ensembled Triton microservice.
  • Support for large-scale multi-GPU, multinode inference, as well as less compute-intensive workloads.

For more information about the latest release, download and try NVIDIA Merlin.

Add these GTC sessions to your calendar to learn more:


Announcing new features in NVIDIA Triton

Today, NVIDIA announced key new updates to NVIDIA Triton. Triton is open-source inference-serving software that brings fast and scalable AI to every application in production.

Highlights include:

  • Triton FIL backend: Model explainability with Shapley values and CPU optimizations for better performance.
  • Triton Management Service to simplify and automate setting up and managing a fleet of Triton instances on Kubernetes. Alpha release is targeted for the end of March.
  • Triton Model Navigator to automate preparing a trained model for production deployment with Triton.
  • Fleet Command integration for edge deployment.
  • Support for inference on AWS Inferentia and an MLflow plug-in to deploy MLflow models. 
  • Kick-start your Triton journey with immediate, short-term access in NVIDIA LaunchPad without needing to set up your own Triton environment.

You can download Triton from the NGC catalog, and access code and documentation on GitHub.
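
To give a sense of what serving a model through Triton looks like from the client side, here is a minimal Python sketch using the tritonclient HTTP API. The server address, model name, tensor names, and shape are placeholders for whatever model your Triton instance is actually serving:

import numpy as np
import tritonclient.http as httpclient

# Connect to a locally running Triton server (default HTTP port is 8000)
client = httpclient.InferenceServerClient(url="localhost:8000")

# Placeholder tensor names and shape; replace with your model's configuration
infer_input = httpclient.InferInput("INPUT0", [1, 3, 224, 224], "FP32")
infer_input.set_data_from_numpy(np.random.rand(1, 3, 224, 224).astype(np.float32))

response = client.infer(model_name="my_model", inputs=[infer_input])
print(response.as_numpy("OUTPUT0").shape)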

Add these GTC sessions to your calendar to learn more:


Announcing new updates to NVIDIA NeMo Megatron

Today NVIDIA announced the latest version of NVIDIA NeMo Megatron, a framework for training large language models (LLMs). With NeMo Megatron, research institutions and enterprises can achieve the fastest training for any LLM. It also includes the latest parallelism techniques, data preprocessing scripts, and recipes to ensure training convergence.

Highlights include:

  • Hyperparameter tuning tool that automatically creates recipes based on customers’ needs and infrastructure limitations. 
  • Reference recipes for T5 and mT5 models.
  • Cloud support for Azure.
  • Distributed data preprocessing scripts to shorten end-to-end training time.

Apply for early access.

Add these GTC sessions to your calendar to learn more:


Announcing new features in NVIDIA Maxine

Today NVIDIA announced the latest version of NVIDIA Maxine, a suite of GPU-accelerated SDKs that reinvent audio and video communications with AI, elevating standard microphones and cameras for clear online communications. Maxine provides state-of-the-art real-time AI audio, video, and augmented reality features that can be built into customizable, end-to-end deep learning pipelines.

Highlights include:

  • Audio super resolution: Improves real-time audio quality by upsampling the audio input stream from 8kHz to 16kHz and from 16kHz to 48kHz sampling rate.
  • Acoustic echo cancellation: Cancels real-time acoustic device echo from input audio stream, eliminating mismatched acoustic pairs and double-talk. With AI-based technology, more effective cancellation is achieved than with traditional digital signal processing.
  • Noise removal: Removes several common background noises using state-of-the-art AI models while preserving the speaker’s natural voice.
  • Room echo cancellation: Removes reverberations from audio using state-of-the-art AI models, restoring clarity of a speaker’s voice.

Download NVIDIA Maxine now.  

Add these GTC sessions to your calendar to learn more:

Register for GTC now to learn more about the latest updates to GPU-accelerated AI technologies.

Categories
Misc

Supercharge AI-Powered Robotics Prototyping and Edge AI Applications with the Jetson AGX Orin Developer Kit

The Jetson AGX Orin Developer Kit offers 8X the performance of the previous generation, delivering the most powerful AI supercomputer for advanced robotics and embedded and edge computing.

Availability of the NVIDIA Jetson AGX Orin Developer Kit was announced today at NVIDIA GTC. The platform is the world’s most powerful, compact, and energy-efficient AI supercomputer for advanced robotics, autonomous machines, and next-generation embedded and edge computing.

Jetson AGX Orin delivers up to 275 trillion operations per second (TOPS). It gives customers more than 8X the processing power of its predecessor Jetson AGX Xavier, while maintaining the same small form factor and pin compatibility. It features an NVIDIA Ampere Architecture GPU, Arm Cortex-A78AE CPU, next-generation deep learning and vision accelerators, high-speed interfaces, faster memory bandwidth, and multimodal sensor support to feed multiple, concurrent AI application pipelines.

The NVIDIA Jetson AGX Orin Developer Kit is perfect for prototyping advanced AI-powered robots and edge AI applications for manufacturing, logistics, retail, agriculture, healthcare, and more.

“As AI transforms manufacturing, healthcare, retail, transportation, smart cities, and other essential sectors of the economy, demand for processing continues to surge,” said Deepu Talla, vice president and general manager of embedded and edge computing at NVIDIA. “A million developers and more than 6,000 companies have already turned to Jetson. The availability of Jetson AGX Orin will supercharge the efforts of the entire industry as it builds the next generation of robotics and edge AI products.”

Jetson AGX Orin Developer Kit features:

  • Up to 275 TOPS and 8X the performance of the last generation, plus high-speed interface support for multiple sensors.
  • An NVIDIA Ampere Architecture GPU and 12-core Arm Cortex-A78AE 64-bit CPU, together with next-generation deep learning and vision accelerators.
  • High-speed I/O, 204.8GB/s of memory bandwidth, and 32GB of DRAM capable of feeding multiple concurrent AI application pipelines.

The Jetson AGX Orin Developer Kit has the computing capability of more than eight Jetson AGX Xavier systems. It integrates the latest NVIDIA GPU technology with the world’s most advanced deep learning software stack, delivering the flexibility to create sophisticated AI solutions now and well into the future. The developer kit can emulate all of the production Jetson AGX Orin and Orin NX modules, set for release in Q4 2022.

Customers using the Jetson AGX Orin Developer Kit can leverage the full NVIDIA CUDA-X accelerated computing stack. This suite includes pretrained models from the NVIDIA NGC catalog and the latest NVIDIA application frameworks and tools for application development and optimization, such as Isaac, Metropolis, TAO, and Omniverse.

These tools reduce time and cost for production-quality AI deployments. Developers can access the largest, most complex models needed to solve robotics and edge AI challenges in 3D perception, natural language understanding, multisensor fusion, and more.

Developer kit pricing and availability

The NVIDIA Jetson AGX Orin Developer Kit is available now at $1,999. Production modules will be available in Q4 2022 starting at $399.

Learn more about this new Jetson offering and attend an upcoming dedicated GTC session.

Downloadable documentation, software, and other resources are available in the Jetson Download Center.