A SmartNIC is a programmable accelerator that makes data center networking, security and storage efficient and flexible.
Learn how to upgrade your Jetson devices with the latest CUDA version at this webinar on November 9.
Spearheading research in very high-speed silicon nanophotonics/plasmonics, the European plaCMOS project has reached a successful conclusion. The 51-month project explored ferroelectric materials to improve performance and reliability. The team achieved world-leading advancements related to key components used in optical links: modulators, photodiodes, and optical switches.
Modulators using barium titanate (BTO) integrated on silicon were demonstrated, and monolithically integrated modulators with BiCMOS drivers were tested up to 187 GBaud. Germanium photodiode designs achieved 3-dB bandwidths of up to 265 GHz. Ferroelectric, nonvolatile optical BTO switches were demonstrated with 100 states in a closed-loop control scheme.
These groundbreaking results have been published in the article “A Ferroelectric Multilevel Nonvolatile Photonic Phase Shifter” in the journal Nature Photonics. High-profile articles have also appeared in Nature Electronics, Nature Materials, and IEEE/OSA journals.
The consortium brought together eight partners from industry and academia, all renowned experts in their fields: NVIDIA Mellanox, MICRAM Microelectronic GmbH (now Keysight Technologies), ETH Zurich, IHP Leibniz-Institut für innovative Mikroelektronik, Aristotle University of Thessaloniki, IBM Research GmbH, Universität des Saarlandes, and Lumiphase AG.
Funding was provided by the European Commission’s Horizon 2020 program for research and innovation, and the project was coordinated by Elad Mentovich, Head of the Advanced Development Group at NVIDIA Mellanox.
The innovative technologies developed in plaCMOS provide the foundation for the evolution of optical interconnects in data center networks for the second half of the decade. The team has furthered numerous research fields, including materials engineering and nanofabrication, plasmonic-photonic devices, high-speed analog electronics, and transceiver design.
Research on the leading-edge technologies established in plaCMOS continues in the spin-off projects, NEBULA and plasmoniAC. These new projects aim to extend the plaCMOS material platform and investigate new applications of the technology in co-packaged optics, inter-data center coherent links, and optical neuromorphic computing.
For more information, see the articles listed below.
NVIDIA announces new SDKs available in the NGC catalog, a hub of GPU-optimized deep learning, machine learning, and HPC applications. With highly performant software containers, pretrained models, industry-specific SDKs, and Jupyter notebooks available, AI developers and data scientists can simplify and reduce complexities in their end-to-end workflows.
This post provides an overview of new and updated services in the NGC catalog, along with the latest advanced SDKs to help you streamline workflows and build solutions faster.
Recent advances in large language models (LLMs) have fueled state-of-the-art performance for NLP applications, such as virtual scribes in healthcare, interactive virtual assistants, and many more.
NVIDIA NeMo Megatron, an end-to-end framework for training and deploying LLMs with up to trillions of parameters, is now available in open beta from the NGC catalog. It consists of an end-to-end workflow for automated distributed data processing; training large-scale customized GPT-3, T5, and multilingual T5 (mT5) models; and deploying models for inference at scale.
NeMo Megatron can be deployed on several cloud platforms, including Microsoft Azure, Amazon Web Services, and Oracle Cloud Infrastructure. It can also be accessed through NVIDIA DGX SuperPODs and NVIDIA DGX Foundry.
Request NeMo Megatron in open beta.
The NVIDIA NeMo LLM service provides the fastest path to customize foundation LLMs and deploy them at scale, using the NVIDIA-managed cloud API or through private and public clouds.
NVIDIA and community-built foundation models can be customized using prompt learning capabilities, which are compute-efficient techniques that embed context in user queries to enable greater accuracy in specific use cases. These techniques require just a few hundred samples to achieve high accuracy in building applications. These applications can range from text summarization and paraphrasing to story generation.
This service also provides access to the Megatron 530B model, one of the world’s largest LLMs with 530 billion parameters. Additional model checkpoints include 3B T5 and NVIDIA-trained 5B and 20B GPT-3.
Apply now for NeMo LLM early access.
The NVIDIA BioNeMo service is a unified cloud environment for end-to-end, AI-based drug discovery workflows, without the need for IT infrastructure.
Today, the BioNeMo service includes two protein models, with models for DNA, RNA, generative chemistry, and other biology and chemistry models coming soon.
ESM-1 is a protein LLM trained on 52 million protein sequences. It can help drug discovery researchers understand protein properties, such as cellular location or solubility, and secondary structures, such as alpha helices or beta sheets.
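The following is a minimal sketch of how a protein LLM of this kind produces per-sequence embeddings. It uses the open-source fair-esm package rather than the BioNeMo service API, and the example sequence is arbitrary:

```python
import torch
import esm

# Load a pretrained ESM-1b model and its tokenizer from the open-source fair-esm package.
model, alphabet = esm.pretrained.esm1b_t33_650M_UR50S()
batch_converter = alphabet.get_batch_converter()
model.eval()

# An arbitrary example sequence; in practice this would come from your protein dataset.
data = [("example_protein", "MKTVRQERLKSIVRILERSKEPVSGAQLAEELSVSRQVIVQDIAYLRSLGYNIVATPRGYVLAGG")]
_, _, tokens = batch_converter(data)

with torch.no_grad():
    results = model(tokens, repr_layers=[33])
per_residue = results["representations"][33]  # shape: [batch, seq_len, hidden]

# Mean-pool over residues (skipping the BOS/EOS tokens) to get one embedding per protein,
# which can then feed downstream predictors for properties such as solubility or localization.
embedding = per_residue[0, 1:len(data[0][1]) + 1].mean(dim=0)
print(embedding.shape)
```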
The second protein model in the BioNeMo service is OpenFold, a PyTorch-based NVIDIA-optimized reproduction of AlphaFold2 that quickly predicts the 3D structure of a protein from its primary amino acid sequence.
With the BioNeMo service, chemists, biologists, and AI drug discovery researchers can generate novel therapeutics and understand the properties and function of proteins and DNA. Ultimately, they can combine many AI models in a connected, large-scale, in silico AI workflow that requires supercomputing scale over multiple GPUs.
BioNeMo will enable end-to-end modular drug discovery to accelerate research and better understand proteins, DNA, and chemicals.
Apply now for BioNeMo early access.
A digital twin is a virtual representation (a true-to-reality simulation of physics and materials) of a real-world physical asset or system, which is continuously updated. Digital twins aren’t just for inanimate objects and people. They can replicate a fulfillment center process to test out human-robot interactions before activating certain robot functions in live environments, and the applications are as wide as the imagination.
NVIDIA Omniverse Replicator is a highly extensible framework built on the NVIDIA Omniverse platform that enables physically accurate 3D synthetic data generation to accelerate the training and accuracy of perception networks.
Technical artists, software developers, and ML engineers can now easily build custom, physically accurate, synthetic data generation pipelines in the cloud or on-premises with the Omniverse Replicator container available from the NGC catalog.
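As a rough illustration, a Replicator script typically creates a scene, registers randomizations, and attaches a writer that outputs annotated images. The snippet below follows the documented getting-started pattern, but exact function names and arguments can vary between Replicator releases, so treat it as a sketch rather than a canonical example:

```python
import omni.replicator.core as rep

with rep.new_layer():
    camera = rep.create.camera(position=(0, 0, 1000))
    render_product = rep.create.render_product(camera, (1024, 1024))

    # A labeled object whose pose is randomized every frame.
    cube = rep.create.cube(semantics=[("class", "cube")], position=(0, 0, 100))

    with rep.trigger.on_frame(num_frames=50):
        with cube:
            rep.modify.pose(
                position=rep.distribution.uniform((-200, -200, 50), (200, 200, 200)),
                rotation=rep.distribution.uniform((0, 0, 0), (360, 360, 360)),
            )

    # Write RGB images and 2D bounding boxes for training a perception network.
    writer = rep.WriterRegistry.get("BasicWriter")
    writer.initialize(output_dir="_out_sdg", rgb=True, bounding_box_2d_tight=True)
    writer.attach([render_product])
```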
Download the Omniverse Replicator container for self-service cloud deployment.
NVIDIA Modulus is a neural network AI framework that enables you to create customizable training pipelines for digital twins, climate models, and physics-based modeling and simulation.
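To make the idea concrete, here is a minimal physics-informed training loop written in plain PyTorch. It is not the Modulus API, only an illustration of the kind of workflow Modulus packages and scales: the network is trained against a PDE residual plus boundary conditions, here for u''(x) = -sin(x) on [0, pi] with u(0) = u(pi) = 0.

```python
import torch

# Small fully connected network approximating u(x).
net = torch.nn.Sequential(torch.nn.Linear(1, 64), torch.nn.Tanh(), torch.nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    # Sample interior collocation points and compute the PDE residual u'' + sin(x).
    x = torch.rand(128, 1) * torch.pi
    x.requires_grad_(True)
    u = net(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    pde_loss = ((d2u + torch.sin(x)) ** 2).mean()

    # Enforce the boundary conditions u(0) = u(pi) = 0.
    xb = torch.tensor([[0.0], [torch.pi]])
    bc_loss = (net(xb) ** 2).mean()

    loss = pde_loss + bc_loss
    opt.zero_grad()
    loss.backward()
    opt.step()

# The trained net(x) should approximate the exact solution u(x) = sin(x).
```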
Modulus is integrated with NVIDIA Omniverse so that you can visualize the outputs of Modulus-trained models. This interface enables interactive exploration of design variables and parameters for inferring new system behavior and visualizing it in near real time.
The latest release (v22.09) includes key enhancements to increase composition flexibility for neural operator architectures, features to improve training convergence and performance, and most importantly, significant improvements to the user experience and documentation.
Download the latest version of Modulus.
The most popular deep learning frameworks for training and inference are updated monthly. Pull the latest version (v22.09):
We are constantly adding state-of-the-art pretrained models for a variety of speech and vision tasks. The following pretrained models are new on NGC:
Explore more pretrained models for common AI tasks on the NGC Models page.
One of the key contributors to flash flooding is the blockage of cross-drainage hydraulic structures, such as culverts, by unwanted, flood-borne debris.
The accumulation and interaction of debris with culverts often result in reduced hydraulic capacity, diversion of upstream flows, and structural failure. For example, the Newcastle, Australia floods in 2007; the Wollongong, Australia floods in 1998; and the Pentre, United Kingdom floods in 2021 are just a few instances where blockages were reported as a primary cause of cross-drainage hydraulic structure failure.
In this post, we describe our technique for building a diverse visual dataset for computer vision model training, including examples of synthetic images. We break down each component of our solution and provide insights on future research directions.
Non-linear debris accumulation, the unavailability of real-time data, and complex hydrodynamics make a conventional numerical modeling approach unsuitable for addressing the problem. In this context, post-flood visual information has been used to develop blockage policies based on several assumptions, which many argue are not truly representative of blockage.
This suggests the need to better understand and explore the blockage issue from a technology perspective to aid flood management officials and policymakers.
To help address the blockage problem, StopBlock was initiated as a part of SMART Stormwater Management. Overall, this project involved collaboration between city councils in the Illawarra (Wollongong, Shellharbour, and Kiama) and Shoalhaven regions, Lendlease, and the University of Wollongong’s SMART Infrastructure Facility.
StopBlock aims to assess and monitor the visual blockage at culverts in real time using the latest technologies:
In addition, we built and deployed an artificial intelligence of things (AIoT) solution using NVIDIA edge computing, the latest computer vision detection and classification models, a CCTV camera, and a 4G module. The solution detected the visual blockage status (blocked, partially blocked, or clear) at three culvert sites within the Illawarra region.
Training computer vision CNN models requires numerous images related to the intended task. The problem of culvert blockage detection has not been addressed from this perspective before, and no image dataset exists for this purpose.
We developed a new training database consisting of diverse image data related to culvert blockage. These images showed varying culvert types, debris types, camera angles, scaling, and lighting conditions.
Limited data on real culvert blockages was available through city council records, so we adopted a combination of real, lab-simulated, and synthetic visual data.
We collected real images of culverts (blocked and clear) from multiple sources:
The collected images represent great diversity in terms of culvert types, debris types, illumination conditions, camera viewpoints, scale, resolution, and even backgrounds. The Images of Culvert Openings and Blockages (ICOB) dataset consisted of 929 images in total.
We collected simulated images from scaled laboratory experiments to supplement the existing visual dataset, as not enough real images were available.
A thorough hydraulics laboratory investigation was performed where a series of experiments used scaled physical models of culverts. Blockage scenarios used scaled debris (urban and vegetative) under various flooding conditions.
The images represented diversity in terms of culvert types (single circular, double circular, single box, or double box), blockage types (urban, vegetative, or mixed), simulated lighting conditions, camera viewpoints (two cameras), and flooding conditions (inlet discharge levels). However, the dataset was limited in terms of reflections, clear water, identical background, and identical scaling.
In total, we collected 1,630 images from these experiments to establish the VHD dataset.
We generated synthetic images of culverts (SIC) using a three-dimensional computer application based on the Unity gaming engine with the goal of enhancing the datasets for training.
The application is specifically designed to simulate culvert blockage scenarios and can generate virtually countless instances of blocked culverts with any possible blockage situation that you can think of. You can also alter culvert types, water levels, debris types, camera viewpoints, time of the day, and scaling.
The app design enables you to select scene features from dropdown menus and drag debris objects from a library to place anywhere in the scene with any possible orientation. You can write code using parameters to recreate multiple scenarios and batch capture the images with corresponding labels, to aid the training process.
Some highlighted limitations included unrealistic effects and animations and a single natural background. Figure 3 shows samples from the SIC dataset.
We developed an AIoT solution using edge computing hardware, computer vision models, and sensors for the real-time visual blockage monitoring at culverts:
More specifically, in terms of software, a two-stage detection-classification pipeline is adopted (Figure 4).
In the first stage, a computer vision object detection model (YOLOv4) is used to detect the culvert openings. The detected openings are cropped from the original image and are processed for the classification stage. If no culvert opening is detected, an alert is issued to suggest that the culvert might be submerged.
In the second stage, a CNN classification model (such as ResNet-50) is used to classify the cropped culvert openings into one of three blockage classes (blocked, partially blocked, or clear). The blockage-related information is then transmitted to a web dashboard for flood management officials to facilitate the decision-making process.
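The sketch below shows the shape of this two-stage pipeline in Python. The model files, input sizes, and the detector’s output format are illustrative assumptions, not the deployed artifacts:

```python
import cv2
import numpy as np
import tensorflow as tf

BLOCKAGE_CLASSES = ["blocked", "partially_blocked", "clear"]

# Hypothetical exported models; the real pipeline uses TAO-trained YOLOv4 and ResNet-50.
detector = tf.saved_model.load("culvert_opening_detector")
classifier = tf.keras.models.load_model("blockage_resnet50.h5")

def assess_culvert(frame_bgr):
    """Stage 1: detect culvert openings. Stage 2: classify each cropped opening."""
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    boxes = detector(np.expand_dims(rgb, 0))  # assumed to return [N, 4] pixel boxes
    if len(boxes) == 0:
        return "alert: no opening detected, culvert may be submerged"

    statuses = []
    for x1, y1, x2, y2 in np.array(boxes, dtype=int):
        crop = cv2.resize(rgb[y1:y2, x1:x2], (224, 224)) / 255.0
        probs = classifier.predict(crop[None, ...], verbose=0)[0]
        statuses.append(BLOCKAGE_CLASSES[int(np.argmax(probs))])
    return statuses  # for example, ["partially_blocked"] for a single-opening culvert
```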
We trained the YOLOv4 and ResNet-50 models used for detection and classification, respectively, using the NVIDIA TAO platform powered by Python, TensorFlow, and Keras. We used a Linux machine equipped with the NVIDIA A100 GPU for training the models using images from the ICOB, VHD, and SIC datasets.
Here’s the four-stage approach adopted for development:
Relating to software performance, the culvert opening detection model achieved a validation mAP of 0.90, while the blockage classification model achieved a validation accuracy of 0.88.
We developed an end-to-end video analytics pipeline on the NVIDIA DeepStream 6 SDK, using the trained computer vision models to run inference on the NVIDIA Jetson TX2-powered edge computer. Using these detection and classification models, the DeepStream pipeline achieved 24.8 FPS on the Jetson TX2 hardware.
We built the smart device for culvert blockage monitoring using a CCTV camera, an NVIDIA Jetson TX2 edge computer, and a 4G dongle (Figure 5). We optimized the hardware for power consumption and computational time for real-time use. Powered by a solar panel, the hardware consumes an average of only 9.1 W. The AIoT solution is also configured to transmit the blockage metadata to the web dashboard every hour.
To address privacy concerns, the solution avoids storing any images on board or in the cloud. Instead, it only processes the images and transmits the blockage metadata. Figure 5 shows the installation of the AIoT hardware at one of the remote sites to monitor culvert visual blockage.
The potential of computer vision can be further explored to establish a better understanding of visual blockage by extracting blockage-related information:
In the context of flood management decision making, knowing the blockage status of a given culvert is not always enough to make a maintenance-related decision. Going one step further and estimating the percentage visual blockage at a given culvert assists flood management officials in prioritizing the culverts with high visual blockage.
One potential solution is a segmentation-classification pipeline that segments the visible openings from an image and classifies the segmented masks into one of four percentage visual blockage classes. Figure 6 shows the conceptual block diagram for the percentage visual blockage estimation.
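A toy illustration of the idea follows. The four class boundaries and the use of a per-site reference opening area are assumptions for the example, not the deployed method:

```python
import numpy as np

# Assumed class boundaries for the example.
CLASSES = [(0, 25, "0-25%"), (25, 50, "25-50%"), (50, 75, "50-75%"), (75, 100, "75-100%")]

def percentage_blockage(visible_opening_mask: np.ndarray, reference_opening_area: float):
    """visible_opening_mask: binary mask of the opening area still visible in the image.
    reference_opening_area: pixel area of the same opening, from the same view, when clear."""
    visible = float(visible_opening_mask.sum())
    pct = min(100.0, max(0.0, 100.0 * (1.0 - visible / reference_opening_area)))
    label = next(name for lo, hi, name in CLASSES if lo <= pct <= hi)
    return pct, label

# Example: only 4,000 of 10,000 reference pixels remain visible, so roughly 60% blocked.
mask = np.zeros((100, 100), dtype=np.uint8)
mask[:40, :] = 1
print(percentage_blockage(mask, reference_opening_area=10_000.0))  # (60.0, "50-75%")
```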
The type of flood-borne debris interacting and accumulating at the culvert can result in distinct flooding impacts. Usually, vegetative debris is considered less concerning because of its porous nature in comparison to compact, urban debris.
Automatic detection of debris type is another crucial aspect to be explored.
As a simple solution, a CNN classification model may be used to assist manual culvert inspections while keeping flood management officials in the loop. Given the complexity of the problem, preliminary analysis suggests that a CNN classification model alone cannot fully automate the process. However, a partially automated framework can be developed to facilitate it.
Figure 7 shows the concept of such a framework based on the classification probability of the trained model. If the classification probability for a given image is less than a given threshold, it can be flagged to flood management officials for cross-validation.
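A minimal sketch of that flagging logic is shown below; the threshold value and the field names are illustrative assumptions:

```python
CONFIDENCE_THRESHOLD = 0.8  # assumed value; tuned per deployment in practice

def route_prediction(image_id: str, probs: dict) -> dict:
    """Accept confident predictions automatically; flag uncertain ones for an official to review."""
    label, confidence = max(probs.items(), key=lambda kv: kv[1])
    if confidence < CONFIDENCE_THRESHOLD:
        return {"image": image_id, "status": "needs_review", "suggested": label, "confidence": confidence}
    return {"image": image_id, "status": label, "confidence": confidence}

# Example: a low-confidence call is escalated for cross-validation rather than auto-accepted.
print(route_prediction("site3_0830.jpg", {"blocked": 0.46, "partially_blocked": 0.39, "clear": 0.15}))
```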
We presented an edge-computing solution for visual blockage detection at culverts to assist timely maintenance and help avoid blockage-related flooding events.
A detection-classification computer vision pipeline was developed and deployed on NVIDIA edge-computing hardware to report the blockage status of a culvert as “clear,” “blocked,” or “partially blocked.” To facilitate the training of computer vision models for this unique problem domain, we used simulated and synthetically generated images related to culvert visual blockage.
There is tremendous scope for extending this solution to obtain improved and additional visual blockage information. Estimating percentage visual blockage, detecting flood-borne debris, and developing a partially automated visual blockage classification framework are a few potential enhancements that can be made within the existing solution.
Learn how to leverage the latest NVIDIA RTX technology in Unity Engine and connect with experts during a live Q&A at this webinar on November 16.
Editor’s note: This is the first in a series of blogs on researchers advancing science in the expanding universe of high performance computing. A perpetual shower of random raindrops falls inside a three-foot metal ring Dale Durran erected outside his front door (shown above). It’s a symbol of his passion for finding order in the…
The post Stormy Weather? Scientist Sharpens Forecasts With AI appeared first on NVIDIA Blog.
Everyone agrees that open solutions are the best solutions, but there are few truly open operating systems for Ethernet switches. At NVIDIA, we embraced open source for our Ethernet switches. Besides supporting SONiC, we have contributed many innovations to open-source community projects.
This post was originally published on the Mellanox blog in June 2018 but has been updated.
Microsoft runs one of the largest clouds in the world with Azure. In building and deploying Azure, they have gained a lot of insight into managing a global, high-performance, highly available, and secure network.
The network operating system (NOS) Microsoft uses for Azure, SONiC (Software for Open Networking in the Cloud), is built on open source. Their experience with hundreds of data centers and tens of thousands of switches has educated them about what is required:
SONiC, a breakthrough for network switch operations and management, addresses these requirements. Microsoft open-sourced this innovation to the community, making it available on their SONiC GitHub repository.
SONiC is a uniquely extensible platform with a large and growing ecosystem of hardware and software partners that offers multiple switching platforms and various software components.
The SONiC system architecture comprises multiple modules that interact with each other through a centralized and scalable infrastructure. This infrastructure relies on a Redis database engine that provides data persistence, replication, and multi-process communication among all SONiC subsystems.
The Redis infrastructure relies on a publisher/subscriber messaging paradigm, so applications can subscribe only to the data views that they require and avoid implementation details irrelevant to their functionality.
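To make the pattern concrete, here is a small redis-py sketch of publish/subscribe over a table-style key. It illustrates the messaging paradigm only; SONiC’s own modules use their internal libraries and schemas, and the table and field names below are made up for the example:

```python
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

# Subscriber: an application registers interest only in the table it cares about.
pubsub = r.pubsub()
pubsub.subscribe("PORT_TABLE")

# Publisher: another process writes state into the database and notifies subscribers.
r.hset("PORT_TABLE:Ethernet0", mapping={"admin_status": "up", "speed": "100000"})
r.publish("PORT_TABLE", "Ethernet0")

# The subscriber reacts only to updates on its channel and reads the changed entry.
for message in pubsub.listen():
    if message["type"] == "message":
        key = f"PORT_TABLE:{message['data']}"
        print(key, r.hgetall(key))
        break
```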
For more information about the SONiC architecture, see Architecture in the SONiC wiki.
NVIDIA Spectrum switches support a variety of Layer 2 and Layer 3 networking connectivity and management features. Table 1 shows the features that SONiC currently supports.
| L3 | L2 | Management |
| --- | --- | --- |
| BGP | LAG | SNMP |
| ECMP | LLDP | Syslog |
| DHCP Relay | ECN | NTP |
| IPv6/4 | PFC | CoPP |
| | WRED | TACACS+ |
| | CoS | Sysdump |
| | Mirroring | |
| | ACL | |
When choosing a switch to run SONiC on, you should look at two main factors:
The NVIDIA Open Ethernet Switch portfolio is entirely based on the Spectrum ASIC, providing the lowest latency for 25G/100G in the market, zero packet loss, and a fully shared buffer. It is the ideal combination for cloud networking demands.
SONiC works with Spectrum ASICs through the Switch Abstraction Interface (SAI), an open-source ASIC abstraction layer co-invented by NVIDIA. This openness also means that any Linux distribution can run on a Spectrum switch.
NVIDIA is the only switch silicon vendor that has contributed their ASIC driver directly to the Linux kernel, enabling support for a mix of SONiC and any standard Linux distributions, like Red Hat or Ubuntu, to run directly on the switch.
NVIDIA is the only company participating in all levels of the SONiC development community. We are one of the first companies to develop and adopt SAI. SONiC fully supports all Spectrum family switches and can be deployed on any switch in our Ethernet portfolio. We are also a major and active contributor to the SONiC OS feature set.
All NVIDIA networking platforms support port splitting through the SONiC OS and are currently the only platforms to support this feature. Spectrum switches also deliver exceptional network performance compared to commodity silicon-based switches in real-life mixed frame size, “noisy neighbor,” and microburst absorption scenarios.
For more information about the fundamental differences between NVIDIA Spectrum and Broadcom Tomahawk-based switches, and our unmatched ASIC performance, see Tolly Performance Evaluation: NVIDIA Spectrum-3 Ethernet Switch.
NVIDIA Spectrum switch systems are an ideal spine and top-of-rack solution, allowing flexibility with port speeds ranging from 10 Gb/s to 100 Gb/s per port and port density that enables full-rack connectivity to every server at any speed. These ONIE-based switch platforms support multiple operating systems, including SONiC, and leverage the advantages of open network disaggregation and the NVIDIA Spectrum ASIC capabilities.
Spectrum adaptive routing technology supports various network topologies. For typical topologies such as CLOS (or leaf/spine), the distance of the multiple paths to a given destination is the same. Therefore, the switch transmits the packets through the least congested port.
In other topologies where distances vary between paths, the switch prefers to send the traffic over the shortest path. If congestion occurs on the shortest path, then the least-congested alternative paths are selected. You can build a high-performing CLOS data center using the NVIDIA switches as your building blocks.
Similarly, Border Gateway Protocol (BGP) is a routing protocol responsible for looking at all the available paths that data could travel and picking the best route. BGP enables communication to happen quickly and efficiently.
Spectrum switches enable PODs. A POD is a network, storage, and compute unit that works together to deliver networking services. A POD is a repeatable design pattern that provides scalable and easier-to-manage data centers.
Finally, the Spectrum family enables a set of advanced network functions that future-proof the switch with the flexibility to handle evolving networking technologies. This includes new protocols that may be developed in the future, enabling custom applications, advanced telemetry, and new tunneling/overlay capabilities. Spectrum combines a programmable, flexible, and massively parallel packet processing pipeline with a fully shared and stateful forwarding database. Spectrum also features What Just Happened (WJH), the world’s most useful switch telemetry technology.
For more information, see the following resources:
Humanity has seen major scientific breakthroughs directly related to discoveries that do not share the glamor of the breakthrough they enabled.
Sir Alexander Fleming’s penicillin gave rise to effective treatments for infections like pneumonia, but penicillin’s importance outshines a technology known as the Petri dish, invented by a German physician. It was in a Petri dish that penicillin was found when Fleming returned from his vacation.
Naturally, the tools and components that enable scientific advancement and technological progress are not as celebrated as the new technology itself, but they are just as important to the discovery.
Today, in a world full of open-source projects, pretrained machine learning models, and affordable computing available at scale, developers and scientists have more resources to combine and create.
Like the Petri dish that enabled penicillin, developers and scientists can use existing components to generate new discoveries of great social impact in the healthcare industry.
NVIDIA is hosting a free Healthcare and Life Sciences Developer Summit on November 10, 2022, with key webcasts for developers, startups, and industry leaders. The sessions show how NVIDIA technologies are supporting the future of medicine.
The virtual summit offers a full day of technical talks to reach developers and technical leaders in the EMEA region. Led by NVIDIA healthcare team members and startups like Relation Therapeutics, ImFusion, Rhino Health, and Quantib, the day features talks about high-performance computing, large language models, genomics, and medical imaging.
NVIDIA has been nurturing more than 12,000 startups globally through Inception, its virtual accelerator program, with nearly 2,000 of them in the healthcare industry.
At the latest GTC, success stories from Inception members were shared in the Accelerating Healthcare & Life Science Innovation with Makers and Breakers session. Startups such as Activ Surgical, Instadeep, Haply Robotics, DNAnexus, and Quantib talked about their experiences and recent achievements in medical imaging, medical instruments, and biopharma.
Renee Yao, Global Healthcare AI Startups Lead at NVIDIA, has seen several startups achieve success by leveraging NVIDIA technologies, enabling those in the healthcare and life sciences industry to build faster and at lower cost.
A lot of innovation is happening in the healthcare space, particularly in work that enhances precision health with machine learning. Yao advises startups to consider what scientific and software communities have already built before starting development from scratch.
David Ruau, NVIDIA Head of Strategic Alliances in Drug Discovery, talks about the upcoming breakthroughs in the biopharma domain, not only through large and traditional companies but also through startups.
“The pace of innovation that AI has applied to drug discovery is still accelerating,” says Ruau. He believes that technologies such as transformers, geometric deep learning, diffusion models, and many other approaches applied to all the steps of the drug discovery process are contributing to giant leaps in innovation.
Ruau explains, “startups must be nimble and agile, as speed is key in this domain.” He points out the importance of funding and lowering the entry requirements for innovating with machine learning. Software developers and scientists can use NVIDIA open-source frameworks in the cloud, on-premises, or in a hybrid approach. This enables faster results by any healthcare technology startup, regardless of their funding stage.
NVIDIA technologies are used by organizations all over the world to accelerate their research and fuel new discoveries.
NVIDIA recently announced BioNeMo, a transformer-based framework and cloud service. It can process SMILES strings and protein sequences to predict structures and accelerate the discovery of druggable targets. BioNeMo is built on top of NVIDIA NeMo Megatron, the framework for training large transformer-based models, and is already optimized for GPUs.
Another major announcement came from Project MONAI, a PyTorch-based, open-source framework for deep learning in healthcare imaging. The latest version, 1.0, includes new and enhanced features such as preprocessing for multidimensional medical imaging data, automated segmentation, GPU data parallelism, and more.
Useful for creating state-of-the-art, end-to-end training workflows for healthcare imaging, MONAI provides researchers with an optimized and standardized way to create and evaluate deep learning models.
Following these two announcements, select startups are presenting their research at the Healthcare and Life Sciences Developer Summit, open to all.
Relation Therapeutics, a drug discovery startup from London, is using transformer-based machine learning and ActiveGraph technology to better understand the biology of diseases. They aim to discover and develop new therapeutics, help humanity understand why patients become sick, and ultimately cure disease.
Their technology can understand combinatorial relationships between genes, proteins, and drugs. It involves calculations that require efficient use of computational resources and an exceptional interdisciplinary team of researchers, machine learning scientists, data scientists, and engineers to devise these models.
Relation Therapeutics is currently training a transformer-based DNA-to-gene-expression model on 80 NVIDIA A100 GPUs hosted on Cambridge-1, the UK’s most powerful supercomputer, which delivers 400 petaflops of AI performance. The workflow leverages NCCL and cuDNN for GPU-optimized training.
By using existing technologies, such as NVIDIA GPUs and an optimized software stack, Relation Therapeutics can create novel approaches that push the boundaries of what is known about biology. They are becoming one of the best modern-day examples of how to leverage existing technologies to create new ones and positively impact human health.
Our society is already witnessing historical breakthroughs powered by computational methods, and startups play a crucial role in achieving such advancements.
A new era of scientific discovery and computational biology has begun, and NVIDIA technologies are the reliable ground on which innovation can be built, speeding up scientific development and serving as a catalyst for novel technologies, much like the key tools that enabled the discovery of penicillin.
NVIDIA supports thousands of individuals and organizations across all industries through its optimized computing stack, enabling major technological transformation. Register now for the Healthcare and Life Sciences Developer Summit on November 10, 2022, to see how key startups are defining the future of medicine with the power of AI.
A digital twin is a virtual representation synchronized with physical things, people, or processes.