
FlavorGraph Serves Up Food Pairings with AI, Molecular Science

A new ingredient mapping tool by Sony AI and Korea University uses molecular science and recipe data to predict how two ingredients will pair together.

It’s not just gourmet chefs who can discover new flavor combinations — a new ingredient mapping tool by Sony AI and Korea University uses molecular science and recipe data to predict how two ingredients will pair together and suggest new mash-ups.

Dubbed FlavorGraph, the graph embedding model was trained on a million recipes and chemical structure data from more than 1,500 flavor molecules. The researchers used PyTorch, CUDA and an NVIDIA TITAN GPU to train and test their large-scale food graph.

Researchers have previously used molecular science to explain classic flavor pairings such as garlic and ginger, cheese and tomato, or pork and apple — determining that ingredients with common dominant flavor molecules combine well. In the FlavorGraph database, flavor molecule information was grouped into profiles such as bitter, fruity, and sweet. 

But other ingredient pairings have different chemical makeups, prompting the team to incorporate recipes into the database as well, giving the model insight into ways flavors have been combined in the past.

FlavorGraph is a large-scale graph network of food and chemical compound nodes.

“The outcome is pairing suggestions that achieve better results than ever before,” wrote Korea University researcher Donghyeon Park and Fred Gifford, strategy and partnerships manager at Sony. “These suggestions can be used to predict relationships between compounds and foods, hinting at new and exciting recipe techniques and driving new perspectives on food science in general.” 

Featured in Scientific Reports, FlavorGraph shows the connections between flavor profiles and the underlying chemical compounds in specific foods. It’s based on the metapath2vec model and outperforms other baseline methods for food clustering.
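The trained food representations released on GitHub are ordinary embedding vectors, so pairing candidates can be explored with simple nearest-neighbor queries. The sketch below is a hypothetical illustration rather than code from the FlavorGraph repository: it assumes the embeddings have been exported to a plain CSV (one ingredient per row) and ranks other ingredients by cosine similarity.

```python
import numpy as np
import pandas as pd

# Hypothetical export: first column is the ingredient name, the remaining
# columns are the learned embedding dimensions.
emb = pd.read_csv("flavorgraph_embeddings.csv", index_col=0)
names = emb.index.to_numpy()
vectors = emb.to_numpy(dtype=np.float32)
vectors /= np.linalg.norm(vectors, axis=1, keepdims=True)  # unit-normalize rows

def suggest_pairings(ingredient, top_k=5):
    """Rank other ingredients by cosine similarity to the query embedding."""
    query = vectors[np.where(names == ingredient)[0][0]]
    scores = vectors @ query
    ranked = np.argsort(-scores)
    return [(names[i], float(scores[i])) for i in ranked if names[i] != ingredient][:top_k]

print(suggest_pairings("garlic"))
```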

The researchers hope the project will lead to the discovery of new recipes, more interesting flavor combinations, and potential substitutes for unhealthy or unsustainable ingredients. 

“We hope that projects like this will continue to complement both the complex ingredient systems fossilized over time through cultural evolution, as well as the electric ingenuity of modern innovators and chefs,” the team wrote. 

Read the full paper in Scientific Reports, and find the data and trained food representations on GitHub.

Read more from Sony >> 


Cultivating AI: AgTech Industry Taps NVIDIA GPUs to Protect the Planet

What began as a budding academic movement into farm AI projects has now blossomed into a field of startups creating agriculture technology with a positive social impact for Earth. Whether it’s the threat to honey bees worldwide from varroa mites, devastation to citrus markets from citrus greening, or contamination of groundwater caused by agrochemicals — Read article >



Mooning Over Selene: NVIDIA’s Julie Bernauer Talks Setting Up One of World’s Fastest Supercomputers

Though admittedly prone to breaking kitchen appliances like ovens and microwaves, Julie Bernauer — senior solutions architect for machine learning and deep learning at NVIDIA — led the small team that successfully built Selene, the world’s fifth-fastest supercomputer. Adding to an already impressive feat, Bernauer’s team brought up Selene as the world went into lockdown Read article >



GFN Thursday Drops the Hammer with ‘Vermintide 2 Chaos Wastes’ Free Expansion, ‘Immortals Fenyx Rising The Lost Gods’ DLC

GFN Thursday is our ongoing commitment to bringing great PC games and service updates to our members each week. Every Thursday, we share updates on what’s new in the cloud — games, exclusive features, and news on GeForce NOW. This week, it includes the latest updates for two popular games: Fatshark’s free expansion Warhammer: Vermintide Read article >



Green for Good: How We’re Supporting Sustainability Efforts in India

When a community embraces sustainability, it can reap multiple benefits: gainful employment for vulnerable populations, more resilient local ecosystems and a cleaner environment. This Earth Day, we’re announcing our four latest corporate social responsibility investments in India, home to more than 2,700 NVIDIANs. These initiatives are part of our multi-year efforts in the country, which Read article >



Hello World!!! I built a Course on Udemy where I teach you how to build 8 reinforcement learning agents in environments like Mario, Flappy Bird, Stocks and Much More!! You only need to know Python! (FREE FOR LIMITED TIME)

In this course I will teach you 8 agents including:

· Space Invaders Agent using Keras-RL

· Autonomous Taxi using Q-Learning built from scratch

· Flappy Bird Agent using Deep Q Network that we build from scratch

· Mario Agent using Deep Q Network that we build from scratch

· A reinforcement Learning S&P 500 stock trading agent that is rewarded with making money off the stock market!

· Another Reinforcement Learning Stock Trading Agent using 89 different Technical indicators (you can pair these to make a lot of money off the stock market 😉)

· 3 Car agents that learn to maneuver roundabouts, parking lots, & merge onto a highway

The only thing you need to know is Python! If you are interested in cutting edge technology, then this is the course for you! Check it out!

https://www.udemy.com/course/practical-reinforcement-learning/?couponCode=150336130778173C0A71

submitted by /u/samboylansajous


I’m currently self studying machine learning using the book "Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow" (2nd Edition) by Aurélien Géron. Is anybody else studying this book and interested in being part of a remote study group to complete it?

submitted by /u/Letuku

NVIDIA Sets AI Inference Records, Introduces A30 and A10 GPUs for Enterprise Servers

NVIDIA today announced that its AI inference platform, newly expanded with NVIDIA® A30 and A10 GPUs for mainstream servers, has achieved record-setting performance across every category on the latest release of MLPerf.


The Future’s So Bright: NVIDIA DRIVE Shines at Auto Shanghai

NVIDIA DRIVE-powered cars electrified the atmosphere this week at Auto Shanghai. The global auto show is the oldest in China and has become the stage to debut the latest vehicles. And this year, automakers, suppliers and startups developing on NVIDIA DRIVE brought a new energy to the event with a wave of intelligent electric vehicles Read article >



Extending NVIDIA Performance Leadership with MLPerf Inference 1.0 Results


Inference is where we interact with AI. Chat bots, digital assistants, recommendation engines, fraud protection services, and other applications that you use every day—all are powered by AI. Those deployed applications use inference to get you the information that you need.

Given the wide array of usages for AI inference, evaluating performance poses numerous challenges for developers and infrastructure managers. Industry-standard benchmarks have long played a critical role in that evaluation process. For AI inference on data center, edge, and mobile platforms, MLPerf Inference 1.0 measures performance across computer vision, medical imaging, natural language, and recommender systems. These benchmarks were developed by a consortium of AI industry leaders. They provide the most comprehensive set of performance data available today, both for AI training and inference.

Version 1.0 of MLPerf Inference introduces some incremental but important new features. These include tests that measure power and energy efficiency, and an increase in test runtime from 1 minute to 10 minutes to better exercise the unit under test.

Performing well across this benchmark’s wide array of tests takes a full-stack platform with great ecosystem support, for both frameworks and networks. NVIDIA was the only company to make submissions for all data center and edge tests and deliver the best performance on all. One of the great byproducts of this work is that many of these optimizations found their way into inference developer tools like TensorRT and Triton.

In this post, we step through some of these optimizations, including the use of Triton Inference Server and the A100 Multi-Instance GPU (MIG) feature.

MLPerf 1.0 results

This round of MLPerf Inference saw the debut of two new GPUs from NVIDIA: A10 and A30. These mainstream GPUs join the flagship NVIDIA A100 GPU, and each has a particular role to play in the portfolio. A10 is designed for AI and visual computing and A30 is designed for AI and compute workloads. The following chart shows the Data Center scenario submissions:

Figure 1. MLPerf Inference 1.0 Data Center scenario performance.

In the Edge scenario, NVIDIA again delivered leadership performance across the board.

Figure 2. MLPerf Inference 1.0 Edge scenario performance.

Optimizations behind the results

AI training generally requires precisions like FP32, TF32, or mixed precision (FP16/FP32). However, inference can often use reduced precision to achieve better performance and lower latency while preserving the required accuracy. Nearly all NVIDIA submissions used INT8 precision. In the case of the RNN-T speech-to-text model, we converted the encoder LSTM cell to INT8; previously, in v0.7, it ran in FP16. We also made several other optimizations to make the best use of the IMMA (INT8 using Tensor Cores) instructions across different workloads.
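For readers who want to try reduced precision outside the MLPerf harness, here is a minimal sketch of building a TensorRT engine from an ONNX model with INT8 enabled (and FP16 as a fallback). The model path and calibrator are placeholders; the actual submissions involve far more workload-specific tuning than shown here.

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def build_int8_engine(onnx_path, calibrator):
    """Build a TensorRT engine that runs layers in INT8 where possible."""
    builder = trt.Builder(TRT_LOGGER)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, TRT_LOGGER)
    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            raise RuntimeError(parser.get_error(0))

    config = builder.create_builder_config()
    config.max_workspace_size = 1 << 30        # 1 GiB of build workspace
    config.set_flag(trt.BuilderFlag.INT8)      # enable INT8 kernels
    config.set_flag(trt.BuilderFlag.FP16)      # allow FP16 fallback for layers
    config.int8_calibrator = calibrator        # supplies calibration batches
    return builder.build_engine(network, config)
```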

Layer fusion is another optimization technique in which the math operations from multiple network layers are combined to reduce computational load while producing the same or better result. We used layer fusion to improve performance on the 3D-UNet medical imaging workload, combining deconvolution and concatenation operations into a single kernel.
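The fused operation itself ships as a custom GPU kernel, but the idea behind it can be sketched in a few lines of PyTorch: rather than running a transposed convolution and then a separate torch.cat pass, the deconvolution output is written directly into its slot in a preallocated output buffer. The shapes below are illustrative only, not the real 3D-UNet dimensions.

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 32, 8, 8, 8)          # decoder features to upsample
skip = torch.randn(1, 16, 16, 16, 16)    # skip connection from the encoder
w = torch.randn(32, 16, 2, 2, 2)         # transposed-convolution weights

# Unfused: deconvolution, then a separate concatenation (extra memory traffic).
up = F.conv_transpose3d(x, w, stride=2)
ref = torch.cat([skip, up], dim=1)

# Fused idea: preallocate the concatenated buffer and write each piece into it,
# so no standalone concatenation pass over the data is needed.
out = torch.empty(1, 32, 16, 16, 16)
out[:, :16] = skip
out[:, 16:] = F.conv_transpose3d(x, w, stride=2)

assert torch.allclose(out, ref)
```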

Triton

As with the previous round, we made many submissions using Triton Inference Server, which simplifies deployment of AI models at scale in production. This open-source inference serving software lets you deploy trained AI models from any framework on any GPU- or CPU-based infrastructure: cloud, data center, or edge. You can choose from a variety of inference backends, including TensorRT for NVIDIA GPUs and OpenVINO for Intel CPUs.
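For context, a minimal Triton client request looks like the sketch below. It assumes a hypothetical image-classification model already loaded by the server under the name "resnet50" with tensors named "input__0" and "output__0"; the model and tensor names depend entirely on your deployment.

```python
import numpy as np
import tritonclient.http as httpclient

# Connect to a Triton server on its default HTTP port.
client = httpclient.InferenceServerClient(url="localhost:8000")

# Names, shapes, and dtypes must match the model's configuration on the server.
batch = np.random.rand(1, 3, 224, 224).astype(np.float32)
inputs = [httpclient.InferInput("input__0", list(batch.shape), "FP32")]
inputs[0].set_data_from_numpy(batch)
outputs = [httpclient.InferRequestedOutput("output__0")]

result = client.infer(model_name="resnet50", inputs=inputs, outputs=outputs)
print(result.as_numpy("output__0").shape)
```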

In this round, the team made several optimizations that are available from the triton-inference-server GitHub repo. These include a multithreaded gather kernel that prepares the input for inference, as well as the use of pinned CPU memory for I/O buffers to speed data movement to the GPU. Using Triton’s integrated auto-batching support, the Triton-based GPU submissions achieved an average of 95% of the performance of the comparable server scenario submissions that used custom auto-batching code.

Another great Triton feature is that it can run CPU-based inference. To demonstrate those capabilities, we made several CPU-only submissions using Triton. On data center submissions in the offline and server scenarios, Triton’s CPU submissions achieved an average of 99% of the performance of the comparable CPU submissions. You can use the same inference serving software to host both GPU- and CPU-based applications. When you transition applications from CPU to GPU, you can stay on the same software platform, with only a few changes to a config file.
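As a rough sketch of what that config change looks like, here is a hypothetical Triton model configuration (config.pbtxt); the model name and instance counts are placeholders, and the CPU variant also swaps in a CPU-capable backend such as OpenVINO.

```
# config.pbtxt for a hypothetical model served on a GPU
name: "my_model"
backend: "tensorrt"                 # TensorRT backend for the GPU path
max_batch_size: 32
dynamic_batching { }                # let Triton batch incoming requests
instance_group [ { kind: KIND_GPU, count: 1 } ]

# CPU variant of the same model: change the backend and the instance kind;
# the serving stack and client code stay the same.
# backend: "openvino"
# instance_group [ { kind: KIND_CPU, count: 2 } ]
```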

MIG goes big

For this round, the team made two novel submissions to demonstrate MIG performance and versatility. A key metric for infrastructure management is overall server utilization, including its accelerators. A typical target value is around 80%, which gets the most out of every server while allowing some headroom to handle spikes in compute demand. A100 GPUs often have much more compute capacity than a single inference workload requires, and using MIG to partition the GPU into right-sized instances lets you host multiple networks on a single GPU.

Figure 3. A single A100 with MIG runs all MLPerf tests at the same time, at 98% of the performance of a single MIG instance running alone.

The team built a MIG submission in which one network’s performance was measured in a single MIG instance while the other MLPerf Data Center workloads ran simultaneously in the other six MIG instances. In other words, a single A100 was running the entire Data Center benchmark suite at the same time. The team repeated this for all six Data Center networks. On average, the network under test achieved 98% of the performance it would have delivered if the other six instances had been idle.

MLPerf Inference drives innovation

Many of the optimizations used to achieve the winning results are available today in TensorRT, Triton Inference Server, and the MLPerf Inference GitHub repo. This round of testing debuted two new GPUs: the NVIDIA A10 and A30. It further demonstrated the great capabilities of Triton and the MIG feature. These allow you to deploy trained networks on GPUs and CPUs easily. At the same time, you’re provisioning the right-sized amount of AI acceleration for a given application and maximizing the utility of every data center processor.

In addition to the direct submissions by NVIDIA, eight partners—Alibaba, Dell EMC, Fujitsu, Gigabyte, HPE, Inspur, Lenovo, and Supermicro—also submitted using NVIDIA GPU-accelerated platforms, accounting for over half of the total submissions. All software used for NVIDIA submissions is available from the MLPerf repo, the NVIDIA GitHub repo, and NGC, the NVIDIA hub for GPU-optimized software for deep learning, machine learning, and high-performance computing.

These MLPerf Inference 1.0 results deliver up to 46% more performance than the previous MLPerf 0.7 submissions six months ago. They further reinforce the NVIDIA AI platform as not only the clear performance leader, but also the most versatile platform for running every kind of network: on-premises, in the cloud, or at the edge. As networks and data sets continue to grow rapidly and real-time services increasingly rely on AI, inference acceleration has become a must-have for applications to realize their full potential.