Categories
Misc

Destination Earth: Supercomputer Simulation to Support Europe’s Climate-Neutral Goals

To support its efforts to become climate neutral by 2050, the European Union has launched a Destination Earth initiative to build a detailed digital simulation of the planet that will help scientists map climate development and extreme weather events with high accuracy.

The decade-long project will create a digital twin of the Earth, rendered at one-kilometer scale and based on continuously updated observational data from climate, atmospheric, and meteorological sensors — as well as measures of the environmental impacts of human activities. 

Led by the European Space Agency, European Centre for Medium-Range Weather Forecasts, and European Organisation for the Exploitation of Meteorological Satellites, the digital twin project is estimated to require a system with 20,000 GPUs to operate at full scale, the researchers wrote in a strategy paper published in Nature Computational Science.

Insights from the simulation will allow scientists to develop and test scenarios, informing policy decisions and sustainable development planning. Dubbed DestinE, the model could be used to assess drought risk, monitor sea level rise, and track changes in the polar regions. It will also be used for strategies around food and water supplies, as well as renewable energy initiatives including wind farms and solar plants. 

“If you are planning a two-meter high dike in The Netherlands, for example, I can run through the data in my digital twin and check whether the dike will in all likelihood still protect against expected extreme events in 2050,” said Peter Bauer, deputy director for Research at the European Centre for Medium-Range Weather Forecasts and co-initiator of Destination Earth.

Unlike traditional climate models, which represent large-scale processes and neglect the finer details essential for precise weather forecasts, the digital twin model will bring together both, enabling high-resolution simulations of the entire climate and weather system. 

The researchers plan to harness AI to help process data, represent uncertain processes, accelerate simulations, and distill key insights from the data. The main digital modeling platform is slated to be operational by 2023, with the digital twin fully developed and running by 2027.

“Destination Earth is a key initiative for Europe’s twin digital and green transitions,” said Thomas Skordas, the European Commission’s director for digital excellence and science infrastructure. “It will greatly enhance our ability to produce climate models with unprecedented detail and reliability, allowing policy-makers to anticipate and mitigate the effects of climate change, saving lives and alleviating economic consequences in cases of natural disasters.”

Read the research team’s papers in Nature Computational Science and Nature Climate Change.

Categories
Misc

We Won’t See You There: Why Our Virtual GTC’s Bigger Than Ever

Call it an intellectual Star Wars bar. You could run into just about anything at GTC. Princeton’s William Tang would speak about using deep learning to unleash fusion energy, UC Berkeley’s Gerry Zhang would talk about hunting for alien signals, Airbus A3’s Arne Stoschek would describe flying autonomous pods. Want to catch it all? Run. Read article >

Categories
Misc

Using TensorFlow for music

Looking for examples showing how to use TensorFlow for music – generation, tuning, or anything else. I tried searching online but could not find anything pointing in this direction. I hope someone can help me out with this.

Thanks in advance for any help!
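
As one possible starting point (a minimal sketch; the note corpus below is a random placeholder standing in for notes parsed from MIDI files, e.g. with pretty_midi), a next-note LSTM in TensorFlow/Keras could look like this:

    import numpy as np
    import tensorflow as tf

    # Placeholder corpus of MIDI note numbers (0-127); real data would come
    # from parsed MIDI files
    notes = np.random.randint(0, 128, size=10_000)
    seq_len = 32
    X = np.stack([notes[i:i + seq_len] for i in range(len(notes) - seq_len)])
    y = notes[seq_len:]

    model = tf.keras.Sequential([
        tf.keras.layers.Embedding(128, 64),
        tf.keras.layers.LSTM(128),
        tf.keras.layers.Dense(128, activation="softmax"),  # distribution over the next note
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    model.fit(X, y, epochs=5, batch_size=64)

    # Generation: feed a seed sequence, sample a note, append, repeat
    seed = list(X[0])
    for _ in range(100):
        probs = model.predict(np.array([seed[-seq_len:]]), verbose=0)[0].astype("float64")
        seed.append(int(np.random.choice(128, p=probs / probs.sum())))

Magenta, which is built on TensorFlow, is probably the most complete collection of worked music-generation examples and is worth a look as well.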

submitted by /u/trapzar

Categories
Misc

How to use .eval() inside the dataset.map function?

Hi!

I am running into a problem, I cannot solve by myself.

I am doing this example: https://www.tensorflow.org/tutorials/audio/simple_audio

But I would like to use my own STFT function, which I wrote in NumPy, instead of tf.signal.stft.

So I changed the function “def get_spectrogram(waveform)” and tried to convert the input Tensor into a NumPy array using .eval(). “get_spectrogram” is called from “get_spectrogram_and_label_id(audio, label)”, which in turn is used inside a dataset map:

spectrogram_ds = waveform_ds.map(get_spectrogram_and_label_id, num_parallel_calls=AUTOTUNE) 

Since this mapping is done in graph mode, not eager mode, I cannot use .numpy() and have to use .eval() instead.

However, .eval() asks for a session, and it has to be the same session in which the dataset’s map function runs.

Has anybody run into this problem and can help?

    def get_spectrogram(waveform):
        # Padding for files with less than 16000 samples
        zero_padding = tf.zeros([16000] - tf.shape(waveform), dtype=tf.float32)
        # Concatenate audio with padding so that all audio clips will be of the same length
        waveform = tf.cast(waveform, tf.float32)
        equal_length = tf.concat([waveform, zero_padding], 0)
        equal_length = equal_length.eval()  # <-- HERE IS THE PROBLEM
        spectrogram = do_stft(equal_length, 512, 128)  # <-- uses NumPy
        return spectrogram

    def get_spectrogram_and_label_id(audio, label):
        print(sess.graph)
        spectrogram = get_spectrogram(audio.eval(session=sess))
        spectrogram = tf.expand_dims(spectrogram, -1)
        label_id = tf.argmax(label == commands)
        return spectrogram, label_id

    spectrogram_ds = waveform_ds.map(get_spectrogram_and_label_id, num_parallel_calls=AUTOTUNE)
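
One possible workaround (a sketch, not from the original post) is to wrap the NumPy STFT in tf.numpy_function, which executes the wrapped Python function on the tensor’s concrete value even inside a graph-mode Dataset.map; do_stft, commands, waveform_ds, and AUTOTUNE are assumed to be defined as above:

    def get_spectrogram(waveform):
        zero_padding = tf.zeros([16000] - tf.shape(waveform), dtype=tf.float32)
        waveform = tf.cast(waveform, tf.float32)
        equal_length = tf.concat([waveform, zero_padding], 0)
        # tf.numpy_function runs the wrapped Python/NumPy code eagerly on the
        # tensor's value; the output dtype must be declared explicitly
        spectrogram = tf.numpy_function(
            lambda x: do_stft(x, 512, 128).astype("float32"),
            [equal_length], tf.float32)
        return spectrogram

    def get_spectrogram_and_label_id(audio, label):
        spectrogram = get_spectrogram(audio)
        spectrogram = tf.expand_dims(spectrogram, -1)
        label_id = tf.argmax(label == commands)
        return spectrogram, label_id

    spectrogram_ds = waveform_ds.map(get_spectrogram_and_label_id, num_parallel_calls=AUTOTUNE)

The trade-offs: the output loses its static shape (a spectrogram.set_shape(...) call may be needed afterwards), and the NumPy code runs outside the TensorFlow graph, so it cannot be placed on the GPU or serialized with the rest of the graph.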

submitted by /u/alex_bababu

Categories
Misc

Creating TensorFlow Custom Ops, Bazel, and ABI compatibility

submitted by /u/pgaleone

Categories
Misc

German Researchers Develop Early Warning AI for Self-Driving Systems

Self-driving cars can run into critical situations where a human driver must retake control for safety reasons. Researchers from the Technical University of Munich have developed an AI early warning system that can give human drivers a seven-second heads-up about these critical driving scenarios.

Developed in cooperation with the BMW Group, the AI system learned from 2,500 real traffic situations, using vehicle sensor data encompassing road conditions, weather, speed, visibility, and steering wheel angle.

The researchers used NVIDIA GPUs for both training and inference of the failure-prediction AI models. The model recognizes when patterns in a driving situation’s sensor data look similar to scenarios the self-driving system was unable to navigate in the past, and issues an early warning to the driver.

When tested on public roads using autonomous development vehicles from the BMW Group, the AI model was able to predict situations that self-driving cars would be unable to handle alone seven seconds ahead of time, and with over 85 percent accuracy. These results outperform state-of-the-art failure prediction by more than 15 percent.

“The big advantage of our technology: we completely ignore what the car thinks. Instead we limit ourselves to the data based on what actually happens and look for patterns,” said Eckehard Steinbach, Chair of Media Technology and member of the Board of Directors of the Munich School of Robotics and Machine Intelligence. “In this way, the AI discovers potentially critical situations that models may not be capable of recognizing, or have yet to discover.”

Early warning could help human drivers more quickly react to critical situations such as crowded intersections, sudden braking or dangerous swerving. 

The AI can be improved with larger quantities of data from autonomous vehicles that are under development and undergoing testing on the road. The learnings collected from each individual vehicle can be used for future iterations of the model, and deployed across the entire fleet of cars.

“Every time a potentially critical situation comes up on a test drive, we end up with a new training example,” said Christopher Kuhn, an author of the study.

Find the full paper in IEEE Transactions on Intelligent Transportation Systems.

Categories
Misc

Essential Ray Tracing SDKs for Game and Professional Development

Game and professional visualization developers need the best tools to create the best games and real-time interactive content. 

To help them achieve this goal, NVIDIA has pioneered real-time ray tracing hardware with the launch of the RTX 20 series.

Today, we continue to develop and expand powerful tools for developers by creating SDKs that run on RTX GPUs.

The following NVIDIA technologies will provide developers optimal real time ray tracing within their workflows:

RTX Direct Illumination (RTXDI)

RTXDI offers realistic lighting and shadows for dynamic scenes involving millions of lights, which, until now, would have been prohibitively expensive for real-time applications. Traditionally, most lighting is baked offline, with just a handful of “hero” dynamic lights computed at runtime. RTXDI pushes past those limits and allows developers to elevate the visual fidelity in their games.

RTX Global Illumination (RTXGI)

RTXGI provides developers with a scalable solution for multi-bounce indirect lighting without light leakage, time-intensive offline lightmap baking, or expensive per-frame costs. RTXGI’s dynamic, real-time global illumination is not only beautiful in action, but it streamlines the content creation process by removing barriers that previously prevented artists from rapidly iterating. With a low performance cost and massive productivity gains, RTXGI is an ideal starting point to bring the benefits of ray tracing to your content.

NVIDIA Real-Time Denoisers (NRD)

Get optimal real-time ray tracing performance with NRD, a library of spatial and spatio-temporal API-agnostic denoisers. From the beginning, NRD was specifically designed to work well with low ray budgets. With NRD, developers can create visuals that rival ground-truth images with as little as half a ray cast per pixel.

Get the latest news and updates about these SDKs at next month’s GPU Technology Conference. Registration is free — join us and hear from experts who worked on popular game titles, including Minecraft, Cyberpunk 2077, Overwatch and LEGO Builder’s Journey.

Explore the sessions for game developers and learn how you can integrate NVIDIA tools and technologies into games.

Categories
Misc

Parsing Petabytes, SpaceML Taps Satellite Images to Help Model Wildfire Risks

When freak lightning ignited massive wildfires across Northern California last year, it also sparked efforts from data scientists to improve predictions for blazes. One effort came from SpaceML, an initiative of the Frontier Development Lab, which is an AI research lab for NASA in partnership with the SETI Institute. Dedicated to open-source research, the SpaceML Read article >

Categories
Misc

GPT-2 in TensorFlow 2.0?

Hey, I just want to fine-tune GPT-2 155M on some data to create simple Discord or Twitter bots with my RTX 3080. What’s the best way to use the small GPT-2 with TensorFlow 2.0 currently? I looked around online, but there doesn’t seem to be a lot of interest, since people interested in GPT have moved on to GPT-3 or GPT-Neo.
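
One possible route (a sketch, assuming the Hugging Face transformers library on top of TensorFlow 2.x rather than a pure-TF implementation; the corpus file name and hyperparameters are placeholders) is TFGPT2LMHeadModel:

    import tensorflow as tf
    from transformers import GPT2Tokenizer, TFGPT2LMHeadModel

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")   # the 124M "small" checkpoint
    model = TFGPT2LMHeadModel.from_pretrained("gpt2")

    # Hypothetical corpus file containing the text the bot should imitate
    text = open("corpus.txt", encoding="utf-8").read()
    ids = tokenizer(text, return_tensors="tf")["input_ids"][0]

    block = 128                                         # context length per training example
    examples = tf.stack([ids[i:i + block] for i in range(0, int(ids.shape[0]) - block, block)])
    dataset = tf.data.Dataset.from_tensor_slices(examples).shuffle(1000).batch(2)

    optimizer = tf.keras.optimizers.Adam(learning_rate=3e-5)

    for epoch in range(3):
        for batch in dataset:
            with tf.GradientTape() as tape:
                logits = model(batch, training=True).logits
                # each position predicts the next token, so shift targets/logits by one
                loss = tf.reduce_mean(
                    tf.keras.losses.sparse_categorical_crossentropy(
                        batch[:, 1:], logits[:, :-1], from_logits=True))
            grads = tape.gradient(loss, model.trainable_variables)
            optimizer.apply_gradients(zip(grads, model.trainable_variables))

    # Sampling for the bot
    out = model.generate(tokenizer("Hello", return_tensors="tf")["input_ids"],
                         max_length=50, do_sample=True)
    print(tokenizer.decode(out[0]))

An RTX 3080’s 10 GB is generally enough for the small model at a modest batch size; larger GPT-2 variants may need gradient accumulation or a shorter context length.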

submitted by /u/RedditBadSuggestions

Categories
Misc

I cannot for the life of me get TensorFlow to recognize my GPU.

I have an RTX 3070. I have Visual Studio installed, the appropriate driver installed along with cuDNN and the appropriate CUDA toolkit. I have copied the bin, lib, and include files into my C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.2 directory. I have Python 3.8.5 installed and TensorFlow 2.4.1.

Is there another step I am missing? I am running tf.test.is_built_with_cuda() and getting True, while tf.test.is_gpu_available(cuda_only=False) returns False.
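
For reference, a quick diagnostic along these lines (a sketch; the DLL names in the comment assume the CUDA 11.0 / cuDNN 8.0 builds the official TensorFlow 2.4 wheels link against) can narrow the problem down:

    import tensorflow as tf

    print("TF version:", tf.__version__)
    print("Built with CUDA:", tf.test.is_built_with_cuda())
    # Preferred check on TF 2.x; tf.test.is_gpu_available() is deprecated
    print("Visible GPUs:", tf.config.list_physical_devices("GPU"))

    # If the list is empty, watch the console output when TensorFlow is imported:
    # messages about cudart64_110.dll or cudnn64_8.dll not being found usually
    # mean the CUDA/cuDNN bin directories are missing from PATH, or that the
    # installed toolkit version does not match what the TF build expects.

If cudart64_110.dll is reported missing even though CUDA 11.2 is installed, it may be worth adding the CUDA bin directory to PATH explicitly or installing the CUDA 11.0 toolkit that TF 2.4.1 was built against.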

submitted by /u/WastefulMice