Categories
Misc

AI of Earthshaking Magnitude: DeepShake Predicts Quake Intensity

In a major earthquake, even a few seconds of advance warning can help people prepare — so Stanford University researchers have turned to deep learning to predict strong shaking and issue early alerts.

DeepShake, a spatiotemporal neural network trained on seismic recordings from around 30,000 earthquakes, analyzes seismic signals in real time. By observing the earliest detected waves from an earthquake, the neural network can predict ground shaking intensity and send alerts throughout the area. 

Geophysics and computer science researchers at Stanford used a university cluster of NVIDIA GPUs to develop the model, using data from the 2019 Ridgecrest earthquake sequence in Southern California. 

When tested with seismic data from Ridgecrest’s 7.1-magnitude earthquake, DeepShake provided simulated alerts to nearby seismic stations 7 to 13 seconds before the arrival of high-intensity ground shaking.

Most early warning systems pull multiple information sources, first determining the location and magnitude of an earthquake before calculating ground motion for a specific area. 

“Each of these steps can introduce error that can degrade the ground shaking forecast,” said Stanford student Daniel Wu, who presented the project at the 2021 Annual Meeting of the Seismological Society of America. 

Instead, the DeepShake network relies solely on seismic waveforms for its rapid early warning and forecasting system. The unsupervised neural network learned which features of seismic waveform data best forecast the strength of future shaking. 

“We’ve noticed from building other neural networks for use in seismology that they can learn all sorts of interesting things, and so they might not need the epicenter and magnitude of the earthquake to make a good forecast,” said Wu. “DeepShake is trained on a preselected network of seismic stations, so that the local characteristics of those stations become part of the training data.”

Given 15 seconds of measured ground shaking, the model can predict future shaking intensity at all seismic stations in its network — with no prior knowledge of station locations.  
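At a shape level, that input/output contract can be sketched as follows. This is a hypothetical minimal stand-in, not the actual DeepShake architecture: the station count, sample rate, and linear readout are all illustrative assumptions, chosen only to show what goes in and what comes out.

```python
import numpy as np

# Hypothetical setup: 16 seismic stations, 100 Hz sampling, and the
# 15-second observation window described in the article.
n_stations, sample_rate, window_s = 16, 100, 15
window = np.random.randn(n_stations, sample_rate * window_s)  # raw waveforms

# Stand-in for the trained network: a linear readout per station.
# DeepShake itself is a spatiotemporal neural network; this only
# illustrates the shapes involved.
weights = np.random.randn(sample_rate * window_s) * 0.01
predicted_intensity = window @ weights  # one shaking forecast per station

print(predicted_intensity.shape)  # (16,)
```

The key point the article makes is that the mapping from waveform windows to per-station intensities is learned end to end, with no intermediate epicenter or magnitude estimate.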

The team plans to expand the neural network to cover a broader geographical region and to account for possible failure cases, including downed stations and high network latency. The group sees DeepShake as complementary to California’s ShakeAlert warning system, operated by the United States Geological Survey. 


Need for Speed: Researchers Switch on World’s Fastest AI Supercomputer

It will help piece together a 3D map of the universe, probe subatomic interactions for green energy sources and much more. Perlmutter, officially dedicated today at the National Energy Research Scientific Computing Center (NERSC), is a supercomputer that will deliver nearly four exaflops of AI performance for more than 7,000 researchers. That makes Perlmutter the world’s fastest AI supercomputer.

The post Need for Speed: Researchers Switch on World’s Fastest AI Supercomputer appeared first on The Official NVIDIA Blog.

NVIDIA BlueField European Hackathon Fuels Data Center Innovation with Pioneering DPU-Based Application Demonstrations

First in a global series of NVIDIA developer events, the DPU hackathons unleash breakthrough technologies built on NVIDIA DOCA, furthering advancements in AI, cloud and accelerated computing.

“The data center is the new unit of computing. Cloud computing and AI are driving fundamental changes in the architecture of data centers.” — NVIDIA founder and CEO Jensen Huang

At NVIDIA, where non-stop innovation is our culture, we are hosting a global series of regional Data Processing Unit (DPU) software hackathons over the next 12 months, aimed at advancing research and development in data center and AI technologies.

The first digital DPU hackathon was held on May 24-25 for European developers and researchers from prominent ecosystem partners, customer organizations and academia. The successful European hackathon delivered a number of groundbreaking inventions in high-performance networking, virtualization, cybersecurity, storage, accelerated AI and edge computing, video processing and more. Standing out from the crowd was the team from MTS PJSC, Russia’s largest mobile operator and a leading provider of media and digital services, taking home the gold for their video CDN edge project.

The team created a DPU-accelerated edge computing platform optimized for secure video streaming. The platform is hosted on a single BlueField DPU card, runs NGINX for web content delivery, and leverages hardware accelerators for TLS crypto offload. The platform can be further enhanced with video packet pacing technology and an optimized TCP/IP software stack. The team met its performance target of serving 10,000 transactions per second of 100KB video payloads at 10Gb/s.

Fostering Data Center Innovation

NVIDIA hackathons draw on our core values: innovation, excellence, speed and agility. They assemble bright minds, enabling developers to learn, collaborate, and accelerate their work under the guidance of expert mentors by their side. 

The DPU hackathon series also draws on our pioneering BlueField data-center-on-a-chip architecture (DOCA) technology foundation, serving as a testament to our commitment to building a broad developer community that creates revolutionary data center infrastructure applications and services powered by NVIDIA BlueField DPUs and the DOCA software framework.

With the release of NVIDIA DOCA 1.0 at GTC 21, developers today have an easy way to program BlueField DPUs, leveraging open APIs, libraries and reference code for various applications. 

“DOCA plays a central role in NVIDIA’s data center-on-a-chip vision, providing a unified and future-proof architecture for all BlueField DPU product generations,” said Dror Goldenberg, SVP of Software Technologies. “This global series of DPU hackathons will center around innovation based on DPU and DOCA, supporting our journey to build a thriving ecosystem of DPU-accelerated applications that will reshape the data center of the future.”

First Time’s a Charm: Recapping Europe’s DPU Hackathon

The first NVIDIA DPU hackathon event drew significant attention and excitement, with applications pouring in. Our steering committee selected 14 brilliant project teams from among 60 applicants across various industries: cloud service providers and web-scalers, telecom operators, independent software vendors, and academia.

The DPU hackathon took place over Zoom for 30 hours straight. Beforehand, NVIDIA hosted an online DPU bootcamp to equip the participating teams with the requisite BlueField DPU knowledge and DOCA programming skills. Most of the hackathon time was dedicated to the teams’ collaboration and execution of their projects, with mentors providing support across various domains of expertise.

As a member of the hackathon judging panel, our role was to provide constructive feedback to project teams throughout the event. One of the main things we kept an eye out for was innovative technology for solving key data center challenges, accompanied by a proof-of-concept to support the team’s claims. Another evaluation criteria was how well the technology meets data center scale and performance requirements.

Finally, we were truly amazed by the teams’ work and results. Here’s a summary of the top inventions:

  • First place went to the MTS PJSC team from Russia, which showcased an innovative, DPU-accelerated solution for ultra-low power (ULP) CDN edge deployments.
  • Second place went to the Datadigest B.V. Nikhef team from the Netherlands for developing a DPU-based, scalable, AI-accelerated intrusion detection and prevention system (IDS/IPS) running on an NFV architecture.
  • Tied for third place:
    • The project team from the Technical University of Darmstadt, Germany, demonstrated an advanced remote-access database structure for database management systems (DBMS), powered by the BlueField DPU.
    • The project team from GreyCortex in the Czech Republic demonstrated a DPU-based DDoS detection and mitigation system on top of DOCA.

We’d like to congratulate our winners and thank all of the teams that participated, helping to make our first global NVIDIA DPU Hackathon such a wonderful success!

Coming Up: NVIDIA DPU Hackathons in China and North America 

With the European hackathon concluded, our developer relations team is already working on the next leg of our global DPU hackathon tour. NVIDIA is building a broad community of DOCA developers to create innovative applications and services on top of NVIDIA BlueField DPUs to secure and accelerate modern, efficient data centers. 

Check out our corporate calendar to stay informed about future events, and take part in our journey to reshape the data center of tomorrow. To learn more about the DOCA software framework and to register for early access, visit the DOCA webpage.

One of our hackathon participants put it best: “All you need is BlueField-2.”

Marbles RTX Playable Sample Now Available in NVIDIA Omniverse

Here’s a chance to become a marvel at marbles: the Marbles RTX playable sample is now available from the NVIDIA Omniverse launcher. Marbles RTX is a physics-based mini-game level where a player controls a marble around a scene full of obstacles. The sample, which already has over 8,000 downloads, displays real-time physics with dynamic lighting.

“ValueError: cannot reshape array of size 278540 into shape (256,128,3,3)” When Converting YOLOv3 .weights to .pb

I have trained a YOLOv3 object detection model. To incorporate it into my Flutter application, I am trying to convert it to .tflite, with .pb needed as an intermediate. I am getting this error with every GitHub repo I have tried (a few are linked below).

Error: ValueError: cannot reshape array of size 278540 into shape (256,128,3,3)

Following is what my classes.names file looks like:

upstairs

downstairs

I have just 2 classes. I am unable to convert. Can someone please help?
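One common cause of exactly this kind of reshape error (an assumption based on the symptoms, not something confirmed in the post) is a mismatch between the class count baked into the .cfg file and the one the .weights file was trained with: in a YOLOv3 config, the convolutional layer feeding each [yolo] layer must declare filters = (classes + 5) * anchors_per_scale. A quick sanity check:

```python
# Expected filter count for the conv layer immediately before each
# [yolo] layer in a YOLOv3 .cfg file.
def yolo_filters(num_classes, anchors_per_scale=3):
    # 5 = 4 bounding-box coordinates + 1 objectness score
    return (num_classes + 5) * anchors_per_scale

# With 2 classes (upstairs, downstairs) each such conv layer should
# declare filters=21; the COCO default of 255 (80 classes) will make
# weight loading fail with a reshape error.
print(yolo_filters(2))   # 21
print(yolo_filters(80))  # 255
```

If the .cfg still carries the default filters=255 (or classes=80) in any of the three detection heads, the converter will try to reshape the weight buffer into the wrong layer shapes.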

Link to my weights and config file:

A few repos that I have tried:

submitted by /u/mishaalnaeem

Adding value rules to a TF model (noob)

Hello all

TL;DR: 4 columns in a df, sequential model with LSTM; how do I add the rule column a > column b for all a, b, and get a prediction for all 4 columns?

Thanks for your help in advance.

I’m working on a sequential model in Python, pretty much teaching myself as I go, so I apologize for any incorrect jargon or naïveté.

Assume we have a list of family histories with each family member’s weight in a column, ordered heaviest to lightest.

i.e.:

Generation  Heaviest  Med heavy  Med light  Lightest
1           275       225        180        145
2           300       250        225        165

I have tried two approaches to guess the weights of the next generation. I have 100 generations to iterate over.

The first approach is to just feed the whole df into a tf sequential model with LSTM. Maybe I don’t understand exactly what’s happening when I do that (I don’t), but it returns a single value, not 4. (And I’m not sure it knows that column ‘heaviest’ > ‘lightest’ for all generations.) As a workaround I thought I’d split it up and pass each column through its own model, then look at the values. I’m obviously losing way too many connections because I’m only using 25% of the data at a time, and the results are, well, not really ordered.

So my long short question is: if I pass the entire 4-column df and I want tf to guess each value of the next generation, what do I need to add to force it to guess all 4? And is there a way to simply pass a rule I already know about the data?
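A sketch of one way to address both parts (an illustration of the idea, not the only approach, written with plain NumPy so it stands on its own): predict all 4 values at once by giving the final layer 4 output units instead of 1, and enforce the ordering rule structurally by predicting the lightest weight plus non-negative gaps.

```python
import numpy as np

def ordered_outputs(raw):
    """Map 4 unconstrained network outputs to 4 ordered weights.

    raw[0] parameterizes the lightest weight; raw[1:] parameterize
    non-negative gaps, so lightest < med light < med heavy < heaviest
    holds by construction. In Keras this would be a Dense(4) output
    layer followed by this transform (e.g. in a Lambda layer).
    """
    softplus = np.log1p(np.exp(raw))       # strictly positive values
    lightest = softplus[0]
    increments = softplus[1:]
    return lightest + np.concatenate([[0.0], np.cumsum(increments)])

preds = ordered_outputs(np.array([5.0, 0.3, -0.2, 1.1]))
print(preds)  # 4 values, strictly ascending by construction
```

For the “guess all 4” part alone, replacing a final Dense(1) with Dense(4) (and giving the targets four columns) is usually enough; the transform above additionally bakes the ordering rule into the model so it cannot be violated, rather than hoping the network learns it from data.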

submitted by /u/obibongcannobi

NVIDIA Announces Financial Results for First Quarter Fiscal 2022

NVIDIA today reported record revenue for the first quarter ended May 2, 2021, of $5.66 billion, up 84 percent from a year earlier and up 13 percent from the previous quarter, with record revenue from the company’s Gaming, Data Center and Professional Visualization platforms.

The Roaring 20+: GFN Thursday Game Releases Include Biomutant, Maneater, Warhammer Age of Sigmar: Storm Ground and More

GFN Thursday comes roaring in with 22 games and support for three DLCs joining the GeForce NOW library this week. Among the 22 new releases are five day-and-date game launches: Biomutant, Maneater, King of Seas, Imagine Earth and Warhammer Age of Sigmar: Storm Ground.

Error Converting an Image to the Luma Channel

I’m trying to convert an RGB image into the luma channel, similar to how it is done in PIL, but I cannot find a good way to do this.

I have tried tensorflow_io, but the values are incorrect.

with tensorflow_io:

img_file = tf.io.read_file("./img/img.jpg")
img = tf.image.decode_jpeg(img_file, channels=3)
luma = tfio.experimental.color.rgb_to_ycbcr(img)[:, :, 0]
luma.numpy()
"""
Value:
array([[ 22,  22,  22, ...,  21,  21,  21],
       [ 22,  22,  22, ...,  21,  21,  21],
       [ 22,  22,  22, ...,  21,  21,  21],
       ...,
       [159, 159, 156, ...,  51,  48,  48],
       [158, 158, 158, ...,  50,  46,  46],
       [226, 226, 227, ..., 230, 231, 231]], dtype=uint8)
"""

with PIL:

im = Image.open("./img/img.jpg")
im = im.convert("L")
np.asarray(im)
"""
Value:
array([[  8,   8,   8, ...,   6,   6,   6],
       [  8,   8,   8, ...,   6,   6,   6],
       [  8,   8,   8, ...,   6,   6,   6],
       ...,
       [168, 167, 165, ...,  41,  38,  38],
       [167, 167, 167, ...,  42,  37,  37],
       [246, 246, 247, ..., 251, 253, 253]], dtype=uint8)
"""

Am I doing something wrong here?
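For what it’s worth, the two outputs look consistent with a range-convention difference rather than a bug (my reading of the printed numbers, not something stated in the post): PIL’s “L” mode is full-range BT.601 luma (0-255), while the Y channel from the YCbCr conversion appears to use studio swing (16-235), i.e. Y ≈ 16 + (219/255)·L. A quick NumPy check against the values shown above:

```python
import numpy as np

def full_range_to_studio(l):
    # Map full-range luma (0-255) to studio-swing Y (16-235), the
    # convention used by broadcast YCbCr per ITU-R BT.601.
    return np.round(16 + (219 / 255) * np.asarray(l, dtype=float))

# Corner values from the post: PIL "L" gave 8, 168, 246 where
# tensorflow_io gave 22, 159, 226 — matching to within rounding.
pil_values = [8, 168, 246]
print(full_range_to_studio(pil_values))  # [ 23. 160. 227.]
```

If full-range luma is what’s wanted in TensorFlow, tf.image.rgb_to_grayscale uses the full-range BT.601 weights (0.299, 0.587, 0.114), which should track PIL’s “L” output much more closely than the YCbCr Y channel.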

submitted by /u/potato-sword

Most computationally efficient way to get a list of random numbers in TensorFlow given a list of maximum values, as with np.random.randint

For np.random.randint, you can input a list of maximum values, and get a list of random ints from 0 to those maximum values.

np.random.randint([1, 10, 100, 1000])
> array([  0,   7,  31, 348])

TensorFlow’s tf.random.uniform doesn’t allow lists for maxval, so you need to either create a statement for each or run a loop. I was wondering if there is a more elegant way to get these random numbers.
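One vectorized pattern (a sketch of the idea, not necessarily the fastest kernel): draw all the uniforms in [0, 1) in a single call, scale by the per-element maxima, and floor. Demonstrated here with NumPy so the math is easy to verify; the TensorFlow equivalent would be along the lines of tf.cast(tf.random.uniform(tf.shape(maxvals)) * tf.cast(maxvals, tf.float32), tf.int32).

```python
import numpy as np

rng = np.random.default_rng(0)
maxvals = np.array([1, 10, 100, 1000])

# One uniform draw per element, scaled to [0, maxval) and floored,
# mirroring np.random.randint's exclusive upper bound.
samples = np.floor(rng.random(maxvals.shape) * maxvals).astype(np.int64)

print(samples)  # e.g. one int per maxval, each in [0, maxval)
```

Since the uniforms are strictly less than 1, flooring u·m can never reach m, so the upper bound stays exclusive just as with np.random.randint; note that the element with maxval 1 is always 0.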

submitted by /u/PrudentAlternative10