Universities Expand Research Horizons with NVIDIA Systems, Networks

Just as the Dallas/Fort Worth airport became a hub for travelers crisscrossing America, the North Texas region will be a gateway to AI if folks at Southern Methodist University have their way. SMU is installing an NVIDIA DGX SuperPOD, an accelerated supercomputer it expects will power projects in machine learning for its sprawling metro community.

Gordon Bell Finalists Fight COVID, Advance Science With NVIDIA Technologies

Two simulations of a billion atoms, two fresh insights into how the SARS-CoV-2 virus works, and a new AI model to speed drug discovery. Those are results from finalists for the Gordon Bell awards, considered a Nobel Prize of high performance computing. They used AI, accelerated computing or both to advance science with NVIDIA’s technologies.

`ValueError: Data cardinality is ambiguous: ` after running `model.fit`
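
This error usually means the inputs and targets handed to model.fit have different numbers of samples along the first axis, so Keras cannot pair them up into batches. A minimal sketch that reproduces and then fixes the mismatch (the model and array shapes are hypothetical stand-ins):

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer='adam', loss='mse')

x = np.random.rand(100, 4)  # 100 input samples
y = np.random.rand(90, 1)   # 90 labels: sample counts disagree

# model.fit(x, y)  # raises "ValueError: Data cardinality is ambiguous"

# Fix: make the first (sample) dimension of x and y match.
y = np.random.rand(100, 1)
model.fit(x, y, epochs=1)
```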

submitted by /u/Guacamole_is_good

Training Object Detection Model for Tensorflow Lite on Raspberry Pi

I have successfully set up TensorFlow Lite object detection on my Raspberry Pi 3B+; I have tested it on some Google sample models and can confirm it works properly.

I am looking to create my own custom object detection model, and I am looking for the absolute easiest way to do this (preferably on Ubuntu, but I can use Windows). Does anyone have any good methods or tutorials? I have tried a couple of GitHub tutorials as well as the TensorFlow Lite Model Maker Colab with no luck.

Has anyone used any of these tools, or do you have any experience/advice for training my own TensorFlow Lite object detection model for my Pi?
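
One route worth retrying is the TensorFlow Lite Model Maker flow run locally rather than in Colab; a minimal sketch follows, assuming Pascal VOC annotations. The dataset paths and label map are hypothetical placeholders, and the package has tight Python-version requirements, so treat this as a starting point rather than a verified recipe:

```python
# pip install tflite-model-maker
from tflite_model_maker import model_spec, object_detector

# Hypothetical dataset layout: JPEGs plus Pascal VOC XML annotations.
train_data = object_detector.DataLoader.from_pascal_voc(
    images_dir='dataset/images',
    annotations_dir='dataset/annotations',
    label_map={1: 'my_object'},
)

# EfficientDet-Lite0 is the smallest spec, a common choice for a Pi 3B+.
spec = model_spec.get('efficientdet_lite0')

model = object_detector.create(
    train_data,
    model_spec=spec,
    batch_size=8,
    epochs=50,
    train_whole_model=True,
)

# Writes model.tflite (quantized per the spec's defaults) for use on the Pi.
model.export(export_dir='.')
```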

submitted by /u/MattDlr4

Input pipeline performance

Hi Reddit

I’m comparing 2 input pipelines. One is built using tf.keras.utils.image_dataset_from_directory and the other built “manually” by reading files from a list using tf.data.Dataset.from_tensor_slices. My first intuition was that the tf.data.Dataset.from_tensor_slices pipeline should be faster, as demonstrated here.

But this is not the case. The image_dataset_from_directory pipeline is approximately six times faster for batches of 32 to 128 images, with a similar performance factor on Colab and on my local machine (run from PyCharm).

So far, I have tried to avoid the “zip” of two datasets by having read_image output both the image and the label at once. It did not change anything.

Can you help me build a decent input pipeline with tf.data.Dataset.from_tensor_slices? I would like to work with a huge dataset to train a GAN, and I do not want to lose time on data loading. Did I code something wrong, or are the tests from here outdated?

To be pragmatic, I will use the fastest approach. But as an exercise, I would like to know if my input pipeline with tf.data.Dataset.from_tensor_slices is OK.

Here is the code; data_augmentation_train is a sequential network (the same in both approaches).

Approach 1: tf.keras.utils.image_dataset_from_directory

```python
AUTOTUNE = tf.data.AUTOTUNE

train_ds = tf.keras.utils.image_dataset_from_directory(
    trainFolder,
    validation_split=0.2,
    subset="training",
    seed=123,
    image_size=(img_height, img_width),
    batch_size=batch_size)

class_names = train_ds.class_names
print(class_names)

train_ds = train_ds.cache()
train_ds = train_ds.shuffle(1000)
train_ds = train_ds.map(
    lambda x, y: (data_augmentation_train(x, training=True), y),
    num_parallel_calls=AUTOTUNE)
# Reassign the result so the prefetch actually applies to the pipeline.
train_ds = train_ds.prefetch(buffer_size=AUTOTUNE)
```

Approach 2: tf.data.Dataset.from_tensor_slices

```python
def read_image(filename):
    image = tf.io.read_file(filename)
    image = tf.image.decode_jpeg(image, channels=3)
    image = tf.image.resize(image, [img_height, img_width])
    return image

def configure_dataset(filenames, labels, augmentation=False):
    dsfilename = tf.data.Dataset.from_tensor_slices(filenames)
    dsfile = dsfilename.map(read_image, num_parallel_calls=AUTOTUNE)
    if augmentation:
        # Same augmentation network as in approach 1.
        dsfile = dsfile.map(
            lambda x: data_augmentation_train(x, training=True),
            num_parallel_calls=AUTOTUNE)
    dslabels = tf.data.Dataset.from_tensor_slices(labels)
    ds = tf.data.Dataset.zip((dsfile, dslabels))
    ds = ds.shuffle(buffer_size=1000)
    ds = ds.batch(batch_size)
    ds = ds.prefetch(buffer_size=AUTOTUNE)
    return ds

filenames, labels, class_names = readFilesAndLabels(trainFolder)
ds = configure_dataset(filenames, labels, augmentation=True)
```
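
A plausible explanation for the gap, offered as an assumption rather than a measured diagnosis: approach 1 caches the decoded images (train_ds.cache()) and augments whole batches, while approach 2 re-reads and re-decodes every JPEG each epoch and augments one image at a time before batching. A sketch that mirrors approach 1's ordering, reusing read_image and the other names from the post:

```python
def configure_dataset_v2(filenames, labels):
    # Slice filenames and labels together to avoid the explicit zip.
    ds = tf.data.Dataset.from_tensor_slices((filenames, labels))
    ds = ds.map(lambda f, y: (read_image(f), y),
                num_parallel_calls=AUTOTUNE)
    ds = ds.cache()  # keep decoded images in memory, as approach 1 does
    ds = ds.shuffle(buffer_size=1000)
    ds = ds.batch(batch_size)
    # Augment after batching, so the augmentation network runs per batch.
    ds = ds.map(lambda x, y: (data_augmentation_train(x, training=True), y),
                num_parallel_calls=AUTOTUNE)
    return ds.prefetch(buffer_size=AUTOTUNE)

filenames, labels, class_names = readFilesAndLabels(trainFolder)
ds = configure_dataset_v2(filenames, labels)
```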

submitted by /u/seb59

Risky Business: Latest Benchmarks Show How Financial Industry Can Harness NVIDIA DGX Platform to Better Manage Market Uncertainty

Amid increasing market volatility, financial risk managers are looking for faster, better market analytics. Today that’s served up by advanced risk algorithms running on the fastest parallel computing systems. Boosting the state of the art for risk platforms, NVIDIA DGX A100 systems running Red Hat software can offer financial services firms performance and operational gains.

One Click to the Cloud: NVIDIA, Google Help Developers Build AI Faster

NVIDIA and Google Cloud are creating a bridge linking the tools of data science to the muscle of the cloud with one click. Most data scientists work in Jupyter Notebooks, open-source development environments that can run code, show visualizations and display text notes, too. But it can take many complex steps to move this rich…

Money Talks: NVIDIA Inception Opens New VC Funding Opportunities to Startups

To better expose the 9,000+ members of NVIDIA Inception to venture capital funding, we’ve introduced a new program benefit to connect these startups with our rapidly growing community of 200 VCs and investors. The NVIDIA Inception VC Alliance will open new paths for startups to engage, create introductions and accelerate potential funding…

NVIDIA Unveils New Development Opportunities and Paths to Market for Millions of Developers

With NVIDIA Omniverse, millions of developers around the world can now take their workflows to the next level. At GTC, we introduced exclusive events, sessions and other resources to showcase how we’re expanding Omniverse, the multi-GPU, real-time simulation and reference development platform for 3D workflows. Our dedicated Developer Day was an exclusive event that offered…

November Studio Driver Releases at GTC With Support for New NVIDIA Omniverse Updates

NVIDIA GTC is live and bustling, bringing together the world’s most brilliant and creative minds who shape our world with the power of AI, computer graphics and more. At the show, we announced new features for NVIDIA Omniverse, our real-time digital-twin simulation and collaboration platform for 3D workflows. These include Omniverse VR, Remote and Showroom…
