Categories
Misc

NVIDIA Announces Financial Results for Fourth Quarter and Fiscal 2022

Record quarterly revenue of $7.64 billion, up 53 percent from a year earlier. Record fiscal-year revenue of $26.91 billion, up 61 percent. Record quarterly and fiscal-year revenue for Gaming, Data …

Categories
Misc

TensorFlow Similarity Boosts Machine Learning Model Accuracy Using Self-Supervised Learning

Data labeling is the practice of identifying raw data (such as pictures, text files, and videos) and adding relevant, informative labels that give the data context. It is used to train machine learning models in many use cases. For example, labels can be used in computer vision to identify whether a photograph contains a bird or an automobile, in speech recognition to determine which words were spoken in an audio recording, and so on.

Overall, labeled datasets help train machine learning models to recognize and understand recurrent patterns in the input data. After being trained on labeled data, the ML models are able to recognize the same patterns in new unstructured data and produce reliable results. Continue Reading
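As a rough illustration of that supervised pattern (not taken from the article; the dataset and model below are placeholders chosen only to show inputs being paired with labels), training on labeled data in Keras might look like this:

import tensorflow as tf
from tensorflow import keras

# Labeled dataset: each image comes with a class label supplied by a human
# annotator or an annotation pipeline (Fashion-MNIST ships already labeled).
(x_train, y_train), (x_test, y_test) = keras.datasets.fashion_mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# A small classifier that learns the patterns associated with each label.
model = keras.Sequential([
    keras.layers.Flatten(input_shape=(28, 28)),
    keras.layers.Dense(128, activation="relu"),
    keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Supervised training: the labels tell the model what each example is.
model.fit(x_train, y_train, epochs=5, validation_data=(x_test, y_test))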

submitted by /u/ai-lover

Categories
Misc

Different Types of Edge Computing

The types of edge computing and examples of use cases for each.

Many organizations have started their journey towards edge computing to take advantage of data produced at the edge. The definition of edge computing is quite broad. Simply stated, it is moving compute power physically closer to where data is generated, usually an edge device or IoT sensor.

This encompasses far edge scenarios like mobile devices and smart sensors, as well as more near edge use cases like micro-data centers and remote office computing. In fact, this definition is so broad that it is often talked about as anything outside of the cloud or main data center. 

With such a wide variety of use cases, it is important to understand the different types of edge computing and how they are being used by organizations today. 

Provider edge

The provider edge is a network of computing resources accessed via the Internet. It is mainly used for delivering services from telcos, service providers, media companies, or other content delivery network (CDN) operators. Examples of use cases include content delivery, online gaming, and AI as a service (AIaaS).

One key example of the provider edge that is expected to grow rapidly is augmented reality (AR) and virtual reality (VR). Service providers want to find ways to deliver these use cases, commonly known as eXtended Reality (XR), from the cloud to end user edge systems. 

In late 2021, Google partnered with NVIDIA to deliver high-quality XR streaming from NVIDIA RTX-powered servers on Google Cloud to lightweight mobile XR displays. By using NVIDIA CloudXR to stream from the provider edge, users can securely access data from the cloud at any time and easily share high-fidelity, fully graphical, immersive XR experiences with other teams or customers.

Enterprise edge

The enterprise edge is an extension of the enterprise data center, consisting of things like data centers at remote office sites, micro-data centers, or even racks of servers sitting in a compute closet on a factory floor. This environment is generally owned and operated by IT as they would a traditional centralized data center, though there may be space or power limitations at the enterprise edge that change the design of these environments.

Figure 1. Enterprises across all industries use edge AI to drive more intelligent use cases on site. Retailers, for example, can use edge AI across their business for frictionless shopping, in-store analytics, and supply chain optimization.

Looking at examples of the enterprise edge, you can see workloads like intelligent warehouses and fulfillment centers. Improved efficiency and automation of these environments requires robust information, data, and operational technologies to enable AI solutions like real-time product recognition.

Kinetic Vision helps customers build AI for these enterprise edge environments using a digital twin, or photorealistic virtual version, of a fulfillment or distribution center to train and optimize a classification model that is then deployed in the real world. This powers faster, more agile product inspection and order fulfillment.

Industrial edge

The industrial edge, sometimes called the far edge, generally has smaller compute instances that can be one or two small, ruggedized servers or even embedded systems deployed outside of any sort of data center environment.

Industrial edge use cases include robotics, autonomous checkout, smart city capabilities like traffic control, and intelligent devices. These use cases run entirely outside of the normal data center structure, which means there are a number of unique challenges for space, cooling, security, and management.

BMW is leading the way at the industrial edge by adopting robotics to redefine its factory logistics. Different robots handle different parts of the process: they take boxes of raw parts from the line and transport them to shelves to await production, then carry them to manufacturing, and finally return them to the supply area when empty.

Robotics use cases require compute power both in the autonomous machine itself, as well as compute systems that sit on the factory floor. To optimize the efficiency and accelerate deployment of these solutions, NVIDIA introduced the NVIDIA Isaac Autonomous Mobile Robot (AMR) platform.

Accelerating edge computing

Each of these edge computing scenarios has different requirements, benefits, and deployment challenges. To understand if your use case would benefit from edge computing, download the Considerations for Deploying AI at the Edge whitepaper.

Sign up for Edge AI News to stay up to date with the latest trends, customer use cases, and technical walkthroughs.

Categories
Misc

Jaguar Land Rover Announces Partnership With NVIDIA

As part of Jaguar Land Rover’s Reimagine strategy, the partnership will transform the modern luxury experience for customers starting in 2025. Software experts from both companies will jointly …

Categories
Misc

The Greatest Podcast Ever Recorded

Is this the best podcast ever recorded? Let’s just say you don’t need a GPU to know that’s a stretch. But it’s pretty great if you’re a fan of tall tales. And better still if you’re not a fan of stretching the truth at all. That’s because detecting hyperbole may one day get more manageable, Read article >

The post The Greatest Podcast Ever Recorded appeared first on The Official NVIDIA Blog.

Categories
Misc

Reimagining Modern Luxury: NVIDIA Announces Partnership with Jaguar Land Rover

Jaguar Land Rover and NVIDIA are redefining modern luxury, infusing intelligence into the customer experience. As part of its Reimagine strategy, Jaguar Land Rover announced today that it will develop its upcoming vehicles on the full-stack NVIDIA DRIVE Hyperion 8 platform, with DRIVE Orin delivering a wide spectrum of active safety, automated driving and parking Read article >

The post Reimagining Modern Luxury: NVIDIA Announces Partnership with Jaguar Land Rover appeared first on The Official NVIDIA Blog.

Categories
Misc

Adding node and cell names for tensorboard graph


I am trying to trace the tensorboard graph for https://github.com/promach/gdas

However, from what I can observe so far, the TensorBoard graph does not show user-understandable node and cell names, which makes it difficult to track down the connections within the graph.

Any suggestions?

Note: I am using torch.utils.tensorboard

https://preview.redd.it/3ou8jst067i81.png?width=1202&format=png&auto=webp&s=ee8adf7916b89371267d71a62518f3b042692c20
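Not a definitive answer, but one thing that often helps with torch.utils.tensorboard: the scopes shown in the graph are derived from the attribute names of the submodules the tracer walks through, so giving every cell and block a descriptive attribute name (or wrapping them in an OrderedDict-based nn.Sequential) tends to make the graph much easier to read. A minimal sketch, with made-up module names (NamedCell, TinyNet) that are not taken from the gdas repo:

from collections import OrderedDict

import torch
import torch.nn as nn
from torch.utils.tensorboard import SummaryWriter


class NamedCell(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # Named attributes become scopes like NamedCell/conv_3x3 in the graph.
        self.conv_3x3 = nn.Conv2d(channels, channels, 3, padding=1)
        self.norm = nn.BatchNorm2d(channels)
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(self.norm(self.conv_3x3(x)))


class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.stem = nn.Conv2d(3, 16, 3, padding=1)
        # OrderedDict keys ("cell_0", "cell_1") show up as scope names too.
        self.cells = nn.Sequential(OrderedDict([
            ("cell_0", NamedCell(16)),
            ("cell_1", NamedCell(16)),
        ]))
        self.head = nn.Linear(16, 10)

    def forward(self, x):
        x = self.cells(self.stem(x))
        return self.head(x.mean(dim=(2, 3)))


writer = SummaryWriter("runs/named_graph")
writer.add_graph(TinyNet(), torch.randn(1, 3, 32, 32))
writer.close()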

submitted by /u/promach

Categories
Misc

Atos Previews Energy-Efficient, AI-Augmented Hybrid Supercomputer

Stepping deeper into the era of exascale AI, Atos gave the first look at its next-generation high-performance computer. The BullSequana XH3000 combines Atos’ patented fourth-generation liquid-cooled HPC design with NVIDIA technologies to deliver both higher performance and greater energy efficiency. Giving users a choice of Arm or x86 computing architectures, it will come in versions using Read article >

The post Atos Previews Energy-Efficient, AI-Augmented Hybrid Supercomputer appeared first on The Official NVIDIA Blog.

Categories
Misc

Tensorflow error "W tensorflow/core/data/root_dataset.cc:163] Optimization loop failed: CANCELLED: Operation was cancelled"

Hi, I’m trying to train a GAN on the Fashion-MNIST dataset, but whenever I train it, it keeps printing the following each epoch:

W tensorflow/core/data/root_dataset.cc:163] Optimization loop failed: CANCELLED: Operation was cancelled
(the same warning is printed several times per epoch)

It doesn’t seem to actually train every time I look at the generated images. Here is the code:

import tensorflow as tf
from tensorflow import keras
import matplotlib.pyplot as plt
import numpy as np

(X_train, y_train), (X_test, Y_test) = keras.datasets.fashion_mnist.load_data()
X_train = X_train // 255.0

def plot_multiple_images(images, n_cols=None):
    n_cols = n_cols or len(images)
    n_rows = (len(images) - 1) // n_cols + 1
    if images.shape[-1] == 1:
        images = np.squeeze(images, axis=-1)
    plt.figure(figsize=(n_cols, n_rows))
    for index, image in enumerate(images):
        plt.subplot(n_rows, n_cols, index + 1)
        plt.imshow(image, cmap="binary")
        plt.axis("off")

# np.random.seed(42)
tf.random.set_seed(42)

codings_size = 30

generator = keras.models.Sequential([
    keras.layers.Dense(100, activation="selu", input_shape=[codings_size]),
    keras.layers.Dense(150, activation="selu"),
    keras.layers.Dense(28 * 28, activation="sigmoid"),
    keras.layers.Reshape([28, 28])
])
discriminator = keras.models.Sequential([
    keras.layers.Flatten(input_shape=[28, 28]),
    keras.layers.Dense(150, activation="selu"),
    keras.layers.Dense(100, activation="selu"),
    keras.layers.Dense(1, activation="sigmoid")
])
gan = keras.models.Sequential([generator, discriminator])

discriminator.compile(loss="binary_crossentropy", optimizer="rmsprop")
discriminator.trainable = False
gan.compile(loss="binary_crossentropy", optimizer="rmsprop")

batch_size = 32
dataset = tf.data.Dataset.from_tensor_slices(X_train).shuffle(1000)
dataset = dataset.batch(batch_size, drop_remainder=True).prefetch(1)
batch_size = 32

def train_gan(gan, dataset, batch_size, codings_size, n_epochs=50):
    generator, discriminator = gan.layers
    for epoch in range(n_epochs):
        print("Epoch {}/{}".format(epoch + 1, n_epochs))
        for X_batch in dataset:
            # phase 1 - training the discriminator
            noise = tf.random.normal(shape=[batch_size, codings_size])
            generated_images = generator(noise)
            X_batch = tf.cast(X_batch, tf.float32)
            X_fake_and_real = tf.concat([generated_images, X_batch], axis=0)
            y1 = tf.constant([[0.]] * batch_size + [[1.]] * batch_size)
            discriminator.trainable = True
            discriminator.train_on_batch(X_fake_and_real, y1)
            # phase 2 - training the generator
            noise = tf.random.normal(shape=[batch_size, codings_size])
            y2 = tf.constant([[1.]] * batch_size)
            discriminator.trainable = True
            gan.train_on_batch(noise, y2)  # not shown
        plot_multiple_images(generated_images, 8)
        plt.show()

train_gan(gan, dataset, batch_size, codings_size)

I’m using Aurélien Géron’s “Hands-On Machine Learning with Scikit-Learn, Keras & TensorFlow”.
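Two details in the snippet above may be worth checking (a guess, not something confirmed in the original post): X_train // 255.0 is integer floor division, which maps almost every uint8 pixel to 0, and setting discriminator.trainable = True again in phase 2 differs from the usual GAN pattern, which freezes the discriminator while the generator is trained. A minimal sketch of those two points, assuming the intent matches the book’s original example:

import tensorflow as tf
from tensorflow import keras

# 1) True division keeps the pixel values; floor division (//) turns almost
#    every uint8 pixel into 0, so the discriminator only ever sees black images.
(X_train, _), _ = keras.datasets.fashion_mnist.load_data()
X_train = X_train / 255.0  # not X_train // 255.0

# 2) In the usual two-phase loop the discriminator is frozen during phase 2 so
#    that gan.train_on_batch updates only the generator's weights:
#        discriminator.trainable = True    # phase 1: update the discriminator
#        discriminator.train_on_batch(X_fake_and_real, y1)
#        discriminator.trainable = False   # phase 2: freeze it (the post sets True here)
#        gan.train_on_batch(noise, y2)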

Here is the repository.

Hope this is the right sub for questions.

submitted by /u/General_wolffe

Categories
Misc

RNN with three or two LSTM layers? Is the first LSTM layer simultaneously the input layer?


Hey guys, I am struggling with counting the hidden layers of this RNN (see below). Does this model contain three or two hidden LSTM layers? I’m not sure. Is the first LSTM layer simultaneously the input layer, or is there an additional input layer before the first LSTM layer?

Can anyone help me?

https://preview.redd.it/68svyk5hx5i81.png?width=605&format=png&auto=webp&s=628761d42413223d7d8521c81ce438516092abc5
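A generic Keras sketch (not the model from the screenshot; the layer sizes are made up) that may help with the counting: in Keras, the input "layer" only declares the shape of the incoming data and has no weights, so it is usually not counted as a hidden layer, while every LSTM layer that follows is a hidden layer.

from tensorflow import keras

model = keras.Sequential([
    keras.Input(shape=(None, 8)),                  # input spec: shape only, no weights
    keras.layers.LSTM(32, return_sequences=True),  # hidden LSTM layer 1
    keras.layers.LSTM(32),                         # hidden LSTM layer 2
    keras.layers.Dense(1),                         # output layer
])
model.summary()  # the Input spec contributes no parameters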

submitted by /u/Necessary_Radish9885