Categories
Misc

Putting the AI in Retail: Walmart’s Grant Gelven on Prediction Analytics at Supercenter Scale

With only one U.S. state without a Walmart supercenter — and over 4,600 stores across the country — the retail giant’s prediction analytics work with data on an enormous scale. Grant Gelven, a machine learning engineer at Walmart Global Tech, joined NVIDIA AI Podcast host Noah Kravitz for the latest episode of the AI Podcast.

Categories
Misc

BMW Brings Together Art, Artificial Intelligence for Virtual Installation Using NVIDIA StyleGAN

BMW today unveiled a virtual art installation that projects AI-generated artwork onto a virtual rendition of the automaker’s 8 Series Gran Coupe. 

Dubbed “The Ultimate AI Masterpiece,” the installation harnessed NVIDIA StyleGAN — a generative model for high-resolution images — to create original artwork projection-mapped onto the virtual vehicle. The project debuts in conjunction with the contemporary art festival Frieze New York, and marks the 50th year of cultural engagement by the BMW Group.

“For 50 years, BMW has supported arts and culture through numerous initiatives as a way to engage and interact with consumers around the world in an authentic way,” said Uwe Dreher, vice president of marketing, BMW of North America. “As we continue these efforts into 2021, and look for new and creative ways to engage audiences, we shift to a virtual setting where we are combining centuries-old art and the latest AI technology to create something completely new and exciting.”

Collaborators Gary Yeh, founder of the art media company ArtDrunk, and Nathan Shipley, director of creative technology at Goodby, Silverstein & Partners, trained NVIDIA StyleGAN on 50,000 images of art spanning nine centuries, as well as 50 contemporary works from artists BMW has worked with in past years. The trained model merges learnings from classical art with the styles of the contemporary artists.

“AI is an emerging medium of creative expression. It’s a fascinating space where art meets algorithm,” said Shipley. “Combining the historical works with the curated modern works and projecting the evolving images onto the 8 Series Gran Coupe serves as a direct nod to BMW’s history of uniting automobiles, art, and technology.” 

The project uses the BMW car as a canvas to showcase each creator’s style — like that of South Korean charcoal artist Lee Bae. 

“In this case the AI learned from Lee Bae’s work. In a way, it sees those textures,” Shipley said. “And then on its own the AI generates this evolving stream of new textures. They’re informed by his work, but they’re also unique.”

Developed by NVIDIA Research, StyleGAN has been adopted for digital storytelling, art exhibits, manga illustrations and reimagined historical portraits.

For more AI-inspired artwork, visit the AI Art Gallery featured at the recent NVIDIA GPU Technology Conference.

Categories
Misc

Streaming Everything with NVIDIA Rivermax

NVIDIA Rivermax 1.5, the newest release of the IP-based video and data streaming library, includes key features and capabilities enabling performance boosts and quicker integrations.

In 2020, many of us adopted a work-from-home routine, and this new norm has been stressing IT networks. It shouldn’t be a surprise that the sudden boost in remote working drives the need for a more dynamic IT environment, one that can pull in resources on demand.

Over the past few years, we’ve focused on the Media & Entertainment (M&E) market, supporting the global industry as it evolves from proprietary SDI to cost-effective Ethernet/IP infrastructure solutions. NVIDIA technologies are enabling M&E to take the next transformational step toward cloud computing, while meeting compliance with the most stringent SMPTE ST-2110-21 specification requirements.

On the journey to modernize M&E network interconnect, we introduced NVIDIA Rivermax, an optimized, standard-compliant software library API for streaming data. Rivermax software runs on NVIDIA ConnectX-5 or later network adapters, enabling the use of common off-the-shelf (COTS) servers for streaming SD, HD, and up to Ultra-HD video flows. The Rivermax-ConnectX-5 adapter card combination also enables compliance with M&E specifications, such as the SMPTE 2110-21; reduces CPU utilization for video data streaming; and removes bottlenecks for the highest throughput. It can reach 82 Gbps of streamed video with a single CPU core.

As our partners have rolled out new Rivermax-based, full-IP solutions rigorously tested in their labs, we’re excited to share the fruits of these collaborative investments in Rivermax 1.5, the latest release of the streaming library. Rivermax 1.5 includes key features and capabilities enabling performance boosts and quicker integrations. One of these new features allows Rivermax-accelerated applications to stream not only video, audio, and ancillary data but other data stream formats as well, enabling Rivermax accelerations and CPU savings in many new markets and applications:

  • Compressed video
  • Healthcare imaging (DICOM-RTV)
  • Cloud gaming
  • Autonomous car sensor streaming (video/LiDAR/RADAR)
  • And more

Another good piece of news is that Rivermax 1.5 recently passed the JT-NM Tested program (March 16 – 20, 2020), allowing for integration and interoperability with multiple other market vendors.

Rivermax 1.5 release contents

The Rivermax 1.5 release contains the following updates and features:

  • Virtualized Rivermax over VMware ESXi and Linux OpenStack (currently in beta-level support)
  • Rivermax API updates:
    • Replaced TX pause API with a flag to commit API
    • Changed structure of in-buffer attributes
    • Changed function signature of in-query buffer API
  • New 802.1Q VLAN tagging support
  • New SDK code examples:
    • Media sender:
      • Real video content, interlaced, 59.94 and 29.97 fps
    • Media receiver:
      • GPU-CUDA support for color space conversion (from YCbCr to RGB): display or play back a video stream on screen or through X11 over SSH (see the color space conversion sketch after this list)
      • Interlace video formats
      • 2022-7 Rx SW sample code to get you started quickly on software implementation of 2022-7, which will be offloaded to ConnectX-6 Dx hardware with future releases
  • Generic API (beta version): For streaming any type of data. Get all the goodies of Rivermax, like traffic shaping (accurate packet pacing) and high bandwidth for any type of UDP-based data stream, with low CPU utilization and support for both Linux and Windows (see the pacing sketch after this list).
  • Rivermax for Mellanox ConnectX-6 Dx, introduced with beta-level support on Linux (with feature parity to ConnectX-5)
  • NVIDIA-Jetson platform software image (as presented at IBC2019)
    • Based on Rivermax 1.5 release
    • Demos running Rivermax on NVIDIA-Jetson platform
    • Includes sender and receiver examples
    • GPU is integrated with the Media_receiver for both CSC and on-screen rendering
    • AnalyzeX (SMPTE ST2110-20 verification software) while running video viewers
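
To make two of the items above concrete: the media receiver’s color space conversion is the standard YCbCr-to-RGB matrix. Here is a minimal CPU sketch in NumPy, assuming full-range BT.601 coefficients (the sample’s GPU-CUDA kernel and exact matrix may differ):

    import numpy as np

    def ycbcr_to_rgb(ycbcr):
        # ycbcr: uint8 array of shape (height, width, 3).
        y = ycbcr[..., 0].astype(np.float32)
        cb = ycbcr[..., 1].astype(np.float32)
        cr = ycbcr[..., 2].astype(np.float32)
        # Full-range BT.601 coefficients (an assumption; broadcast video often
        # uses limited-range BT.709, which needs different constants).
        r = y + 1.402 * (cr - 128.0)
        g = y - 0.344136 * (cb - 128.0) - 0.714136 * (cr - 128.0)
        b = y + 1.772 * (cb - 128.0)
        return np.clip(np.stack([r, g, b], axis=-1), 0.0, 255.0).astype(np.uint8)

And the packet pacing behind the generic API can be illustrated with a plain UDP socket. None of the calls below are Rivermax APIs; Rivermax paces in hardware far more accurately than this software loop, which only shows the idea of spacing packets evenly instead of bursting:

    import socket
    import time

    PACKET_SIZE = 1400            # payload bytes per packet (assumption)
    TARGET_BITRATE = 50_000_000   # 50 Mb/s (assumption)
    INTERVAL = PACKET_SIZE * 8 / TARGET_BITRATE  # seconds between packets

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    payload = bytes(PACKET_SIZE)
    next_send = time.perf_counter()
    for _ in range(10_000):
        sock.sendto(payload, ("127.0.0.1", 5004))
        next_send += INTERVAL
        delay = next_send - time.perf_counter()
        if delay > 0:
            time.sleep(delay)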

Want to discuss Rivermax? Comment below or reach out to your local account/support team.

Here’s to seeing you at the next M&E show!

Categories
Misc

Easiest way to get/set flattened array of trainable weights (and biases)

For example, I want to be able to do something like this:

    weights = model.get_trainable_weights()
    weights *= 2
    model.set_trainable_weights(weights)

I’ve googled it, and it seems like getting the trainable weights might be pretty straightforward, but I’m not finding anything on supplying a flat array of weights for the model to set.

Right now I’m manually tracking the shapes, calculating which part of the flat array belongs to each tensor, then taking that subset and reshaping it. It seems more difficult than it needs to be, plus it’s also pretty expensive computationally, taking as long as a tenth of a second just to set the weights.
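
Roughly what I’m doing now, as a minimal sketch (both helpers are my own, not part of the Keras API):

    import numpy as np
    import tensorflow as tf

    def get_trainable_weights(model):
        # Flatten every trainable variable and concatenate into one 1-D vector.
        return np.concatenate([v.numpy().ravel() for v in model.trainable_variables])

    def set_trainable_weights(model, flat):
        # Walk the variables in the same order, slicing and reshaping as we go.
        # Assumes all trainable variables share the same dtype as `flat`.
        offset = 0
        for v in model.trainable_variables:
            shape = v.shape.as_list()
            size = int(np.prod(shape))
            v.assign(flat[offset:offset + size].reshape(shape))
            offset += size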

submitted by /u/Yogi_DMT

Categories
Misc

Handy data augmentation toolkit for image classification put in a single efficient TensorFlow op

submitted by /u/lnstadrum

Categories
Offsites

Do Wide and Deep Networks Learn the Same Things?

A common practice to improve a neural network’s performance and tailor it to available computational resources is to adjust the architecture depth and width. Indeed, popular families of neural networks, including EfficientNet, ResNet and Transformers, consist of a set of architectures of flexible depths and widths. However, beyond the effect on accuracy, there is limited understanding of how these fundamental choices of architecture design affect the model, such as the impact on its internal representations.

In “Do Wide and Deep Networks Learn the Same Things? Uncovering How Neural Network Representations Vary with Width and Depth”, we perform a systematic study of the similarity between wide and deep networks from the same architectural family through the lens of their hidden representations and final outputs. In very wide or very deep models, we find a characteristic block structure in their internal representations, and establish a connection between this phenomenon and model overparameterization. Comparisons across models demonstrate that those without the block structure show significant similarity between representations in corresponding layers, but those containing the block structure exhibit highly dissimilar representations. These properties of the internal representations in turn translate to systematically different errors at the class and example levels for wide and deep models when they are evaluated on the same test set.

Comparing Representation Similarity with CKA
We extended prior work on analyzing representations by leveraging our previously developed Centered Kernel Alignment (CKA) technique, which provides a robust, scalable way to determine the similarity between the representations learned by any pair of neural network layers. CKA takes as input the representations (i.e., the activation matrices) from two layers, and outputs a similarity score between 0 (not at all similar) and 1 (identical representations).
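
For reference, a minimal sketch of the linear variant of CKA, computed directly from two activation matrices (the linear, full-batch form is a simplification; see the paper for the exact estimators used):

    import numpy as np

    def linear_cka(X, Y):
        # X: (num_examples, num_features_X); Y: (num_examples, num_features_Y).
        # Center each feature so the score ignores mean offsets.
        X = X - X.mean(axis=0, keepdims=True)
        Y = Y - Y.mean(axis=0, keepdims=True)
        # Linear CKA = ||X^T Y||_F^2 / (||X^T X||_F * ||Y^T Y||_F), in [0, 1].
        numerator = np.linalg.norm(X.T @ Y, ord="fro") ** 2
        denominator = (np.linalg.norm(X.T @ X, ord="fro")
                       * np.linalg.norm(Y.T @ Y, ord="fro"))
        return numerator / denominator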

We apply CKA to a family of ResNets of varying depths and widths, trained on common benchmark datasets (CIFAR-10, CIFAR-100 and ImageNet), and use representation heatmaps to illustrate the results. The x and y axes of each heatmap index the layers of the model(s) in consideration, going from input to output, and each entry (i, j) is the CKA similarity score between layer i and layer j.

We use CKA to compute the representation similarity for all pairs of layers within a single model (i.e., when network 1 and network 2 are identical), and across models (i.e., when network 1 and network 2 are trained with different random initializations, or have different architectures altogether).
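
In code, each heatmap then reduces to a double loop over layers, reusing linear_cka from the sketch above; layer_activations is a hypothetical helper returning a layer’s (num_examples, num_features) activation matrix:

    # model, layers, and batch are assumed to be defined by the caller.
    activations = [layer_activations(model, layer, batch) for layer in layers]
    num_layers = len(activations)
    heatmap = np.zeros((num_layers, num_layers))
    for i in range(num_layers):
        for j in range(num_layers):
            heatmap[i, j] = linear_cka(activations[i], activations[j])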

Below is an example of the resulting heatmap when we compare representations of each layer to every other layer within a single ResNet of depth 26 and width multiplier 1. In the design convention used here, the stated depth only refers to the number of convolutional layers in the network, but we analyze all layers present, and the width multiplier applies to the number of filters in each convolution. Notice the checkerboard pattern in the heatmap, which is caused by skip connections (shortcuts between layers) in the architecture.

The Emergence of the Block Structure
What stands out from the representation heatmaps of deeper or wider networks is the emergence of a large set of consecutive layers with highly similar representations, which appears in the heatmaps as a yellow square (i.e., a region with high CKA scores). This phenomenon, which we call the block structure, suggests that the underlying layers may not be as efficient at progressively refining the network’s representations as we expect. Indeed, we show that the task performance becomes stagnant inside the block structure, and that it is possible to prune some underlying layers without affecting the final performance.

Block structure — a large, contiguous set of layers with highly similar representations — emerges with increasing width or depth. Each heatmap panel shows the CKA similarity between all pairs of layers within a single neural network. While its size and position can vary across different training runs, the block structure is a robust phenomenon that arises consistently in larger models.

With additional experiments, we show that the block structure has less to do with absolute model size than with the size of the model relative to the size of the training dataset. As we reduce the training dataset size, the block structure starts to appear in shallower and narrower networks:

With increasing network width (towards the right along each row) and decreasing dataset size (down each column), the relative model capacity (with respect to a given task) is effectively inflated, and the block structure begins to appear in smaller models.

Through further analysis, we are also able to demonstrate that the block structure arises from preserving and propagating the dominant principal components of its underlying representations. Refer to our paper for more details.

Comparing Representations Across Models
Going further, we study the implications of depth and width on representations across models of different random initializations and different architectures, and find that the presence of block structure makes a significant difference in this context as well. Despite having different architectures, wide and deep models without the block structure do exhibit representation similarity with each other, with corresponding layers broadly being of the same proportional depth in the model. However, when the block structure is present, its representations are unique to each model. This suggests that despite having similar overall performance, each wide or deep model with the block structure picks up a unique mapping from the input to the output.

For smaller models (e.g., ResNet-38 1×), CKA across different initializations (off the diagonal) closely resembles CKA within a single model (on the diagonal). In contrast, representations within the block structure of wider and deeper models (e.g., ResNet-38 10×, ResNet-164 1×) are highly dissimilar across training runs.

Error Analysis of Wide and Deep Models
Having explored the properties of the learned representations of wide and deep models, we next turn to understanding how they influence the diversity of the output predictions. We train populations of networks of different architectures and determine on which test set examples each architecture configuration tends to make errors.

On both CIFAR-10 and ImageNet datasets, wide and deep models that have the same average accuracy still demonstrate statistically significant differences in example-level predictions. The same observation holds for class-level errors on ImageNet, with wide models exhibiting a small advantage in identifying classes corresponding to scenes, and deep networks being relatively more accurate on consumer goods.

Per-class differences on ImageNet between models with increased width (y-axis) or depth (x-axis). Orange dots reflect differences between two sets of 50 different random initializations of ResNet-83 (1×).

Conclusions
In studying the effects of depth and width on internal representations, we uncover a block structure phenomenon, and demonstrate its connection to model capacity. We also show that wide and deep models exhibit systematic output differences at class and example levels. Check out the paper for full details on these results and additional insights! We’re excited about the many interesting open questions these findings suggest, such as how the block structure arises during training, whether the phenomenon occurs in domains beyond image classification, and ways these insights on internal representations can inform model efficiency and generalization.

Acknowledgements
This is a joint work with Maithra Raghu and Simon Kornblith. We would like to thank Tom Small for the visualizations of the representation heatmap.

Categories
Misc

AI Gone Global: Why 20,000+ Developers from Emerging Markets Signed Up for GTC

Major tech conferences are typically hosted in highly industrialized countries. But the appetite for AI and data science resources spans the globe — with an estimated 3 million developers in emerging markets. Our recent GPU Technology Conference — virtual, free to register, and featuring 24/7 content — for the first time featured a dedicated track on emerging markets.

Categories
Misc

NVIDIA Merlin’s Latest Enhancements Streamline Recommender Workflows with .5 Release

The latest Merlin .5 update includes a data generator for training, multi-GPU dataloader, and initial support for session-based recommenders.

Billions of people in the world are online. Many discrete moments online are spent browsing, shopping, streaming entertainment, or engaging with social media. Each discrete moment, or session, online is an opportunity for recommenders to make informed decisions a bit easier, faster, and more personalized for an individual person. Yet, at scale, this translates into recommenders potentially supporting billions of people interacting with trillions of things online.

At GTC Spring 2021, NVIDIA shared how retail, entertainment, on-demand, and social companies, including early adopters of NVIDIA Merlin, are building and utilizing recommenders at scale. Merlin’s open source components include NVTabular for ETL, HugeCTR for training, and Triton for inference. The NVIDIA Merlin team continues to ingest feedback from early adopters to streamline recommender workflows for machine learning engineers. The latest Merlin .5 update includes a data generator for training, a multi-GPU dataloader, and initial support for session-based recommenders.

Supporting Experimentation and Streamlining Recommender Workflows 

Ongoing experimentation is vital for fine-tuning recommender model performance before models are deployed to production. A configurable data generator, producing synthetic data, helps machine learning engineers set the probability distribution of categorical features to uniform or power-law without modifying the configuration file. Merlin HugeCTR’s new data generator supports categorical data and is particularly helpful for benchmarking and research purposes. 
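
As a rough illustration of what a power-law categorical column looks like, here is a toy NumPy sketch (not the HugeCTR data generator itself; the names and defaults are made up):

    import numpy as np

    def synthetic_categories(num_rows, cardinality, alpha=1.2, seed=0):
        # Zipf-like power-law popularity: category k gets weight 1 / k^alpha,
        # so a handful of categories dominate, as in real interaction logs.
        rng = np.random.default_rng(seed)
        weights = 1.0 / np.arange(1, cardinality + 1) ** alpha
        return rng.choice(cardinality, size=num_rows, p=weights / weights.sum())

    item_ids = synthetic_categories(1_000_000, cardinality=10_000)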

Merlin .5’s inclusion of a multi-GPU dataloader was based on feedback from Merlin early adopters and also helps streamline workflows. Machine learning engineers are able to use the Merlin NVTabular TensorFlow (TF) dataloader for multi-GPU training on a single node using TF Distributed, as sketched below. Merlin NVTabular utilizes Dask and Dask-cuDF to scale easily to multi-GPU and multi-node, as well as to provide a high-performance, recommender-specific ETL pipeline.
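
A minimal sketch of single-node, multi-GPU training with TF Distributed; the toy Keras model and tf.data pipeline below are placeholders standing in for a real model and the NVTabular TF dataloader:

    import numpy as np
    import tensorflow as tf

    strategy = tf.distribute.MirroredStrategy()  # one replica per visible GPU

    with strategy.scope():
        model = tf.keras.Sequential([
            tf.keras.layers.Dense(64, activation="relu", input_shape=(16,)),
            tf.keras.layers.Dense(1, activation="sigmoid"),
        ])
        model.compile(optimizer="adam", loss="binary_crossentropy")

    # Toy stand-in data; scale the global batch size with the replica count.
    x = np.random.rand(1024, 16).astype("float32")
    y = np.random.randint(0, 2, size=(1024, 1)).astype("float32")
    dataset = tf.data.Dataset.from_tensor_slices((x, y)).batch(
        64 * strategy.num_replicas_in_sync)

    model.fit(dataset, epochs=1)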

Merlin Session-Based Recommender Support: Just a Beginning 

Data scientists and machine learning engineers at the forefront of e-commerce, news, and social media recommender work have added, or are considering adding, session-based recommenders. While collaborative filtering and content-based filtering are established recommender methods, session-based recommenders are gaining attention due to the potentially increased accuracy of predictions when users’ interests are dynamic and specific to a shorter time frame (i.e., within a session). With Merlin .5, NVTabular provides the new preprocessing functionality needed to transform and group data for session-based recommenders.
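
Conceptually, that preprocessing groups raw interaction logs into per-session item sequences. A plain-pandas sketch of the idea (not the actual NVTabular ops):

    import pandas as pd

    events = pd.DataFrame({
        "session_id": [1, 1, 1, 2, 2],
        "item_id":    [10, 42, 7, 42, 3],
        "timestamp":  [1, 2, 3, 1, 2],
    })

    # Sort by time, then collect each session's ordered item sequence,
    # the shape of data that session-based recommenders train on.
    sessions = (events.sort_values("timestamp")
                      .groupby("session_id")["item_id"]
                      .agg(list))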

Download and Try Merlin’s Latest Update

The latest preprocessing and training enhancements to NVIDIA Merlin reaffirm NVIDIA’s commitment to democratizing and accelerating recommender workflows. As machine learning engineers and data scientists use a hybrid of libraries, packages, tools, and techniques to create effective and impactful recommenders, Merlin components are designed to be easy to use and interoperable with existing recommender workflows. 

To discover hands-on how Merlin components streamline recommender workflows, download and try Merlin NVTabular for ETL, HugeCTR for training, and Triton for inference.

Categories
Misc

Creating Tensorflow Dataset for Object Recognition in Keras

Hi,

I was wondering if someone could aid me in solving this problem. I have been following this tutorial, which uses the COCO dataset from tfds.

I am interested in using a different dataset, but I am having trouble adapting the code.

My dataset consists of a number of images, with corresponding bounding box annotations in a .csv. It can be summarized as this: [filename, xmin, ymin, xmax, ymax, class].

The code in this tutorial uses (to my understanding) a tensor dataset with the format [image_array, xmin, ymin, xmax, ymax, class].

How can I load this data in this format? I have been having a great deal of trouble finding any resources. Any help is greatly appreciated! I will mention how I have been approaching this below.

Summary of things applied:

Essentially, I have been able to load everything into a pandas dataframe with 6 columns, consisting of [filename, xmin, ymin, xmax, ymax, class]. However, I feel this is inefficient, and I cannot get the last step (conversion to a tensor) to work.

I try: d = tf.data.Dataset.from_tensors((df.values))

and get: ValueError: Failed to convert a NumPy array to a Tensor (Unsupported object type numpy.ndarray).
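
From what I can tell, the problem is that df.values on mixed string/float columns produces an object-dtype array, which TF can’t convert. Building the dataset column by column should avoid that. Here is my untested sketch (annotations.csv stands in for my file):

    import pandas as pd
    import tensorflow as tf

    df = pd.read_csv("annotations.csv")  # [filename, xmin, ymin, xmax, ymax, class]

    dataset = tf.data.Dataset.from_tensor_slices((
        df["filename"].tolist(),
        df[["xmin", "ymin", "xmax", "ymax"]].values.astype("float32"),
        df["class"].tolist(),
    ))

    def load_image(filename, bbox, label):
        # Decode images on the fly so only filenames are held in memory.
        image = tf.io.decode_jpeg(tf.io.read_file(filename), channels=3)
        return image, bbox, label

    dataset = dataset.map(load_image, num_parallel_calls=tf.data.AUTOTUNE)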

submitted by /u/Puzzled_Supports

Categories
Misc

How can I do grid to grid transitions with Tensorflow?

This is a complete newbie question. I’ve used TensorFlow before, but only with Keras for classification. I’m not all that knowledgeable about TensorFlow’s capabilities beyond that.

I have a problem where I want to compute the next step in a sequence. The input and output data are both in the form of 2D grids.

eg:

0 1 0 0 0 0 0 1 0 -> 1 1 1 0 1 0 0 0 0 

The problem is a bit more complex than that, but I think that’s the simplest example. The actual domain is hydrodynamics: how waves and currents interact with objects, land, and ships. This can be done with a simulation, but that is extremely computationally expensive. If I can get a “near enough” result in a much shorter time span, that is acceptable.
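
To make that concrete, I imagine something like a small convolutional network that maps one grid to the next time step (an untested sketch; the grid and layer sizes are placeholders):

    import tensorflow as tf

    H, W = 64, 64  # grid dimensions (placeholder)

    # Fully convolutional net: input grid in, predicted next-step grid out.
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu",
                               input_shape=(H, W, 1)),
        tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu"),
        tf.keras.layers.Conv2D(1, 3, padding="same"),
    ])
    model.compile(optimizer="adam", loss="mse")

    # model.fit(grids_t, grids_t_plus_1, ...)  # consecutive simulation frames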

There are many sub problems within this domain, and practically any simulation solution is a compromise.

I have been thinking about this problem since seeing this video: https://www.youtube.com/watch?v=2Bw5f4vYL98& – but I’m no AI researcher, so the paper is beyond my level.

Do you think this is a problem that Tensorflow can be used to tackle?

submitted by /u/fgyoysgaxt