Categories
Misc

TFlite model maker vs Tensorflow object detection api for edge inference

I have used the TensorFlow Object Detection API ( https://github.com/tensorflow/models/tree/master/research/object_detection ) for transfer learning of object detection models over the past two years. In most cases, I have used the trained models both in full TensorFlow (not TFLite) on desktop during development and in TFLite, after conversion, to run on edge devices.

Some of the edge applications require a high FPS and therefore need to accelerate inference with a Coral Edge TPU. A constant issue with this approach has been that most model architectures in the TensorFlow object detection zoo cannot be quantized and used with the Coral TPU. Some SSD models even fail with an exception when converting them to TFLite without quantization, although the documentation states that SSD models are supported.
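
For context, the step that usually breaks is the post-training full-integer quantization required for the Edge TPU. A rough sketch of that conversion, in case it is useful (the SavedModel path and calibration data below are placeholders, and the resulting .tflite still has to be run through edgetpu_compiler):

    import numpy as np
    import tensorflow as tf

    def representative_dataset():
        # Placeholder calibration data; in practice, yield real preprocessed frames.
        for _ in range(100):
            yield [np.random.rand(1, 320, 320, 3).astype(np.float32)]

    converter = tf.lite.TFLiteConverter.from_saved_model("exported_model/saved_model")  # placeholder path
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    converter.representative_dataset = representative_dataset
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
    converter.inference_input_type = tf.uint8
    converter.inference_output_type = tf.uint8

    with open("model_quant.tflite", "wb") as f:
        f.write(converter.convert())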

I saw that the TensorFlow Lite Model Maker ( https://www.tensorflow.org/lite/tutorials/model_maker_object_detection ) now supports transfer learning of EfficientDet models, including quantization and compilation for Coral. TFLite Model Maker also supports saving to the “saved model” format. If I am not mistaken, it should then be possible to save the trained model both as .tflite for use with TFLite and Coral on edge, and as a saved_model for use with TensorFlow on desktop during development.
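
If that holds, a sketch of what I have in mind (the CSV path and epoch count are placeholders; per the tutorial, the TFLite export for the EfficientDet-Lite specs is full-integer quantized by default):

    from tflite_model_maker import model_spec, object_detector
    from tflite_model_maker.config import ExportFormat

    spec = model_spec.get("efficientdet_lite0")
    train_data, val_data, test_data = object_detector.DataLoader.from_csv("annotations.csv")

    model = object_detector.create(train_data, model_spec=spec, epochs=50,
                                   validation_data=val_data, train_whole_model=True)

    # One training run, two artifacts: a quantized .tflite for the edge
    # and a SavedModel for desktop development.
    model.export(export_dir="export",
                 export_format=[ExportFormat.TFLITE, ExportFormat.SAVED_MODEL])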

Does anyone have experience to share from working with TensorFlow Lite Model Maker for object detection and then deploying on edge with a Coral TPU? It would be valuable to hear what works well and what surprises/bugs to expect.

Thanks!

submitted by /u/NilViktor

Categories
Misc

TFLite model showing many bad detections, should detect the yellow box and the tape markers on the floor. Same detections no matter what video I test. Not sure where to look to solve this. Any help appreciated!

submitted by /u/kbennett1999
Categories
Misc

Discover the Latest in Machine Learning, Graphics, HPC, and IoT at AWS re:Invent

Split screen of a man and his avatar: NVIDIA created content for AWS re:Invent, helping developers learn more about applying the power of GPUs to reach their goals faster and more easily.

See the latest innovations spanning from the cloud to the edge at AWS re:Invent. Plus, learn more about the NVIDIA NGC catalog—a comprehensive collection of GPU-optimized software.

Working closely together, NVIDIA and AWS developed a session and workshop focused on learning more about NVIDIA GPUs and providing hands-on training on NVIDIA Jetson modules.

Register now for the virtual AWS re:Invent.

More information 

How to Select the Right Amazon EC2 GPU Instance and Optimize Performance for Deep Learning

Session ID: CMP328-S

Get all the information you need to make an informed choice for which Amazon EC2 NVIDIA GPU instance to use and how to get the most out of it by using GPU-optimized software for your training and inference workloads.

This NVIDIA-sponsored session—delivered by Shashank Prasanna, an AI and ML evangelist at AWS—focuses on helping engineers, developers, and data scientists solve challenging problems with ML.

Building a people counter with anomaly detection using AWS IoT and ML

Session ID: IOT306 

Get started with AWS IoT Greengrass v2, NVIDIA DeepStream, and Amazon SageMaker Edge Manager for computer vision in this workshop. Learn how to build a video analytics pipeline, create a people counter, and deploy it to an NVIDIA Jetson Nano edge device.

This workshop is being delivered by Ryan Vanderwerf, Partner Solutions Architect, and Yuxin Yang, AI/ML IoT Architect.

Virtual content

APN.TV Segment: Simplified AI Model Deployment with Triton Inference Server

Join this session to learn how to use NVIDIA Triton in your AI workflows and maximize the AI performance on your GPUs and CPUs.

NVIDIA Triton is open source inference-serving software for deploying deep learning and ML models from any framework (TensorFlow, TensorRT, PyTorch, OpenVINO, ONNX Runtime, XGBoost, or custom) on GPU- or CPU-based infrastructure.
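
As a rough illustration of the client side, the sketch below queries a running Triton server with the Triton Python HTTP client; the model name ("resnet50") and tensor names ("input", "output") are placeholders that must match the deployed model's config.pbtxt:

    import numpy as np
    import tritonclient.http as httpclient

    # Connect to a Triton server assumed to be running locally on the default HTTP port.
    client = httpclient.InferenceServerClient(url="localhost:8000")

    # Build the request; name, shape, and datatype must match the deployed model's config.
    infer_input = httpclient.InferInput("input", [1, 3, 224, 224], "FP32")
    infer_input.set_data_from_numpy(np.random.rand(1, 3, 224, 224).astype(np.float32))
    infer_output = httpclient.InferRequestedOutput("output")

    result = client.infer(model_name="resnet50", inputs=[infer_input], outputs=[infer_output])
    print(result.as_numpy("output").shape)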

Shankar Chandrasekaran, Sr. Product Marketing Manager at NVIDIA, discusses model deployment challenges, how NVIDIA Triton simplifies deployment and maximizes performance of AI models, how to use NVIDIA Triton on AWS, and a customer use case.

APN.TV Segment: Build ML Solutions Faster with NVIDIA NGC Catalog on AWS Marketplace

In this session, Abhilash Somasamudramath, NVIDIA Product Manager of AI Software, will show how to use free GPU-optimized software available on the NGC catalog in AWS Marketplace to achieve your ML goals.

ML has transformed many industries as companies adopt AI to improve operational efficiencies, increase customer satisfaction, and gain a competitive edge. However, the process of training, optimizing, and running ML models to build AI-powered applications is complex and requires expertise. 

The NVIDIA NGC catalog provides GPU-optimized AI software, including frameworks, pretrained models, and industry-specific software development kits (SDKs) that accelerate workflows. This software allows data engineers, data scientists, developers, and DevOps teams to focus on building and deploying their AI solutions faster. 

theCUBE interview with Ian Buck, GM and Vice President of Accelerated Computing at NVIDIA

Hear Ian Buck discuss the latest trends in ML and AI, how NVIDIA is partnering with AWS to deliver accelerated computing solutions, and how NVIDIA makes accessing AI solutions easier than ever.

Categories
Misc

Implementing High Performance Matrix Multiplication Using CUTLASS v2.8

High-performance CUTLASS template abstractions support matrix multiply operations (GEMM), convolution, and improved Strided-DGrad.

NVIDIA continues to enhance CUTLASS with extensive support for mixed-precision computations, providing specialized data-movement and multiply-accumulate abstractions. Today, NVIDIA is announcing the availability of CUTLASS version 2.8.

Download the free CUTLASS v2.8 software.

What’s new

  • Emulated single-precision GEMM and Convolution (up to 48 TFLOPS)
  • Grouped GEMM concept
  • Improved Strided-DGrad

See the CUTLASS Release Notes for more information.

About CUTLASS

CUTLASS is a collection of CUDA C++ template abstractions for implementing high-performance matrix-multiplication (GEMM) at all levels and scales within CUDA. It incorporates strategies for hierarchical decomposition and data movement similar to those used to implement cuBLAS.

CUTLASS decomposes these “moving parts” into reusable and modular software components abstracted by C++ template classes. These thread-wide, warp-wide, block-wide, and device-wide primitives can be specialized and tuned via custom tiling sizes, data types, and other algorithmic policies. The resulting flexibility simplifies their use as building blocks within custom kernels and applications.

To support a wide variety of applications, CUTLASS provides extensive support for mixed-precision computations, providing specialized data-movement and multiply-accumulate abstractions for:

  • Half-precision floating point (FP16), BFloat16 (BF16), and Tensor Float 32 (TF32) data types.
  • Single-precision floating point (FP32) data type.
  • Double-precision floating point (FP64) data type.
  • Integer data types (4b and 8b).
  • Binary data types (1b).

Furthermore, CUTLASS demonstrates warp-synchronous matrix multiply operations targeting the programmable, high-throughput Tensor Cores implemented on NVIDIA Volta, Turing, and Ampere architectures.

CUTLASS implements high-performance convolution (implicit GEMM). Implicit GEMM is the formulation of a convolution operation as a GEMM. This allows CUTLASS to build convolutions by reusing highly optimized warp-wide GEMM components and below.


Categories
Misc

Pretrained vision transformers in TensorFlow.

Have you been looking for pretrained vision transformer models in TensorFlow? Have you been frustrated that pretrained models are available only in PyTorch? And JAX…

Let me introduce TensorFlow Image Models (tfimm), a TF port of the PyTorch timm library, which in v0.1.1 provides 37 pretrained vision transformers of the ViT and DeiT varieties.

The list of available models will grow in upcoming releases.
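
A minimal usage sketch (the model name below is just an example; tfimm.list_models shows what is actually available in the installed version):

    import tfimm

    print(tfimm.list_models(pretrained="timm"))   # browse the available architectures
    model = tfimm.create_model("vit_base_patch16_224", pretrained="timm")  # example model name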

submitted by /u/drbottich

Categories
Misc

Using TensorFlow with Anaconda [help]

I’ve been trying to set up my TensorFlow env and am having difficulty. For my project I need tensorflow, scikit-learn, matplotlib, pandas, and numpy.

When I go the conda-forge route and try to run my .py file, I get different errors for each iteration of the env that I have.

Some highlight tensorflow-estimator; I’ve tried different versions of that module, as well as different versions of Python and different versions of TensorFlow.

Each environment yields a different error. So instead of trying to iterate through each version, I wanted to see if anyone had any clues as to what I may be doing wrong.

Or

If you use Anaconda with TensorFlow and the other packages I’ve mentioned, a breakdown of your env/version numbers for the corresponding packages would be great!

Thanks in advance!

Error I am getting when using an env that installs packages strictly through Anaconda Navigator (I have also tried pip, the conda terminal, etc.):

ImportError: cannot import name 'MomentumParameters' from 'tensorflow.python.tpu.tpu_embedding' (C:\Users\me\anaconda3\envs\tf_env\lib\site-packages\tensorflow\python\tpu\tpu_embedding.py)
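
In case it helps with answers, here is a quick, hedged check to confirm whether tensorflow and tensorflow-estimator in the active env have matching versions (a mismatch between the two is one possible cause of import errors inside tensorflow.python.*):

    from importlib.metadata import version

    # Print the installed versions; tensorflow and tensorflow-estimator should
    # normally share the same minor version (e.g. both 2.x for the same x).
    print("tensorflow:", version("tensorflow"))
    print("tensorflow-estimator:", version("tensorflow-estimator"))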

submitted by /u/Pos1tivity

Categories
Misc

Reach New Frontiers of Immersive Production with NVIDIA CloudXR and HTC VIVE Focus 3

Woman wearing the HTC VIVE Focus 3 headset: HTC released a CloudXR client to support their VIVE Focus 3, which provides a “best of both worlds” solution to the difficult tradeoffs VR developers face.

Whether building immersive theatrical experiences or virtual training solutions, XR development continues to push the limits of both content and device performance. Often, this means having to compromise on either fidelity or mobility. But with the NVIDIA CloudXR streaming solution, this is no longer the case.

This month at NVIDIA GTC, HTC announced the release of an NVIDIA CloudXR client to support their VIVE Focus 3, available now on GitHub. This provides a “best of both worlds” solution to the difficult tradeoffs VR developers typically face.

“High fidelity, cloud-based VR streaming represents the next big evolution in the XR industry, and we’re excited to continue working closely with the teams at NVIDIA to keep pushing the industry forwards,” said Shen Ye, senior director and global head of products at HTC.

The VIVE Focus 3 is the first commercially available VR headset with a custom NVIDIA CloudXR client. With seamless remote rendering powered by NVIDIA CloudXR, creative studios like Agile Lens can design extremely high fidelity immersive experiences, which would otherwise be impossible to run on a mobile chipset.

Alex Coulombe, cofounder and creative director at Agile Lens, has wanted to bring the intimacy and power of theater to the masses using technologies like VR. His latest venture, Heavenue, is a platform that integrates NVIDIA CloudXR to deliver high fidelity immersive live performances to VR headsets from the cloud.

Heavenue’s first partner is the Actors Theatre of Louisville, which is producing an immersive rendition of the classic A Christmas Carol this December. This will be the first simultaneous live stage and virtual theater performance of its kind.

Figure 1. Actors Theatre of Louisville combined live motion capture with virtual avatars using Heavenue for their production of A Christmas Carol.

The producers combined motion capture from live performers with facial and voice tracking from actors, overlaid onto virtual avatars using Unreal Engine 4 to complete the experience. This is all hosted by CoreWeave, which offers powerful servers with a broad range of NVIDIA GPUs, including North America’s largest deployment of A40s, in the cloud and streamed to end users on standalone VR headsets such as the VIVE Focus 3 with NVIDIA CloudXR. The result is a rich and immersive experience where users can freely move around the theater environment without tether restrictions.

“NVIDIA CloudXR is like a bridge to the future,” Coulombe said. “This has never been possible before. It’s only now, with the advent of technologies like these that we can start to build a platform that democratizes the experience of an incredibly immersive, compelling, vivid, high fidelity live performance.”

With NVIDIA CloudXR, virtual productions and location-based experiences (LBE) are able to use the VIVE Focus 3 to deliver a more immersive experience. This shifts computing to centralized computers and removes the need to spend time navigating around cords. Having a centralized computing environment makes debugging and troubleshooting easy, creating a more user-friendly system and experience. All of this comes without impacting quality or graphical fidelity, taking full advantage of a 5K resolution and 120-degree field of view.

This application extends to any use case where mobility and high fidelity are required, such as in enterprise training, product and building design, or manufacturing floor planning.

HTC VIVE has open sourced their CloudXR sample client for the VIVE Focus 3 on GitHub. Developers can extend the sample client source code to add bespoke features and a customized user interface. Those less familiar with Android development, or just wanting to try it out, can download the prebuilt APK and install it directly onto their headset to get started.

Learn more about NVIDIA CloudXR and download the HTC VIVE Focus 3 sample client today.

Categories
Misc

NVIDIA Merlin Extends Open Source Interoperability for Recommender Workflows with Latest Update

NVIDIA Merlin recommender workflow: Check out NVIDIA Merlin’s latest updates, including Transformers4Rec and SparseOperationsKit.

Data scientists and machine learning engineers use many methods, techniques, and tools to prep, build, train, deploy, and optimize their machine learning models. While technical leads cite the importance of leveraging open source software for recommender team workflows, the majority of popular machine learning methods, libraries, and frameworks are not designed to support and accelerate recommender workflows. 

NVIDIA Merlin is designed to streamline recommender workflows. The latest update includes Transformers4Rec, a new library that wraps HuggingFace Transformer Architectures to build pipelines for session-based recommendations. It also adds SparseOperationsKit (SOK), a new Python package that supports sparse training and inference with Deep Learning (DL). 

This latest release reaffirms the commitment of NVIDIA to help machine learning engineers and data scientists develop and optimize their recommender systems—with open source canonical building blocks. 

Merlin Transformers4Rec, designed for recommenders and solving cold-start problems

Recommender methods popularized in mainstream media often rely upon long-term user profiles or lifetime user behavior. Yet ecommerce and media companies acquiring new active users must provide relevant recommendations to first-time and early-visit users. Relevant recommendations enable increased user engagement, retention, and conversion to subscription services. 

Utilizing session-based recommenders with Transformers4Rec, data scientists and machine learning engineers are able to solve the cold-start problem by leveraging contextual and recent user interactions to predict a user’s next action and provide relevant recommendations. The NVIDIA Merlin team designed Transformers4Rec to be used as a standalone solution or within an ensemble of recommendation models.
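
As a rough sketch of how such a pipeline is assembled (modeled on the library’s getting-started example and using its PyTorch API; the schema path, sequence length, and transformer sizes here are placeholder assumptions):

    import transformers4rec.torch as tr
    from merlin_standard_lib import Schema

    schema = Schema().from_proto_text("schema.pbtxt")   # placeholder path to the dataset schema
    max_sequence_length, d_model = 20, 64

    # Input module: processes the tabular session features and prepares masked sequence inputs.
    input_module = tr.TabularSequenceFeatures.from_schema(
        schema,
        max_sequence_length=max_sequence_length,
        continuous_projection=64,
        aggregation="concat",
        d_output=d_model,
        masking="clm",                                   # causal masking for next-item prediction
    )

    prediction_task = tr.NextItemPredictionTask(weight_tying=True)

    # Wrap a HuggingFace XLNet architecture and build the end-to-end session-based model.
    transformer_config = tr.XLNetConfig.build(
        d_model=d_model, n_head=4, n_layer=2, total_seq_length=max_sequence_length
    )
    model = transformer_config.to_torch_model(input_module, prediction_task)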

SparseOperationsKit, sparse training, and inference with deep learning 

Recommender teams that work with massive datasets benefit from using deep learning (DL) recommenders. Merlin HugeCTR is a DL training framework designed for recommender systems and the latest update includes SOK, a new open source Python package that supports sparse training and inference. 

SOK is compatible with DL frameworks, including TensorFlow, and provides model-parallel embedding functionality that scales from a single GPU to multiple GPUs. Most common DL frameworks do not support model parallelism, which makes it challenging to use all available GPUs in a cluster; SOK helps fill that void. 

Download and try NVIDIA Merlin 

The latest update to NVIDIA Merlin, including Transformers4Rec and SOK, further streamlines and accelerates recommender workflows with open source interoperability and performance enhancements. 

For more information about the latest release, download NVIDIA Merlin today.

Categories
Misc

‘Paint Me a Picture’: NVIDIA Research Shows GauGAN AI Art Demo Now Responds to Words

A picture worth a thousand words now takes just three or four words to create, thanks to GauGAN2, the latest version of NVIDIA Research’s wildly popular AI painting demo. The deep learning model behind GauGAN allows anyone to channel their imagination into photorealistic masterpieces, and it’s easier than ever. Simply type a phrase like…

The post ‘Paint Me a Picture’: NVIDIA Research Shows GauGAN AI Art Demo Now Responds to Words appeared first on The Official NVIDIA Blog.

Categories
Misc

find the maximum number in each feature map inside a tensor

Hi everybody,

Please, is there a way to find the maximum pixel value in each feature map inside a tensor?

For example, suppose we have a tensor x of shape (None, 48, 48, 32), which consists of 32 feature maps of size 48×48. How can I find the maximum pixel value in the fifth feature map?
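
For concreteness, one possible approach (a minimal sketch with a random stand-in tensor; I am not sure this is the idiomatic way):

    import tensorflow as tf

    x = tf.random.uniform((2, 48, 48, 32))        # stand-in for the real activation tensor

    per_map_max = tf.reduce_max(x, axis=[1, 2])   # shape (batch, 32): max pixel of every feature map
    fifth_map_max = per_map_max[:, 4]             # max pixel of the fifth feature map (index 4)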

Thanks.

submitted by /u/Ali_Q_Saeed