Categories
Misc

How to attach normalization layer after training?

I apply preprocessing to my dataset outside the model, since that lets me generate new data and do normalization there. Now it's time to save the model so someone else can use it, and now I need the normalization to be part of it.

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Lambda

model = Sequential()

# model.add(Lambda(lambda x: x / 255.0))

model.add(…)  # remaining layers elided
model.fit()

I want to attach this commented-out layer before saving the model. I didn't need it for training, but now that the model is trained I want the normalization to be part of the network. This tutorial mentions that this is possible, but I don't see how to do it:

https://www.tensorflow.org/tutorials/images/data_augmentation

  • In this case the preprocessing layers will not be exported with the model when you call model.save. You will need to attach them to your model before saving it or reimplement them server-side. After training, you can attach the preprocessing layers before export.
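
One way to do this (a minimal sketch, not taken from the tutorial; the input shape and file name below are hypothetical placeholders) is to wrap the already-trained model in a new Sequential that starts with the scaling layer, and save that wrapper:

import tensorflow as tf
from tensorflow import keras

# `model` is the already-trained Sequential model from above
export_model = keras.Sequential([
    keras.layers.Lambda(lambda x: x / 255.0, input_shape=(224, 224, 3)),  # hypothetical input shape
    model,  # the trained network, reused as a single layer
])
export_model.save("model_with_normalization")  # hypothetical path

On newer TensorFlow versions, a keras.layers.Rescaling(1./255) layer can be used in place of the Lambda.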

submitted by /u/rejiuspride
[visit reddit] [comments]

Categories
Misc

Waste Not, Want Not: AI Startup Opseyes Revolutionizes Wastewater Analysis

What do radiology and wastewater have in common? Hopefully, not much. But at startup Opseyes, founder Bryan Arndt and data scientist Robin Schlenga are putting the AI that’s revolutionizing medical imaging to work on analyzing wastewater samples. Arndt and Schlenga spoke with NVIDIA AI Podcast host Noah Kravitz about the inspiration for Opseyes, which began Read article >

The post Waste Not, Want Not: AI Startup Opseyes Revolutionizes Wastewater Analysis appeared first on The Official NVIDIA Blog.

Categories
Misc

How do you put multiple filters in one convolution?

https://i.redd.it/n0zeo8nccj571.gif

I just started learning TensorFlow/Keras and would like to know: in conv6 and conv7, how do you put 3 filters in one convolution?

I have this code for both of them, but my code creates 3 separate convolutions, and based on my understanding the diagram shows only one convolution, right? Also, I'm not sure whether those filters are executed in parallel or sequentially from left to right (wouldn't that be the same as having 3 separate convolutions?).

keras.layers.Conv2D(filters=1024, kernel_size=(1,1), strides=(1,1), activation='relu', padding="same"),
keras.layers.Conv2D(filters=512, kernel_size=(3,3), strides=(1,1), activation='relu', padding="same"),
keras.layers.Conv2D(filters=1024, kernel_size=(1,1), strides=(1,1), activation='relu', padding="same"),
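
For reference, a minimal sketch of how these three layers behave when stacked (the input shape is a hypothetical placeholder): each Conv2D already applies all of its filters (e.g. 1024 of them) in parallel within that single layer, and a Sequential stack runs the three layers one after another, left to right.

import tensorflow as tf
from tensorflow import keras

conv_block = keras.Sequential([
    keras.layers.InputLayer(input_shape=(13, 13, 1024)),  # hypothetical input shape
    keras.layers.Conv2D(filters=1024, kernel_size=(1,1), strides=(1,1), activation='relu', padding="same"),
    keras.layers.Conv2D(filters=512, kernel_size=(3,3), strides=(1,1), activation='relu', padding="same"),
    keras.layers.Conv2D(filters=1024, kernel_size=(1,1), strides=(1,1), activation='relu', padding="same"),
])
conv_block.summary()  # shows the three layers applied one after another

If the diagram instead means three parallel branches whose outputs are merged, that would need the functional API with something like Concatenate or Add rather than a Sequential stack.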

Thanks for the help!

submitted by /u/Master-Cantaloupe750
[visit reddit] [comments]

Categories
Misc

How to load image data for facial keypoints detection in tensorflow?

Hello everyone

I have a dataset that contains images of random people's faces and a CSV file that has the image file names and the corresponding 68 facial keypoints, similar to this:

Image       0    1    2    3   ...   136
/file.jpg   54   11   23   43  ...   12

How do I load this dataset in TensorFlow?
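
One possible approach (a minimal sketch, assuming the first column of the CSV is the image file name and the remaining columns are the keypoint coordinates; the file names, image size, and batch size are placeholders):

import pandas as pd
import tensorflow as tf

df = pd.read_csv("keypoints.csv")                    # hypothetical CSV path
paths = df.iloc[:, 0].values                         # may need to be joined with the image directory
keypoints = df.iloc[:, 1:].values.astype("float32")  # keypoint coordinates per face

def load_example(path, points):
    image = tf.io.read_file(path)
    image = tf.io.decode_jpeg(image, channels=3)
    image = tf.image.resize(image, (224, 224)) / 255.0  # hypothetical target size
    # note: if the images are resized, the keypoints must be rescaled by the same factors
    return image, points

dataset = (tf.data.Dataset.from_tensor_slices((paths, keypoints))
           .map(load_example, num_parallel_calls=tf.data.AUTOTUNE)
           .shuffle(1000)
           .batch(32)
           .prefetch(tf.data.AUTOTUNE))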

Thanx

submitted by /u/RepeatInfamous9988
[visit reddit] [comments]

Categories
Misc

Training Custom Object Detector and converting to TFLite leads to wrong predicted bounding boxes and weird output shape

I have used this official tutorial to train my custom traffic sign detector with the dataset from the German Traffic Sign Detection Benchmark site.

I created my PASCAL VOC format .xml files using the pascal-voc-writer Python lib and converted them to TFRecords, with the images resized to 320×320. I also scaled the bounding box coordinates, since they were for the original 1360×800 images, using the formulas Rx = NEW_WIDTH/WIDTH and Ry = NEW_HEIGHT/HEIGHT, where NEW_WIDTH = NEW_HEIGHT = 320, and rescaling the coords like xMin = round(Rx * int(xMin)).
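
For reference, the rescaling described above amounts to something like this (a minimal sketch; the function name is a placeholder):

WIDTH, HEIGHT = 1360, 800         # original image size
NEW_WIDTH, NEW_HEIGHT = 320, 320  # size used for training

Rx = NEW_WIDTH / WIDTH
Ry = NEW_HEIGHT / HEIGHT

def rescale_box(xMin, yMin, xMax, yMax):
    # scale PASCAL VOC pixel coordinates from 1360x800 down to 320x320
    return (round(Rx * int(xMin)), round(Ry * int(yMin)),
            round(Rx * int(xMax)), round(Ry * int(yMax)))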

The pre-trained model I have used is ssd_mobilenet_v2_fpnlite_320x320_coco17_tpu-8. You can also see the images used for training here and their corresponding .xml files here.

The problem is that after training and converting from saved_model to .tflite using this script, the model does not recognize traffic signs, and the outputs are a bit different from what I expect: instead of a list of lists of normalized coordinates, I get a list of lists of lists of normalized coordinates. The last steps of the training process look like this. After using this script, the output image with predicted bounding boxes looks like this and the printed output is this.
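
For context, the saved_model to .tflite conversion step usually looks something like the sketch below (not necessarily the exact script used here; the paths are placeholders). Note that for SSD models from the TF2 Object Detection API, the SavedModel typically has to be produced with export_tflite_graph_tf2.py first; converting the regular exported SavedModel can lead to unexpected output shapes.

import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model("exported-model/saved_model")  # hypothetical path
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)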

What could be the problem? Thank you!

submitted by /u/morphinnas
[visit reddit] [comments]

Categories
Offsites

A Step Toward More Inclusive People Annotations in the Open Images Extended Dataset

In 2016, we introduced Open Images, a collaborative release of ~9 million images annotated with image labels spanning thousands of object categories and bounding box annotations for 600 classes. Since then, we have made several updates, including the release of crowdsourced data to the Open Images Extended collection to improve diversity of object annotations. While the labels provided with these datasets were expansive, they did not focus on sensitive attributes for people, which are critically important for many machine learning (ML) fairness tasks, such as fairness evaluations and bias mitigation. In fact, finding datasets that include thorough labeling of such sensitive attributes is difficult, particularly in the domain of computer vision.

Today, we introduce the More Inclusive Annotations for People (MIAP) dataset in the Open Images Extended collection. The collection contains more complete bounding box annotations for the person class hierarchy in 100k images containing people. Each annotation is also labeled with fairness-related attributes, including perceived gender presentation and perceived age range. With the increasing focus on reducing unfair bias as part of responsible AI research, we hope these annotations will encourage researchers already leveraging Open Images to incorporate fairness analysis in their research.

Examples of new boxes in MIAP. In each subfigure the magenta boxes are from the original Open Images dataset, while the yellow boxes are additional boxes added by the MIAP Dataset. Original photo credits — left: Boston Public Library; middle: jen robinson; right: Garin Fons; all used with permission under the CC BY 2.0 license.

Annotations in Open Images
Each image in the original Open Images dataset contains image-level annotations that broadly describe the image and bounding boxes drawn around specific objects. To avoid drawing multiple boxes around the same object, less specific classes were temporarily pruned from the label candidate set, a process that we refer to as hierarchical de-duplication. For example, an image with labels animal, cat, and washing machine has bounding boxes annotated for cat and washing machine, but not for the redundant class animal.

The MIAP dataset addresses the five classes that are part of the person hierarchy in the original Open Images dataset: person, man, woman, boy, girl. The existence of these labels makes the Open Images dataset uniquely valuable for research advancing responsible AI, allowing one to train a general person detector with access to gender- and age-range-specific labels for fairness analysis and bias mitigation.

However, we found that the combination of hierarchical de-duplication and societally imposed distinctions between woman/girl and man/boy introduced limitations in the original annotations. For example, if annotators were asked to draw boxes for the class girl, they would not draw a box around a boy in the image. They may or may not draw a box around a woman depending on their assessment of the age of the individual and their cultural understanding of the concept of girl. These decisions could be applied inconsistently between images, depending on the cultural background of the individual annotator, the appearance of an individual, and the context of the scene. Consequently, the bounding box annotations in some images were incomplete, with some people who appeared prominently not being annotated.

Annotations in MIAP
The new MIAP annotations are designed to address these limitations and fulfill the promise of Open Images as a dataset that will enable new advances in machine learning fairness research. Rather than asking annotators to draw boxes for the most specific class from the hierarchy (e.g., girl), we invert the procedure, always requesting bounding boxes for the gender- and age-agnostic person class. All person boxes are then separately associated with labels for perceived gender presentation (predominantly feminine, predominantly masculine, or unknown) and age presentation (young, middle, older, or unknown). We recognize that gender is not binary and that an individual’s gender identity may not match their perceived or intended gender presentation and, in an effort to mitigate the effects of unconscious bias on the annotations, we reminded annotators that norms around gender expression vary across cultures and have changed over time.

This procedure adds a significant number of boxes that were previously missing.

Over the 100k images that include people, the number of person bounding boxes has increased from ~358k to ~454k. The number of bounding boxes per perceived gender presentation and perceived age presentation increased consistently. These new annotations provide more complete ground truth for training a person detector, as well as more accurate subgroup labels for incorporating fairness into computer vision research.

Comparison of number of person bounding boxes between the original Open Images and the new MIAP dataset.

Intended Use
We include annotations for perceived age range and gender presentation for person bounding boxes because we believe these annotations are necessary to advance the ability to better understand and work to mitigate and eliminate unfair bias or disparate performance across protected subgroups within the field of image understanding. We note that the labels capture the gender and age range presentation as assessed by a third party based on visual cues alone, rather than an individual’s self-identified gender or actual age. We do not support or condone building or deploying gender and/or age presentation classifiers trained from these annotations as we believe the risks associated with the use of these technologies outside fairness research outweigh any potential benefits.

Acknowledgements
The core team behind this work included Utsav Prabhu, Vittorio Ferrari, and Caroline Pantofaru. We would also like to thank Alex Hanna, Reena Jana, Alina Kuznetsova, Matteo Malloci, Stefano Pellegrini, Jordi Pont-Tuset, and Mahima Pushkarna, for their contributions to the project.

Categories
Misc

Tough Customer: NVIDIA Unveils Jetson AGX Xavier Industrial Module

From factories and farms to refineries and construction sites, the world is full of places that are hot, dirty, noisy, potentially dangerous — and critical to keeping industry humming. These places all need inspection and maintenance alongside their everyday operations, but, given safety concerns and working conditions, it’s not always best to send in humans. Read article >

The post Tough Customer: NVIDIA Unveils Jetson AGX Xavier Industrial Module appeared first on The Official NVIDIA Blog.

Categories
Misc

Recommended learning courses/material for new devs

Hi all,

I'm a Node developer and extremely new to TensorFlow.js and really the entire ML space.

I'm looking for any recommendations for courses or learning material that cover TensorFlow.js for multi-label image classification and computer vision.

I've been on Udemy, but the reviews don't seem too good for the courses I looked at.

Thanks in advance,

submitted by /u/wbuc1
[visit reddit] [comments]

Categories
Misc

Is it possible to use CUDA Compute 3.0 now?

The docs state that it's possible to compile with compute 3.0 support, but when I try, the build always fails while compiling the GPU section, stating that it requires a minimum of 3.5. I've even tried using Anaconda's tensorflow-gpu package.

I have CUDA toolkit 10.1 and cuDNN 7.6, which I think are right. When running `tf.config.list_physical_devices('GPU')`, I see the error output "Ignoring visible gpu device (device: 0, name: NVIDIA Quadro K2100M, pci bus id: 0000:01:00.0, compute capability: 3.0) with Cuda compute capability 3.0. The minimum required Cuda capability is 3.5."
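
For reference, the device check in question is just (a minimal sketch):

import tensorflow as tf

# Lists the GPUs TensorFlow is willing to use. With a compute-capability-3.0
# card and a standard TF 2.x build this returns an empty list, and the
# "Ignoring visible gpu device ..." message is printed to the log instead.
print(tf.config.list_physical_devices("GPU"))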

Am I SOL?

submitted by /u/papabear_12
[visit reddit] [comments]

Categories
Misc

Telemetry Driven Network Quality and Reliability Monitoring with NVIDIA NetQ 4.0.0

NVIDIA NetQ is a highly-scalable modern network operations tool leveraging fabric-wide telemetry data for visibility and troubleshooting of the overlay and underlay network in real-time.

NVIDIA NetQ 4.0.0 was recently released with many new capabilities. NetQ can be deployed on customer premises or consumed as a cloud-based service (SaaS). For more details, refer to the NetQ datasheet.

NetQ 4.0.0 includes the following notable new features: 

  • CI/CD validation enhancements 
  • gNMI streaming of WJH events towards third-party applications 
  • SONiC support 
  • RoCE monitoring 
  • User interface improvements 

Refer to the NetQ 4.0.0 User Guide for details and all the other capabilities introduced. 

NVIDIA NetQ 4.0.0 user interface

Validation enhancements 

In the physical production network, NetQ validations provide insight into the live state of the network and help with troubleshooting. NetQ 4.0.0 provides the ability to:

  • include or exclude one or more of the various tests performed during the validation 
  • create filters to suppress false alarms or known errors and warnings 

gNMI streaming of WJH events 

NVIDIA What Just Happened (WJH) is a hardware-accelerated telemetry feature available on NVIDIA Spectrum switches, which streams detailed and contextual telemetry data for analysis. WJH provides real-time visibility into problems in the network, such as hardware packet drops due to misconfigurations, buffer congestion, ACL, or layer 1 problems.  

NetQ 4.0.0 supports gNMI (gRPC Network Management Interface) to collect What Just Happened data from the NetQ Agent. YANG model details are available in the User Guide.

SONiC support 

NetQ now monitors switches running the SONiC (Software for Open Networking in the Cloud) operating system, as well as Cumulus Linux. SONiC support includes traces, validations, snapshots, events, service visibility, and What Just Happened. This is an early-access feature.

RoCE Monitoring 

RDMA over Converged Ethernet (RoCE) provides the ability to write to compute or storage elements using remote direct memory access (RDMA) over an Ethernet network instead of using host CPUs. RoCE relies on congestion control and lossless Ethernet to operate. Cumulus Linux supports features that can enable lossless Ethernet for RoCE environments. NetQ allows users to view RoCE configuration and monitor RoCE counters with threshold crossing alerts. 

User interface enhancements 

The NetQ GUI has been enhanced to show switch details in the topology view. Using the GUI, premises can be renamed and deleted.

NVIDIA AIR has been updated with NetQ 4.0.0; check it out and upgrade your environment to take advantage of all the new capabilities. To learn more, visit the NVIDIA Ethernet switching solutions webpage.