Categories
Misc

Let It Flow: AI Researchers Create Looping Videos From Still Images

Researchers from the University of Washington and Facebook used deep learning to convert still images into realistic animated looping videos.

Their approach, which will be presented at the upcoming Conference on Computer Vision and Pattern Recognition (CVPR), imitates continuous fluid motion — such as flowing water, smoke and clouds — to turn still images into short videos that loop seamlessly. 

“What’s special about our method is that it doesn’t require any user input or extra information,” said Aleksander Hołyński, a University of Washington doctoral student in computer science and engineering and lead author on the project. “All you need is a picture. And it produces as output a high-resolution, seamlessly looping video that quite often looks like a real video.”

The team created a method known as “symmetric splatting” to predict both past and future motion from a still image, combining the two to create a seamless animation.

“When we see a waterfall, we know how the water should behave. The same is true for fire or smoke. These types of motions obey the same set of physical laws, and there are usually cues in the image that tell us how things should be moving,” Hołyński said. “We’d love to extend our work to operate on a wider range of objects, like animating a person’s hair blowing in the wind. I’m hoping that eventually the pictures that we share with our friends and family won’t be static images. Instead, they’ll all be dynamic animations like the ones our method produces.”

To teach their neural network to estimate motion, the team trained the model on more than 1,000 videos of fluid motion such as waterfalls, rivers and oceans. Given only the first frame of the video, the system would predict what should happen in future frames, and compare its prediction with the original video. This comparison helped the model improve its predictions of whether and how each pixel in an image should move. 

The researchers used the NVIDIA Pix2PixHD GAN model to train the motion-estimation network, along with FlowNet2 and PWC-Net. NVIDIA GPUs were used for both training and inference. The training data comprised 1,196 unique videos: 1,096 for training, 50 for validation and 50 for testing.
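For readers who want a concrete picture of that training setup, below is a minimal, hypothetical Keras sketch. This is not the authors’ code; the network size, the L1 loss, and the idea of supervising against flow precomputed with an off-the-shelf optical-flow network are assumptions for illustration only.

import tensorflow as tf
from tensorflow.keras import layers

# Hypothetical motion estimator: given a single RGB frame, predict a dense
# 2-channel flow field (dx, dy) describing how each pixel should move.
def build_motion_estimator(height=256, width=256):
    frame = layers.Input(shape=(height, width, 3))
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(frame)
    x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
    flow = layers.Conv2D(2, 3, padding="same")(x)  # per-pixel motion vector
    return tf.keras.Model(frame, flow)

model = build_motion_estimator()
model.compile(optimizer="adam", loss="mae")  # compare predicted flow to ground truth

# Ground-truth flow would come from running an optical-flow network such as
# FlowNet2 or PWC-Net on the training videos; the model would then be fit with
# model.fit(first_frames, precomputed_flow, ...).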

Read the University of Washington news release for more >>

The researchers’ paper is available here.

Categories
Misc

Incorrect dimensions for output of speech extraction model

Hello everyone, below I have linked a recent Stack Overflow post of mine that goes into more depth on my problem:

https://stackoverflow.com/questions/68008223/tf-model-wrong-output-dimensions

The overview is that I am unable to get proper output from my model, which I believe comes down to issues with my input and my lack of understanding of the difference between the shape of an input and the shape of a tensor. If there is anything I can provide to give a better idea of my problem, let me know. I appreciate any help I can get.
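A minimal sketch of how one might debug this kind of mismatch (the model file name and the input array below are hypothetical placeholders, not taken from the linked post):

import numpy as np
import tensorflow as tf

# Hypothetical saved model; substitute the actual model.
model = tf.keras.models.load_model("speech_extraction_model.h5")

# Keras reports shapes with a leading batch dimension (usually None).
print("expected input shape:", model.input_shape)
print("expected output shape:", model.output_shape)

# A single example usually needs an explicit batch dimension before inference.
sample = np.random.rand(*model.input_shape[1:]).astype("float32")
batched = np.expand_dims(sample, axis=0)  # shape (1, ...) instead of (...)
print("prediction shape:", model.predict(batched).shape)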

submitted by /u/ythug
[visit reddit] [comments]

Categories
Offsites

Learning an Accurate Physics Simulator via Adversarial Reinforcement Learning

Simulation empowers various engineering disciplines to quickly prototype with minimal human effort. In robotics, physics simulations provide a safe and inexpensive virtual playground for robots to acquire physical skills with techniques such as deep reinforcement learning (DRL). However, as the hand-derived physics in simulations does not match the real world exactly, control policies trained entirely within simulation can fail when tested on real hardware — a challenge known as the sim-to-real gap or the domain adaptation problem. The sim-to-real gap for perception-based tasks (such as grasping) has been tackled using RL-CycleGAN and RetinaGAN, but there is still a gap caused by the dynamics of robotic systems. This prompts us to ask, can we learn a more accurate physics simulator from a handful of real robot trajectories? If so, such an improved simulator could be used to refine the robot controller using standard DRL training, so that it succeeds in the real world.

In our ICRA 2021 publication “SimGAN: Hybrid Simulator Identification for Domain Adaptation via Adversarial Reinforcement Learning”, we propose to treat the physics simulator as a learnable component that is trained by DRL with a special reward function that penalizes discrepancies between the trajectories (i.e., the movement of the robots over time) generated in simulation and a small number of trajectories that are collected on real robots. We use generative adversarial networks (GANs) to provide such a reward, and formulate a hybrid simulator that combines learnable neural networks and analytical physics equations, to balance model expressiveness and physical correctness. On robotic locomotion tasks, our method outperforms multiple strong baselines, including domain randomization.

A Learnable Hybrid Simulator
A traditional physics simulator is a program that solves differential equations to simulate the movement or interactions of objects in a virtual world. For this work, it is necessary to build different physical models to represent different environments – if a robot walks on a mattress, the deformation of the mattress needs to be taken into account (e.g., with the finite element method). However, due to the diversity of the scenarios that robots could encounter in the real world, it would be tedious (or even impossible) to hand-craft such environment-specific models for every situation, which is why it is useful to instead take an approach based on machine learning. Although simulators can be learned entirely from data, if the training data does not include a wide enough variety of situations, the learned simulator might violate the laws of physics (i.e., deviate from the real-world dynamics) if it needs to simulate situations for which it was not trained. As a result, a robot that is trained in such a limited simulator is more likely to fail in the real world.

To overcome this complication, we construct a hybrid simulator that combines both learnable neural networks and physics equations. Specifically, we replace what are often manually-defined simulator parameters — contact parameters (e.g., friction and restitution coefficients) and motor parameters (e.g., motor gains) — with a learnable simulation parameter function because the unmodeled details of contact and motor dynamics are major causes of the sim-to-real gap. Unlike conventional simulators in which these parameters are treated as constants, in the hybrid simulator they are state-dependent — they can change according to the state of the robot. For example, motors can become weaker at higher speed. These typically unmodeled physical phenomena can be captured using the state-dependent simulation parameter functions. Moreover, while contact and motor parameters are usually difficult to identify and subject to change due to wear-and-tear, our hybrid simulator can learn them automatically from data. For example, rather than having to manually specify the parameters of a robot’s foot against every possible surface it might contact, the simulation learns these parameters from training data.

Comparison between a conventional simulator and our hybrid simulator.

The other part of the hybrid simulator is made up of physics equations that ensure the simulation obeys fundamental laws of physics, such as conservation of energy, making it a closer approximation to the real world and thus reducing the sim-to-real gap.

In our earlier mattress example, the learnable hybrid simulator is able to mimic the contact forces from the mattress. Because the learned contact parameters are state-dependent, the simulator can modulate contact forces based on the distance and velocity of the robot’s feet relative to the mattress, mimicking the effect of the stiffness and damping of a deformable surface. As a result, we do not need to analytically devise a model specifically for deformable surfaces.
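As an illustration only (this is a sketch under assumed state and parameter definitions, not the SimGAN implementation), a state-dependent simulation parameter function can be thought of as a small network that maps the robot’s state to contact and motor parameters, which are then plugged into ordinary analytic equations:

import tensorflow as tf
from tensorflow.keras import layers

STATE_DIM = 24  # hypothetical robot state: joint angles, velocities, foot heights, ...

# Small network that replaces hand-tuned simulator constants with
# state-dependent predictions (friction, restitution, motor gain).
param_net = tf.keras.Sequential([
    layers.Input(shape=(STATE_DIM,)),
    layers.Dense(64, activation="tanh"),
    layers.Dense(3, activation="softplus"),  # keeps all three parameters positive
])

def motor_torque(state, target_angle, joint_angle, joint_velocity, damping=0.1):
    """Analytic PD motor model whose gain comes from the learned parameter function."""
    friction, restitution, gain = tf.unstack(param_net(state[None, :])[0])
    torque = gain * (target_angle - joint_angle) - damping * joint_velocity
    return torque, friction, restitution

The predicted friction and restitution values would feed the analytic contact-resolution equations of the simulator in the same way.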

Using GANs for Simulator Learning
Successfully learning the simulation parameter functions discussed above would result in a hybrid simulator that can generate similar trajectories to the ones collected on the real robot. The key that enables this learning is defining a metric for the similarity between trajectories. GANs, initially designed to generate synthetic images that share the same distribution, or “style,” with a small number of real images, can be used to generate synthetic trajectories that are indistinguishable from real ones. GANs have two main parts, a generator that learns to generate new instances, and a discriminator that evaluates how similar the new instances are to the training data. In this case, the learnable hybrid simulator serves as the GAN generator, while the GAN discriminator provides the similarity scores.

The GAN discriminator provides the similarity metric that compares the movements of the simulated and the real robot.

Fitting parameters of simulation models to data collected in the real world, a process called system identification (SysID), has been a common practice in many engineering fields. For example, the stiffness parameter of a deformable surface can be identified by measuring the displacements of the surface under different pressures. This process is typically manual and tedious, but using GANs can be much more efficient. For example, SysID often requires a hand-crafted metric for the discrepancy between simulated and real trajectories. With GANs, such a metric is automatically learned by the discriminator. Furthermore, to calculate the discrepancy metric, conventional SysID requires pairing each simulated trajectory to a corresponding real-world one that is generated using the same control policy. Since the GAN discriminator takes only one trajectory as the input and calculates the likelihood that it is collected in the real world, this one-to-one pairing is not needed.

Using Reinforcement Learning (RL) to Learn the Simulator and Refine the Policy
Putting everything together, we formulate simulation learning as an RL problem. A neural network learns the state-dependent contact and motor parameters from a small number of real-world trajectories. The neural network is optimized to minimize the error between the simulated and the real trajectories. Note that it is important to minimize this error over an extended period of time — a simulation that accurately predicts a more distant future will lead to a better control policy. RL is well suited to this because it optimizes the accumulated reward over time, rather than just optimizing a single-step reward.
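Purely as a hedged illustration of the idea (none of the shapes, architectures, or reward scaling below come from the paper), the discriminator’s score on a simulated trajectory segment can be used as the reward that the simulator-learning RL problem maximizes:

import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

TRAJ_LEN, OBS_DIM = 50, 12  # hypothetical trajectory length and observation size

# Discriminator: scores how "real" a trajectory segment looks (closer to 1 = real robot).
discriminator = tf.keras.Sequential([
    layers.Input(shape=(TRAJ_LEN, OBS_DIM)),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])

def adversarial_reward(sim_trajectory):
    """Higher reward when the simulated trajectory is hard to tell apart from real data."""
    score = discriminator(sim_trajectory[None, ...])[0, 0]
    return float(tf.math.log(score + 1e-8))

# Example call on a random (untrained) simulated trajectory segment; during
# training, the discriminator is updated to separate simulated from real data.
fake_trajectory = np.random.randn(TRAJ_LEN, OBS_DIM).astype("float32")
print(adversarial_reward(fake_trajectory))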

After the hybrid simulator is learned and becomes more accurate, we use RL again to refine the robot’s control policy within the simulation (e.g., walking across a surface, shown below).

Following the arrows clockwise: (upper left) recording a small number of the robot’s failed attempts in the target domain (e.g., a real-world proxy in which the leg in red is modified to be much heavier than in the source domain); (upper right) learning the hybrid simulator to match trajectories collected in the target domain; (lower right) refining control policies in this learned simulator; (lower left) testing the refined controller directly in the target domain.

Evaluation
Due to limited access to real robots during 2020, we created a second, different simulation (target domain) as a proxy for the real world. The changes in dynamics between the source and target domains are large enough to approximate different sim-to-real gaps (e.g., making one leg heavier, walking on deformable surfaces instead of hard floor). We assessed whether our hybrid simulator, with no knowledge of these changes, could learn to match the dynamics in the target domain, and whether the policy refined in this learned simulator could be successfully deployed in the target domain.

Qualitative results below show that simulation learning with less than 10 minutes of data collected in the target domain (where the floor is deformable) is able to generate a refined policy that performs much better for two robots with different morphologies and dynamics.

Comparison of performance between the initial and refined policy in the target domain (deformable floor) for the hopper and the quadruped robot.

Quantitative results below show that SimGAN outperforms multiple state-of-the-art baselines, including domain randomization (DR) and direct finetuning in target domains (FT).

Comparison of policy performance using different sim-to-real transfer methods in three different target domains for the Quadruped robot: locomotion on deformable surface, with weakened motors, and with heavier bodies.

Conclusion
The sim-to-real gap is one of the key bottlenecks that prevents robots from tapping into the power of reinforcement learning. We tackle this challenge by learning a simulator that can more faithfully model real-world dynamics, while using only a small amount of real-world data. The control policy that is refined in this simulator can be successfully deployed. To achieve this, we augment a classical physics simulator with learnable components, and train this hybrid simulator using adversarial reinforcement learning. So far we have tested its application to locomotion tasks; we hope to build on this general framework by applying it to other robot learning tasks, such as navigation and manipulation.

Categories
Misc

How to attach normalization layer after training?

I apply preprocessing to my dataset outside the model because that way I can generate new data and do the normalization there. Now it’s time to save the model so someone else can use it, and I need the normalization to be part of it.

model = Sequential()

# model.add(Lambda(lambda x: x / 255.0))

model.add(…)
model.fit()

I want to attach this commented-out layer before saving the model. I didn’t need it for training, but now that the model is trained I want the normalization to be part of the network. This tutorial mentions that it is possible, but I don’t see how to do it:

https://www.tensorflow.org/tutorials/images/data_augmentation

  • In this case the preprocessing layers will not be exported with the model when you call model.save. You will need to attach them to your model before saving it or reimplement them server-side. After training, you can attach the preprocessing layers before export.
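A minimal sketch of one way to do this (the input shape here is hypothetical, and `model` refers to the already-trained Sequential model from the snippet above):

import tensorflow as tf
from tensorflow.keras import layers

# Wrap the trained model in a new model that prepends the scaling layer.
inputs = tf.keras.Input(shape=(28, 28, 1))      # use the shape your model was trained on
x = layers.Lambda(lambda t: t / 255.0)(inputs)  # the normalization skipped during training
outputs = model(x)

export_model = tf.keras.Model(inputs, outputs)
export_model.save("model_with_normalization")   # saved model now includes the scaling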

submitted by /u/rejiuspride
[visit reddit] [comments]

Categories
Misc

Waste Not, Want Not: AI Startup Opseyes Revolutionizes Wastewater Analysis

What do radiology and wastewater have in common? Hopefully, not much. But at startup Opseyes, founder Bryan Arndt and data scientist Robin Schlenga are putting the AI that’s revolutionizing medical imaging to work on analyzing wastewater samples. Arndt and Schlenga spoke with NVIDIA AI Podcast host Noah Kravitz about the inspiration for Opseyes, which began … Read article >

The post Waste Not, Want Not: AI Startup Opseyes Revolutionizes Wastewater Analysis appeared first on The Official NVIDIA Blog.

Categories
Misc

How do you put multiple filters in one convolution?

https://i.redd.it/n0zeo8nccj571.gif

I just started learning TensorFlow/Keras and would like to know: in conv6 and conv7, how do you put 3 filters in one convolution?

I have this code for both of them, but my code creates 3 separate convolution layers, and based on my understanding each block should be only one convolution, right? Also, I’m not too sure whether those filters are executed in parallel or sequentially from left to right (wouldn’t sequential execution be the same as having 3 separate convolutions?)

keras.layers.Conv2D(filters=1024, kernel_size=(1,1), strides=(1,1), activation='relu', padding="same"),
keras.layers.Conv2D(filters=512, kernel_size=(3,3), strides=(1,1), activation='relu', padding="same"),
keras.layers.Conv2D(filters=1024, kernel_size=(1,1), strides=(1,1), activation='relu', padding="same"),
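For reference, a minimal runnable sketch of such a block (the input shape is a guess at the feature-map size at that point in the network, not taken from the diagram): the three filter specifications are simply three Conv2D layers applied one after another, and model.summary() shows them running sequentially rather than in parallel.

import tensorflow as tf
from tensorflow import keras

block = keras.Sequential([
    keras.layers.Input(shape=(13, 13, 1024)),  # hypothetical feature-map size
    keras.layers.Conv2D(filters=1024, kernel_size=(1, 1), strides=(1, 1), activation="relu", padding="same"),
    keras.layers.Conv2D(filters=512, kernel_size=(3, 3), strides=(1, 1), activation="relu", padding="same"),
    keras.layers.Conv2D(filters=1024, kernel_size=(1, 1), strides=(1, 1), activation="relu", padding="same"),
])

block.summary()  # three layers, executed sequentially from top to bottom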

Thanks for the help!

submitted by /u/Master-Cantaloupe750
[visit reddit] [comments]

Categories
Misc

How to load image data for facial keypoints detection in tensorflow?

Hello everyone

I have a dataset that contains images of random people’s faces and a CSV file that has the image file names and the corresponding 68 facial keypoints (136 coordinate values), similar to this:

Image      0   1   2   3   ...  136
/file.jpg  54  11  23  43  ...  12

How do I load this dataset in TensorFlow?

Thanx
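A minimal sketch of one way to do this with tf.data (the file names, image size, and column layout here are assumptions based on the table above):

import pandas as pd
import tensorflow as tf

df = pd.read_csv("keypoints.csv")  # hypothetical CSV name
image_paths = df["Image"].values
keypoints = df.drop(columns=["Image"]).values.astype("float32")  # shape (N, 136)

def load_example(path, points):
    image = tf.io.read_file(path)
    image = tf.image.decode_jpeg(image, channels=3)
    image = tf.image.resize(image, (224, 224)) / 255.0  # rescale the keypoints too if you resize
    return image, points

dataset = (
    tf.data.Dataset.from_tensor_slices((image_paths, keypoints))
    .map(load_example, num_parallel_calls=tf.data.AUTOTUNE)
    .shuffle(1024)
    .batch(32)
    .prefetch(tf.data.AUTOTUNE)
)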

submitted by /u/RepeatInfamous9988
[visit reddit] [comments]

Categories
Misc

Training Custom Object Detector and converting to TFLite leads to wrong predicted bounding boxes and weird output shape

I have used this official tutorial to train my custom traffic sign detector with the dataset from the German Traffic Sign Detection Benchmark site.

I created my PASCAL VOC format .xml files using the pascal-voc-writer Python library and converted them to TFRecords, with the images resized to 320×320. I also scaled the bounding box coordinates, since they were defined for the original 1360×800 images, using the formulas Rx = NEW_WIDTH/WIDTH and Ry = NEW_HEIGHT/HEIGHT, where NEW_WIDTH = NEW_HEIGHT = 320, and rescaled the coordinates like so: xMin = round(Rx * int(xMin)).
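For clarity, that rescaling amounts to the following (the example box coordinates are hypothetical):

WIDTH, HEIGHT = 1360, 800
NEW_WIDTH = NEW_HEIGHT = 320
Rx, Ry = NEW_WIDTH / WIDTH, NEW_HEIGHT / HEIGHT

def rescale_box(x_min, y_min, x_max, y_max):
    # Map box corners from the original 1360x800 image to the 320x320 image.
    return (round(Rx * int(x_min)), round(Ry * int(y_min)),
            round(Rx * int(x_max)), round(Ry * int(y_max)))

print(rescale_box(774, 411, 815, 446))  # -> (182, 164, 192, 178)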

The pre-trained model I have used is ssd_mobilenet_v2_fpnlite_320x320_coco17_tpu-8. You can also see the images used for training here and their corresponding .xml files here.

The problem is that after training and converting from saved_model to .tflite using this script, the model does not recognize traffic signs, and the outputs are a bit different from what I expect: instead of a list of lists of normalized coordinates, I get a list of lists of lists of normalized coordinates. The last steps in the training process look like this. After using this script, the output image with predicted bounding boxes looks like this and the printed output is this.
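One hedged way to sanity-check the converted model’s output shapes (the file name and tensor ordering below are assumptions; SSD TFLite outputs are typically batched, which would explain the extra level of nesting):

import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="model.tflite")  # hypothetical path
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
for detail in output_details:
    print(detail["name"], detail["shape"])  # boxes often come back as (1, N, 4)

# Run one dummy inference and strip the leading batch dimension from the boxes.
dummy = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], dummy)
interpreter.invoke()
boxes = interpreter.get_tensor(output_details[0]["index"])
print(boxes.shape, "->", np.squeeze(boxes, axis=0).shape)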

What could be the problem? Thank you!

submitted by /u/morphinnas
[visit reddit] [comments]

Categories
Offsites

A Step Toward More Inclusive People Annotations in the Open Images Extended Dataset

In 2016, we introduced Open Images, a collaborative release of ~9 million images annotated with image labels spanning thousands of object categories and bounding box annotations for 600 classes. Since then, we have made several updates, including the release of crowdsourced data to the Open Images Extended collection to improve diversity of object annotations. While the labels provided with these datasets were expansive, they did not focus on sensitive attributes for people, which are critically important for many machine learning (ML) fairness tasks, such as fairness evaluations and bias mitigation. In fact, finding datasets that include thorough labeling of such sensitive attributes is difficult, particularly in the domain of computer vision.

Today, we introduce the More Inclusive Annotations for People (MIAP) dataset in the Open Images Extended collection. The collection contains more complete bounding box annotations for the person class hierarchy in 100k images containing people. Each annotation is also labeled with fairness-related attributes, including perceived gender presentation and perceived age range. With the increasing focus on reducing unfair bias as part of responsible AI research, we hope these annotations will encourage researchers already leveraging Open Images to incorporate fairness analysis in their research.

Examples of new boxes in MIAP. In each subfigure the magenta boxes are from the original Open Images dataset, while the yellow boxes are additional boxes added by the MIAP Dataset. Original photo credits — left: Boston Public Library; middle: jen robinson; right: Garin Fons; all used with permission under the CC BY 2.0 license.

Annotations in Open Images
Each image in the original Open Images dataset contains image-level annotations that broadly describe the image and bounding boxes drawn around specific objects. To avoid drawing multiple boxes around the same object, less specific classes were temporarily pruned from the label candidate set, a process that we refer to as hierarchical de-duplication. For example, an image with labels animal, cat, and washing machine has bounding boxes annotated for cat and washing machine, but not for the redundant class animal.

The MIAP dataset addresses the five classes that are part of the person hierarchy in the original Open Images dataset: person, man, woman, boy, girl. The existence of these labels makes the Open Images dataset uniquely valuable for research advancing responsible AI, allowing one to train a general person detector with access to gender- and age-range-specific labels for fairness analysis and bias mitigation.

However, we found that the combination of hierarchical de-duplication and societally imposed distinctions between woman/girl and man/boy introduced limitations in the original annotations. For example, if annotators were asked to draw boxes for the class girl, they would not draw a box around a boy in the image. They may or may not draw a box around a woman depending on their assessment of the age of the individual and their cultural understanding of the concept of girl. These decisions could be applied inconsistently between images, depending on the cultural background of the individual annotator, the appearance of an individual, and the context of the scene. Consequently, the bounding box annotations in some images were incomplete, with some people who appeared prominently not being annotated.

Annotations in MIAP
The new MIAP annotations are designed to address these limitations and fulfill the promise of Open Images as a dataset that will enable new advances in machine learning fairness research. Rather than asking annotators to draw boxes for the most specific class from the hierarchy (e.g., girl), we invert the procedure, always requesting bounding boxes for the gender- and age-agnostic person class. All person boxes are then separately associated with labels for perceived gender presentation (predominantly feminine, predominantly masculine, or unknown) and age presentation (young, middle, older, or unknown). We recognize that gender is not binary and that an individual’s gender identity may not match their perceived or intended gender presentation and, in an effort to mitigate the effects of unconscious bias on the annotations, we reminded annotators that norms around gender expression vary across cultures and have changed over time.

This procedure adds a significant number of boxes that were previously missing.

Over the 100k images that include people, the number of person bounding boxes has increased from ~358k to ~454k. The number of bounding boxes per perceived gender presentation and perceived age presentation increased consistently. These new annotations provide more complete ground truth for training a person detector as well as more accurate subgroup labels for incorporating fairness into computer vision research.

Comparison of number of person bounding boxes between the original Open Images and the new MIAP dataset.

Intended Use
We include annotations for perceived age range and gender presentation for person bounding boxes because we believe these annotations are necessary to advance the ability to better understand and work to mitigate and eliminate unfair bias or disparate performance across protected subgroups within the field of image understanding. We note that the labels capture the gender and age range presentation as assessed by a third party based on visual cues alone, rather than an individual’s self-identified gender or actual age. We do not support or condone building or deploying gender and/or age presentation classifiers trained from these annotations as we believe the risks associated with the use of these technologies outside fairness research outweigh any potential benefits.

Acknowledgements
The core team behind this work included Utsav Prabhu, Vittorio Ferrari, and Caroline Pantofaru. We would also like to thank Alex Hanna, Reena Jana, Alina Kuznetsova, Matteo Malloci, Stefano Pellegrini, Jordi Pont-Tuset, and Mahima Pushkarna, for their contributions to the project.

Categories
Misc

Tough Customer: NVIDIA Unveils Jetson AGX Xavier Industrial Module

From factories and farms to refineries and construction sites, the world is full of places that are hot, dirty, noisy, potentially dangerous — and critical to keeping industry humming. These places all need inspection and maintenance alongside their everyday operations, but, given safety concerns and working conditions, it’s not always best to send in humans. Read article >

The post Tough Customer: NVIDIA Unveils Jetson AGX Xavier Industrial Module appeared first on The Official NVIDIA Blog.