Categories
Misc

VAE won’t learn color

I found the CVAE tutorial on tensorflow.org and repurposed it for new data, but it only reconstructs the outline as a nearly grayscale image and ignores the colour; the loss sits around 14000. Please help…

import numpy as np
import tensorflow as tf

# below: model

class VAE(tf.keras.Model):
    def __init__(self, latent_dim):
        super(VAE, self).__init__()
        self.latent_dim = latent_dim
        self.encoder = tf.keras.Sequential(
            [
                tf.keras.layers.InputLayer(input_shape=(128, 128, 3)),
                tf.keras.layers.Conv2D(
                    filters=32, kernel_size=3, strides=(2, 2), activation='relu'),
                tf.keras.layers.Conv2D(
                    filters=64, kernel_size=3, strides=(2, 2), activation='relu'),
                tf.keras.layers.Conv2D(
                    filters=128, kernel_size=3, strides=(2, 2), activation='relu'),
                # tf.keras.layers.Conv2D(
                #     filters=256, kernel_size=3, strides=(2, 2), activation='relu'),
                tf.keras.layers.Flatten(),
                # No activation: outputs mean and logvar concatenated.
                tf.keras.layers.Dense(latent_dim + latent_dim),
            ]
        )
        self.decoder = tf.keras.Sequential(
            [
                tf.keras.layers.InputLayer(input_shape=(latent_dim,)),
                tf.keras.layers.Dense(units=16 * 16 * 256, activation=tf.nn.relu),
                tf.keras.layers.Reshape(target_shape=(16, 16, 256)),
                tf.keras.layers.Conv2DTranspose(
                    filters=128, kernel_size=3, strides=2, padding='same',
                    activation='relu'),
                tf.keras.layers.Conv2DTranspose(
                    filters=64, kernel_size=3, strides=2, padding='same',
                    activation='relu'),
                tf.keras.layers.Conv2DTranspose(
                    filters=32, kernel_size=3, strides=2, padding='same',
                    activation='relu'),
                # No activation: returns per-pixel logits for the 3 colour channels.
                tf.keras.layers.Conv2DTranspose(
                    filters=3, kernel_size=3, strides=1, padding='same'),
            ]
        )

    @tf.function
    def sample(self, eps=None):
        if eps is None:
            eps = tf.random.normal(shape=(100, self.latent_dim))
        return self.decode(eps, apply_sigmoid=True)

    def encode(self, x):
        mean, logvar = tf.split(self.encoder(x), num_or_size_splits=2, axis=1)
        return mean, logvar

    def reparameterize(self, mean, logvar):
        eps = tf.random.normal(shape=mean.shape)
        return eps * tf.exp(logvar * .5) + mean

    def decode(self, z, apply_sigmoid=False):
        logits = self.decoder(z)
        if apply_sigmoid:
            probs = tf.sigmoid(logits)
            return probs
        return logits


optimizer = tf.keras.optimizers.Adam(1e-3)


def log_normal_pdf(sample, mean, logvar, raxis=1):
    log2pi = tf.math.log(2. * np.pi)
    return tf.reduce_sum(
        -.5 * ((sample - mean) ** 2. * tf.exp(-logvar) + logvar + log2pi),
        axis=raxis)


def compute_loss(model, x):
    mean, logvar = model.encode(x)
    z = model.reparameterize(mean, logvar)
    x_logit = model.decode(z)
    cross_ent = tf.nn.sigmoid_cross_entropy_with_logits(logits=x_logit, labels=x)
    logpx_z = -tf.reduce_sum(cross_ent, axis=[1, 2, 3])
    logpz = log_normal_pdf(z, 0., 0.)
    logqz_x = log_normal_pdf(z, mean, logvar)
    return -tf.reduce_mean(logpx_z + logpz - logqz_x)

# above: loss


def train_step(model, x, optimizer):
    """Executes one training step and returns the loss.

    This function computes the loss and gradients, and uses the latter to
    update the model's parameters.
    """
    with tf.GradientTape() as tape:
        loss = compute_loss(model, x)
    gradients = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(gradients, model.trainable_variables))


epochs = 10
latent_dim = 64  # up or 8
num_examples_to_generate = 32

model = VAE(latent_dim)
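For context, a minimal sketch of how these pieces are usually driven. It is not part of the original post: train_dataset is an assumed tf.data.Dataset of float32 batches with shape (batch, 128, 128, 3), scaled to [0, 1], which is the range sigmoid cross-entropy expects.

import time

# Assumption: train_dataset yields float32 batches of shape (batch, 128, 128, 3)
# with pixel values scaled to [0, 1].
for epoch in range(1, epochs + 1):
    start = time.time()
    for batch in train_dataset:
        train_step(model, batch, optimizer)

    # Monitor the negative ELBO on one batch to see whether training still improves.
    monitor_batch = next(iter(train_dataset))
    print('Epoch {}: loss {:.1f} ({:.1f}s)'.format(
        epoch, float(compute_loss(model, monitor_batch)), time.time() - start))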

submitted by /u/much_bad_gramer

Categories
Misc

Reconstructing thousands of particles in one go at the CERN LHC with TensorFlow

submitted by /u/nbortolotti
Categories
Misc

Making Movie Magic, NVIDIA Powers 13 Years of Oscar-Winning Visual Effects

For the 13th year running, NVIDIA professional GPUs have powered the dazzling visuals and cinematics behind every Academy Award nominee for Best Visual Effects. The 93rd annual Academy Awards will take place on Sunday, April 25, with five VFX nominees in the running: The Midnight Sky, Tenet, Mulan, The One and Only Ivan, and Love and Monsters.

The post Making Movie Magic, NVIDIA Powers 13 Years of Oscar-Winning Visual Effects appeared first on The Official NVIDIA Blog.

Categories
Offsites

Evolving Reinforcement Learning Algorithms

A long-term, overarching goal of research into reinforcement learning (RL) is to design a single general purpose learning algorithm that can solve a wide array of problems. However, because the RL algorithm taxonomy is quite large, and designing new RL algorithms requires extensive tuning and validation, this goal is a daunting one. A possible solution would be to devise a meta-learning method that could design new RL algorithms that generalize to a wide variety of tasks automatically.

In recent years, AutoML has shown great success in automating the design of machine learning components, such as neural network architectures and model update rules. One example is Neural Architecture Search (NAS), which has been used to develop better neural network architectures for image classification and efficient architectures for running on phones and hardware accelerators. In addition to NAS, AutoML-Zero shows that it’s even possible to learn the entire algorithm from scratch using basic mathematical operations. One common theme in these approaches is that the neural network architecture or the entire algorithm is represented by a graph, and a separate algorithm is used to optimize the graph for certain objectives.

These earlier approaches were designed for supervised learning, in which the overall algorithm is more straightforward. But in RL, there are more components of the algorithm that could be potential targets for design automation (e.g., neural network architectures for agent networks, strategies for sampling from the replay buffer, overall formulation of the loss function), and it is not always clear what the best model update procedure would be to integrate these components. Prior efforts to automate RL algorithm discovery have focused primarily on model update rules. These approaches learn the optimizer or RL update procedure itself and commonly represent the update rule with a neural network such as an RNN or CNN, which can be efficiently optimized with gradient-based methods. However, these learned rules are not interpretable or generalizable, because the learned weights are opaque and domain specific.

In our paper “Evolving Reinforcement Learning Algorithms”, accepted at ICLR 2021, we show that it’s possible to learn new, analytically interpretable and generalizable RL algorithms by using a graph representation and applying optimization techniques from the AutoML community. In particular, we represent the loss function, which is used to optimize an agent’s parameters over its experience, as a computational graph, and use Regularized Evolution to evolve a population of computational graphs over a set of simple training environments. This results in increasingly better RL algorithms, and the discovered algorithms generalize to more complex environments, even those with visual observations like Atari games.

RL Algorithm as a Computational Graph
Inspired by ideas from NAS, which searches over the space of graphs representing neural network architectures, we meta-learn RL algorithms by representing the loss function of an RL algorithm as a computational graph. In this case, we use a directed acyclic graph for the loss function, with nodes representing inputs, operators, parameters and outputs. For example, in the computational graph for DQN, input nodes include data from the replay buffer, operator nodes include neural network operators and basic math operators, and the output node represents the loss, which will be minimized with gradient descent.
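To make the idea concrete, here is a toy sketch of a loss expressed as a small directed acyclic graph whose final node is the squared Bellman error. This is our illustration, not the paper’s actual PyGlove search space; the node and operator names are invented.

import numpy as np

# Toy DAG for the DQN loss: each node is (name, operator, inputs). Evaluating the
# nodes in order yields the squared Bellman error as the final "loss" node.
OPS = {
    "index":    lambda q, a: q[a],
    "max":      lambda q: np.max(q),
    "scale":    lambda x, c=0.99: c * x,      # multiply by the discount gamma
    "add":      lambda x, y: x + y,
    "subtract": lambda x, y: x - y,
    "square":   lambda x: x ** 2,
}

DQN_LOSS_GRAPH = [
    ("q_sa",       "index",    ["q_values", "action"]),    # Q(s_t, a_t)
    ("max_q_next", "max",      ["target_q_values"]),       # max_a Q_targ(s_{t+1}, a)
    ("disc_next",  "scale",    ["max_q_next"]),            # gamma * max_a Q_targ
    ("td_target",  "add",      ["reward", "disc_next"]),   # r + gamma * max_a Q_targ
    ("delta",      "subtract", ["q_sa", "td_target"]),     # Bellman error
    ("loss",       "square",   ["delta"]),                 # squared Bellman error
]

def evaluate(graph, inputs):
    values = dict(inputs)  # input nodes come from a replay buffer sample
    for name, op, args in graph:
        values[name] = OPS[op](*[values[a] for a in args])
    return values["loss"]

# Example: one transition sampled from a replay buffer.
print(evaluate(DQN_LOSS_GRAPH, {
    "q_values": np.array([1.0, 2.5]), "action": 1,
    "target_q_values": np.array([0.5, 1.5]), "reward": 1.0,
}))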

Such a representation offers a few benefits. It is expressive enough to define existing algorithms as well as new, undiscovered ones. It is also interpretable: the graph can be analyzed in the same way as human-designed RL algorithms, unlike approaches that use black box function approximators for the entire RL update procedure. If researchers can understand why a learned algorithm is better, then they can both modify its internal components to improve it and transfer the beneficial components to other problems. Finally, the representation supports general algorithms that can solve a wide variety of problems.

Example computation graph for DQN which computes the squared Bellman error.

We implemented this representation using the PyGlove library, which conveniently turns the graph into a search space that can be optimized with regularized evolution.

Evolving RL Algorithms
We use an evolution-based approach to optimize the RL algorithms of interest. First, we initialize a population of training agents with randomized graphs. This population of agents is trained in parallel over a set of training environments. The agents first train on a hurdle environment — an easy environment, such as CartPole, intended to quickly weed out poorly performing programs.

If an agent cannot solve the hurdle environment, the training is stopped early with a score of zero. Otherwise the training proceeds to more difficult environments (e.g., Lunar Lander, simple MiniGrid environments, etc.). The algorithm performance is evaluated and used to update the population, where more promising algorithms are further mutated. To reduce the search space, we use a functional equivalence checker which will skip over newly proposed algorithms if they are functionally the same as previously examined algorithms. This loop continues as new mutated candidate algorithms are trained and evaluated. At the end of training, we select the best algorithm and evaluate its performance over a set of unseen test environments.
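The outer loop can be sketched with a small, runnable toy. Here candidates are plain numbers and the “environments” are cheap fitness functions; they stand in for the loss-function graphs and RL environments used in the paper, and all thresholds are made up for illustration.

import random

def cheap_fitness(x):        # stands in for the hurdle environment (e.g., CartPole)
    return -abs(x - 3.0)

def expensive_fitness(x):    # stands in for the harder environments (e.g., LunarLander)
    return -(x - 3.0) ** 2

def mutate(x):
    return x + random.gauss(0.0, 0.5)

population = [(random.uniform(-10.0, 10.0), float('-inf')) for _ in range(50)]
seen = set()                                  # crude functional-equivalence cache

for _ in range(2000):
    # Regularized evolution: mutate the best member of a small random tournament.
    parent = max(random.sample(population, k=10), key=lambda c: c[1])
    child = mutate(parent[0])

    key = round(child, 3)                     # skip (near-)duplicate candidates
    if key in seen:
        continue
    seen.add(key)

    # Hurdle: candidates that fail the cheap check get a floor score, no further training.
    if cheap_fitness(child) < -2.0:
        score = float('-inf')
    else:
        score = expensive_fitness(child)

    population.append((child, score))         # add the new candidate...
    population.pop(0)                         # ...and age out the oldest member

print('best candidate:', max(population, key=lambda c: c[1]))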

The population size in the experiments was around 300 agents, and we observed the evolution of good candidate loss functions after 20,000-50,000 mutations, requiring about three days of training. We were able to train on CPUs because the training environments were simple, which kept the computational and energy cost of training in check. To further control the cost of training, we seeded the initial population with human-designed RL algorithms such as DQN.

Overview of meta-learning method. Newly proposed algorithms must first perform well on a hurdle environment before being trained on a set of harder environments. Algorithm performance is used to update a population where better performing algorithms are further mutated into new algorithms. At the end of training, the best performing algorithm is evaluated on test environments.

Learned Algorithms
We highlight two discovered algorithms that exhibit good generalization performance. The first is DQNReg, which builds on DQN by adding a weighted penalty on the Q-values to the normal squared Bellman error. The second learned loss function, DQNClipped, is more complex, although its dominating term has a simple form — the max of the Q-value and the squared Bellman error (modulo a constant). Both algorithms can be viewed as a way to regularize the Q-values. While DQNReg adds a soft constraint, DQNClipped can be interpreted as a kind of constrained optimization that will minimize the Q-values if they become too large. We show that this learned constraint kicks in during the early stage of training when overestimating the Q-values is a potential issue. Once this constraint is satisfied, the loss will instead minimize the original squared Bellman error.
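Following that description, a DQNReg-style loss is easy to sketch. This is our reading of the prose above, not the paper’s code; the penalty weight is a placeholder, and DQNClipped’s full form is omitted since only its dominating term is described here.

import tensorflow as tf

def dqnreg_loss(q_sa, reward, q_next_max, done, gamma=0.99, penalty_weight=0.1):
    # Standard squared Bellman error plus a weighted penalty on the predicted Q-values.
    # penalty_weight is a placeholder, not necessarily the paper's setting.
    td_target = reward + gamma * (1.0 - done) * tf.stop_gradient(q_next_max)
    bellman_error = tf.square(q_sa - td_target)
    return tf.reduce_mean(penalty_weight * q_sa + bellman_error)

# Dummy batch of two transitions, just to show the call:
q_sa = tf.constant([1.2, 0.7])
reward = tf.constant([1.0, 0.0])
q_next_max = tf.constant([0.9, 1.1])
done = tf.constant([0.0, 1.0])
print(float(dqnreg_loss(q_sa, reward, q_next_max, done)))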

A closer analysis shows that while baselines like DQN commonly overestimate Q-values, our learned algorithms address this issue in different ways. DQNReg underestimates the Q-values, while DQNClipped behaves similarly to Double DQN in that it slowly approaches the ground truth without overestimating it.
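For context on that comparison, the only difference between the standard DQN target and the Double DQN target is which network chooses the next action. These are the standard formulas, sketched here in TensorFlow with made-up example values:

import tensorflow as tf

def dqn_target(reward, q_target_next, done, gamma=0.99):
    # DQN: the target network both selects and evaluates the next action,
    # which tends to overestimate Q-values.
    return reward + gamma * (1.0 - done) * tf.reduce_max(q_target_next, axis=1)

def double_dqn_target(reward, q_online_next, q_target_next, done, gamma=0.99):
    # Double DQN: the online network selects the action, the target network evaluates it.
    best_action = tf.argmax(q_online_next, axis=1)
    num_actions = tf.shape(q_target_next)[1]
    chosen = tf.one_hot(best_action, num_actions, dtype=q_target_next.dtype)
    return reward + gamma * (1.0 - done) * tf.reduce_sum(chosen * q_target_next, axis=1)

# Example batch of two transitions:
q_online_next = tf.constant([[2.0, 1.0], [0.5, 0.2]])
q_target_next = tf.constant([[0.8, 3.0], [0.4, 0.1]])
reward = tf.constant([1.0, 0.0])
done = tf.constant([0.0, 0.0])
print(dqn_target(reward, q_target_next, done).numpy(),
      double_dqn_target(reward, q_online_next, q_target_next, done).numpy())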

It’s worth pointing out that these two algorithms consistently emerge when the evolution is seeded with DQN. Learning from scratch, the method rediscovers the TD algorithm. For completeness, we release a dataset of the top 1,000 performing algorithms discovered during evolution. Curious readers can further investigate the properties of these learned loss functions.

Overestimated values are generally a problem in value-based RL. Our method learns algorithms that have found a way to regularize the Q-values and thus reduce overestimation.

Learned Algorithms Generalization Performance
Normally in RL, generalization refers to a trained policy generalizing across tasks. However, in this work we’re interested in algorithmic generalization performance, which means how well an algorithm works over a set of environments. On a set of classical control environments, the learned algorithms can match baselines on the dense reward tasks (CartPole, Acrobot, LunarLander) and outperform DQN on the sparser reward task, MountainCar.

Performance of learned algorithms versus baselines on classical control environments.

On a set of sparse reward MiniGrid environments, which test a variety of different tasks, we see that DQNReg greatly outperforms baselines on both the training and test environments, in terms of sample efficiency and final performance. In fact, the effect is even more pronounced on the test environments, which vary in size, configuration, and existence of new obstacles, such as lava.

Training environment performance versus training steps as measured by episode return over 10 training seeds. DQNReg can match or outperform baselines in sample efficiency and final performance.
DQNReg can greatly outperform baselines on unseen test environments.

We visualize the performance of normal DDQN vs. the learned algorithm DQNReg on a few MiniGrid environments. The starting location, wall configuration, and object configuration of these environments are randomized at each reset, which requires the agent to generalize instead of simply memorizing the environment. While DDQN often struggles to learn any meaningful behavior, DQNReg can learn the optimal behavior efficiently.

DDQN vs. DQNReg (learned) on randomized MiniGrid environments.

On image-based Atari environments we also observe improved performance, even though training was done entirely on non-image-based environments. This suggests that meta-training on a set of cheap but diverse training environments with a generalizable algorithm representation could enable radical algorithmic generalization.

Env          DQN        DDQN       PPO        DQNReg
Asteroid     1364.5     734.7      2097.5     2390.4
Bowling      50.4       68.1       40.1       80.5
Boxing       88.0       91.6       94.6       100.0
RoadRunner   39544.0    44127.0    35466.0    65516.0
Performance of learned algorithm, DQNReg, against baselines on several Atari games. Performance is evaluated over 200 test episodes every 1 million steps.

Conclusion
In this post, we’ve discussed learning new interpretable RL algorithms by representing their loss functions as computational graphs and evolving a population of agents over this representation. The computational graph formulation allows researchers to both build upon human-designed algorithms and study the learned algorithms using the same mathematical toolset as the existing algorithms. We analyzed a few of the learned algorithms and can interpret them as a form of entropy regularization to prevent value overestimation. These learned algorithms can outperform baselines and generalize to unseen environments. The top performing algorithms are available for further analytical study.

We hope that future work will extend to more varied RL settings, such as actor-critic algorithms or offline RL. Furthermore, we hope that this work can lead to machine-assisted algorithm development, where computational meta-learning helps researchers find new directions to pursue and incorporate learned algorithms into their own work.

Acknowledgements
We thank our co-authors Daiyi Peng, Esteban Real, Sergey Levine, Quoc V. Le, Honglak Lee, and Aleksandra Faust. We also thank Luke Metz for helpful early discussions and feedback on the paper, Hanjun Dai for early discussions on related research ideas, Xingyou Song, Krzysztof Choromanski, and Kevin Wu for helping with infrastructure, and Jongwook Choi for helping with environment selection. Finally we thank Tom Small for designing animations for this post.

Categories
Misc

FlavorGraph Serves Up Food Pairings with AI, Molecular Science

A new ingredient mapping tool by Sony AI and Korea University uses molecular science and recipe data to predict how two ingredients will pair together.

It’s not just gourmet chefs who can discover new flavor combinations — a new ingredient mapping tool by Sony AI and Korea University uses molecular science and recipe data to predict how two ingredients will pair together and suggest new mash-ups.

Dubbed FlavorGraph, the graph embedding model was trained on a million recipes and chemical structure data from more than 1,500 flavor molecules. The researchers used PyTorch, CUDA and an NVIDIA TITAN GPU to train and test their large-scale food graph.

Researchers have previously used molecular science to explain classic flavor pairings such as garlic and ginger, cheese and tomato, or pork and apple — determining that ingredients with common dominant flavor molecules combine well. In the FlavorGraph database, flavor molecule information was grouped into profiles such as bitter, fruity, and sweet. 

But other ingredient pairings have different chemical makeups, prompting the team to incorporate recipes into the database as well, giving the model insight into ways flavors have been combined in the past.

FlavorGraph is a large-scale graph network of food and chemical compound nodes.

“The outcome is pairing suggestions that achieve better results than ever before,” wrote Korea University researcher Donghyeon Park and Fred Gifford, strategy and partnerships manager at Sony. “These suggestions can be used to predict relationships between compounds and foods, hinting at new and exciting recipe techniques and driving new perspectives on food science in general.” 

Featured in Scientific Reports, FlavorGraph shows the connections between flavor profiles and the underlying chemical compounds in specific foods. It’s based on the metapath2vec model, and outperforms other baseline methods for food clustering.

The researchers hope the project will lead to the discovery of new recipes, more interesting flavor combinations, and potential substitutes for unhealthy or unsustainable ingredients. 

“We hope that projects like this will continue to complement both the complex ingredient systems fossilized over time through cultural evolution, as well as the electric ingenuity of modern innovators and chefs,” the team wrote. 

Read the full paper in Scientific Reports, and find the data and trained food representations on GitHub.

Read more from Sony >> 

Categories
Misc

Cultivating AI: AgTech Industry Taps NVIDIA GPUs to Protect the Planet

What began as a budding academic movement into farm AI projects has now blossomed into a field of startups creating agriculture technology with a positive social impact for Earth. Whether it’s the threat to honey bees worldwide from varroa mites, devastation to citrus markets from citrus greening, or contamination of groundwater caused by agrochemicals…

The post Cultivating AI: AgTech Industry Taps NVIDIA GPUs to Protect the Planet appeared first on The Official NVIDIA Blog.

Categories
Misc

Mooning Over Selene: NVIDIA’s Julie Bernauer Talks Setting Up One of World’s Fastest Supercomputers

Though admittedly prone to breaking kitchen appliances like ovens and microwaves, Julie Bernauer — senior solutions architect for machine learning and deep learning at NVIDIA — led the small team that successfully built Selene, the world’s fifth-fastest supercomputer. Adding to an already impressive feat, Bernauer’s team brought up Selene as the world went into lockdown.

The post Mooning Over Selene: NVIDIA’s Julie Bernauer Talks Setting Up One of World’s Fastest Supercomputers appeared first on The Official NVIDIA Blog.

Categories
Misc

GFN Thursday Drops the Hammer with ‘Vermintide 2 Chaos Wastes’ Free Expansion, ‘Immortals Fenyx Rising The Lost Gods’ DLC

GFN Thursday is our ongoing commitment to bringing great PC games and service updates to our members each week. Every Thursday, we share updates on what’s new in the cloud — games, exclusive features, and news on GeForce NOW. This week, it includes the latest updates for two popular games: Fatshark’s free expansion Warhammer: Vermintide 2 Chaos Wastes and the Immortals Fenyx Rising The Lost Gods DLC.

The post GFN Thursday Drops the Hammer with ‘Vermintide 2 Chaos Wastes’ Free Expansion, ‘Immortals Fenyx Rising The Lost Gods’ DLC appeared first on The Official NVIDIA Blog.

Categories
Misc

Green for Good: How We’re Supporting Sustainability Efforts in India

When a community embraces sustainability, it can reap multiple benefits: gainful employment for vulnerable populations, more resilient local ecosystems and a cleaner environment. This Earth Day, we’re announcing our four latest corporate social responsibility investments in India, home to more than 2,700 NVIDIANs. These initiatives are part of our multi-year efforts in the country…

The post Green for Good: How We’re Supporting Sustainability Efforts in India appeared first on The Official NVIDIA Blog.

Categories
Misc

Hello World!!! I built a Course on Udemy where I teach you how to build 8 reinforcement learning agents in environments like Mario, Flappy Bird, Stocks and Much More!! You only need to know Python! (FREE FOR LIMITED TIME)

In this course I will teach you to build 8 agents, including:

· Space Invaders Agent using Keras-RL

· Autonomous Taxi using Q-Learning built from scratch

· Flappy Bird Agent using Deep Q Network that we build from scratch

· Mario Agent using Deep Q Network that we build from scratch

· A reinforcement learning S&P 500 stock trading agent that is rewarded for making money off the stock market!

· Another Reinforcement Learning Stock Trading Agent using 89 different Technical indicators (you can pair these to make a lot of money off the stock market 😉)

· 3 car agents that learn to maneuver roundabouts and parking lots, and to merge onto a highway

The only thing you need to know is Python! If you are interested in cutting-edge technology, then this is the course for you! Check it out!

https://www.udemy.com/course/practical-reinforcement-learning/?couponCode=150336130778173C0A71

submitted by /u/samboylansajous