Categories
Offsites

A2C Advantage Actor Critic in TensorFlow 2

In a previous post, I gave an introduction to Policy Gradient reinforcement learning. Policy gradient-based reinforcement learning uses a neural network to learn an action policy directly for the control of agents in an environment. This is opposed to controlling agents with neural network estimations of a value function, such as the Q value in deep Q learning. However, straight Monte-Carlo based methods of policy gradient learning suffer from the problems covered in that policy gradient post, the most significant being high variance in the learning. This variance can be reduced by a process called baselining, and the most effective baselining method is the Advantage Actor Critic method, or A2C. In this post, I'll review the theory of the A2C method and demonstrate how to build an A2C algorithm in TensorFlow 2.

All code shown in this tutorial can be found at this site’s Github repository, in the ac2_tf2_cartpole.py file.

A quick recap of some important concepts

In the A2C algorithm, notice the name "Advantage Actor Critic". The "actor" is the part of the neural network that is used to determine the actions of the agent. The "advantage" is a concept that expresses the relative benefit of taking a certain action $a_t$ at time t from a certain state $s_t$. Note that it is the "relative" benefit, not the "absolute" benefit. This will become clearer when I discuss the concept of "value". The advantage is expressed as:

$$A(s_t, a_t) = Q(s_t, a_t) - V(s_t)$$

The Q value (discussed in other posts, for instance here, here and here) is the expected future reward of taking action $a_t$ from state $s_t$. The value $V(s_t)$ is the expected future reward of the agent being in that state and operating under a certain action policy $\pi$. It can be expressed as:

$$V^{\pi}(s) = \mathbb{E}\left[\sum_{i=1}^{T} \gamma^{i-1} r_{i}\right]$$

Here $\mathbb{E}$ is the expectation operator, and the value $V^{\pi}(s)$ can be read as the expected value of future discounted rewards that will be gathered by the agent, operating under a certain action policy $\pi$. So, the Q value is the expected value of taking a certain action from the current state, whereas V is the expected value of simply being in the current state, under a certain action policy.

The advantage then is the relative benefit of taking a certain action from the current state. It’s kind of like a normalized Q value. For example, let’s consider the last state in a game, where after the next action the game ends. There are three possible actions from this state, with rewards of (51, 50, 49). Let’s also assume that the action selection policy $\pi$ is simply random, so there is an equal chance of any of the three actions being selected. The value of this state, then, is 50 ((51+50+49) / 3). If the first action is randomly selected (reward=51), the Q value is 51. However, the advantage is only equal to 1 (Q-V = 51-50). As can be observed and as stated above, the advantage is a kind of normalized or relative Q value.
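To make this concrete, here is the arithmetic of the example above in a few lines of plain Python:

# Worked example from above: a terminal state with three possible actions
q_values = [51, 50, 49]

# Under a uniformly random policy, the state value is the mean of the Q values
value = sum(q_values) / len(q_values)        # 50.0

# The advantage of each action is its Q value relative to the state value
advantages = [q - value for q in q_values]   # [1.0, 0.0, -1.0]

print(value, advantages)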

Why is this important? If we are using Q values in some way to train our action-taking policy, in the example above the first action would send a "signal" or contribution of 51 to the gradient optimizer, which may be significant enough to push the parameters of the neural network strongly in a certain direction. However, given that the other two actions possible from this state also have high rewards (50 and 49), the signal or contribution is higher than it should be; it is not that much better to take action 1 instead of action 3. Therefore, Q values can be a source of high variance in the training process, and it is much better to use the normalized or baselined Q value, i.e. the advantage, in training. For more discussion of Q values, state values, and advantages, see my post on dueling Q networks.

Policy gradient reinforcement learning and its problems

In a previous post, I presented the policy gradient reinforcement learning algorithm. For details on this algorithm, please consult that post. However, the A2C algorithm shares important similarities with the PG algorithm, and therefore it is necessary to recap some of the theory. First, it has to be recalled that PG-based algorithms involve a neural network that directly outputs estimates of the probability distribution of the best next action to take in a given state. So, for instance, if we have an environment with 4 possible actions, the output from the neural network could be something like [0.5, 0.25, 0.1, 0.15], with the first action being currently favored. In the PG case, then, the neural network is the direct instantiation of the policy of the agent $\pi_{\theta}$ – where this policy is controlled by the parameters of the neural network $\theta$. This is opposed to Q based RL algorithms, where the neural network estimates the Q value in a given state for each possible action. In these algorithms, the action policy is generally an epsilon-greedy policy, where the best action is that action with the highest Q value (with some random choices involved to improve exploration).

The gradient of the loss function for the policy gradient algorithm is as follows:

$$\nabla_\theta J(\theta) \sim \left(\sum_{t=0}^{T-1} \log P_{\pi_{\theta}}(a_t|s_t)\right)\left(\sum_{t'=t+1}^{T} \gamma^{t'-t-1} r_{t'}\right)$$

Note that the term:

$$G_t = \sum_{t'=t+1}^{T} \gamma^{t'-t-1} r_{t'}$$

is just the discounted sum of the rewards onwards from state $s_t$. In other words, it is a sample-based estimate of the true value function $V^{\pi}(s)$, as it is drawn from only a single trajectory of the agent through the game. Remember that in the PG algorithm, the network can only be trained after each full episode has completed, and this is because of the term above.

Now, because it is based on samples of reward trajectories, which aren’t “normalized” or baselined in any way, the PG algorithm suffers from variance issues, resulting in slower and more erratic training progress. A better solution is to replace the $G_t$ function above with the advantage, $A(s_t, a_t)$, and this is what the Advantage Actor Critic method does.

The A2C algorithm

Replacing the $G_t$ function with the advantage, we come up with the following gradient function which can be used in training the neural network:

$$\nabla_\theta J(\theta) \sim \left(\sum_{t=0}^{T-1} \log P_{\pi_{\theta}}(a_t|s_t)\right)A(s_t, a_t)$$

Now, as shown above, the advantage is:

$$A(s_t, a_t) = Q(s_t, a_t) - V(s_t)$$

However, using Bellman’s equation, the Q value can be expressed purely in terms of the rewards and the value function:

$$Q(s_t, a_t) = \mathbb{E}\left[r_{t+1} + \gamma V(s_{t+1})\right]$$

Therefore, the advantage can now be estimated as:

$$A(s_t, a_t) = r_{t+1} + \gamma V(s_{t+1}) - V(s_t)$$

As can be seen from the above, there is a requirement to be able to estimate the value function V. We could estimate it by running our agents through full episodes, in the same way we did in the policy gradient method. However, it would be better to be able to just collect batches of game-steps and train whenever the batch buffer was full, rather than having to wait for an episode to finish. That way, the agent could actually learn “on-the-go” during the middle of an episode/game.

So, do we build another neural network to estimate V? We could have two networks, one to learn the policy and produce actions, and another to estimate the state values. A more efficient solution is to create one network with two output channels, and this is how the A2C method is implemented. The figure below shows the network architecture for an A2C neural network:

A2C architecture

This architecture is based on an A2C method that takes game images as the state input, hence the convolutional neural network layers at the beginning of the network (for more on CNNs, see my post here). This network architecture also resembles the Dueling Q network architecture (see my Dueling Q post). The point to note about the architecture above is that most of the network is shared, with a late bifurcation between the policy part and the value part. The outputs $P(s, a_i)$ are the action probabilities of the policy (generated from the neural network) – $P(a_t|s_t)$. The other output channel is the value estimation – a scalar output which is the predicted value of state s – $V(s)$. The two dense channels disambiguate the policy and the value outputs from the front-end of the neural network.

In this example, we’ll just be demonstrating the A2C algorithm on the Cartpole OpenAI Gym environment which doesn’t require a visual state input (i.e. a set of pixels as the input to the NN), and therefore the two output channels will simply share some dense layers, rather than a series of CNN layers.

The A2C loss functions

There are actually three loss values that need to be calculated in the A2C algorithm. Each of these losses is in practice given a weighting, and then they are summed together (with the entropy loss having a negative sign, see below).

The Critic loss

The Critic, i.e. the value-estimating output of the neural network $V(s)$, needs to be trained so that it predicts the actual value of the state more and more closely. As shown before, the value of a state is calculated as:

$$V^{\pi}(s) = \mathbb{E}\left[\sum_{i=1}^{T} \gamma^{i-1} r_{i}\right]$$

So $V^{\pi}(s)$ is the expected value of the discounted future rewards obtained by following a trajectory through the game under a certain operating policy $\pi$. We can therefore compare the predicted $V(s)$ at each state in the game with the actual sampled discounted rewards that were gathered, and the difference between the two is the Critic loss. In this example, we'll use the mean squared error between the discounted rewards and the predicted values ($(V(s) - DR)^2$) as the loss function.

Now, given that, under the A2C algorithm, we collect state, action and reward tuples until a batch buffer is filled, how are we meant to figure out this discounted rewards sum? Let’s say we progress 3 states through a game, and we collect:

$(V(s_0), r_0), (V(s_1), r_1), (V(s_2), r_2)$

For the first Critic loss, we could calculate it as:

$$MSE(V(s_0), r_0 + \gamma r_1 + \gamma^2 r_2)$$

But that is missing all the following rewards $r_3, r_4, \ldots, r_n$ until the game terminates. We didn't have this problem in the Policy Gradient method, because there we made sure a full run through the game had completed before training the neural network. In the A2C method, we use a trick called bootstrapping. To replace all the discounted rewards $r_3, r_4, \ldots, r_n$, we get the network to estimate the value of state 3, $V(s_3)$, which serves as an estimate of all the discounted future rewards beyond that point in the game. So, for the first Critic loss, we would have:

$$MSE(V(s_0), r_0 + \gamma r_1 + \gamma^2 r_2 + \gamma^3 V(s_3))$$

where $V(s_3)$ is a bootstrapped estimate of the value of the next state $s_3$.

This will be explained more in the code-walkthrough to follow.
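As a quick numerical sketch of the idea (the reward and bootstrap values below are made up for illustration, not taken from the CartPole code later in this post), the bootstrapped target for $V(s_0)$ is formed like this:

GAMMA = 0.95
rewards = [1.0, 1.0, 1.0]   # r_0, r_1, r_2 collected in the batch buffer
bootstrap_value = 2.5       # the network's estimate V(s_3) for the next state

# Discounted sum of the sampled rewards plus the bootstrapped tail estimate
target_v0 = sum(GAMMA ** i * r for i, r in enumerate(rewards)) \
            + GAMMA ** len(rewards) * bootstrap_value

print(target_v0)  # r_0 + gamma*r_1 + gamma^2*r_2 + gamma^3*V(s_3)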

The Actor loss

The second loss function needs to train the Actor (i.e. the action policy). Recall that the advantage weighted policy loss is:

$$\nabla_\theta J(\theta) \sim \left(\sum_{t=0}^{T-1} \log P_{\pi_{\theta}}(a_t|s_t)\right)A(s_t, a_t)$$

Let’s start with the advantage: $A(s_t, a_t) = r_{t+1} + \gamma V(s_{t+1}) - V(s_t)$

This is simply the bootstrapped discounted rewards minus the predicted state values $V(s_t)$ that we gathered up while playing the game. So calculating the advantage is quite straight-forward, once we have the bootstrapped discounted rewards, as will be seen in the code walk-through shortly.

Now, with regard to the $\log P_{\pi_{\theta}}(a_t|s_t)$ term, in this instance we can just calculate the log of the softmax probability estimate for whatever action was taken. So, for instance, if in state 1 ($s_1$) the network softmax output produces {0.1, 0.9} (for a 2-action environment), and the second action was actually taken by the agent, we would want to calculate log(0.9). We can make use of the TensorFlow-Keras SparseCategoricalCrossentropy loss, which takes the action as an integer and uses it to select which softmax output value to apply the log to. So in this example, y_true = [1] (the index of the action taken) and y_pred = [0.1, 0.9] (the softmax output), and the answer would be -log(0.9) = 0.105.
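As a quick standalone check of that number (independent of the CartPole code shown later):

import numpy as np
from tensorflow import keras

sparse_ce = keras.losses.SparseCategoricalCrossentropy()

# y_true is the index of the action taken; y_pred is the softmax output
loss = sparse_ce(np.array([1]), np.array([[0.1, 0.9]]))
print(loss.numpy())  # ~0.105, i.e. -log(0.9)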

Another handy feature of the SparseCategoricalCrossentropy loss in Keras is that it can be called with a "sample_weight" argument. This basically multiplies the log calculation by a value. So, in this example, we can supply the advantages as the sample weights, and it will calculate $\nabla_\theta J(\theta) \sim \left(\sum_{t=0}^{T-1} \log P_{\pi_{\theta}}(a_t|s_t)\right)A(s_t, a_t)$ for us. This will be shown below, but the call will look like:

policy_loss = sparse_ce(actions, policy_logits, sample_weight=advantages)

Entropy loss

In many implementations of the A2C algorithm, another loss term is subtracted – the entropy loss. Entropy is, broadly speaking, a measure of randomness: the higher the entropy, the more random the state of affairs; the lower the entropy, the more ordered the state of affairs. In the case of A2C, the entropy is calculated on the softmax policy action output ($P(a_t|s_t)$) of the neural network. Let’s go back to our two-action example from above. A probability output of {0.1, 0.9} for the two possible actions is an ordered, less random selection of actions; in other words, action 2 will be selected consistently, and only rarely will action 1 be taken. The entropy formula is:

$$E = -\sum p(x) \log(p(x))$$

So in this case, the entropy of that output would be 0.325. However, if the probability output was instead {0.5, 0.5}, the entropy would be 0.693. The 50-50 action probability distribution will produce more random actions, and therefore the entropy is higher.
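These numbers are easy to verify with a couple of lines of plain NumPy (a standalone check, not part of the CartPole code below):

import numpy as np

def entropy(p):
    p = np.asarray(p)
    return -np.sum(p * np.log(p))

print(entropy([0.1, 0.9]))   # ~0.325
print(entropy([0.5, 0.5]))   # ~0.693 (= ln 2)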

By subtracting the entropy calculation from the total loss (or giving the entropy loss a negative sign), more randomness and therefore more exploration is encouraged. The A2C algorithm can have a tendency to converge on particular actions, so subtracting the entropy encourages a better exploration of alternative actions, though making the weighting on this component of the loss too large can also reduce training performance.

Again, we can use an already existing Keras loss function to calculate the entropy. The Keras categorical cross-entropy performs the following calculation:

$$CE = -\sum_{i} y_i \log(\hat{y}_i)$$

If we just pass in the probability outputs as both target and output to this function, then it will calculate the entropy for us. This will be shown in the code below.

The total loss

The total loss function for the A2C algorithm is:

Loss = Actor Loss + Critic Loss * CRITIC_WEIGHT - Entropy Loss * ENTROPY_WEIGHT

A common value for the critic weight is 0.5, and the entropy weight is usually quite low (i.e. on the order of 0.01-0.001), though these hyperparameters can be adjusted and experimented with depending on the environment and network.

Implementing A2C in TensorFlow 2

In the following section, I will provide a walk-through of some code to implement the A2C methodology in TensorFlow 2. The code for this can be found at this site’s Github repository, in the ac2_tf2_cartpole.py file.

First, we perform the usual imports, set some constants, initialize the environment and finally create the neural network model which instantiates the A2C architecture:

import tensorflow as tf
from tensorflow import keras
import numpy as np
import gym
import datetime as dt

STORE_PATH = '/Users/andrewthomas/Adventures in ML/TensorFlowBook/TensorBoard/A2CCartPole'
CRITIC_LOSS_WEIGHT = 0.5
ACTOR_LOSS_WEIGHT = 1.0
ENTROPY_LOSS_WEIGHT = 0.05
BATCH_SIZE = 64
GAMMA = 0.95

env = gym.make("CartPole-v0")
state_size = 4
num_actions = env.action_space.n


class Model(keras.Model):
    def __init__(self, num_actions):
        super().__init__()
        self.num_actions = num_actions
        # Two dense layers shared by both the value and policy outputs
        self.dense1 = keras.layers.Dense(64, activation='relu',
                                         kernel_initializer=keras.initializers.he_normal())
        self.dense2 = keras.layers.Dense(64, activation='relu',
                                         kernel_initializer=keras.initializers.he_normal())
        # Critic head: a single scalar estimating V(s)
        self.value = keras.layers.Dense(1)
        # Actor head: un-normalized logits, one per available action
        self.policy_logits = keras.layers.Dense(num_actions)

    def call(self, inputs):
        x = self.dense1(inputs)
        x = self.dense2(x)
        return self.value(x), self.policy_logits(x)

    def action_value(self, state):
        # Run the state through the network, then sample an action from the
        # policy logits (tf.random.categorical expects logits, not softmax outputs)
        value, logits = self.predict_on_batch(state)
        action = tf.random.categorical(logits, 1)[0]
        return action, value

As can be seen, for this example I have set the critic, actor and entropy loss weights to 0.5, 1.0 and 0.05 respectively. Next, the environment is set up, and then the model class is created.

This class inherits from keras.Model, which enables it to be integrated into the streamlined Keras methods of training and evaluating (for more information, see this Keras tutorial). In the initialization of the class, we see that 2 dense layers have been created, with 64 nodes in each. Then a value layer with one output is created, which evaluates $V(s)$, and finally the policy layer output with a size equal to the number of available actions. Note that this layer produces logits only; the softmax function, which creates pseudo-probabilities ($P(a_t|s_t)$), will be applied within the various TensorFlow functions, as will be seen.

Next, the call function is defined – this function is run whenever a state needs to be “run” through the model, to produce a value and policy logits output. The Keras model API will use this function in its predict functions and also its training functions. In this function, it can be observed that the input is passed through the two common dense layers, and then the function returns first the value output, then the policy logits output.

The next function is the action_value function. This function is called upon when an action needs to be chosen from the model. As can be seen, the first step of the function is to run the predict_on_batch Keras model API function. This function just runs the model.call function defined above. The output is both the values and the policy logits. An action is then selected by randomly choosing an action based on the action probabilities. Note that tf.random.categorical takes as input logits, not softmax outputs. The next function, outside of the Model class, is the function that calculates the critic loss:

def critic_loss(discounted_rewards, predicted_values):
    return keras.losses.mean_squared_error(discounted_rewards, predicted_values) * CRITIC_LOSS_WEIGHT

As explained above, the critic loss comprises the mean squared error between the discounted rewards (which are calculated in another function, soon to be discussed) and the values predicted by the value output of the model (which are accumulated in a list during the agent's trajectory through the game).

The following function shows the actor loss function:

def actor_loss(combined, policy_logits):
    # The "combined" array packs the recorded actions (column 0) and the
    # calculated advantages (column 1) into a single array
    actions = combined[:, 0]
    advantages = combined[:, 1]
    sparse_ce = keras.losses.SparseCategoricalCrossentropy(
        from_logits=True, reduction=tf.keras.losses.Reduction.SUM
    )

    # Weight the log-probability of each taken action by its advantage
    actions = tf.cast(actions, tf.int32)
    policy_loss = sparse_ce(actions, policy_logits, sample_weight=advantages)

    # Entropy of the policy: cross-entropy of the softmax output with itself
    probs = tf.nn.softmax(policy_logits)
    entropy_loss = keras.losses.categorical_crossentropy(probs, probs)

    return policy_loss * ACTOR_LOSS_WEIGHT - entropy_loss * ENTROPY_LOSS_WEIGHT

The first argument to the actor_loss function is an array with two columns (and BATCH_SIZE rows). The first column corresponds to the recorded actions of the agent as it traversed the game. The second column is the calculated advantages, the calculation of which will be shown shortly. Next, the sparse categorical cross-entropy function class is created. The arguments specify that the input to the function is logits (i.e. softmax has not yet been applied to them), and they also specify the reduction to apply to the BATCH_SIZE calculated losses: in this case a sum, which aligns with the summation in:

$$\nabla_\theta J(\theta) \sim \left(\sum_{t=0}^{T-1} \log P_{\pi_{\theta}}(a_t|s_t)\right)A(s_t, a_t)$$

Next, the actions are cast to be integers (rather than floats) and finally, the policy loss is calculated based on the sparse_ce function. As discussed above, the sparse categorical cross-entropy function will select those policy probabilities that correspond to the actions actually taken in the game, and weight them by the advantage values. By applying a summation reduction, the formula above will be implemented in this function.

Next, the actual action probabilities are computed by applying the softmax function to the logits, and the entropy loss is calculated by applying the categorical cross-entropy function. See the previous discussion on how this works.
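To see the plumbing, here is a small sanity check of actor_loss with made-up data (the dummy values are assumptions for illustration; it relies on the actor_loss function and the weight constants defined above):

import numpy as np
import tensorflow as tf

# Dummy batch of 3 steps: recorded actions (column 0) and advantages (column 1)
combined = np.stack([np.array([0.0, 1.0, 1.0]),    # actions taken
                     np.array([0.5, -0.2, 1.3])],  # calculated advantages
                    axis=-1)                       # shape (3, 2)

# Dummy policy logits from the network for a 2-action environment
policy_logits = tf.constant([[2.0, 0.5], [0.1, 1.5], [1.0, 1.0]])

print(actor_loss(combined, policy_logits))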

The following function calculates the discounted reward values and the advantages:

def discounted_rewards_advantages(rewards, dones, values, next_value):
    discounted_rewards = np.array(rewards + [next_value[0]])

    for t in reversed(range(len(rewards))):
        discounted_rewards[t] = rewards[t] + GAMMA * discounted_rewards[t+1] * (1-dones[t])
    discounted_rewards = discounted_rewards[:-1]
    # advantages are bootstrapped discounted rewards - values, using Bellman's equation
    advantages = discounted_rewards - np.stack(values)[:, 0]
    return discounted_rewards, advantages

The first input value to this function is a list of all the rewards that were accumulated during the batch of game steps.
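The post is cut off at this point, but as a rough, hedged sketch (not the author's actual code) of how these pieces might be wired together into a single training step, something along these lines would work:

# Hypothetical sketch only -- the article's actual training loop is not shown here.
model = Model(num_actions)
model.compile(optimizer=keras.optimizers.Adam(), loss=[critic_loss, actor_loss])

state = env.reset()
states, actions, rewards, dones, values = [], [], [], [], []
for _ in range(BATCH_SIZE):
    # Sample an action and record the value estimate for the current state
    action, value = model.action_value(state.reshape(1, -1))
    next_state, reward, done, _ = env.step(action.numpy()[0])
    states.append(state)
    actions.append(action.numpy()[0])
    rewards.append(reward)
    dones.append(done)
    values.append(value[0])
    state = env.reset() if done else next_state

# Bootstrap the remaining return from the value of the state we stopped at
_, next_value = model.action_value(state.reshape(1, -1))
discounted_rewards, advantages = discounted_rewards_advantages(
    rewards, dones, values, next_value[0])

# Pack actions and advantages into the "combined" array expected by actor_loss
combined = np.stack([np.array(actions, dtype=np.float32), advantages], axis=-1)
model.train_on_batch(np.array(states), [discounted_rewards, combined])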

Categories
Misc

Training in NVIDIA Isaac Sim Closes the Sim2Real Gap

Interested in designing, testing, or training your robot in a virtual environment? This can all be done with Isaac Sim on the NVIDIA Omniverse simulation platform. The ability to work with robots in a virtual world is one of the key enabling technologies that will drive a robotics revolution.

Here are some of the key aspects needed to train robots in simulation, as well as some of the cool new features in the upcoming open beta of Isaac Sim.

Sim2Real

Using a simulator to train a robot requires minimizing the Sim2Real gap — that is, the difference between the real domain and the virtual domain. If this gap is sufficiently small, the simulator saves development time and avoids potentially dangerous situations by allowing the robot to be trained in the virtual world before being deployed in the real world. Leveraging the newest features of NVIDIA Isaac Sim makes closing the Sim2Real gap a reality.

Importing Diverse Sets of Robots into Isaac Sim

Before starting to train a robot in a simulated environment, the first task is to get the robot imported and functioning properly. Isaac Sim imports robots seamlessly and supports multiple input formats including URDF, STEP, and OnShape to bring the robot into the simulator. Isaac Sim also provides accurate physics modeling to ensure the imported robot will perform as it is supposed to once properly rigged.

Domain Randomization

The domain randomization techniques of Isaac Sim allow you to train the robot under a very broad set of conditions. This generalization of the training data ensures we can cover the desired conditions of the real domain when deploying the robot. In other words, if the randomized virtual domain is broad enough to cover the real domain, training in simulation will carry over to the real robot.

Digital Twin

Another interesting feature of Isaac Sim gives you the ability to control a robot in the real world from the simulation environment. In this case, you have twin robots, one real and one virtual, each mirroring the other's actions. With this mirroring feature, developers can test robots in simulation, deploy to the real robot, and observe and reduce the Sim2Real gap.

Upcoming Features

The next release of Isaac Sim is scheduled for May 2021. This will be an open beta and the first release available directly in the Omniverse launcher. The highlight features of this release will be multi-camera support and the ability to support simulation of ROS2-based robots. Additionally, the release will add force sensor support, enhanced domain randomization, and many other new capabilities. 

Be sure to register for GTC and attend the session, “Sim-to-Real in Isaac Sim [S31824]” to learn more about the new upcoming open beta release and about closing the Sim2Real gap.

Categories
Misc

ML and Stocks

I am currently thinking about doing some predictions using historical stock data and TensorFlow for a university project. I know that using ML with historical stock data doesn't really make sense, because what the market did before isn't an indicator of what is going to happen next. Does anybody have an idea for a project with ML and stocks that really makes sense?

submitted by /u/tiscit08
[visit reddit] [comments]

Categories
Offsites

CodeReading – 2. Flask

Code Reading is a series that looks inside well-written projects of all kinds: frameworks, libraries, toolkits, and more. It surveys each project's architecture, design philosophy, and code style, taking a broad, light look rather than going through every detail one by one.

Series.


Introduction

Following on from part 1 of the Code Reading series, the library I want to look at this time is Flask, one of the two pillars of Python web development. It is also a library that many proponents of code reading recommend!

The one-line introduction to Flask goes like this:

Flask is a lightweight WSGI web application framework.

The reason Flask can call itself lightweight is that it is genuinely simple to use. With just the few lines of code below, you can stand up a web application.

# save this as app.py
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello, World!"

# $ flask run
# > * Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)

With that, let's work through Flask step by step.

WSGI

Before reading the code, it helps to get a rough grasp of the concepts involved. Flask is a framework used for web development, and it adheres to a communication standard called WSGI (Web Server Gateway Interface). WSGI breaks down the coupling between the web server and the web application, letting each operate according to its own role.


Source: https://medium.com/analytics-vidhya/what-is-wsgi-web-server-gateway-interface-ed2d290449e

The main advantages of this structure are 1) better performance and 2) scalability. Since the focus of this post is reading code, I will only mention them briefly and move on.

The specification of the interface can be seen in the code below: the web application receives a callable Request object and returns a Response as the result. (The werkzeug used in the example below is the library at the core of flask, written by the same developer.)

from werkzeug.wrappers import Request, Response

@Request.application
def application(request):
    return Response('Hello, World!')

if __name__ == '__main__':
    from werkzeug.serving import run_simple
    run_simple('localhost', 4000, application)

The Pallets Projects

flask is currently maintained by an organization called The Pallets Projects. If you visit its page, you will find that several well-known libraries you have probably used were created there as well.

  • click: short for Command Line Interface Creation Kit; it makes building CLIs simple. (10.6k stars)
  • jinja: an HTML template engine offering rich features, Python-like syntax, and data binding. (7.6k stars)
  • werkzeug: the core library for WSGI web development. (5.7k stars)

Those three projects are the flagship ones. It is striking that, alongside flask, the organization contains famous libraries you have seen elsewhere, and that every one of these projects was created by Armin Ronacher. Studies have shown that productivity varies enormously from developer to developer, but it is still remarkable to see libraries built by a single person used all over the world.

Because they were made by the same person, the libraries share a similar character. Looking at click and flask, you can feel how sharply they are focused on usability, something you can also confirm in the "A Simple Example" section included in every one of the projects above.

Flask

(The Flask code is examined as of the '1.1.2' tag.)

As we saw above, Flask is assembled from the various libraries of The Pallets Projects. Put simply, it is a web application framework built by wrapping werkzeug.

install_requires=[
    "Werkzeug>=0.15",
    "Jinja2>=2.10.1",
    "itsdangerous>=0.24",
    "click>=5.1",
],

Design philosophy

The framework as a whole is grounded in simplicity. Dependencies are kept to a minimum: no database layer, a staple of web development, is built in (Django, by contrast, ships with a database layer configured out of the box), and because templating is kept minimal, the only notable dependency is Jinja2 as the template engine. Extension is left up to the user.

Flask also uses an explicit application object, for the following three reasons:

  1. An implicit application object can only ever have one instance at a time.
  2. The application object ties resource loading to the package name.
  3. In general, explicit is better than implicit.

If Flask did not use an explicit application object, the code would look something like this:

from hypothetical_flask import route

@route('/')
def index():
    return 'Hello World!'

The scope of an application would be ambiguous to define, and several applications running behind a single object could cause confusion. The author also points out that an explicit application object has the advantage that you can give it a minimal set of features and run unit tests against it.
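For example, here is a minimal sketch (assuming the app.py example from the start of this post, with a hypothetical test_app.py file name) of how the explicit app object can be exercised in a unit test:

# test_app.py -- assumes the app.py example shown earlier in this post
from app import app

def test_hello():
    client = app.test_client()   # the explicit app object hands us a test client
    response = client.get("/")
    assert response.data == b"Hello, World!"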

Code

Across the codebase, functions and classes are cleanly divided by responsibility, the built-in features of the Python language are used to the fullest (it is Pythonic), and you can feel throughout that a great deal of attention has been paid to readability. Next, let's step into a few specific pieces of code.

helpers.py: declares various utility functions and classes

# https://github.com/pallets/flask/blob/1.1.2/src/flask/helpers.py

def get_env():
    """Get the environment the app is running in, indicated by the
    :envvar:`FLASK_ENV` environment variable. The default is
    ``'production'``.
    """
    return os.environ.get("FLASK_ENV") or "production"

def get_debug_flag():
    """Get whether debug mode should be enabled for the app, indicated
    by the :envvar:`FLASK_DEBUG` environment variable. The default is
    ``True`` if :func:`.get_env` returns ``'development'``, or ``False``
    otherwise.
    """
    val = os.environ.get("FLASK_DEBUG")

    if not val:
        return get_env() == "development"

    return val.lower() not in ("0", "false", "no")

def url_for(endpoint, **values):
    ...

def get_template_attribute(template_name, attribute):
    ...

def total_seconds(td):
    """Returns the total seconds from a timedelta object.
    :param timedelta td: the timedelta to be converted in seconds
    :returns: number of seconds
    :rtype: int
    """
    return td.days * 60 * 60 * 24 + td.seconds

As the code above shows, frequently used utility functions are written with intuitive names and docstrings, and the docstrings also spell out the types of the inputs and outputs. These could equally be expressed with the typing module added in Python 3.5.
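For instance, the helpers above could be annotated along these lines (a sketch of the idea, not the actual Flask source):

import os

def get_env() -> str:
    """Get the environment the app is running in (default: 'production')."""
    return os.environ.get("FLASK_ENV") or "production"

def get_debug_flag() -> bool:
    """Get whether debug mode should be enabled for the app."""
    val = os.environ.get("FLASK_DEBUG")
    if not val:
        return get_env() == "development"
    return val.lower() not in ("0", "false", "no")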

templating.py: template rendering through Jinja2 (how the code is written)

This file holds the logic that drives the template engine, but what I want to highlight here is how the code is written. «Clean Code» compares code to a newspaper article read from top to bottom: the key summary comes first, and the details follow below.

def get_source(self, environment, template):
    if self.app.config["EXPLAIN_TEMPLATE_LOADING"]:
        return self._get_source_explained(environment, template)
    return self._get_source_fast(environment, template)

def _get_source_explained(self, environment, template):
    ...

def _get_source_fast(self, environment, template):
    ...

Flask is likewise written so that it can be read from top to bottom, and you can also see the object-oriented convention of marking access levels applied. Python does not enforce this at the language level, but there is an implicit naming convention:

def method_name(): # public
def _method_name():  # protected or private
def __method_name(): # private

wrappers.py: classes declared by wrapping werkzeug (how classes are extended)

In this code you can see almost the same approach to extending classes as PyTorch's Function in part 1.

# https://github.com/pallets/flask/blob/1.1.2/src/flask/wrappers.py

from werkzeug.wrappers import Request as RequestBase
from werkzeug.wrappers import Response as ResponseBase

class JSONMixin(_JSONMixin):
    ...

class Request(RequestBase, JSONMixin):
    ...

class Response(ResponseBase, JSONMixin):
    ...

That is, a base class is put in place and its interface and attributes are extended through mixins. (Mixins are briefly explained in part 1 on PyTorch.)

One interesting point is that, beyond the mixins, the way metaclasses are used is also identical to what we saw in PyTorch.

# https://github.com/pallets/flask/blob/1.1.2/src/flask/_compat.py#L60

def with_metaclass(meta, *bases):
    """Create a base class with a metaclass."""
    # This requires a bit of explanation: the basic idea is to make a
    # dummy metaclass for one level of class instantiation that replaces
    # itself with the actual metaclass.
    class metaclass(type):
        def __new__(metacls, name, this_bases, d):
            return meta(name, bases, d)

    return type.__new__(metaclass, "temporary_class", (), {})

with_metaclass is used to change the behavior of a class's built-in methods so that the class behaves exactly as desired; PyTorch defines its Function class through this same mechanism (see part 1 for details). Since Flask was created well before PyTorch, it seems the PyTorch maintainers followed Flask's code style closely, in some cases reusing pieces of it as-is. Just as Flask's code is readable and intuitive to use, it turns out PyTorch had good reasons to be the same.

signals.py: detecting the signals emitted as the application runs

# https://github.com/pallets/flask/blob/1.1.2/src/flask/signals.py#L54

# Core signals.  For usage examples grep the source code or consult
# the API documentation in docs/api.rst as well as docs/signals.rst
template_rendered = _signals.signal("template-rendered")
before_render_template = _signals.signal("before-render-template")
request_started = _signals.signal("request-started")
request_finished = _signals.signal("request-finished")
request_tearing_down = _signals.signal("request-tearing-down")
got_request_exception = _signals.signal("got-request-exception")
appcontext_tearing_down = _signals.signal("appcontext-tearing-down")
appcontext_pushed = _signals.signal("appcontext-pushed")
appcontext_popped = _signals.signal("appcontext-popped")
message_flashed = _signals.signal("message-flashed")

This part identifies signals about what the application is doing, and it is almost the only place where Flask relies on something other than the Python standard library or the author's own libraries. Even then, it is not a strictly required dependency.

When developing a library, you could likewise use the Blinker dispatching system if you need one.
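A minimal Blinker sketch might look like this (the signal name and handler are hypothetical, not ones Flask defines):

from blinker import signal

# Create (or look up) a named signal -- hypothetical name for illustration
model_saved = signal("model-saved")

def on_model_saved(sender, **extra):
    print(f"model saved by {sender!r} with {extra}")

model_saved.connect(on_model_saved)

# Anywhere in the code base, emitting the signal notifies all receivers
model_saved.send("trainer", path="/tmp/model.ckpt")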

werkzeug/datastructures.py: (as a bonus) the data-structure classes that werkzeug defines as utilities

While going through the Flask code, I naturally ended up glancing at the libraries it is tied to, and I would like to introduce one piece of code among them that I think is especially worth studying.

# https://github.com/pallets/werkzeug/blob/1.0.1/src/werkzeug/datastructures.py

class ImmutableListMixin:
    ...

class ImmutableList(ImmutableListMixin, list):
    ...

class OrderedMultiDict(MultiDict):
   ...

    def __eq__(self, other):
        ...

    def __getstate__(self):
        ...

    def __setstate__(self, values):
        ...

In this way, mixins are defined to extend the data structures that are needed, and beyond the usual __init__ constructor you can see built-in methods such as __eq__, __ne__, __getstate__, __setitem__, items, and pop re-implemented to suit each class's needs.
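As a rough sketch of that pattern (illustrative only, not the actual werkzeug code), such a mixin typically overrides the mutating built-ins so that they refuse to run:

class ImmutableListMixin:
    """Block every mutating operation of the class it is mixed into."""

    def _immutable_error(self, *args, **kwargs):
        raise TypeError(f"{type(self).__name__!r} objects are immutable")

    __setitem__ = __delitem__ = append = extend = insert = _immutable_error
    pop = remove = sort = reverse = _immutable_error


class ImmutableList(ImmutableListMixin, list):
    pass


items = ImmutableList([1, 2, 3])
# items.append(4)  -> TypeError: 'ImmutableList' objects are immutable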

# https://github.com/pallets/werkzeug/blob/1.0.1/src/werkzeug/datastructures.py#L919

class Headers(object):
    """An object that stores some headers.  It has a dict-like interface
    but is ordered and can store the same keys multiple times.
    This data structure is useful if you want a nicer way to handle WSGI
    headers which are stored as tuples in a list.
    ...
    """

The Headers class used by Request likewise re-implements the basic built-in methods so they can be used intuitively. Reading this code reminded me of PyTorch's Module class!

Beyond that, there are various decorators implemented and used in an intuitive way, so keep an eye out for them as you read through the code (see @setupmethod for an example).

Closing thoughts

Looking at Flask this time, it was fascinating to see a single individual produce so many open-source projects used around the world, and I came to understand why Flask is recommended as good code to read. I think it is a truly fine specimen of Pythonic code style, and I expect to keep referring back to it.

And given how much PyTorch was influenced by Flask, it might well have been a good idea to read Flask first and then move on to PyTorch!


Categories
Misc

Destination Earth: Supercomputer Simulation to Support Europe’s Climate-Neutral Goals

To support its efforts to become climate neutral by 2050, the European Union has launched a Destination Earth initiative to build a detailed digital simulation of the planet that will help scientists map climate development and extreme weather events with high accuracy.

The decade-long project will create a digital twin of the Earth, rendered at one-kilometer scale and based on continuously updated observational data from climate, atmospheric, and meteorological sensors — as well as measures of the environmental impacts of human activities. 

Led by the European Space Agency, European Centre for Medium-Range Weather Forecasts, and European Organisation for the Exploitation of Meteorological Satellites, the digital twin project is estimated to require a system with 20,000 GPUs to operate at full scale, the researchers wrote in a strategy paper published in Nature Computational Science.

Insights from the simulation will allow scientists to develop and test scenarios, informing policy decisions and sustainable development planning. Dubbed DestinE, the model could be used to assess drought risk, monitor sea level rise, and track changes in the polar regions. It will also be used for strategies around food and water supplies, as well as renewable energy initiatives including wind farms and solar plants. 

“If you are planning a two-meter high dike in The Netherlands, for example, I can run through the data in my digital twin and check whether the dike will in all likelihood still protect against expected extreme events in 2050,” said Peter Bauer, deputy director for Research at the European Centre for Medium-Range Weather Forecasts and co-initiator of Destination Earth.

Unlike traditional climate models, which represent large-scale processes and neglect the finer details essential for precise weather forecasts, the digital twin model will bring together both, enabling high-resolution simulations of the entire climate and weather system. 

The researchers plan to harness AI to help process data, represent uncertain processes, accelerate simulations, and filter out key insights from the data. The main digital modeling platform is aimed to be operational by 2023, with the digital twin fully developed and running by 2027.

“Destination Earth is a key initiative for Europe’s twin digital and green transitions,” said Thomas Skordas, the European Commission’s director for digital excellence and science infrastructure. “It will greatly enhance our ability to produce climate models with unprecedented detail and reliability, allowing policy-makers to anticipate and mitigate the effects of climate change, saving lives and alleviating economic consequences in cases of natural disasters.”

Read the research team’s papers in Nature Computational Science and Nature Climate Change.

Read more >> 

Categories
Misc

We Won’t See You There: Why Our Virtual GTC’s Bigger Than Ever

Call it an intellectual Star Wars bar. You could run into just about anything at GTC. Princeton’s William Tang would speak about using deep learning to unleash fusion energy, UC Berkeley’s Gerry Zhang would talk about hunting for alien signals, Airbus A3’s Arne Stoschek would describe flying autonomous pods. Want to catch it all? Run. Read article >

The post We Won’t See You There: Why Our Virtual GTC’s Bigger Than Ever appeared first on The Official NVIDIA Blog.

Categories
Misc

Using tensorflow for music

Looking for examples showing how to use tensorflow for music – generation, tuning or anything. I tried looking up online but could not find anything pointing towards this. I hope someone could help me out with this.

Thanks in advance for any help!

submitted by /u/trapzar
[visit reddit] [comments]

Categories
Misc

How to use .eval() inside the dataset.map function?

Hi!

I am running into a problem, I cannot solve by myself.

I am doing this example: https://www.tensorflow.org/tutorials/audio/simple_audio

But I would like to use my own STFT function, which I wrote in NumPy, instead of tf.signal.stft.

So I changed the function “def get_spectrogram(waveform)” and I try to convert the input tensor into a NumPy array using .eval(). “get_spectrogram” is used in “get_spectrogram_and_label_id(audio, label)”, which in turn is used inside a dataset map:

spectrogram_ds = waveform_ds.map(get_spectrogram_and_label_id, num_parallel_calls=AUTOTUNE) 

Since this mapping is done in graph mode, not eager mode, I cannot use .numpy() and have to use .eval() instead.

However, .eval() asks for a session, and it has to be the same session in which the map function is applied to the dataset.

Has anybody run into this problem and can help?

def get_spectrogram(waveform):
    # Padding for files with less than 16000 samples
    zero_padding = tf.zeros([16000] - tf.shape(waveform), dtype=tf.float32)
    # Concatenate audio with padding so that all audio clips will be of the
    # same length
    waveform = tf.cast(waveform, tf.float32)
    equal_length = tf.concat([waveform, zero_padding], 0)
    equal_length = equal_length.eval()  # <-- HERE IS THE PROBLEM
    spectrogram = do_stft(equal_length, 512, 128)  # <-- uses NUMPY
    return spectrogram

def get_spectrogram_and_label_id(audio, label):
    print(sess.graph)
    spectrogram = get_spectrogram(audio.eval(session=sess))
    spectrogram = tf.expand_dims(spectrogram, -1)
    label_id = tf.argmax(label == commands)
    return spectrogram, label_id

spectrogram_ds = waveform_ds.map(get_spectrogram_and_label_id, num_parallel_calls=AUTOTUNE)

submitted by /u/alex_bababu
[visit reddit] [comments]

Categories
Misc

Creating TensorFlow Custom Ops, Bazel, and ABI compatibility

submitted by /u/pgaleone
[visit reddit] [comments]
Categories
Misc

German Researchers Develop Early Warning AI for Self-Driving Systems

Self-driving cars can run into critical situations where a human driver must retake control for safety reasons. Researchers from the Technical University of Munich have developed an AI early warning system that can give human drivers a seven-second heads-up about these critical driving scenarios. 

Studied in cooperation with the BMW Group, the AI system learned from 2,500 real traffic situations using vehicle sensor data encompassing road conditions, weather, speed, visibility and steering wheel angle. 

The researchers used NVIDIA GPUs for both training and inference of the failure prediction AI models. The model recognizes when patterns in a driving situation’s sensor data look similar to scenarios the self-driving system was unable to navigate in the past, and issues an early warning to the driver.

When tested on public roads using autonomous development vehicles from the BMW Group, the AI model was able to predict situations that self-driving cars would be unable to handle alone seven seconds ahead of time, and with over 85 percent accuracy. These results outperform state-of-the-art failure prediction by more than 15 percent.

“The big advantage of our technology: we completely ignore what the car thinks. Instead we limit ourselves to the data based on what actually happens and look for patterns,” said Eckehard Steinbach, Chair of Media Technology and member of the Board of Directors of the Munich School of Robotics and Machine Intelligence. “In this way, the AI discovers potentially critical situations that models may not be capable of recognizing, or have yet to discover.”

Early warning could help human drivers more quickly react to critical situations such as crowded intersections, sudden braking or dangerous swerving. 

The AI can be improved with larger quantities of data from autonomous vehicles that are under development and undergoing testing on the road. The learnings collected from each individual vehicle can be used for future iterations of the model, and deployed across the entire fleet of cars.

“Every time a potentially critical situation comes up on a test drive, we end up with a new training example,” said Christopher Kuhn, an author on the study.

Read more >> 

Find the full paper in IEEE Transactions on Intelligent Transportation Systems.