In this relatively short post, I’m going to show you how to deal with metrics and summaries in TensorFlow 2. Metrics, which can be used to monitor various important variables during the training of deep learning networks (such as accuracy or various losses), were somewhat unwieldy in TensorFlow 1.X. Thankfully in the new TensorFlow 2.0 they are much easier to use. Summary logging, for visualization of training in the TensorBoard interface, has also undergone some changes in TensorFlow 2 that I will be demonstrating. Please note – at time of writing, only the alpha version of TensorFlow 2 is available, but it is probably safe to assume that the syntax and forms demonstrated in this tutorial will remain the same in TensorFlow 2.0. To install the alpha version, use the following command:
pip install tensorflow==2.0.0-alpha0
In this tutorial, I’ll be using a generic MNIST Convolutional Neural Network example, but utilizing full TensorFlow 2 design paradigms. To learn more about CNNs, see
this tutorial – to understand more about TensorFlow 2 paradigms, see
this tutorial. All the code for this tutorial can be found as a Google Colaboratory file on
my Github repository.
Eager to build deep learning systems? Get the book here
TensorFlow 2 metrics
Metrics in TensorFlow 2 can be found in the TensorFlow Keras distribution – tf.keras.metrics. Metrics, along with the rest of TensorFlow 2, are now computed in an Eager fashion. In TensorFlow 1.X, metrics were gathered and computed using the declarative, tf.Session-style graph execution. All that is required now is to declare the metric as a Python variable, use the method
update_state() to accumulate new values into the metric,
result() to calculate the current value of the metric, and finally
reset_states() to clear the accumulated values. The code below shows a simple implementation of a Mean metric:
mean_metric = tf.keras.metrics.Mean()
mean_metric.update_state(2.0)
mean_metric.update_state(3.0)
mean_metric.update_state(4.0)
print(mean_metric.result().numpy())
This will print the average result -> 3.0. As can be observed, there is an internal memory for the metric, which can be appended to using
update_state(). The Mean metric operation is executed when
result() is called. Finally, to reset the memory of the metric, we can use
reset_states() as follows:
mean_metric.reset_states()
print(mean_metric.result().numpy())
This will print the default response of an empty metric – 0.0.
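As a quick aside, update_state() can also accept a whole batch of values in a single call, and the metric can be reused straight after a reset. The snippet below is a small sketch of my own to illustrate this:
mean_metric.update_state([5.0, 7.0])  # a batch of values in one call
print(mean_metric.result().numpy())  # prints 6.0, the mean of the new values only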
TensorFlow 2 summaries
Metrics fit hand-in-glove with summaries in TensorFlow 2. In order to log summaries in TensorFlow 2, the developer uses the
with Python context manager. First, one creates a summary_writer object like so:
summary_writer = tf.summary.create_file_writer('./log')
To log something to the summary writer, you must first enclose the part of your code which does the logging in a Python
with statement that sets this writer as the default. The logging looks like so:
with summary_writer.as_default():
    tf.summary.scalar('mean', mean_metric.result(), step=1)
The
with context can surround the full training loop, or just the area of the code where you are storing the summaries. As can be observed, the logged scalar value is set by using the metric
result() method. The
step value needs to be provided to the summary – this allows TensorBoard to plot the variation of various values, images etc. between training steps. The step number can be tracked manually, but the easiest way is to use the
iterations property of whatever optimizer you are using. This will be demonstrated in the example below.
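As a minimal sketch of that pattern (my own illustration, assuming an optimizer such as tf.keras.optimizers.Adam() has been created), the step argument can simply be set to the optimizer's iterations property:
optimizer = tf.keras.optimizers.Adam()  # sketch only
with summary_writer.as_default():
    # optimizer.iterations increments on every apply_gradients() call,
    # so it doubles as a global step counter for TensorBoard
    tf.summary.scalar('mean', mean_metric.result(), step=optimizer.iterations)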
TensorFlow 2 metrics and summaries – CNN example
In this example, I’ll show how to use metrics and summaries in the context of a CNN MNIST classification example. In this example, I’ll use a custom training loop, rather than a Keras
fit loop. In the next section, I’ll show you how to implement custom metrics even within the Keras
fit functionality.
As usual for any machine learning task, the first step is to prepare the training and validation data. In this case, we’ll be using the prepackaged Keras MNIST dataset, then converting the numpy data arrays into a TensorFlow dataset (for more on TensorFlow datasets, see
here and
here). This looks like the following:
import datetime as dt
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
BATCH_SIZE = 64
# first the training set - shuffle before batching so individual samples are shuffled
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train)).shuffle(10000).batch(BATCH_SIZE)
# scale the pixel values to the range [0, 1]
train_dataset = train_dataset.map(lambda x, y: (tf.cast(x, tf.float32) / 255.0, y))
# add a channel dimension so the images are 4D tensors suitable for Conv2D layers
train_dataset = train_dataset.map(lambda x, y: (tf.expand_dims(x, -1), y))
train_dataset = train_dataset.repeat()
# now the validation set
valid_dataset = tf.data.Dataset.from_tensor_slices((x_test, y_test)).shuffle(10000).batch(5000)
valid_dataset = valid_dataset.map(lambda x, y: (tf.cast(x, tf.float32) / 255.0, y))
valid_dataset = valid_dataset.map(lambda x, y: (tf.expand_dims(x, -1), y))
valid_dataset = valid_dataset.repeat()
In the lines above, some preprocessing is applied to the image data to normalize it (the pixel values are divided by 255 and the tensors made 4D for consumption by the CNN layers). Next I define the CNN model, using the Keras sequential paradigm:
model = tf.keras.Sequential()
model.add(tf.keras.layers.Conv2D(32, 2, 1, activation='relu', input_shape=(28, 28, 1)))
model.add(tf.keras.layers.MaxPool2D(2))
model.add(tf.keras.layers.BatchNormalization())
model.add(tf.keras.layers.Conv2D(32, 2, 1, activation='relu'))
model.add(tf.keras.layers.MaxPool2D(2))
model.add(tf.keras.layers.BatchNormalization())
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(10))
The model declaration above is all standard Keras – for more on the sequential model type of Keras, see
here. Next, we create a custom training loop function in TensorFlow. It is now
best practice to encapsulate core parts of your code in Python functions – this is so that the @tf.function decorator can be applied easily to the function. This signals to TensorFlow to perform Just In Time (JIT) compilation of the relevant code into a graph, which provides the performance benefits of a static graph as per TensorFlow 1.X. Otherwise, the code will execute eagerly, which is not a big deal, but if you are building production or performance-dependent code it is better to decorate with @tf.function.
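For illustration, a decorated step function might look like the sketch below (my own example, using the model defined above and the loss_fn and optimizer defined in the next code block). Note that the full training loop in this post is left undecorated; among other things, it prints metric results with .numpy(), which only works in eager execution.
@tf.function
def train_step(images, labels):
    # the forward pass, loss calculation and weight update are traced into a graph
    with tf.GradientTape() as tape:
        logits = model(images)
        loss_value = loss_fn(labels, logits)
    grads = tape.gradient(loss_value, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss_value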
Here’s the training loop and optimization/loss function definitions:
optimizer = tf.keras.optimizers.Adam()
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)

def train(ds_train, optimizer, loss_fn, model, num_batches, log_freq=10):
    # metrics to track the loss and accuracy over each log_freq window
    avg_loss = tf.keras.metrics.Mean()
    avg_acc = tf.keras.metrics.SparseCategoricalAccuracy()
    for batch_idx, (images, labels) in enumerate(ds_train):
        with tf.GradientTape() as tape:
            logits = model(images)
            loss_value = loss_fn(labels, logits)
        grads = tape.gradient(loss_value, model.trainable_variables)
        optimizer.apply_gradients(zip(grads, model.trainable_variables))
        avg_loss.update_state(loss_value)
        avg_acc.update_state(labels, logits)
        if batch_idx % log_freq == 0:
            print(f"Batch {batch_idx}, average loss is {avg_loss.result().numpy()}, average accuracy is {avg_acc.result().numpy()}")
            # log to the default summary writer, then reset the metric states
            tf.summary.scalar('loss', avg_loss.result(), step=optimizer.iterations)
            tf.summary.scalar('acc', avg_acc.result(), step=optimizer.iterations)
            avg_loss.reset_states()
            avg_acc.reset_states()
        if batch_idx > num_batches:
            break
As can be observed, I have created two metrics for use in this training loop –
avg_loss and
avg_acc. These are Mean and SparseCategoricalAccuracy metrics, respectively. The Mean metric has been discussed previously. The SparseCategoricalAccuracy metric takes, as input, the training labels and logits (raw, unactivated outputs from your model). Because it is a
sparse categorical accuracy measure, it can take the training labels in scalar integer form, rather than one-hot encoded label vectors. Calling
result() on this metric will calculate the average accuracy of all the labels/logits pairs passed in its
update_state() calls – see the avg_acc.update_state(labels, logits) line in the training loop above.
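As a standalone sketch of how this metric behaves (toy values of my own, not taken from the training loop):
acc = tf.keras.metrics.SparseCategoricalAccuracy()
# integer labels and raw per-class scores, no one-hot encoding required
acc.update_state([2, 1], [[0.1, 0.2, 0.7], [0.05, 0.9, 0.05]])
print(acc.result().numpy())  # prints 1.0, both predictions match their labels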
Every
log_freq batches, the results of the metrics are printed and also logged as summary scalars. After the metrics are logged to the summaries, their states are reset. You will notice that I have not provided a
with context for these summaries – this is applied in the outer
epoch loop, shown below:
num_epochs = 10
summary_writer = tf.summary.create_file_writer('./log/{}'.format(dt.datetime.now().strftime("%Y-%m-%d-%H-%M-%S")))
for i in range(num_epochs):
    print(f"Epoch {i + 1} of {num_epochs}")
    with summary_writer.as_default():
        train(train_dataset, optimizer, loss_fn, model, 10000//BATCH_SIZE)
As can be observed, the
summary_writer.as_default() is supplied as context to the whole
train function.
So far so good. However, this is utilizing a “manual” TensorFlow training loop, which is no longer the easiest way to train in TensorFlow 2, given the tight Keras integration. In the next example, I’ll show you how to include run-of-the-mill metrics in the Keras API, as well as custom metrics.
TensorFlow 2 Keras metrics and summaries
Including normal metrics such as accuracy in Keras is straightforward – one supplies a list of metrics to be logged in the
compile statement like so:
metric_model.compile(optimizer=tf.optimizers.Adam(),
                     loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
                     metrics=[tf.keras.metrics.SparseCategoricalAccuracy()])
However, if one wishes to log more complicated or custom metrics, it becomes difficult to see how to set this up in Keras. One easy way of doing so is by creating a custom Keras layer whose sole purpose is to add a metric to the model / training. In the example below, I have created a custom layer which adds the standard deviation of the kernel weights as a metric:
class MetricLayer(tf.keras.layers.Layer):
    def __init__(self, layer_to_log):
        super(MetricLayer, self).__init__()
        self.layer_to_log = layer_to_log

    def call(self, input):
        self.add_metric(tf.keras.backend.std(self.layer_to_log.variables[0]),
                        name=f'std_of_{self.layer_to_log.name}_kernel',
                        aggregation='mean')
        return input
A few things to notice about the creation of the custom layer above. First, notice that the layer is defined as a Python class object which inherits from the
keras.layers.Layer object. The only variable passed to the initialization of this custom class is the layer with the kernel weights which we wish to log. The
call method tells Keras / TensorFlow what to do when the layer is called in a feed forward pass. In this case, the input is passed straight through to the output – it is, in essence, a dummy layer. However, you’ll notice within the
call a metric is added.
The value of the metric is the standard deviation of
layer_to_log.variables[0]. For a CNN layer, the zero index [0] of the layer variables is the kernel weights. A name is provided to the metric for ease of viewing during training, and finally the aggregation method of the metric is specified – in this case, a ‘mean’ aggregation of the standard deviations.
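If you want to confirm which variable sits at which index, a quick check (a small sketch of my own, using the model from the previous example) is to print the layer's variable names:
# for a Conv2D layer, index 0 is the kernel and index 1 the bias
print([v.name for v in model.layers[0].variables])
# e.g. ['conv2d/kernel:0', 'conv2d/bias:0']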
To include this layer, one can just add it as a sequential element in the Keras model. In the code below, I take the existing CNN model created in the previous example and create a new model, with the custom metric layer appended to the end:
metric_model = tf.keras.Sequential()
metric_model.add(model)
metric_model.add(MetricLayer(model.layers[0]))
As can be observed in the above, the first layer of the previous model is passed to the custom MetricLayer. Running the
fit training method on this model will now generate both the SparseCategoricalAccuracy metric and the custom standard deviation metric from the first layer. To monitor these in TensorBoard, one must also include the TensorBoard callback. All of this looks like the following:
metric_model.compile(optimizer=tf.optimizers.Adam(),
                     loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
                     metrics=[tf.keras.metrics.SparseCategoricalAccuracy()])

callbacks = [
    # Write TensorBoard logs to the `./log` directory
    tf.keras.callbacks.TensorBoard(log_dir='./log/{}'.format(dt.datetime.now().strftime("%Y-%m-%d-%H-%M-%S")), update_freq='batch')
]

metric_model.fit(train_dataset, steps_per_epoch=10000//BATCH_SIZE, epochs=5,
                 validation_data=valid_dataset, validation_steps=5,
                 callbacks=callbacks)
The code above will perform the training and ensure all the metrics (including the metric added in the custom metric layer) are output to TensorBoard via the TensorBoard callback.
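To view the logged metrics, point TensorBoard at the logging directory (for example, by running tensorboard --logdir=./log from the command line) and open the address it prints in your browser.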
This concludes my quick introduction to metrics and summaries in TensorFlow 2. Watch out for future posts and updates of existing posts as the transition to TensorFlow 2 develops.
Eager to build deep learning systems? Get the book here