
model.fit() with only 1 training sample and 1 epoch calls the model twice.

I have adapted the autoencoder code below from one of the tutorials. I am training the network on MNIST images.

While experimenting with the network, I found that model.fit() runs the encoder-decoder network twice, even when there is only 1 training sample, the number of epochs is 1, and batch_size is left as None.

import numpy as np
import tensorflow as tf
import tensorflow.keras as k
import matplotlib.pyplot as plt
from tensorflow.keras.layers import Dense, Conv2D, MaxPooling2D, UpSampling2D

# seed values
np.random.seed(111)
tf.random.set_seed(111)

Prepare dataset

# download dataset
(x_train, _), (x_test, _) = k.datasets.mnist.load_data()

# process dataset
x_train = x_train / 255.
x_test = x_test / 255.
x_train = x_train.astype(np.float32)
x_test = x_test.astype(np.float32)

# reshape the dataset to add a channel dimension (number of channels = 1)
x_train = np.reshape(x_train, (*x_train.shape, 1))  # * operator unpacks the shape tuple
x_test = np.reshape(x_test, (*x_test.shape, 1))     # * operator unpacks the shape tuple

# add gaussian noise
noise = np.random.normal(loc=0.0, scale=1.0, size=x_train.shape)
x_train_noisy = x_train + noise
noise = np.random.normal(loc=0.0, scale=1.0, size=x_test.shape)
x_test_noisy = x_test + noise

# clip the values to the range [0.0, 1.0]
x_train_noisy = np.clip(x_train_noisy, 0.0, 1.0)
x_test_noisy = np.clip(x_test_noisy, 0.0, 1.0)

Prepare Encoder, Decoder, and Autoencoder classes

# Encoder Network
class Encoder(k.layers.Layer):
    def __init__(self):
        super(Encoder, self).__init__()
        self.conv1 = Conv2D(filters=32, kernel_size=3, strides=1, activation='relu', padding='same')
        self.conv2 = Conv2D(filters=32, kernel_size=3, strides=1, activation='relu', padding='same')
        self.conv3 = Conv2D(filters=16, kernel_size=3, strides=1, activation='relu', padding='same')
        self.pool = MaxPooling2D(padding='same')

    def call(self, input_features):
        x = self.conv1(input_features)
        x = self.pool(x)
        x = self.conv2(x)
        x = self.pool(x)
        x = self.conv3(x)
        x = self.pool(x)
        return x


# Decoder Network
class Decoder(k.layers.Layer):
    def __init__(self):
        super(Decoder, self).__init__()
        self.conv1 = Conv2D(filters=16, kernel_size=3, strides=1, activation='relu', padding='same')
        self.conv2 = Conv2D(filters=32, kernel_size=3, strides=1, activation='relu', padding='same')
        self.conv3 = Conv2D(filters=32, kernel_size=3, strides=1, activation='relu', padding='valid')
        self.conv4 = Conv2D(filters=1, kernel_size=3, strides=1, activation='softmax', padding='same')
        self.upsample = UpSampling2D(size=(2, 2))

    def call(self, encoded_features):
        x = self.conv1(encoded_features)
        x = self.upsample(x)
        x = self.conv2(x)
        x = self.upsample(x)
        x = self.conv3(x)
        x = self.upsample(x)
        x = self.conv4(x)
        return x


# Autoencoder Network
class Autoencoder(k.Model):
    def __init__(self):
        super(Autoencoder, self).__init__()
        self.encoder = Encoder()
        self.decoder = Decoder()

    def call(self, input_features):
        print("Autoencoder call")
        encode = self.encoder(input_features)
        decode = self.decoder(encode)
        return decode

Train the model

model = Autoencoder()
model.compile(loss='binary_crossentropy', optimizer='adam')

sample = np.expand_dims(x_train[1], axis=0)
sample_noisy = np.expand_dims(x_train_noisy[1], axis=0)
print("shape of sample: {}".format(sample.shape))
print("shape of sample_noisy: {}\n".format(sample_noisy.shape))

history = model.fit(x=sample_noisy, y=sample, epochs=1)  # fit() returns a History object

I am training the model on only one sample for only one epoch. However, the print statement shows that my Autoencoder.call() method is being called twice:

shape of sample: (1, 28, 28, 1)
shape of sample_noisy: (1, 28, 28, 1)

Autoencoder call
Autoencoder call
1/1 [==============================] - 1s 1s/step - loss: 0.6934

Can any of you please help me understand what concept I am missing?
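For reference, here is a minimal sketch (reusing the imports above; the Probe model and its names are purely illustrative, not part of my autoencoder) of how one could check whether the extra call is a graph-tracing pass rather than a second training step. Python's print() fires whenever tf.function (re)traces the call into a graph, while tf.print() fires on every actual forward pass, and compiling with run_eagerly=True skips tracing altogether.

class Probe(k.Model):
    def __init__(self):
        super().__init__()
        self.dense = Dense(1)

    def call(self, inputs):
        print("traced")       # runs each time the function is (re)traced into a graph
        tf.print("executed")  # runs on every actual forward pass
        return self.dense(inputs)

x = np.ones((1, 4), dtype=np.float32)
y = np.ones((1, 1), dtype=np.float32)

probe = Probe()
probe.compile(loss='mse', optimizer='adam')   # default: call() is wrapped in tf.function
probe.fit(x, y, epochs=1)

probe_eager = Probe()
probe_eager.compile(loss='mse', optimizer='adam', run_eagerly=True)  # no graph tracing
probe_eager.fit(x, y, epochs=1)

If "traced" appears more often than "executed" in the first run, and only once per step in the eager run, that would suggest the duplicate "Autoencoder call" above comes from tracing rather than from a second weight update.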

Thanks,

submitted by /u/__hy23__
