Autoencoder TensorFlow2 – ValueError

I am trying to train an autoencoder using TensorFlow 2.5 and Python 3.8, as follows: InceptionV3 was used to extract features from an image dataset containing 289,229 images; its final output is a 2048-d vector per image. I pickled all of the feature vectors into a Python list and load it along with the filenames:

    import pickle
    import numpy as np
    import tensorflow as tf

    # Read pickled Python list containing the 2048-d extracted feature representation per image-
    features_list = pickle.load(open("DeepFashion_features_inceptionnetv3.pickle", "rb"))

    # Convert from Python list to NumPy array-
    features_list_np = np.asarray(features_list)
    features_list_np.shape
    # (289229, 2048)

    del features_list

    # Read pickled Python list containing the absolute path and filename per image-
    filenames_list = pickle.load(open("DeepFashion_filenames_inceptionnetv3.pickle", "rb"))

    len(features_list_np), len(filenames_list)
    # (289229, 289229)

    # Note that the absolute path contains the Google Colab path-
    filenames_list[1]
    # '/content/img/1981_Graphic_Ringer_Tee/img_00000002.jpg'

    # Create 'tf.data.Dataset' using the NumPy array-
    batch_size = 32
    features_list_dataset = tf.data.Dataset.from_tensor_slices(features_list_np).batch(batch_size)

    x = next(iter(features_list_dataset))
    # 2021-06-28 13:10:00.229937: W tensorflow/core/kernels/data/model_dataset_op.cc:205]
    # Optimization loop failed: Cancelled: Operation was cancelled

    x.shape
    # TensorShape([32, 2048])
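Just to document what fit() will eventually receive, the element structure of this dataset can be inspected directly (shown here only for reference; the exact dtype depends on the pickled array):

    # Inspect what one element of the dataset looks like-
    features_list_dataset.element_spec
    # a single TensorSpec of shape (None, 2048): inputs only, no targets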

My first question: why does it print the warning "Optimization loop failed: Cancelled: Operation was cancelled"? I am using an Nvidia RTX 3080 with 16 GB of GPU memory. Note that, since this is an autoencoder, there are no accompanying labels for the data.

Is there a better way of feeding this Python list as input to a TF2 neural network that I am missing?
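One variant I have been wondering about (I am not sure whether it is the recommended approach) is to make the dataset yield explicit (input, target) pairs, using each feature vector as its own reconstruction target since that is what the autoencoder should learn; the dataset name below is just a placeholder:

    # Sketch: yield (input, target) tuples instead of bare inputs,
    # pairing each feature vector with itself as the reconstruction target-
    features_autoencoder_dataset = (
        tf.data.Dataset.from_tensor_slices(features_list_np)
        .map(lambda v: (v, v))
        .batch(batch_size)
        .prefetch(tf.data.AUTOTUNE)
    )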

This is how I check for available GPUs:

    num_gpus = len(tf.config.list_physical_devices('GPU'))
    print(f"number of GPUs available = {num_gpus}")
    # number of GPUs available = 1

Second, I coded an autoencoder with the following architecture:

    from tensorflow.keras import Model, Sequential
    from tensorflow.keras.layers import Dense

    class FeatureExtractor(Model):
        def __init__(self):
            super(FeatureExtractor, self).__init__()

            self.encoder = Sequential([
                Dense(units = 2048, activation = 'relu',
                      kernel_initializer = tf.keras.initializers.glorot_normal(),
                      input_shape = (2048,)),
                Dense(units = 1024, activation = 'relu',
                      kernel_initializer = tf.keras.initializers.glorot_normal()),
                Dense(units = 512, activation = 'relu',
                      kernel_initializer = tf.keras.initializers.glorot_normal()),
                Dense(units = 256, activation = 'relu',
                      kernel_initializer = tf.keras.initializers.glorot_normal()),
                Dense(units = 100, activation = 'relu',
                      kernel_initializer = tf.keras.initializers.glorot_normal()),
            ])

            self.decoder = Sequential([
                Dense(units = 256, activation = 'relu',
                      kernel_initializer = tf.keras.initializers.glorot_normal()),
                Dense(units = 512, activation = 'relu',
                      kernel_initializer = tf.keras.initializers.glorot_normal()),
                Dense(units = 1024, activation = 'relu',
                      kernel_initializer = tf.keras.initializers.glorot_normal()),
                Dense(units = 2048, activation = 'relu',
                      kernel_initializer = tf.keras.initializers.glorot_normal()),
            ])

        def call(self, x):
            encoded = self.encoder(x)
            decoded = self.decoder(encoded)
            return decoded

    # Initialize an instance of the autoencoder-
    autoencoder = FeatureExtractor()
    autoencoder.build(input_shape = (None, 2048))

    # Compile model-
    autoencoder.compile(
        optimizer = tf.keras.optimizers.Adam(learning_rate = 0.001),
        loss = tf.keras.losses.MeanSquaredError()
    )

    # Sanity check-
    autoencoder(x).shape
    # TensorShape([32, 2048])

    x.shape
    # TensorShape([32, 2048])
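For completeness, the same sanity check also passes on a synthetic batch (a random tensor standing in for the real features; the dummy_batch name is introduced only for this check), so the model itself seems to build correctly:

    # Sanity check on a synthetic batch of the same shape-
    dummy_batch = tf.random.normal(shape = (batch_size, 2048))
    autoencoder(dummy_batch).shape
    # TensorShape([32, 2048])
    autoencoder.encoder(dummy_batch).shape
    # TensorShape([32, 100])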

But when I try to train the model:

    # Train model-
    history_autoencoder = autoencoder.fit(
        features_list_dataset,
        epochs = 20
    )

It gives me the error:

    ValueError: No gradients provided for any variable:
    ['dense_10/kernel:0', 'dense_10/bias:0', 'dense_11/kernel:0',
     'dense_11/bias:0', 'dense_12/kernel:0', 'dense_12/bias:0',
     'dense_13/kernel:0', 'dense_13/bias:0', 'dense_14/kernel:0',
     'dense_14/bias:0', 'dense_15/kernel:0', 'dense_15/bias:0',
     'dense_16/kernel:0', 'dense_16/bias:0', 'dense_17/kernel:0',
     'dense_17/bias:0', 'dense_18/kernel:0', 'dense_18/bias:0'].

What is going wrong?

Thanks!

submitted by /u/grid_world
