Custom Dynamic Loss function: No gradients provided for any variable:

Hey all!

I am using an RGB dataset for my x_train, and the loss is calculated in a dynamic loss function that computes the distances of pairs and compares them against the ideal distances in dist_train. Here is the model:

```python
class MyModel(Model):
    def __init__(self):
        super(MyModel, self).__init__()
        self.d1 = Dense(3, activation='relu')
        self.flatten = Flatten()
        self.d2 = Dense(3, activation='relu')
        self.d3 = Dense(2)

    def call(self, x):
        x = self.d1(x)
        x = self.flatten(x)
        x = self.d2(x)
        return self.d3(x)

# Create an instance of the model
model = MyModel()
optimizer = tf.keras.optimizers.Adam()
train_loss = tf.keras.metrics.Mean(name='train_loss')
test_loss = tf.keras.metrics.Mean(name='test_loss')

@tf.function
def train_step(rgb):
    with tf.GradientTape() as tape:
        predictions = model(rgb, training=True)
        loss = tf_function(predictions)
    gradients = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(gradients, model.trainable_variables))
    train_loss(loss)
```

Here is the loss function and the tf.function wrapping it:

```python
def mahal_loss(output):
    mahal = sp.spatial.distance.pdist(output, metric='mahalanobis')
    mahal = sp.spatial.distance.squareform(mahal, force='no', checks=True)
    new_distance = []
    mahal =, mask=mahal == 0)
    for i in range(len(mahal)):
        pw_dist = mahal[i, indices_train[i]]
        new_distance.append(pw_dist)
    mahal_loss = np.mean((dist_train - new_distance)**2)
    return mahal_loss

@tf.function(input_signature=[tf.TensorSpec(None, tf.float32)])
def tf_function(pred):
    y = tf.numpy_function(mahal_loss, [pred], tf.float32)
    return y
```
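Note that everything inside mahal_loss runs as plain NumPy/SciPy outside the TensorFlow graph, and `tf.numpy_function` has no registered gradient. A minimal sketch (using a hypothetical `np_square` helper, not code from the post) showing that the tape returns None for anything routed through it:

```python
import numpy as np
import tensorflow as tf

def np_square(x):
    # Plain NumPy computation: TensorFlow cannot differentiate through this.
    return np.square(x).astype(np.float32)

x = tf.Variable([2.0])
with tf.GradientTape() as tape:
    y = tf.numpy_function(np_square, [x], tf.float32)

# tf.numpy_function has no gradient registered, so this is None.
print(tape.gradient(y, x))  # None
```

This is exactly the situation the optimizer later complains about: every gradient it receives is None.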

Running the model:

```python
EPOCHS = 5

for epoch in range(EPOCHS):
    train_loss.reset_states()
    test_loss.reset_states()

    for i in x_train:
        train_step(i)

    print(
        f'Epoch {epoch + 1}, '
        f'Loss: {train_loss.result()}, '
        f'Test Loss: {test_loss.result()}, '
    )
```

I believe the reason I am running into problems lies in the dynamic loss function, as I need to calculate the distance between certain pairs to get the results I expect. This means that inside the loss function I have to calculate the mahalanobis distance of each pair to get the ones I will compare against the correct distances. The error I get is the following:

```
in user code:
    <ipython-input-23-0e975da5cbc2>:15 train_step  *
        optimizer.apply_gradients(zip(gradients, model.trainable_variables))
    apply_gradients  **
        grads_and_vars = optimizer_utils.filter_empty_gradients(grads_and_vars)
    filter_empty_gradients
        raise ValueError("No gradients provided for any variable: %s." %

ValueError: No gradients provided for any variable:
['my_model/dense/kernel:0', 'my_model/dense/bias:0',
 'my_model/dense_1/kernel:0', 'my_model/dense_1/bias:0',
 'my_model/dense_2/kernel:0', 'my_model/dense_2/bias:0'].
```
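A differentiable alternative would be to express the pairwise distances with TensorFlow ops instead of SciPy, so the tape can backpropagate through them. A rough sketch, assuming `indices_train` and `dist_train` are available as tensors, and substituting Euclidean distance for the Mahalanobis metric for brevity (the Mahalanobis version would additionally need the batch's inverse covariance):

```python
import tensorflow as tf

def pairwise_dist_loss(output, indices_train, dist_train):
    # Squared Euclidean pairwise distances, built entirely from TF ops
    # so the gradient tape can differentiate through them.
    sq_norms = tf.reduce_sum(tf.square(output), axis=1, keepdims=True)
    sq_dists = (sq_norms
                - 2.0 * tf.matmul(output, output, transpose_b=True)
                + tf.transpose(sq_norms))
    # Clamp before sqrt to avoid NaN gradients at exactly zero distance.
    dists = tf.sqrt(tf.maximum(sq_dists, 1e-12))

    # For each row i, pick the distance to its paired index indices_train[i].
    idx = tf.stack([tf.range(tf.shape(output)[0]), indices_train], axis=1)
    selected = tf.gather_nd(dists, idx)

    return tf.reduce_mean(tf.square(dist_train - selected))
```

With a loss like this in place of the `tf.numpy_function` wrapper, `tape.gradient` returns real gradients instead of None.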

submitted by /u/Acusee
