(TensorFlow 2.4.1 and NumPy 1.19.2) – For a convolutional layer defined as follows:

conv = Conv2D(
    filters=3, kernel_size=(3, 3), activation='relu',
    kernel_initializer=tf.initializers.GlorotNormal(),
    bias_initializer=tf.ones_initializer,
    strides=(1, 1), padding='same', data_format='channels_last')

# and a sample input-
x = tf.random.normal(shape=(1, 5, 5, 3), mean=1.0, stddev=0.5)
x.shape
# TensorShape([1, 5, 5, 3])

# Get output from the conv layer-
out = conv(x)
out.shape
# TensorShape([1, 5, 5, 3])

out = tf.squeeze(out)
out.shape
# TensorShape([5, 5, 3])

Here, the three filters can be accessed as conv.weights[0][:, :, :, 0], conv.weights[0][:, :, :, 1], and conv.weights[0][:, :, :, 2], respectively.

To compute the L2 norms of the three filters/kernels, I am using the following code:

# Compute L2 norms-

# Using numpy-
np.linalg.norm(conv.weights[0][:, :, :, 0], ord=None)
# 0.85089666

# Using tensorflow-
tf.norm(conv.weights[0][:, :, :, 0], ord='euclidean').numpy()
# 0.85089666

# Using numpy-
np.linalg.norm(conv.weights[0][:, :, :, 1], ord=None)
# 1.0733316

# Using tensorflow-
tf.norm(conv.weights[0][:, :, :, 1], ord='euclidean').numpy()
# 1.0733316

# Using numpy-
np.linalg.norm(conv.weights[0][:, :, :, 2], ord=None)
# 1.0259292

# Using tensorflow-
tf.norm(conv.weights[0][:, :, :, 2], ord='euclidean').numpy()
# 1.0259292
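For reference, here is a sketch of computing all three per-filter L2 norms in one vectorized step instead of slicing each filter out. It rebuilds an equivalent layer (so the exact values differ from the ones above), and works around the fact that tf.norm reduces over at most two axes by doing the square-sum-sqrt reduction explicitly:

```python
import numpy as np
import tensorflow as tf

# Rebuild a layer equivalent to the one above; values differ per initialization
conv = tf.keras.layers.Conv2D(
    filters=3, kernel_size=(3, 3), activation='relu',
    kernel_initializer=tf.initializers.GlorotNormal(),
    bias_initializer='ones',
    strides=(1, 1), padding='same', data_format='channels_last')
conv.build(input_shape=(1, 5, 5, 3))  # create the weights without a forward pass

w = conv.weights[0]  # kernel tensor, shape (3, 3, 3, 3) = (kh, kw, in, out)

# Per-filter L2 norm: square, sum over every axis except the last
# (filter) axis, then take the square root
l2_per_filter = tf.sqrt(tf.reduce_sum(tf.square(w), axis=[0, 1, 2]))
print(l2_per_filter.numpy())  # one L2 norm per filter, shape (3,)
```

Each entry of l2_per_filter should match np.linalg.norm applied to the corresponding filter slice.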

How can I compute the L2 norms for all of the given conv layer's kernels at once (using conv.weights)?

Also, what is the correct way to compute the L1 norm for the same conv layer's kernels?
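Assuming "L1 norm" here means the entrywise sum of absolute values per filter (the analogue of the Frobenius-style L2 above, not the matrix induced 1-norm), one possible sketch is below. A random stand-in tensor with the same (kh, kw, in, out) layout replaces the real conv.weights[0]:

```python
import numpy as np
import tensorflow as tf

# Stand-in kernel tensor with the same shape as conv.weights[0] above;
# substitute the real weights in practice
w = tf.random.normal(shape=(3, 3, 3, 3))

# Entrywise L1 norm of each filter: sum |w| over all but the filter axis
l1_per_filter = tf.reduce_sum(tf.abs(w), axis=[0, 1, 2])

# Cross-check one filter with NumPy; np.linalg.norm(ord=1) only accepts
# vectors/matrices, so flatten the 3-D filter slice first
check = np.linalg.norm(w.numpy()[:, :, :, 0].ravel(), ord=1)
```

The same axis=[0, 1, 2] reduction pattern gives any entrywise per-filter norm by swapping the pointwise function.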

submitted by /u/grid_world
