
Cannot quantize custom model with BatchNorm, but MobileNet can

I defined a model using tf.keras (v2.3.0) and I want to perform quantization-aware training in this way:

import tensorflow as tf
from tensorflow.keras import layers  # I also tried replacing this with tf.python.keras.layers.VersionAwareLayers
import tensorflow_model_optimization as tfmot

def build_model():
    inputs = tf.keras.Input()
    x = layers.Conv2D(24, 5, 2, activation='relu')(inputs)
    x = layers.BatchNormalization()(x)
    # more layers...
    logits = layers.Softmax()(x)
    model = tf.keras.Model(inputs=inputs, outputs=logits)
    return model

model = build_model()
# training code
q_aware_model = tfmot.quantization.keras.quantize_model(model)

I get this error:

RuntimeError: Layer batch_normalization_2:<class 'tensorflow.python.keras.layers.normalization_v2.BatchNormalization'> is not supported. You can quantize this layer by passing a `tfmot.quantization.keras.QuantizeConfig` instance to the `quantize_annotate_layer` API
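From the tfmot documentation I understand the error is pointing me towards something like the sketch below, where the BatchNormalization layer gets an explicit QuantizeConfig via quantize_annotate_layer. The NoOpQuantizeConfig class, the input shape, and the other names are just placeholders I made up, and I am not sure this is the intended fix:

import tensorflow as tf
from tensorflow.keras import layers
import tensorflow_model_optimization as tfmot

# Placeholder config that tells tfmot to leave the layer untouched:
# no weights, activations, or outputs are quantized.
class NoOpQuantizeConfig(tfmot.quantization.keras.QuantizeConfig):
    def get_weights_and_quantizers(self, layer):
        return []
    def get_activations_and_quantizers(self, layer):
        return []
    def set_quantize_weights(self, layer, quantize_weights):
        pass
    def set_quantize_activations(self, layer, quantize_activations):
        pass
    def get_output_quantizers(self, layer):
        return []
    def get_config(self):
        return {}

annotate = tfmot.quantization.keras.quantize_annotate_layer

def build_annotated_model():
    inputs = tf.keras.Input(shape=(96, 96, 3))  # shape made up for the example
    x = layers.Conv2D(24, 5, 2, activation='relu')(inputs)
    # wrap the problematic layer with an explicit QuantizeConfig
    x = annotate(layers.BatchNormalization(), quantize_config=NoOpQuantizeConfig())(x)
    logits = layers.Softmax()(x)
    return tf.keras.Model(inputs=inputs, outputs=logits)

annotated_model = tfmot.quantization.keras.quantize_annotate_model(build_annotated_model())
# quantize_apply needs to deserialize the custom config, hence the scope
with tfmot.quantization.keras.quantize_scope({'NoOpQuantizeConfig': NoOpQuantizeConfig}):
    q_aware_model = tfmot.quantization.keras.quantize_apply(annotated_model)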

However, if I define the model as a Keras MobileNetV2, which contains the same BatchNormalization layers, everything works fine. What is the difference, and how can I fix this problem?
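For comparison, something along these lines runs without errors (the input shape and weights arguments are only illustrative):

import tensorflow as tf
import tensorflow_model_optimization as tfmot

# MobileNetV2 also contains BatchNormalization layers, yet quantize_model accepts it
base_model = tf.keras.applications.MobileNetV2(input_shape=(224, 224, 3), weights=None)
q_aware_mobilenet = tfmot.quantization.keras.quantize_model(base_model)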

submitted by /u/fralbalbero
