I am doing transfer learning with Google AudioSet embeddings. The AudioSet corpus consists of pre-trained embeddings that are pre-activation (i.e., the final classification layers were removed before the features were exported).
While studying transfer learning, I have noticed that checkpoint weights are typically loaded, the layers that produced those weights are frozen, and a new model is built on top of the previous one.
Should I also load the pre-trained AudioSet weights while using the AudioSet embeddings themselves as training input for the new model? It does not sound right, as these embeddings are already the byproduct of those weights. Please correct me if I am wrong.
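For what it's worth, here is a minimal sketch of what I understand the embedding-based setup to be: the frozen base network has already been applied to produce the embeddings, so only a new head is trained on top of them. The data below is random stand-in data (not real AudioSet features), and the head is a plain logistic-regression classifier, just to illustrate that no pre-trained weights are reloaded at this stage.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for precomputed AudioSet embeddings: 128-dim vectors per clip.
# (Hypothetical toy data; real embeddings come from the frozen base model.)
n, dim = 200, 128
X = rng.normal(size=(n, dim))
y = (X[:, 0] > 0).astype(int)  # toy binary labels, separable on one feature

# Trainable head only: logistic regression on top of the frozen embeddings.
# The base network's weights are never touched here -- its work is already
# baked into X.
W = np.zeros(dim)
b = 0.0
lr = 0.1
for _ in range(200):
    z = X @ W + b
    p = 1.0 / (1.0 + np.exp(-z))   # sigmoid
    grad_z = (p - y) / n           # gradient of mean cross-entropy wrt z
    W -= lr * (X.T @ grad_z)
    b -= lr * grad_z.sum()

acc = ((X @ W + b > 0).astype(int) == y).mean()
print(f"training accuracy of the new head: {acc:.2f}")
```

In this setup only `W` and `b` are learned; reloading the checkpoint weights would only matter if you wanted to fine-tune the base network on raw audio rather than train on its exported embeddings.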