
How to use an optimizer in TensorFlow 2.5?

Hello everyone,

I want to use the Adam optimizer in TensorFlow.

I understand that you need to write the forward propagation and let TensorFlow handle the backward propagation.

My model goes like this.

We start with initialization (He method):

import numpy as np
import tensorflow as tf

def initialize_HE(arr):
    params = {}

    for i in range(1, len(arr)):
        l = str(i)
        # weights scaled by sqrt(2 / fan_in) for He initialization
        params['W' + l] = tf.Variable(tf.random.normal((arr[i], arr[i-1])), name='W' + l) * np.sqrt(2 / arr[i-1])
        params['b' + l] = tf.Variable(tf.zeros((arr[i], 1)), name='b' + l)

    return params

This gives me a dictionary of W1, W2, b1, b2, etc.
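For example, calling it with a list of layer sizes (the sizes below are just placeholders) gives:

# hypothetical layer sizes: 12288 input features, 20 hidden units, 1 output unit
params = initialize_HE([12288, 20, 1])
# params now holds W1 (20, 12288), b1 (20, 1), W2 (1, 20), b2 (1, 1)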

The forward pass goes like this:

def forward(params, X, types):
    L = len(types)
    out = {}
    out['A0'] = X

    for i in range(1, L + 1):
        l = str(i)
        l0 = str(i - 1)
        # linear step, then the activation named in types for this layer
        out['Z' + l] = params['W' + l] @ out['A' + l0] + params['b' + l]
        if types[i-1] == 'relu':
            out['A' + l] = tf.nn.relu(out['Z' + l])
        if types[i-1] == 'sigmoid':
            out['A' + l] = tf.nn.sigmoid(out['Z' + l])

    return out['A' + l]

This gives me the last layer's output; let's call it Y_hat.

So far I'm only replacing NumPy variables with TensorFlow's.
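A full forward pass then looks like this (the activation list is just an example for a two-layer network):

# hypothetical setup: one ReLU hidden layer followed by a sigmoid output layer
types = ['relu', 'sigmoid']
# train_X has shape (n_features, n_examples) so that W1 @ A0 works
Y_hat = forward(params, train_X, types)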

Here is the loss function

bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)

loss = bce(train_Y,Y_hat)
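Just to check the API, it returns a single scalar loss value, e.g.:

# dummy labels and raw logits (values are made up), just to confirm bce returns a scalar
dummy_y = tf.constant([[0., 1., 1.]])
dummy_logits = tf.constant([[-2., 3., 0.5]])
print(bce(dummy_y, dummy_logits).numpy())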

I want to minimize this loss and then get the parameters after some number of iterations.

The tutorials say I need to do something like this:

opt = tf.keras.optimizers.Adam(learning_rate=0.1)

opt.minimize(cost, params)

This gives an error of:

`tape` is required when a `Tensor` loss is passed.

If I do this instead:

with tf.GradientTape() as tape:
    Y_hat = forward(params, train_X, types)
    cost = bce(train_Y, Y_hat)
grads = tape.gradient(cost, var_list)

opt.apply_gradients(zip(grads, var_list))
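(For reference, var_list here is just the parameters collected from the params dict, roughly:)

# assumption: var_list is the flat list of all W/b entries from params
var_list = list(params.values())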

I get:

Tensor.name is meaningless when eager execution is enabled.

I understand the Sequential API can do all of this for me; right now I just want to use the optimizer by itself.

Thank you

submitted by /u/RepeatInfamous9988
