TensorFlow DQN execution time keeps increasing

Hello. I have a question regarding TensorFlow. I was working on a Deep Q Network (DQN) problem using TensorFlow. The code is as follows:


g = tf.Graph()
with g.as_default():
    w_1 = tf.Variable(tf.truncated_normal([n_input, n_hidden_1], stddev=0.1))
    w_1_p = tf.Variable(tf.truncated_normal([n_input, n_hidden_1], stddev=0.1))
    # There are other parameters too, but they are excluded for simplicity

def update_target_q_network(sess):
    """Update target Q network once in a while"""

for i_episode in range(n_episode):
    ........  # Code removed for simplicity
    if i_episode % 10 == 0:
        update_target_q_network(centralsess)
    ........


Basically, after every fixed number of episodes (10 in this case), the parameter w_1 is copied to w_1_p.

The issue is that the time it takes to run update_target_q_network keeps increasing as the number of episodes grows. For example, it takes 0-1 seconds at the 100th episode, but the time increases to about 220 seconds by the 7500th episode. Can anyone kindly tell me how the running time of the code can be improved? I have read that this can happen because the graph keeps getting larger, but I am not sure whether that is the cause here, or how to change the code to reduce the time. Thank you for your help.
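For context, the usual cause of this symptom is that update_target_q_network builds a fresh tf.assign op inside the session loop on every call, so the graph grows and each Session.run gets slower. A minimal sketch of the common fix, building the copy op once at graph-construction time and reusing it (the [4, 8] shape and the single w_1/w_1_p pair are placeholder assumptions, since the full variable list was elided):

```python
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()  # TF1-style graph mode

g = tf.Graph()
with g.as_default():
    w_1 = tf.Variable(tf.truncated_normal([4, 8], stddev=0.1))
    w_1_p = tf.Variable(tf.truncated_normal([4, 8], stddev=0.1))
    # Build the target-copy op ONCE, not inside update_target_q_network
    copy_op = tf.assign(w_1_p, w_1)
    init = tf.global_variables_initializer()

with tf.Session(graph=g) as sess:
    sess.run(init)
    n_ops_before = len(g.get_operations())
    for _ in range(100):
        sess.run(copy_op)  # reuses the same op; adds nothing to the graph
    n_ops_after = len(g.get_operations())
    # The op count stays constant, so run time per call stays constant too
```

Calling g.finalize() after construction is a quick way to confirm the diagnosis: it raises an error the moment anything tries to add a new op to the graph.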

submitted by /u/FarzanUllah
