
How to stop CUDA from re-initializing for every subprocess that trains a Keras model?

I am using CUDA/cuDNN to train multiple TensorFlow Keras models
on my GPU (for an evolutionary algorithm attempting to optimize
hyperparameters). Initially, the program would crash with an
out-of-memory error after a couple of generations. Eventually, I found that
using a new sub-process for every model would clear the GPU memory
automatically.

However, each process seems to re-initialize CUDA (loading the
dynamic libraries from the .dll files), which is incredibly
time-consuming. Is there any way to avoid this?

The code is pasted below; the function fitness_wrapper is called
once for each individual.

import multiprocessing

import tensorflow as tf


def fitness_wrapper(indiv):
    # Shared double so the child process can report the fitness back to the parent.
    fit = multiprocessing.Value('d', 0.0)
    if __name__ == '__main__':
        # Train each model in a fresh process so its GPU memory is released on exit.
        process = multiprocessing.Process(target=fitness, args=(indiv, fit))
        process.start()
        process.join()
    return (fit.value,)


def fitness(indiv, fit):
    # Rebuild the candidate architecture from its serialized config.
    model = tf.keras.Sequential.from_config(indiv['architecture'])
    optimizer_dict = indiv['optimizer']
    opt = tf.keras.optimizers.Adam(learning_rate=optimizer_dict['lr'],
                                   beta_1=optimizer_dict['b1'],
                                   beta_2=optimizer_dict['b2'],
                                   epsilon=optimizer_dict['epsilon'])
    model.compile(loss='binary_crossentropy', optimizer=opt, metrics=['accuracy'])
    # data_split = (x_train, x_val, y_train, y_val), defined elsewhere.
    model.fit(data_split[0], data_split[2], batch_size=32, epochs=5)
    # Store the validation accuracy in the shared value so the parent can read it.
    fit.value = model.evaluate(data_split[1], data_split[3])[1]
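For context, a rough sketch of how fitness_wrapper might be driven from the main evolutionary loop; the population list here is only a placeholder for however the algorithm generates its individuals and is not part of the original code:

# Hypothetical driver loop. `population` stands in for the list of individual
# dicts (with 'architecture' and 'optimizer' keys) produced by the evolutionary
# algorithm; it is not defined in the post above.
if __name__ == '__main__':
    scores = [fitness_wrapper(indiv) for indiv in population]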

submitted by /u/BadassGhost
