
Is there a way to reduce TensorFlow's RAM usage?

I was monitoring my system RAM with free -m in the Linux terminal after every cell execution. A single round of Federated Learning training using the following code snippet used 4 GB of RAM (I have 8 GB in total!):

federated_train_data = get_new_federated_data()  # TensorFlow Datasets of 4 clients
state, metrics = iterative_process.next(state, federated_train_data)
print('round 1, metrics={}'.format(metrics))
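
For context, the snippet above is one step of an iterative training loop. I have been wondering whether passing fewer client datasets to next() per round would bring peak RAM down, since fewer datasets would be held in memory at once. Here is a rough sketch of what I mean (make_client_dataset and the client IDs stand in for my own data-loading code, and iterative_process is built elsewhere):

import random

NUM_CLIENTS_PER_ROUND = 2  # fewer clients per round -> fewer datasets in memory at once
all_client_ids = ['client_0', 'client_1', 'client_2', 'client_3']

state = iterative_process.initialize()
for round_num in range(1, 11):
    # build datasets only for the clients sampled this round
    sampled_ids = random.sample(all_client_ids, NUM_CLIENTS_PER_ROUND)
    federated_train_data = [make_client_dataset(cid) for cid in sampled_ids]
    state, metrics = iterative_process.next(state, federated_train_data)
    print('round {}, metrics={}'.format(round_num, metrics))

I am not sure whether this is the right approach, though.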

Is there a way to reduce or optimize RAM allocation? I don't have CUDA set up. If that is the solution, I would really appreciate it if someone could guide me through the steps to install it on Debian, since I have set up my project in a conda virtual environment and don't have much experience with this OS.

GPU: Nvidia GTX 750 Ti

CPU: i5 4460

RAM: 8 GB
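
For what it's worth, my understanding is that CUDA mainly offloads compute to the GPU, so I'm not sure it would reduce system RAM use much. If I do get it installed, my plan was to check that the GPU is visible and enable on-demand memory allocation with the standard tf.config calls, something like:

import tensorflow as tf

# list the GPUs TensorFlow can see; an empty list means CUDA/cuDNN are not set up correctly
gpus = tf.config.list_physical_devices('GPU')
print('Visible GPUs:', gpus)

# allocate GPU memory on demand instead of grabbing it all up front;
# this must run before the GPU is first used
for gpu in gpus:
    tf.config.experimental.set_memory_growth(gpu, True)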

submitted by /u/ChaosAdm
