How to retrofit an existing TF setup to use an onboard GPU?

Hi all,

I’ve got a Lenovo Legion laptop with an onboard GeForce GTX 1660 GPU. Here’s some setup details:

– Ubuntu 21.10

– Python 3.9.7

– using pip (not Conda)

– TensorFlow 2.7.0 (in Python, "tf.__version__" returns 2.7.0)

– TF doesn't yet see the GPU: "tf.config.list_physical_devices('GPU')" returns []

– I think I at least have the NVIDIA driver installed (output of cat /proc/driver/nvidia/version):

NVRM version: NVIDIA UNIX x86_64 Kernel Module 495.29.05 Thu Sep 30 16:00:29 UTC 2021

GCC version: gcc version 11.2.0 (Ubuntu 11.2.0-7ubuntu2)

I’m doing a TensorFlow tutorial (with PyTorch to come) & have reached a point where I need the GPU. How can I get TF to recognize it?
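In case it helps, here's a minimal check I can run. It assumes the TF 2.7 pip wheel is built against CUDA 11.2 and cuDNN 8.1 (the pairing I've seen documented for that release), and just tests whether the shared libraries TF would try to load are findable at all:

```python
import ctypes

# The TF 2.7 pip wheel dlopens these sonames at runtime
# (CUDA 11.x runtime keeps the libcudart.so.11.0 soname).
for lib in ("libcudart.so.11.0", "libcudnn.so.8"):
    try:
        ctypes.CDLL(lib)
        print(f"{lib}: found")
    except OSError:
        print(f"{lib}: NOT found")
```

If either prints "NOT found", then the kernel driver alone isn't enough and I'd still need to install the matching CUDA toolkit and cuDNN (or add their directories to the loader path).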

Before you ask: yes, I *could* pull a Docker image or use Colab. I’m going this route because it seems dumb to have a GPU at my fingertips and not use it.

Thanks all & HNY…

submitted by /u/PullThisFinger
