
Getting the data labels from ImageDataGenerator.flow_from_directory() for all batches

I am using an ImageDataGenerator with the flow_from_directory() method to load the images for my CNN. According to the documentation, the iterator returned by flow_from_directory() yields tuples (x, y), where x is the data and y the labels for every item in the batch.

I tried to get the labels of every batch with the next() method and a loop but received the ValueError: too many values to unpack (expected 2).

What’s the recommended way to get all the matching labels for every image? I couldn’t find anything online except the approach with next(), which only worked for a single batch without a loop.

```python
test_datagen = ImageDataGenerator(rescale=1./255)
test_df = test_datagen.flow_from_directory(
    path,
    target_size=(512, 512),
    batch_size=32,
    class_mode='categorical')

y = []
steps = test_df.n // 32

# My approach that wasn't working
for i in range(steps):
    a, b = test_df.next()
    y.extend(b)
```
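One commonly suggested alternative (a sketch built on the same generator, not necessarily the only way): the iterator returned by flow_from_directory() already exposes every file's label through its classes and class_indices attributes, so no batch loop is needed; passing shuffle=False keeps the labels aligned with the file order.

```python
# Sketch: read all labels straight from the iterator instead of looping over batches.
test_df = test_datagen.flow_from_directory(
    path,
    target_size=(512, 512),
    batch_size=32,
    class_mode='categorical',
    shuffle=False)                     # keep labels aligned with the file order

all_labels = test_df.classes           # integer label per image, in file order
label_map = test_df.class_indices      # e.g. {'class_a': 0, 'class_b': 1}
filenames = test_df.filenames          # matching relative file paths
```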

submitted by /u/obskure_Gestalt


String as input to a tensorflow NN

Hello! I am trying to train a model to recognize plural and singular nouns; the input is a noun and the output is either 1 or 2: 1 for singular and 2 for plural. Truth be told, I am not entirely sure how to tackle this… I saw a few tutorials about TF NNs and image processing, but I don’t know how that relates. Every time I try to run model.fit(nouns, labels, epochs=N) it either doesn’t do anything or it fails due to bad input.

The challenges I am facing are as follows:

* Can I have a variable-sized input?
* How can I get the text, stored in a CSV, into a form that can be fed to the NN model?

The code I have so far is something like this:

```python
model = keras.models.Sequential()
model.add(keras.layers.Input(shape=(INPUT_LENGTH,)))  # I am padding the string to have this length
model.add(keras.layers.Dense(10, activation='relu', name="First_Layer"))
model.add(keras.layers.Dense(2, activation='relu', name="Output_Layer"))

model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
# model.summary()
model.fit(nouns_array, labels_array, epochs=10)
```

I couldn’t find any tutorials or documentation that I can clearly understand about feeding strings into a NN. Any advice or links would be appreciated.

Addendum:

I followed the linked YouTube tutorial to turn the text into tokens and it worked great. I didn’t use the suggested embedding layer and just stuck with the plain input + dense + dense model. Thanks everyone!
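For anyone finding this later, a minimal sketch of what that token-based approach can look like (character-level tokenization plus padding, feeding the same Input/Dense/Dense structure; the nouns, labels, and INPUT_LENGTH values below are invented for illustration):

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

# Hypothetical data; in practice these would come from the CSV.
nouns = ["cat", "cats", "dog", "dogs", "mouse", "mice"]
labels = np.array([0, 1, 0, 1, 0, 1])           # 0 = singular, 1 = plural

INPUT_LENGTH = 12                                # pad/truncate every noun to this length

tokenizer = Tokenizer(char_level=True)           # one integer token per character
tokenizer.fit_on_texts(nouns)
sequences = tokenizer.texts_to_sequences(nouns)
nouns_array = pad_sequences(sequences, maxlen=INPUT_LENGTH, padding="post")

model = keras.models.Sequential([
    keras.layers.Input(shape=(INPUT_LENGTH,)),
    keras.layers.Dense(10, activation="relu", name="First_Layer"),
    keras.layers.Dense(2, activation="softmax", name="Output_Layer"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(nouns_array, labels, epochs=10)
```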

submitted by /u/Muscle_Man1993


TFLite optimization best practices for deployment on Android?

Hi everyone. I’m deploying a ResNet-based 928×928 UNet on an Android device. Performance is suboptimal even with the GPU. Currently I’m only optimizing the model using the tf.lite.Optimize.DEFAULT flag. I was wondering if any of you have experience with more intricate optimization techniques aimed specifically at latency and not necessarily size reduction.
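One direction that often comes up for latency is post-training quantization beyond Optimize.DEFAULT. Below is a sketch under assumptions (a SavedModel at saved_model_dir and a generator of typical inputs named representative_images, both placeholders): full-integer quantization mainly speeds up CPU/NNAPI/DSP execution, while float16 weights are the usual choice when staying on the GPU delegate.

```python
import tensorflow as tf

def representative_dataset():
    for image in representative_images:            # ~100-500 typical inputs
        # shape must match the model input, e.g. (1, 928, 928, 3), float32
        yield [tf.cast(image[tf.newaxis, ...], tf.float32)]

# Option 1: full-integer quantization (CPU / NNAPI / DSP)
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
int8_model = converter.convert()

# Option 2: float16 weights (plays well with the GPU delegate)
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.float16]
fp16_model = converter.convert()

with open("unet_int8.tflite", "wb") as f:
    f.write(int8_model)
with open("unet_fp16.tflite", "wb") as f:
    f.write(fp16_model)
```

Benchmarking both variants on the target device is the usual way to pick between them, since delegate support varies by phone.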

submitted by /u/ronsap123


/r/tensorflow Subreddit Statistics

submitted by /u/_kiminara


Best way to improve inference throughput

I see multiple options on the internet to optimize inference, and I don’t know which would be the best fit for me. My goal is to maximize throughput on the GPU, and preferably reduce GPU memory usage.

I have a reinforcement learning project where I have multiple CPU processes generating input data in batches and sending them over to a single GPU for inference. Each process loads the same ResNet model with two different weight configurations at a time. The weights get updated about every 30 minutes and are distributed between the processes. I use Python and TensorFlow 2.7 on Windows (don’t judge), and the only optimization in use right now is the built-in XLA optimization. My GPU does not support FP16.

I have seen TensorRT being suggested to optimize inference, I have also seen TensorFlow Lite, Intel has an optimization tool too, and then there is TensorFlow Serving. Which option do you think would fit my needs best?
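Whichever of those tools ends up being the answer, two settings often matter in this kind of multi-process setup regardless (a sketch; the ResNet50 stand-in and batch shape are illustrative, not the actual project code): per-process GPU memory growth so each worker doesn’t reserve the whole card, and an XLA-compiled tf.function for the hot inference path.

```python
import tensorflow as tf

# Let each process allocate GPU memory on demand instead of claiming it all,
# which matters when several CPU processes share one GPU.
for gpu in tf.config.list_physical_devices("GPU"):
    tf.config.experimental.set_memory_growth(gpu, True)

# Stand-in model; the real project loads its own ResNet weights.
model = tf.keras.applications.ResNet50(weights=None)

@tf.function(jit_compile=True)          # XLA-compiled inference path
def infer(batch):
    return model(batch, training=False)

dummy = tf.random.normal([32, 224, 224, 3])
_ = infer(dummy)                        # first call traces and compiles; later calls are fast
```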

submitted by /u/LikvidJozsi


VentureBeat: How to discover AI code, know-how with CatalyzeX

submitted by /u/fullerhouse570

Detecting CUDA libraries on Windows

So I’ve had things working fine on Linux, and now I’m trying to set up the same on Windows so I can use the newer GPU in a new machine. The problem is that after installing both the CUDA toolkit and cuDNN, the libraries are never picked up, even after several restarts. I’ve searched quite a bit and haven’t turned up anything that works, and I don’t know of a way to get Windows to look at the PATH entries that the installer did properly set up.

These are the offending libraries

One thing to note is that when I copy the offending dynamic libs to System32, running TensorFlow from the command line picks up the libraries and detects my GPU as it should. So something is going wrong when they are searched for via PATH; I just don’t know how to fix it. This sort of thing rarely happens on Linux, and even when it does, ldconfig is usually the answer.
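For reference, one workaround that sometimes gets suggested for this exact symptom (independent of the Store-vs-python.org fix described in the update below): Python 3.8+ on Windows no longer searches PATH when resolving DLL dependencies, so the CUDA bin directory can be registered explicitly before importing TensorFlow. The install path below is an assumption; adjust it to the installed CUDA version.

```python
import os

# Assumed default install location - change to match your CUDA toolkit version.
os.add_dll_directory(r"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.2\bin")

import tensorflow as tf
print(tf.config.list_physical_devices("GPU"))   # should list the GPU once the DLLs resolve
```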

Update: I tried something I didn’t think would work, but it turns out it did. I originally downloaded Python from the Windows app store because I figured it would save some time, and installing Python with scoop wasn’t working. I uninstalled the Windows app store version of Python 3.8 and installed the same version from the python.org website, and now everything is working.

I’m not sure what the issue is with Windows app store downloads, but I’ve had an incident with Slack installed via the same method. The issue with Slack was completely different, but from what I can tell, Python from the Windows store is installed in a different location than it normally would be, and I think that contributed somehow. On Linux, there are only four directories anything is ever automatically installed to by convention, so this specific problem doesn’t come up on that platform. That’s why troubleshooting this was so tiresome.

submitted by /u/CrashOverride332


How to retrofit an existing TF setup to use an onboard GPU?

Hi all,

I’ve got a Lenovo Legion laptop with an onboard GeForce GTX 1660 GPU. Here’s some setup details:

– Ubuntu 21.10

– Python 3.9.7

– using pip (not Conda)

– Tensorflow 2.7.0 (from Python: “tf.__version__” returns 2.7.0)

– TF doesn’t yet recognize GPU existence: “tf.config.list_physical_devices(‘GPU’)” returns []

– I think I have CUDA installed: (cat /proc/driver/nvidia/version):

NVRM version: NVIDIA UNIX x86_64 Kernel Module 495.29.05 Thu Sep 30 16:00:29 UTC 2021

GCC version: gcc version 11.2.0 (Ubuntu 11.2.0-7ubuntu2)

I’m doing a TensorFlow tutorial (with PyTorch to come) & have reached a point where I need the GPU. How can I get TF to recognize it?
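A quick sanity check that may help here (a sketch, nothing setup-specific): the TF 2.7 pip wheel expects CUDA 11.2 and cuDNN 8.1 runtime libraries to be installed and on the loader path, and the kernel driver reported by /proc/driver/nvidia/version is necessary but not sufficient. The build-info call below shows which versions the installed wheel was built against.

```python
import tensorflow as tf

build = tf.sysconfig.get_build_info()
print(tf.__version__)                            # 2.7.0
print(build.get("cuda_version"))                 # CUDA version the wheel was built against
print(build.get("cudnn_version"))                # cuDNN version the wheel was built against
print(tf.config.list_physical_devices("GPU"))    # [] usually means libcudart/libcudnn weren't found
```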

Before you ask: yes, I *could* download a Docker container or use Colab. I’m going this route because it seems dumb to have a GPU at my fingertips and not use it.

Thanks all & HNY…

submitted by /u/PullThisFinger


Advent of Code 2021 in pure TensorFlow – day 9. Image gradients, 4-neighborhood, and flood fill algorithms in pure TensorFlow.

submitted by /u/pgaleone

Advent of Code 2021 in pure TensorFlow – day 8

submitted by /u/pgaleone