Categories
Misc

Tensorflow.js Graph Object Detection

I'm currently building a web app that uses TensorFlow.js to scan graph data and turn it into a visualisation. At the minute the app can only detect a small number of objects (person, phone, bottle) based on the coco-ssd pretrained model, and I'm struggling with:

1) finding other TensorFlow models that I can implement to improve what can be detected,
2) finding TensorFlow models that can scan for objects and data within a graph, and
3) adding another model to my code without breaking what already works.

I'm very new to TensorFlow and machine learning, but linked below is the code for the web app that requires the model.

The code snippets are in the Stack Overflow question:

https://stackoverflow.com/questions/66015902/tensorflow-js-graph-object-detection

submitted by /u/Fawcett_C


Installing TensorFlow GPU on Windows 10 with compatible CUDA and cuDNN versions can be a cumbersome task. However, it's a little-known fact that it can be done with just two commands if you are using Anaconda, and I hope it works equally well on Linux too.
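A minimal sketch of those two commands, assuming the environment name `tf_gpu` (any name works); conda resolves matching `cudatoolkit` and `cudnn` packages for you, which is what makes this so much easier than a manual CUDA install:

```shell
# Create an environment with the GPU build of TensorFlow;
# conda pulls in compatible CUDA and cuDNN packages automatically.
conda create -n tf_gpu tensorflow-gpu

# Activate it before running any TensorFlow code.
conda activate tf_gpu
```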

submitted by /u/TheCodingBug

Some help with dataset from images

Hello guys, I'm a bit new to TensorFlow. I'm trying to make a dataset from ONE folder, but the only thing I've managed is a dataset from separate folders using flow_from_directory, which treats each folder as a class. I want to build the dataset from just one folder. Could you please tell me a way to do that?
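Since `flow_from_directory` infers classes from subfolders, a single folder means the labels have to come from somewhere else, typically the filenames. A hedged pure-Python sketch (the `label_number.jpg` naming scheme is an assumption); the resulting pairs can then be handed to Keras via a pandas DataFrame and `ImageDataGenerator().flow_from_dataframe`:

```python
import os
import tempfile

def files_with_labels(folder):
    """Pair each image file in one folder with a label parsed
    from its filename, assuming names like 'cat_001.jpg'."""
    pairs = []
    for name in sorted(os.listdir(folder)):
        if name.lower().endswith((".jpg", ".jpeg", ".png")):
            label = name.split("_")[0]  # text before the first underscore
            pairs.append((name, label))
    return pairs

# Demo with dummy files in a temporary directory.
with tempfile.TemporaryDirectory() as d:
    for fname in ["cat_001.jpg", "cat_002.jpg", "dog_001.jpg"]:
        open(os.path.join(d, fname), "w").close()
    pairs = files_with_labels(d)
    print(pairs)
```

With pandas installed, `flow_from_dataframe(df, directory=folder, x_col="filename", y_col="label")` would then consume a DataFrame built from these pairs.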

submitted by /u/engdiazmu


Bigger dataset resulting in a loss of NaN without exceeding RAM limits

I'm currently trying to build a model that can authenticate a person from their movement data (acceleration etc.).

The dataset is built by me and stored in a JSON file for training in Google Colab. Sample Notebook

Older versions of the dataset with fewer entries worked out fine, but with the new version, which has more entries, I suddenly get a loss of NaN and an accuracy of 0.5, no matter what I do.

RAM seems an obvious suspect, but the usage tracker in Colab shows normal levels (2-4 GB of the available 13 GB). I also mocked up dummy datasets of the same or even bigger sizes, and they worked fine.

Do you guys have any idea what is going on here? My only idea going forward is to move to TFRecords instead of the JSON file.
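Before switching formats, it may be worth ruling out bad entries: a single NaN, infinity, or null among the new records is enough to turn the loss into NaN even when RAM is fine, and mocked-up dummy data of the same size wouldn't reproduce it. A stdlib sketch that scans a JSON structure for such values (the nesting shown is an invented example, not the poster's actual schema):

```python
import json
import math

def find_bad_values(node, path="$"):
    """Recursively collect paths to NaN, infinite, or null values
    anywhere inside a decoded JSON structure."""
    bad = []
    if isinstance(node, dict):
        for key, value in node.items():
            bad += find_bad_values(value, f"{path}.{key}")
    elif isinstance(node, list):
        for i, value in enumerate(node):
            bad += find_bad_values(value, f"{path}[{i}]")
    elif isinstance(node, float) and not math.isfinite(node):
        bad.append(path)
    elif node is None:
        bad.append(path)
    return bad

# Python's json module accepts NaN/Infinity literals by default.
sample = json.loads('{"sessions": [{"accel": [0.1, NaN, 0.3]}, {"accel": [1.0, null, 2.0]}]}')
print(find_bad_values(sample))
```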

submitted by /u/Cha-Dao_Tech


Couple Questions about TF Serving

I've been reading about TF Serving quite a bit, trying to decide if it makes sense to use it for some applications I'm working on. As I've been studying up on it, I've run into a few things I can't seem to answer myself, so I thought I would turn to you beautiful people to see if I can find some answers.

1) Building the Docker image in the first place. I read through the documentation at https://www.tensorflow.org/tfx/serving/docker and followed the directions to get my model into a Docker image. However, due to the constraints of what I'm working on, I need to be able to build the container from a Dockerfile myself. I found the Dockerfile for TF Serving on GitHub here: https://github.com/tensorflow/serving/blob/master/tensorflow_serving/tools/docker/Dockerfile.devel But when I build that image, it's something like 20 times the size of the 300 MB one I get when following the instructions in the docs. I'm looking for a Dockerfile that I can build into the 300 MB image... so that's one question.
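The size difference is expected: `Dockerfile.devel` builds TF Serving from source, so the image carries the whole Bazel toolchain, while the small image from the docs uses the prebuilt runtime base. A hedged sketch of a small custom image built on that base (the model path and name are placeholders):

```dockerfile
FROM tensorflow/serving

# Bake the SavedModel into the image; TF Serving looks for
# models under /models/<MODEL_NAME> by default.
COPY ./my_model /models/my_model
ENV MODEL_NAME=my_model
```

Building this with `docker build` should stay close to the size of the `tensorflow/serving` base image rather than the gigabytes of the devel image.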

2) My model currently expects a multidimensional tensor as input. With TF Serving using JSON (a requirement instead of gRPC on this project... it comes from on high and I can't do anything about it), it looks like my options are basically to use something Base64-encoded. Is there a way to circumvent this so that I can send a multidimensional tensor to my model, or do I have to rebuild my model so that it can take in a Base64 image? Ideally, I would like to send a file path to the TF Serving container and have it pick the image up from there, but that doesn't seem to be an option. So I suppose the question is: is Base64 the only way to get an image to the model using JSON?
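On the second question: the TF Serving REST API does not require Base64 for numeric tensors; the `b64` encoding is only needed for raw binary string inputs. A multidimensional tensor can be sent as nested JSON lists under `instances`. A stdlib sketch building such a payload for a hypothetical 2x2x3 input (model name and port are placeholders):

```python
import json

# A single 2x2x3 "image" as nested lists (values are made up).
instance = [
    [[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]],
    [[0.7, 0.8, 0.9], [1.0, 1.1, 1.2]],
]

# TF Serving's REST predict API accepts a batch under "instances";
# the server infers the tensor shape from the nesting.
payload = json.dumps({"instances": [instance]})

# This body would be POSTed to e.g.
#   http://host:8501/v1/models/my_model:predict
decoded = json.loads(payload)
print(len(decoded["instances"]))
```

The file-path idea, by contrast, really isn't supported: the server has no filesystem access to the client's images, so the tensor (or Base64 blob) has to travel in the request body.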

Thanks for any answers… I’ve been banging my head on this off and on for the last month and would love any input that you guys can give me!

submitted by /u/TypeAskee


Linear Classifier – Training dataset is missing some categorical values which appear in Evaluation dataset. How to handle?

Hi,

I have a training dataset which is 67% of all my data. Then an evaluation dataset which is 33%.

They've been randomly shuffled. Somehow, there are some values in the evaluation dataset that didn't appear in training, which causes the following error:

tensorflow.python.framework.errors_impl.InvalidArgumentError: indices[4] = 54 is not in [0, 53)

Which, after some googling, is because not all the vocabulary values were found in the training dataset. I want to just extend the vocab size but I’m unsure how to do it.

The relevant lines of code would be these ones I think:

for feature_name in CATEGORICAL_COLUMNS:
    # get a list of all unique values from the given feature column
    vocabulary = dftrain[feature_name].unique()
    feature_columns.append(
        tf.feature_column.categorical_column_with_vocabulary_list(feature_name, vocabulary))

The vocabulary is a list, not a scalar length, so I can't simply add to it.
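One fix supported by the feature-column API itself is the `num_oov_buckets` argument of `tf.feature_column.categorical_column_with_vocabulary_list`, which reserves extra indices after the vocabulary for values never seen in training. A pure-Python sketch of what that lookup does (the hash-based bucket choice is a simplification of TF's behaviour):

```python
def lookup_with_oov(value, vocabulary, num_oov_buckets=1):
    """Map a value to an integer index; unknown values land in
    out-of-vocabulary buckets appended after the vocabulary,
    mimicking num_oov_buckets in
    tf.feature_column.categorical_column_with_vocabulary_list."""
    if value in vocabulary:
        return vocabulary.index(value)
    # Unknown value: deterministic bucket just past the vocab range,
    # so the index space has len(vocabulary) + num_oov_buckets slots.
    return len(vocabulary) + (hash(value) % num_oov_buckets)

vocab = ["red", "green", "blue"]
print(lookup_with_oov("green", vocab))   # in-vocabulary index
print(lookup_with_oov("purple", vocab))  # OOV bucket index
```

In the code above, adding `num_oov_buckets=1` to the `categorical_column_with_vocabulary_list(...)` call should make the `indices[4] = 54 is not in [0, 53)` error go away, since the index space grows to cover unseen values.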

Any ideas or more information required?

submitted by /u/Cwlrs


How do I build TensorFlow so it is optimised for the build machine?

If I build TensorFlow myself, will it be optimised for that host?

The command I am using is: bazel build --config=opt //tensorflow/tools/pip_package

Will it be optimised? Will it use the host’s full instruction set where it can? Will it avoid instructions the host does not support?

If this is the case, how is the formal release built? Does it override the default settings, or is it built on a specific host?

I ask because the formal release uses instructions my host does not have. I am building and using on the same host. I've read up on why this happens; I'm just not clear on how to build for my host.
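For what it's worth, the usual answer: the official pip wheels are built on CI machines with a fixed, conservative instruction-set baseline (AVX on recent x86-64 releases), which is why they can still use instructions an older host lacks. When you run `./configure` and build locally, `--config=opt` typically defaults to `-march=native`, so the compiler targets exactly the build host's instruction set, no more and no less. A sketch, assuming the standard source-build flow:

```shell
# ./configure lets --config=opt default to -march=native,
# i.e. "use everything this CPU supports, nothing more".
./configure
bazel build --config=opt //tensorflow/tools/pip_package:build_pip_package

# Or make the optimisation target explicit:
bazel build --config=opt --copt=-march=native \
    //tensorflow/tools/pip_package:build_pip_package
```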

submitted by /u/BillyBag2


Saw this video under r/Python, and found it very helpful.

submitted by /u/felix-thebest

Trouble running custom TFLite model on RPI4

I was able to run a sample Google TFLite model on the RPI4, but the custom one I made from Roboflow is not working.

https://www.youtube.com/watch?v=pXLLNa4IrmM&list=LL&index=1&t=1083s

This guide uses Roboflow to train a Darknet model, which is then converted to TFLite. When I run this model on the RPI4, I get this error:

2021-01-30 21:42:03.351149: E tensorflow/core/platform/hadoop/hadoop_file_system.cc:132] HadoopFileSystem load error: libhdfs.so: cannot open shared object file: No such file or directory
Traceback (most recent call last):
  File "TFLite_detection_webcam.py", line 138, in <module>
    interpreter = Interpreter(model_path=PATH_TO_CKPT)
  File "/home/pi/.local/lib/python3.7/site-packages/tensorflow_core/lite/python/interpreter.py", line 207, in __init__
    model_path, self._custom_op_registerers))
ValueError: Didn't find op for builtin opcode 'RESIZE_BILINEAR' version '3'
Registration failed.

Does anyone know how to fix this? If not, does anyone know a better way to run a custom model on the RPI4?
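This `Didn't find op for builtin opcode ... version '3'` error usually means the TensorFlow version that converted the model is newer than the TF Lite interpreter running it: `RESIZE_BILINEAR` version 3 simply isn't registered in the older runtime on the Pi. Two hedged ways out: upgrade the interpreter on the Pi, or re-convert the model with a TF version matching the Pi's runtime. The upgrade might look like:

```shell
# Upgrade to a TF Lite runtime at least as new as the TF version
# that converted the model (exact package source depends on your
# Pi OS; Google also publishes tflite-runtime wheels separately).
pip3 install --upgrade tflite-runtime
```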

submitted by /u/Fish6Chips


Use cases besides ML/AI

Hello there,

Does anybody here use TensorFlow for something besides ML/AI? If yes, what do you use it for, and why not NumPy? I've heard that it's used for big numerical applications, but I didn't find any good examples online.

Have a nice day!

submitted by /u/tadachs