Hi! I am making a drowsiness detection application and need to extract eye landmarks. Can anybody help me with how to extract them? Thank you!
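For reference, a minimal sketch of one common approach (this assumes MediaPipe's Face Mesh rather than a TensorFlow-specific model; the landmark indices are the ones commonly used for the eyes, and the image path is a placeholder):

import cv2
import mediapipe as mp

# Landmark indices commonly used for the eyes in MediaPipe's 468-point face mesh
LEFT_EYE = [33, 160, 158, 133, 153, 144]
RIGHT_EYE = [362, 385, 387, 263, 373, 380]

frame = cv2.imread("face.jpg")  # placeholder; use your webcam frames in practice
with mp.solutions.face_mesh.FaceMesh(static_image_mode=True, max_num_faces=1) as face_mesh:
    results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))

if results.multi_face_landmarks:
    h, w, _ = frame.shape
    lm = results.multi_face_landmarks[0].landmark
    # Convert normalized landmark coordinates to pixel coordinates
    left_eye = [(int(lm[i].x * w), int(lm[i].y * h)) for i in LEFT_EYE]
    right_eye = [(int(lm[i].x * w), int(lm[i].y * h)) for i in RIGHT_EYE]
    print(left_eye, right_eye)

From those points you can compute an eye aspect ratio per frame and flag drowsiness when it stays below a threshold for several consecutive frames.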
submitted by /u/znoman09
I have a piece of code that I believe was written in TF2, but based on a repo written in TF1.
I am trying to run it in TF 1.52.
It explicitly invokes a piece of LSTM code that causes an error (unknown parameter "scope"):
net = tf.keras.layers.LSTM(32, return_sequences=True, dropout=0.4, recurrent_dropout=0.4)(net, scope='lstm1', training=is_training)
net = tf.keras.layers.LSTM(32, dropout=0.4, recurrent_dropout=0.4)(net, scope='lstm2', training=is_training)
All of the other layers have their scope parameter defined as part of a custom layer definition (with tf.variable_scope(scope, reuse=reuse_weights) as sc).
Without the scope param in the LSTM layers, the kernel fails. I believe the problem is the lack of a custom layer definition for the LSTM layers with the scope defined accordingly, but I'm not totally sure.
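For what it's worth, a minimal workaround sketch (assuming TF 1.x with tf.keras; the lstm_layer helper and reuse_weights flag are my placeholders, mirroring the custom-layer pattern described above). Keras layers do not accept a scope keyword in their call, so one option is to open the variable scope around the layer and pass the scope name as the layer name instead:

import tensorflow as tf

def lstm_layer(net, units, scope, is_training, return_sequences=False, reuse_weights=False):
    # Open the variable scope the way the repo's other custom layers do,
    # and give the Keras layer the scope name instead of a `scope` kwarg.
    with tf.variable_scope(scope, reuse=reuse_weights):
        layer = tf.keras.layers.LSTM(units,
                                     return_sequences=return_sequences,
                                     dropout=0.4,
                                     recurrent_dropout=0.4,
                                     name=scope)
        return layer(net, training=is_training)

# Usage mirroring the two original calls:
# net = lstm_layer(net, 32, 'lstm1', is_training, return_sequences=True)
# net = lstm_layer(net, 32, 'lstm2', is_training)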
submitted by /u/dxjustice
Hi, I am doing optical character recognition on my own dataset, consisting of around 17k images across 11 classes (0-9 as well as $). I can train the model with no problem, only 2 epochs for now as the loss goes down very quickly, and it works perfectly immediately after training. The issue is that when I save the model and then load it back, it is as if it was never trained at all. The classifications are terrible and it barely gets 1 or 2 of the 16 images used for inference testing right (essentially random).
I’m sure I am doing something wrong, but I just can’t figure out what.
I’ve been following [this](https://keras.io/examples/vision/captcha_ocr/) guide from keras, and [this](https://github.com/BenSisk/CSGO-OCR) is my GitHub repo with my specific code.
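In case it helps narrow things down, a small save/load sanity check (a sketch with a dummy stand-in model; substitute your real model and data). If the reloaded predictions match the in-memory ones, the weights round-trip fine and the problem is more likely in preprocessing or in the label mapping, e.g. a character lookup built from an unordered set in a different order between runs:

import numpy as np
import tensorflow as tf

# A tiny stand-in model; replace this with your trained OCR model.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32, 32, 1)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(11, activation="softmax"),  # 0-9 plus "$"
])

sample_batch = np.random.rand(4, 32, 32, 1).astype("float32")  # stand-in inference batch

# Round-trip the model through disk and compare predictions.
model.save("ocr_check.h5")
reloaded = tf.keras.models.load_model("ocr_check.h5", compile=False)

diff = np.max(np.abs(model.predict(sample_batch) - reloaded.predict(sample_batch)))
print("max abs prediction difference after reload:", diff)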
submitted by /u/BoopJoop01
Question 1: As part of a project, I trained a custom model from the TensorFlow Model Zoo found here:
Now I retrained this model from a checkpoint so that it would look specifically at one category from the COCO dataset. However, I'm wondering if the training was even needed, seeing as the model was initially trained on the COCO dataset. So my question is: does retraining on the same dataset have advantages when looking at one particular element of the dataset (narrowing from 90 categories to 1)?
Question 2: To remedy this, I thought I might want to train a model from scratch. The page linked above points to some untrained model presets:
https://github.com/tensorflow/models/tree/master/research/object_detection/configs/tf2
However, I noticed that the config files they link to still say:
Trained on COCO17, initialized from Imagenet classification checkpoint
and the model config has an entry:
fine_tune_checkpoint: "PATH_TO_BE_CONFIGURED/mobilenet_v2.ckpt-1"
Can anyone explain what's going on here and how a model can be trained 'from scratch'?
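For reference, a sketch of the relevant train_config fragment (field names follow the TF Object Detection API pipeline configs; the values shown are placeholders). The fine_tune_checkpoint only provides a warm start for the weights, so removing or commenting it out makes the model initialize its weights randomly, i.e. train from scratch:

train_config {
  batch_size: 8
  num_steps: 50000
  # Comment out (or delete) the next two lines to initialize all weights randomly
  # instead of warm-starting from the ImageNet classification checkpoint:
  # fine_tune_checkpoint: "PATH_TO_BE_CONFIGURED/mobilenet_v2.ckpt-1"
  # fine_tune_checkpoint_type: "classification"
}

Keep in mind that training from a random initialization on a single COCO category usually needs far more data and steps to reach the accuracy of a warm-started model.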
submitted by /u/BuckyOFair
I am doing a project classifying land cover types, and I was wondering if/how you could do supervised classification. By supervised classification I mean manually selecting pure pixels that belong to a specific class, and TensorFlow then uses those values to identify the entirety of that class within a whole image.
An example would be selecting 20 groups of pixels that are all trees, 20 groups that are all grassland, and 20 groups that are all water, and then the entire image is categorized into one of those three classes.
Thanks for the help!
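If it helps, a minimal per-pixel sketch of that workflow (the band count, class names, and random arrays are placeholders for your hand-labeled samples; each labeled "pure pixel" is treated as a vector of band values):

import numpy as np
import tensorflow as tf

n_bands = 4                                   # e.g. R, G, B, NIR
class_names = ["trees", "grassland", "water"]

# Hand-labeled training pixels: (n_samples, n_bands) band values and integer class labels.
X_train = np.random.rand(600, n_bands).astype("float32")
y_train = np.random.randint(0, len(class_names), size=600)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(n_bands,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(len(class_names), activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(X_train, y_train, epochs=20, verbose=0)

# Classify every pixel of a full image by flattening it to (H*W, n_bands).
image = np.random.rand(128, 128, n_bands).astype("float32")
pred = model.predict(image.reshape(-1, n_bands)).argmax(axis=1)
class_map = pred.reshape(128, 128)            # per-pixel class indices
print(class_map.shape)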
submitted by /u/cheese_and_keys
Hello.
I'm trying to export a model as TFLite and I get this error:
The following operation(s) need TFLite custom op implementation(s): Custom ops: BitwiseAnd
Do you have any idea how to provide the BitwiseAnd op to the model?
My code to export the model is this:
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.allow_custom_ops = True
converter.experimental_new_converter = True
tflite_model = converter.convert()
with open('lp4.tflite', 'wb') as f:
    f.write(tflite_model)
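One thing that may be worth trying (a sketch, assuming BitwiseAnd comes from a regular TensorFlow op in the graph and is covered by the select-ops fallback; the output filename is a placeholder): let the converter keep unsupported ops as TensorFlow "Flex" ops instead of custom ops. The resulting .tflite file then needs the Flex delegate (the select-TF-ops runtime) linked into the app:

import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_keras_model(model)  # same `model` as above
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,  # use builtin TFLite ops where possible
    tf.lite.OpsSet.SELECT_TF_OPS,    # fall back to TensorFlow kernels otherwise
]
tflite_model = converter.convert()
with open('lp4_flex.tflite', 'wb') as f:
    f.write(tflite_model)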
Thanks for your help
submitted by /u/i_cook_bits
This package makes it easy to create an efficient image Dataset generator.
Features
Install
python -m pip install git+https://github.com/piyop/tfaug
Supported Augmentations
submitted by /u/last_peng
Hi
I have the following kotlin code:
val mydataarray = Array(100) { FloatArray(200) }
set_values_for_mydataarray(mydataarray)
val inputFeature0 = TensorBuffer.createFixedSize(intArrayOf(1, 100, 200), DataType.FLOAT32)
inputFeature0.loadArray(mydataarray)
loadArray(mydataarray) is returning this error:
None of the following functions can be called with the arguments supplied:
public open fun loadArray(p0: FloatArray)
I suppose the shape of mydataarray is wrong.
Do you know how it has to be shaped?
Or do you think the problem is something else?
Thanks
submitted by /u/i_cook_bits
Try Fleet Command for free on NVIDIA LaunchPad.
With the growth of AI applications being deployed at the edge, IT organizations are looking at the best way to deploy and manage their edge computing systems and software.
NVIDIA Fleet Command brings secure edge AI to enterprises of any size by transforming NVIDIA-Certified Systems into secure edge appliances and connecting them to the cloud in minutes. In the cloud, you can deploy and manage applications from the NGC Catalog or your NGC private registry, update system software over the air, and manage systems remotely with nothing but a browser and internet connection.
To help organizations evaluate the benefits of Fleet Command, you can try the product using NVIDIA LaunchPad. Through curated labs, LaunchPad gives you access to dedicated hardware and Fleet Command software so you can walk through the entire process of deploying and managing an AI application at the edge.
In this post, I walk you through the Fleet Command trial on LaunchPad including details about who should apply, how long it takes to complete the curated lab experience, and next steps.
Fleet Command is designed for IT and OT professionals who are responsible for managing AI applications at multiple edge locations. The simplicity of the product allows it to be used by professionals of any skill level with ease.
The curated lab walks through the deployment of a demo application. For those with an imminent edge project, the demo application can be used to test the features of Fleet Command, but full onsite testing is still necessary.
The Fleet Command lab experience is designed for deployment and management of AI applications at the edge. NVIDIA LaunchPad offers other labs for management of training environments with NVIDIA Base Command, and NVIDIA AI Enterprise for streamlined development and deployment of AI from the enterprise data center.
In this trial, you act as a Fleet Command administrator deploying a computer vision application for counting cars at an intersection. The whole trial should take about an hour.
Fleet Command can be accessed from anywhere through NGC, the GPU-optimized software hub for AI, allowing administrators to remotely manage edge locations, systems, and applications.
Administrators automatically have Fleet Command added to the NGC console.
A location in Fleet Command represents a real-world location where physical systems are installed. In the lab, you create one edge location, but customers can manage thousands of locations in production.
To add a new location, choose Add Location and fill in the details. Choose the latest version available.
Next, add a system to the location; this represents the physical system at the edge. Select the location you just created and choose Add System. Completing this step generates a code that you can use to securely provision the server onsite with the Fleet Command operating stack.
Add the system name and description to complete the process.
After a system is added to a location, you get a generated activation code that is used to pair Fleet Command to the physical system onsite.
NVIDIA LaunchPad provides a system console to access the server associated with the trial. Follow the prompts to complete installation. After initial setup, the system prompts for the activation code generated from creating a system in Fleet Command.
When the activation code is entered, the system finalizes pairing with Fleet Command. A check box in the Fleet Command user interface shows you that the server is running and ready to be remotely managed.
Now that the local installer has the system paired to Fleet Command, you can deploy an AI application. Applications can be hosted on your NGC private registry, or directly on the NGC Catalog.
AI applications are deployed using Helm charts, which are used to define, install, and upgrade Kubernetes applications. Choose Add Application and enter the information in the prompt.
Now that the application is ready in Fleet Command, it can be deployed onto one or many systems. Create a deployment by selecting the location and application that you created, making sure to check the box enabling application access.
Now the application is deployed on the server and you can view the application running on the sample video data in the trial application.
That’s it. I’ve now walked you through the end-to-end process of connecting to physical systems at the edge, creating a deployment, and pushing an AI application to that edge server. In less than an hour, the trial goes from disconnected, remote systems to fully managed, secure, remote edge environments.
Fleet Command is a powerful tool for simplifying management of edge computing infrastructure without compromising on security, flexibility, or scale. To understand if Fleet Command is the right tool for managing your edge AI infrastructure, register for your NVIDIA LaunchPad trial.
Hi. I wanted to ask if it is possible to create a speech-to-text system for a dialect spoken in the Philippines. I would only be using simple words of the dialect.
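For a small, fixed set of words, one option is keyword spotting rather than full speech-to-text. A minimal sketch (the vocabulary, sample rate, and random arrays below are placeholders for short recorded clips labeled with the word they contain):

import numpy as np
import tensorflow as tf

words = ["word_one", "word_two", "word_three"]   # placeholder vocabulary of dialect words
sample_rate = 16000
clip_len = sample_rate                           # 1-second clips

def to_spectrogram(waveforms):
    # Short-time Fourier transform -> magnitude spectrogram with a channel dimension.
    spec = tf.signal.stft(waveforms, frame_length=255, frame_step=128)
    return tf.abs(spec)[..., tf.newaxis]

# Placeholder data; replace with your recorded clips and their labels.
waveforms = np.random.randn(300, clip_len).astype("float32")
labels = np.random.randint(0, len(words), size=300)
specs = to_spectrogram(tf.constant(waveforms))

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=specs.shape[1:]),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(len(words), activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(specs, labels, epochs=5, verbose=0)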
submitted by /u/Trick_Welder9386