Categories
Misc

Help With Saving and Loading Model

Hi, I am doing optical character recognition on my own dataset of around 17k images across 11 classes (0-9 plus $). I can train the model with no problem (only 2 epochs for now, as the loss drops very quickly), and it works perfectly immediately after training. The issue is that when I save the model and then load it back, it is as if I never trained it at all. The classifications are terrible, and it barely gets 1 or 2 of the 16 images used for inference testing right (essentially random).

I’m sure I am doing something wrong, but I just can’t figure out what.

I’ve been following [this](https://keras.io/examples/vision/captcha_ocr/) guide from keras, and [this](https://github.com/BenSisk/CSGO-OCR) is my GitHub repo with my specific code.
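For anyone hitting the same wall: with guides like that one, the saved model can contain training-only pieces (such as the guide's custom CTC layer) that don't round-trip cleanly. One pattern that sidesteps this is saving only the weights and rebuilding the architecture in code before loading. A minimal sketch, where `build_model()` stands in for your own model-building function (the layer sizes here are hypothetical):

```python
import tensorflow as tf

def build_model():
    # Stand-in for the model-building code from the guide (hypothetical).
    inputs = tf.keras.Input(shape=(8,))
    outputs = tf.keras.layers.Dense(11, activation="softmax")(inputs)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    return model

model = build_model()
# ... model.fit(...) ...
model.save_weights("ocr.weights.h5")  # weights only, no architecture

# Later: rebuild the exact same architecture, then load the weights into it.
restored = build_model()
restored.load_weights("ocr.weights.h5")
```

If predictions from `restored` match predictions from `model` on the same inputs, the save/load step itself is fine and the problem is elsewhere (often a preprocessing difference at inference time).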

submitted by /u/BoopJoop01

Supervised land cover classification

I am doing a project classifying land cover types, and I was wondering if (and how) you could do supervised classification with TensorFlow. By supervised classification I mean manually selecting pure pixels that belong to a specific class, and having TensorFlow use those values to identify every occurrence of that class within a whole image.

An example would be selecting 20 groups of pixels that are all trees, 20 groups that are all grassland, and 20 groups that are all water, and then categorizing the entire image into one of those three classes.
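As an aside, the workflow described above can be prototyped end-to-end without a neural network at all: classify each pixel by its nearest class centroid computed from the hand-picked training pixels, then swap in a TensorFlow model later. A minimal sketch (all class names and RGB values here are made up for illustration):

```python
import numpy as np

# Hand-labeled "pure" training pixels per class (hypothetical RGB values).
training_pixels = {
    "trees":     np.array([[30, 90, 30], [25, 80, 35], [35, 95, 28]], float),
    "grassland": np.array([[120, 180, 60], [110, 170, 70], [125, 185, 55]], float),
    "water":     np.array([[20, 40, 120], [25, 45, 130], [15, 35, 115]], float),
}
classes = list(training_pixels)
centroids = np.stack([training_pixels[c].mean(axis=0) for c in classes])

def classify(image):
    """Assign every pixel of an (H, W, 3) image to its nearest class centroid."""
    flat = image.reshape(-1, 3).astype(float)
    dists = np.linalg.norm(flat[:, None, :] - centroids[None, :, :], axis=2)
    return dists.argmin(axis=1).reshape(image.shape[:2])  # per-pixel class index

# A tiny 1x3 "image": one tree-ish, one grass-ish, one water-ish pixel.
img = np.array([[[28, 88, 31], [118, 178, 62], [18, 38, 122]]], dtype=np.uint8)
labels = classify(img)
```

A TensorFlow classifier trained on the same labeled pixel groups would replace the centroid step but keep the same overall shape: labeled samples in, per-pixel class map out.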

Thanks for the help!

submitted by /u/cheese_and_keys

Questions on Transfer Learning from ModelZoo

Question 1: As part of a project I trained a custom model from the TensorFlow Model Zoo found here:

https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf2_detection_zoo.md

Now, I retrained this model from a checkpoint so that it would look specifically at one category from the COCO dataset. However, I’m wondering if the training was even needed, seeing as the model was initially trained on the COCO dataset. So my question is: does retraining on the same dataset have advantages when looking at one particular element of the dataset (narrowing from 90 categories to 1)?

Question 2: To get around this, I thought I might train a model from ‘scratch’. The page above links to some untrained model presets:

https://github.com/tensorflow/models/tree/master/research/object_detection/configs/tf2

However, I noticed that the config files it links to still say:

Trained on COCO17, initialized from Imagenet classification checkpoint

and the model config has an entry:

fine_tune_checkpoint: “PATH_TO_BE_CONFIGURED/mobilenet_v2.ckpt-1”

Can anyone explain what’s going on here, and how a model can be trained ‘from scratch’?
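For what it’s worth, with the TF2 Object Detection API, training “from scratch” usually amounts to clearing the checkpoint fields in `pipeline.config` so that all variables start from random initialization instead of the ImageNet classification weights. A sketch (field names as they appear in the zoo’s configs; the other values are illustrative):

```
train_config {
  # Comment out or delete these two lines to skip checkpoint initialization,
  # so all variables start from random init instead of ImageNet weights:
  # fine_tune_checkpoint: "PATH_TO_BE_CONFIGURED/mobilenet_v2.ckpt-1"
  # fine_tune_checkpoint_type: "classification"

  batch_size: 16        # illustrative
  num_steps: 200000     # illustrative
}
```

Note that detection models trained truly from scratch typically need far more data and steps than fine-tuned ones, which is why the zoo configs default to the ImageNet backbone checkpoint.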

submitted by /u/BuckyOFair

TFLite custom op implementation(s): Custom ops: BitwiseAnd

Hello.

I’m trying to export a model as TFLite and I get this error:

The following operation(s) need TFLite custom op implementation(s): Custom ops: BitwiseAnd

Do you have any idea how to provide a BitwiseAnd implementation for the model?

My code to export the model is this:

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.allow_custom_ops = True
converter.experimental_new_converter = True
tflite_model = converter.convert()
with open('lp4.tflite', 'wb') as f:
    f.write(tflite_model)
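Since BitwiseAnd exists as a regular TensorFlow op (`tf.bitwise.bitwise_and`), one alternative to writing a custom op implementation is letting the converter fall back to the TF select (Flex) ops. A sketch, with a toy model standing in for the real one:

```python
import tensorflow as tf

# Toy model using tf.bitwise.bitwise_and, an op with no builtin TFLite kernel
# (stand-in for whatever produces BitwiseAnd in the real model).
inp = tf.keras.Input(shape=(4,), dtype=tf.int32)
out = tf.keras.layers.Lambda(lambda x: tf.bitwise.bitwise_and(x, 3))(inp)
model = tf.keras.Model(inp, out)

converter = tf.lite.TFLiteConverter.from_keras_model(model)
# Let ops without TFLite kernels run via the TF select (Flex) delegate
# instead of requiring a custom op implementation.
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,
    tf.lite.OpsSet.SELECT_TF_OPS,
]
tflite_model = converter.convert()
```

On-device, the interpreter then needs the Flex delegate linked in (on Android, the `org.tensorflow:tensorflow-lite-select-tf-ops` dependency), which does grow the binary size.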

Thanks for your help

submitted by /u/i_cook_bits

Error during loadArray in TensorFlow Lite

Hi

I have the following kotlin code:

val mydataarray = Array(100) { FloatArray(200) }
set_values_for_mydataarray(mydataarray)
val inputFeature0 = TensorBuffer.createFixedSize(intArrayOf(1, 100, 200), DataType.FLOAT32)
inputFeature0.loadArray(mydataarray)

loadArray(mydataarray) is returning this error:

None of the following functions can be called with the arguments supplied:

public open fun loadArray(p0: FloatArray)

I suppose the shape of mydataarray is wrong.

Do you know how it has to be shaped?

Or do you think the problem is something else?

Thanks

submitted by /u/i_cook_bits

Easy way to create an augmented tf.data.Dataset (generator) for images

This package makes it easy to create an efficient image Dataset generator.

github link

Features

  • Simple, easy, and efficient image dataset creator for segmentation and classification models.
  • Generates the Dataset in a one-liner while avoiding some limitations that cause performance bottlenecks during training.
  • Augments multiple input images and multiple label images with the same transformations.
  • Supports adjusting sampling ratios across multiple TFRecord files.

Install

python -m pip install git+https://github.com/piyop/tfaug

Supported Augmentations

  • standardize
  • resize
  • random_rotation
  • random_flip_left_right
  • random_flip_up_down
  • random_shift
  • random_zoom
  • random_shear
  • random_brightness
  • random_saturation
  • random_hue
  • random_contrast
  • random_crop
  • random_noise
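For comparison, the raw tf.data pattern a package like this wraps looks roughly like the following; the augmentations here are plain `tf.image` calls covering a few of the ops listed above, not tfaug’s own API:

```python
import tensorflow as tf

def augment(image):
    # Random geometric and photometric augmentations per image.
    image = tf.image.random_flip_left_right(image)
    image = tf.image.random_flip_up_down(image)
    image = tf.image.random_brightness(image, max_delta=0.2)
    image = tf.image.random_contrast(image, 0.8, 1.2)
    return image

images = tf.random.uniform((8, 64, 64, 3))  # dummy batch of RGB images
ds = (tf.data.Dataset.from_tensor_slices(images)
      .map(augment, num_parallel_calls=tf.data.AUTOTUNE)
      .batch(4)
      .prefetch(tf.data.AUTOTUNE))
```

Parallel `map` plus `prefetch` is what keeps the input pipeline from becoming the training bottleneck.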

submitted by /u/last_peng

Managing Edge AI with the NVIDIA LaunchPad Free Trial

Try Fleet Command for free on NVIDIA LaunchPad.

With the growth of AI applications being deployed at the edge, IT organizations are looking at the best way to deploy and manage their edge computing systems and software.

NVIDIA Fleet Command brings secure edge AI to enterprises of any size by transforming NVIDIA-Certified Systems into secure edge appliances and connecting them to the cloud in minutes. In the cloud, you can deploy and manage applications from the NGC Catalog or your NGC private registry, update system software over the air, and manage systems remotely with nothing but a browser and internet connection.

To help organizations evaluate the benefits of Fleet Command, you can try the product using NVIDIA LaunchPad. Through curated labs, LaunchPad gives you access to dedicated hardware and Fleet Command software so you can walk through the entire process of deploying and managing an AI application at the edge.

In this post, I walk you through the Fleet Command trial on LaunchPad including details about who should apply, how long it takes to complete the curated lab experience, and next steps.

Who should try Fleet Command?

Fleet Command is designed for IT and OT professionals who are responsible for managing AI applications at multiple edge locations. The simplicity of the product allows professionals of any skill level to use it with ease.

The curated lab walks through the deployment of a demo application. For those with an imminent edge project, the demo application can be used to test the features of Fleet Command but full testing onsite is still necessary.

The Fleet Command lab experience is designed for deployment and management of AI applications at the edge. NVIDIA LaunchPad offers other labs for management of training environments with NVIDIA Base Command, and NVIDIA AI Enterprise for streamlined development and deployment of AI from the enterprise data center.

What does the Fleet Command curated lab include?

In this trial, you act as a Fleet Command administrator deploying a computer vision application for counting cars at an intersection. The whole trial should take about an hour.

Access Fleet Command in NGC

Fleet Command can be accessed from anywhere through NGC, the GPU-optimized software hub for AI, allowing administrators to remotely manage edge locations, systems, and applications.

Administrators automatically have Fleet Command added to the NGC console.

Create an edge location

A location in Fleet Command represents a real-world location where physical systems are installed. In the lab, you create one edge location, but customers can manage thousands of locations in production. 

To add a new location, choose Add Location and fill in the details. Choose the latest version available. 

Figure 1. Add a location to be managed by NVIDIA Fleet Command

Add an edge system

Next, add a system to the location; this represents the physical system at the edge. Select the location you just created and choose Add System. Completing this step generates a code that you can use to securely provision the server onsite with the Fleet Command operating stack.

Figure 2. Add edge systems to a location

Add the system name and description to complete the process.

After a system is added to a location, you get a generated activation code that is used to pair Fleet Command to the physical system onsite.

Figure 3. Activation code generated to connect system in Fleet Command to the edge server

Connect Fleet Command to the LaunchPad server

NVIDIA LaunchPad provides a system console to access the server associated with the trial. Follow the prompts to complete installation. After initial setup, the system prompts for the activation code generated from creating a system in Fleet Command.

Figure 4. Pair the edge server to Fleet Command

When the activation code is entered, the system finalizes pairing with Fleet Command. A check box in the Fleet Command user interface shows you that the server is running and ready to be remotely managed.

Figure 5. Complete pairing of the edge server, which can now be controlled in Fleet Command

Deploy an AI application

Now that the system is paired to Fleet Command, you can deploy an AI application. Applications can be hosted on your NGC private registry or directly on the NGC Catalog.

Figure 6. Add application from NGC to the location

AI applications are deployed using Helm charts, which are used to define, install, and upgrade Kubernetes applications. Choose Add Application and enter the information in the prompt.

Now that the application is ready in Fleet Command, it can be deployed onto one or many systems. Create a deployment by selecting the location and application that you created, making sure to check the box enabling application access.

Figure 7. Create a deployment

Now the application is deployed on the server and you can view the application running on the sample video data in the trial application.

That’s it. I’ve now walked you through the end-to-end process of connecting to physical systems at the edge, creating a deployment, and pushing an AI application to that edge server. In less than an hour, the trial goes from disconnected, remote systems to fully managed, secure, remote edge environments.

Next steps

Fleet Command is a powerful tool for simplifying management of edge computing infrastructure without compromising on security, flexibility, or scale. To understand if Fleet Command is the right tool for managing your edge AI infrastructure, register for your NVIDIA LaunchPad trial.

Simple Words using an uncommon dialect

Hi. I wanted to ask if it is possible to create a speech-to-text model for a dialect spoken in the Philippines. I would only be using simple words from the dialect.

submitted by /u/Trick_Welder9386

Edify – we’re hiring!

Hi Everyone,

I’m working on a project called Edify, a digital classroom app. We’re looking for people who are good with TensorFlow for a project. If you want to work with us, our hiring process is simple: we don’t care about your education, where you’ve worked, or anything similar.

I want to see what you know, and the best way to demonstrate that is to show it. So head on over to https://edify.ws/club/10 for a quick tutorial on how this works.

We are looking for 3 things:

  • What have you done? #tag_it and share a project.
  • How have you done this? Explain.
  • Why have you done this? Explain.

I reckon that if you’re good, you will have no trouble showing it to anyone. If you want, you can also share this challenge with some friends who might also be interested, #ShowWhatYouKnow.

submitted by /u/skobre

At the Movies: For 14th Year Running, NVIDIA Technologies Power All VFX Oscar Nominees

For the 14th consecutive year, every Academy Award nominee for Best Visual Effects used NVIDIA technologies. The 94th annual Academy Awards ceremony, taking place Sunday, March 27, has five nominees in the running: Dune; Free Guy; No Time to Die; Shang-Chi and the Legend of the Ten Rings; and Spider-Man: No Way Home.

The post At the Movies: For 14th Year Running, NVIDIA Technologies Power All VFX Oscar Nominees appeared first on NVIDIA Blog.