Hi, I am doing optical character recognition on my own dataset, consisting of around 17k images in 11 classes (0-9 as well as $). I can train the model with no problem (only 2 epochs for now, as the loss goes down very quickly), and it works perfectly immediately after training. The issue is that when I save the model and then load it again, it is as if the training never happened: the classifications are terrible and it barely gets 1 or 2 of the 16 images used for inference testing right (essentially random).
I’m sure I am doing something wrong, but I just can’t figure out what.
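For reference, here is a minimal sketch of the save/load round trip I mean, assuming a Keras-style workflow; the architecture, file names, and preprocessing below are placeholders rather than my actual code.

```python
import numpy as np
import tensorflow as tf

def build_model():
    # Stand-in for the real OCR network (11 classes: digits 0-9 plus '$').
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(32, 32, 1)),
        tf.keras.layers.Conv2D(16, 3, activation="relu"),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(11, activation="softmax"),
    ])

def preprocess(images):
    # The same scaling used during training must be applied at inference
    # time; skipping it can make predictions look random.
    return images.astype("float32") / 255.0

model = build_model()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# ... model.fit(...) on the 17k-image dataset would go here ...

# Save the full model (architecture + weights); older TF versions may
# need an .h5 file or a SavedModel directory instead of .keras.
model.save("ocr_model.keras")

# Later, in a separate inference script:
restored = tf.keras.models.load_model("ocr_model.keras")
test_batch = preprocess(np.random.randint(0, 256, (16, 32, 32, 1)))  # dummy stand-in data
print(restored.predict(test_batch).argmax(axis=1))
```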
Now I retrained this model from a checkpoint so that it would look specifically at one category from the COCO dataset. However, I'm wondering whether the training was even needed, seeing as the model was initially trained on the COCO dataset. So my question is: does retraining on the same dataset have advantages when you only care about one particular element of the dataset (narrowing from 90 categories down to 1)? A rough sketch of the filter-only alternative I have in mind is below.
Question 2: To remedy this, I thought I might want to train a model from 'scratch'. The page linked above also links to some untrained model presets.
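Concretely, this is the filter-only alternative I mean: keep the COCO-pretrained model as-is and just discard detections that are not the one category of interest. It assumes a detector that returns boxes, scores, and COCO class IDs; the class ID, threshold, and dummy outputs below are placeholders, not my actual model's values.

```python
import numpy as np

# Placeholder: COCO class ID for the single category of interest
# (e.g., 3 is "car" in the standard COCO label map).
TARGET_CLASS_ID = 3
SCORE_THRESHOLD = 0.5

def keep_single_category(boxes, scores, class_ids):
    """Filter a detector's raw outputs down to one COCO category.

    boxes:     (N, 4) array of detection boxes
    scores:    (N,) confidence scores
    class_ids: (N,) predicted COCO class IDs
    """
    mask = (class_ids == TARGET_CLASS_ID) & (scores >= SCORE_THRESHOLD)
    return boxes[mask], scores[mask]

# Dummy outputs standing in for a real model's predictions:
boxes = np.random.rand(5, 4)
scores = np.array([0.9, 0.2, 0.8, 0.95, 0.4])
class_ids = np.array([3, 1, 3, 17, 3])
print(keep_single_category(boxes, scores, class_ids))
```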
I am doing a project classifying land cover types and I was wondering if/how you could do supervised classification. By supervised classification I mean manually selecting pure pixels that belong to a specific class, and having TensorFlow use those values to identify the entirety of that class within a whole image.
An example would be selecting 20 groups of pixels that are all trees, 20 groups that are all grassland, and 20 groups that are all water, and then the entire image is categorized into one of those three classes.
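To make the idea concrete, here is a rough sketch of what I have in mind, assuming a Keras-style per-pixel classifier trained on the hand-selected pixels; the band count, sample counts, and random data below are placeholders for the real selections.

```python
import numpy as np
import tensorflow as tf

# Hypothetical training samples: the hand-picked "pure pixels" for each
# class, stored as per-pixel band values (here 3 bands, e.g. RGB).
train_pixels = np.random.rand(600, 3).astype("float32")   # 200 pixels per class (placeholder)
train_labels = np.repeat([0, 1, 2], 200)                   # 0=trees, 1=grassland, 2=water

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(3,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),         # one output per land cover class
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(train_pixels, train_labels, epochs=20, verbose=0)

# Classify every pixel of a whole image by flattening it to (H*W, bands).
image = np.random.rand(128, 128, 3).astype("float32")       # placeholder scene
flat = image.reshape(-1, 3)
class_map = model.predict(flat, verbose=0).argmax(axis=1).reshape(128, 128)
```

In the real workflow, the training arrays would hold the actual band values of the selected pixel groups instead of random data.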
With the growth of AI applications being deployed at the edge, IT organizations are looking at the best way to deploy and manage their edge computing systems and software.
NVIDIA Fleet Command brings secure edge AI to enterprises of any size by transforming NVIDIA-Certified Systems into secure edge appliances and connecting them to the cloud in minutes. In the cloud, you can deploy and manage applications from the NGC Catalog or your NGC private registry, update system software over the air, and manage systems remotely with nothing but a browser and internet connection.
To help organizations evaluate the benefits of Fleet Command, NVIDIA makes the product available for testing through NVIDIA LaunchPad. Through curated labs, LaunchPad gives you access to dedicated hardware and Fleet Command software so you can walk through the entire process of deploying and managing an AI application at the edge.
In this post, I walk you through the Fleet Command trial on LaunchPad including details about who should apply, how long it takes to complete the curated lab experience, and next steps.
Who should try Fleet Command?
Fleet Command is designed for IT and OT professionals who are responsible for managing AI applications at multiple edge locations. The product is simple enough for professionals of any skill level to use.
The curated lab walks through the deployment of a demo application. For those with an imminent edge project, the demo application can be used to test the features of Fleet Command, but full onsite testing is still necessary.
The Fleet Command lab experience is designed for deployment and management of AI applications at the edge. NVIDIA LaunchPad also offers other labs: NVIDIA Base Command for managing training environments, and NVIDIA AI Enterprise for streamlined development and deployment of AI from the enterprise data center.
What does the Fleet Command curated lab include?
In this trial, you act as a Fleet Command administrator deploying a computer vision application for counting cars at an intersection. The whole trial should take about an hour.
Access Fleet Command in NGC
Fleet Command can be accessed from anywhere through NGC, the GPU-optimized software hub for AI, allowing administrators to remotely manage edge locations, systems, and applications.
Administrators automatically have Fleet Command added to the NGC console.
Create an edge location
A location in Fleet Command represents a real-world location where physical systems are installed. In the lab, you create one edge location, but customers can manage thousands of locations in production.
To add a new location, choose Add Location and fill in the details. Choose the latest version available.
Figure 1. Add a location to be managed by NVIDIA Fleet Command
Add an edge system
Next, add a system to the location; the system represents the physical server at the edge. Select the location you just created and choose Add System. Completing this step generates a code that you can use to securely provision the server onsite with the Fleet Command operating stack.
Figure 2. Add edge systems to a location
Add the system name and description to complete the process.
After a system is added to a location, you get a generated activation code that is used to pair Fleet Command to the physical system onsite.
Figure 3. Activation code generated to connect system in Fleet Command to the edge server
Connect Fleet Command to the LaunchPad server
NVIDIA LaunchPad provides a system console to access the server associated with the trial. Follow the prompts to complete installation. After initial setup, the system prompts for the activation code generated when you created the system in Fleet Command.
Figure 4. Pair the edge server to Fleet Command
When the activation code is entered, the system finalizes pairing with Fleet Command. A check mark in the Fleet Command user interface shows you that the server is running and ready to be remotely managed.
Figure 5. Complete pairing of the edge server, which can now be controlled in Fleet Command
Deploy an AI application
Now that the system is paired to Fleet Command, you can deploy an AI application. Applications can be hosted in your NGC private registry or directly in the NGC Catalog.
Figure 6. Add application from NGC to the location
AI applications are deployed using Helm charts, which are used to define, install, and upgrade Kubernetes applications. Choose Add Application and enter the information in the prompt.
Now that the application is ready in Fleet Command, it can be deployed onto one or many systems. Create a deployment by selecting the location and the application that you created, making sure to check the box enabling application access.
Figure 7. Create a deployment
Now the application is deployed on the server, and you can view it running on the sample video data included in the trial.
Figure 8. The computer vision application for counting cars is in production
That’s it. I’ve now walked you through the end-to-end process of connecting to physical systems at the edge, creating a deployment, and pushing an AI application to that edge server. In less than an hour, the trial goes from disconnected, remote systems to fully managed, secure, remote edge environments.
Next steps
Fleet Command is a powerful tool for simplifying management of edge computing infrastructure without compromising on security, flexibility, or scale. To understand if Fleet Command is the right tool for managing your edge AI infrastructure, register for your NVIDIA LaunchPad trial.
Hi. I wanted to ask if it is possible to create a speech-to-text model for one of the dialects spoken in the Philippines. I would only be using simple words from the dialect.
I’m working on a project called Edify, a digital classroom app. We’re looking for people who are good with TensorFlow for a project. If you want to work with us, our hiring process is simple. We don’t care about your education, where you worked, or anything similar.
I want to see what you know, and the best way to demonstrate that is to show it. So, head on over to https://edify.ws/club/10 for a quick tutorial on how this works.
We are looking for 3 things:
What have you done? #tag_it and share a project.
How have you done this? Explain.
Why have you done this? Explain.
I reckon that if you’re good, you will have no trouble showing it to anyone. If you want, you can also share this challenge with some friends who might also be interested, #ShowWhatYouKnow.
For the 14th consecutive year, every Academy Award nominee for Best Visual Effects used NVIDIA technologies. The 94th annual Academy Awards ceremony, taking place Sunday, March 27, has five nominees in the running: Dune, Free Guy, No Time to Die, Shang-Chi and the Legend of the Ten Rings, and Spider-Man: No Way Home.