Categories
Misc

AI in the Big Easy: NVIDIA Research Lets Content Creators Improvise With 3D Objects

Jazz is all about improvisation — and NVIDIA is paying tribute to the genre with AI research that could one day enable graphics creators to improvise with 3D objects created in the time it takes to hold a jam session. The method, NVIDIA 3D MoMa, could empower architects, designers, concept artists and game developers to Read article >

The post AI in the Big Easy: NVIDIA Research Lets Content Creators Improvise With 3D Objects appeared first on NVIDIA Blog.

Categories
Misc

NVIDIA Accelerates Open Data Center Innovation

NVIDIA today became a founding member of the Linux Foundation’s Open Programmable Infrastructure (OPI) project, while making its NVIDIA DOCA networking software APIs widely available to foster innovation in the data center. Businesses are embracing open data centers, which require applications and services that are easily integrated with other solutions for simplified, lower-cost and sustainable Read article >

The post NVIDIA Accelerates Open Data Center Innovation appeared first on NVIDIA Blog.

Categories
Misc

Google is quietly replacing the backbone of its AI product strategy after its last big push for dominance got overshadowed by Meta

submitted by /u/wattnurt

Categories
Misc

Tensorflow-lite requires additional flatbuffers

I am trying to use Tensorflow-lite to run inference on a video frame by frame. This is my code so far:

#include <iostream>
#include "src/VideoProcessing.h"
#include <opencv2/opencv.hpp>
#include <opencv2/videoio.hpp>
#include "tensorflow/lite/model_builder.h"

typedef cv::Point3_<float> Pixel;

const uint WIDTH = 224;
const uint HEIGHT = 224;
const uint CHANNEL = 3;
const uint OUTDIM = 128;

void normalize(Pixel &pixel) {
    pixel.x = (pixel.x / 255.0 - 0.5) * 2.0;
    pixel.y = (pixel.y / 255.0 - 0.5) * 2.0;
    pixel.z = (pixel.z / 255.0 - 0.5) * 2.0;
}

int main() {
    int fps = VideoProcessing::getFPS("trainer.mp4");
    unsigned long size = VideoProcessing::getSize("trainer.mp4");
    cv::VideoCapture cap("trainer.mp4");

    // Load the model
    std::unique_ptr<tflite::FlatBufferModel> model =
        tflite::FlatBufferModel::BuildFromFile("pose_landmark_full.tflite"); // <- This line throws an error

    // Check if input video exists
    if (!cap.isOpened()) {
        std::cout << "Error opening video stream or file" << std::endl;
        return -1;
    }

    // Create a window to show input video
    cv::namedWindow("input video", cv::WINDOW_NORMAL);

    // Keep playing video until video is completed
    while (true) {
        cv::Mat frame;

        // Capture frame by frame
        bool success = cap.read(frame);

        // If frame is empty then break the loop
        if (!success) {
            std::cout << "Found the end of the video" << std::endl;
            break;
        }

        frame.convertTo(frame, CV_32FC3);
        cv::cvtColor(frame, frame, cv::COLOR_BGR2RGB); // convert to float; BGR -> RGB

        // normalize to -1 & 1
        auto* pixel = frame.ptr<Pixel>(0, 0);
        const Pixel* endPixel = pixel + frame.cols * frame.rows;
        for (; pixel != endPixel; pixel++) { normalize(*pixel); }

        // resize image as model input
        cv::resize(frame, frame, cv::Size(WIDTH, HEIGHT));

        // Show the current frame
        cv::imshow("input video", frame);

        if (cv::waitKey(10) == 27) {
            std::cout << "Esc key is pressed by user. Stopping the video" << std::endl;
            break;
        }

        std::unique_ptr<tflite::FlatBufferModel> model =
            tflite::FlatBufferModel::BuildFromFile("pose_landmark_full.tflite");
    }

    // Close window after input video is completed
    cap.release();

    // Destroy all the opened windows
    cv::destroyAllWindows();

    std::cout << "Video file FPS: " << fps << std::endl;
    std::cout << "Video file size: " << size << std::endl;

    return 0;
}

This is how I compile my project:

g++ -L ~/tensorflow_src/bazel-bin/tensorflow/libtensorflow.so -std=c++17 main.cpp src/VideoProcessing.cpp `pkg-config --libs --cflags flatbuffers opencv4` -o result 

My TensorFlow Lite headers are in `/usr/local/include/tensorflow/lite/`. Previously, the project required a flatbuffers installation, but after installing it I had linking issues that were not resolved even when compiling against the appropriate .so file. Now that I have removed it, TensorFlow Lite complains that it requires flatbuffers:

In file included from /usr/local/include/tensorflow/lite/model.h:21,
                 from /usr/local/include/tensorflow/lite/kernels/register.h:18,
                 from main.cpp:8:
/usr/local/include/tensorflow/lite/interpreter_builder.h:26:10: fatal error: flatbuffers/flatbuffers.h: No such file or directory
   26 | #include "flatbuffers/flatbuffers.h"  // from @flatbuffers
      |          ^~~~~~~~~~~~~~~~~~~~~~~~~~~
compilation terminated.

Despite linking the correct .so file, the problem persists. Where am I going wrong?
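
For what it's worth, once the flatbuffers headers are visible to the compiler and this builds, here is a minimal sketch of how the model load could be guarded at runtime (tflite::FlatBufferModel::BuildFromFile returns a null pointer when the file cannot be read), with the model loaded a single time before the frame loop instead of on every iteration. The helper name is illustrative, not part of the TFLite API:

#include <cstdlib>
#include <iostream>
#include <memory>
#include "tensorflow/lite/model_builder.h"

// Sketch only: load the .tflite file once, before the frame loop, and exit
// early if it could not be read. BuildFromFile returns a null unique_ptr on
// failure rather than throwing.
std::unique_ptr<tflite::FlatBufferModel> LoadModelOrExit(const char* path) {
    std::unique_ptr<tflite::FlatBufferModel> model =
        tflite::FlatBufferModel::BuildFromFile(path);
    if (model == nullptr) {
        std::cerr << "Failed to load " << path << std::endl;
        std::exit(EXIT_FAILURE);
    }
    return model;
}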

submitted by /u/janissary2016

Categories
Misc

Just Released: cuSPARSELt v0.3

The NVIDIA cuSPARSELt update expands this high-performance CUDA library with support for vectors of alpha and beta scalars, GeLU scaling, Split-K mode, and more.

Categories
Misc

Very high loss when continuing to train a model with a new dataset in the Object Detection API, is it normal?

Initially, I trained the network with around 400 images for 50k steps. Then I decided to continue training with a new dataset containing the same classes, but I increased the number of steps to 110k, added 2 more data augmentation options, enabled dropout, and increased the batch size from 32 to 64. Training restarted with these loss values:

Loss/localization loss = 1.148414
Loss/regularization loss = 3695957000.0
Loss/classification loss = 508.7694
Loss/total loss = 3695957500.0

Several hundred steps have passed and the losses seem to be decreasing.

Should I be worried about it starting with such high loss?

Thank you

submitted by /u/Emergency_Egg_9497

Categories
Misc

I am using the Object Detection API to retrain on my custom object, but I want to implement early stopping because my model either underfits or overfits. The currently implemented version is only for TF1. Can you tell me where I might be able to find an implementation for TF2.0?

submitted by /u/Appropriate-Tap3103

Categories
Misc

The King’s Swedish: AI Rewrites the Book in Scandinavia

If the King of Sweden wants help drafting his annual Christmas speech this year, he could ask the same AI model that’s available to his 10 million subjects. As a test, researchers prompted the model, called GPT-SW3, to draft one of the royal messages, and it did a pretty good job, according to Magnus Sahlgren, Read article >

The post The King’s Swedish: AI Rewrites the Book in Scandinavia appeared first on NVIDIA Blog.

Categories
Misc

Tensorflow-lite requires files that were not part of its installation

I am trying to use Tensorflow-lite to run inference on a video frame by frame. This is my code so far:

#include <iostream>
#include "src/VideoProcessing.h"
#include <cstdio>
#include <opencv2/opencv.hpp>
#include <opencv2/videoio.hpp>
#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model_builder.h"
#include "tensorflow/lite/interpreter_builder.h"

int main() {
    int fps = VideoProcessing::getFPS("trainer.mp4");
    unsigned long size = VideoProcessing::getSize("trainer.mp4");
    cv::VideoCapture cap("trainer.mp4");

    // Check if input video exists
    if (!cap.isOpened()) {
        std::cout << "Error opening video stream or file" << std::endl;
        return -1;
    }

    // Create a window to show input video
    cv::namedWindow("input video", cv::WINDOW_NORMAL);

    // Keep playing video until video is completed
    while (true) {
        cv::Mat frame;

        // Capture frame by frame
        cap >> frame;

        // If frame is empty then break the loop
        if (frame.empty()) { break; }

        // Show the current frame
        cv::imshow("input video", frame);
    }

    // Close window after input video is completed
    cap.release();

    // Destroy all the opened windows
    cv::destroyAllWindows();

    std::cout << "Video file FPS: " << fps << std::endl;
    std::cout << "Video file size: " << size << std::endl;

    // Load the model
    std::unique_ptr<tflite::FlatBufferModel> model =
        tflite::FlatBufferModel::BuildFromFile("pose_landmark_full.tflite");

    // Build the interpreter
    tflite::ops::builtin::BuiltinOpResolver resolver;
    std::unique_ptr<tflite::Interpreter> interpreter;
    tflite::InterpreterBuilder(*model, resolver)(&interpreter);
    if (interpreter == nullptr) {
        fprintf(stderr, "Failed to initiate the interpreter\n");
        exit(-1);
    }

    return 0;
}

This is how I compile my project:

g++ -std=c++17 main.cpp src/VideoProcessing.cpp `pkg-config --libs --cflags opencv4` -o result 

My TensorFlow Lite headers are in `/usr/local/include/tensorflow/lite/`. This is my output:

In file included from /usr/local/include/tensorflow/lite/model.h:21,
                 from /usr/local/include/tensorflow/lite/kernels/register.h:18,
                 from main.cpp:7:
/usr/local/include/tensorflow/lite/interpreter_builder.h:26:10: fatal error: flatbuffers/flatbuffers.h: No such file or directory
   26 | #include "flatbuffers/flatbuffers.h"  // from @flatbuffers
      |          ^~~~~~~~~~~~~~~~~~~~~~~~~~~
compilation terminated.
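
For context, here is a rough sketch of the inference steps that typically follow InterpreterBuilder once the project compiles: allocate the tensor buffers, copy a preprocessed float frame into the first input tensor, call Invoke(), and read back the first output tensor. The single float input/output layout and the function name are assumptions for illustration, not something pose_landmark_full.tflite is guaranteed to match:

#include <cstddef>
#include <cstdio>
#include <cstring>
#include <vector>
#include "tensorflow/lite/interpreter.h"

// Sketch only: run one inference pass, assuming the model exposes a single
// float input tensor and a single float output tensor. frame_data is expected
// to already hold width * height * channels normalized floats.
std::vector<float> RunOnce(tflite::Interpreter* interpreter,
                           const float* frame_data, size_t frame_floats) {
    // Tensor buffers must be allocated after the interpreter is built and
    // before the first Invoke().
    if (interpreter->AllocateTensors() != kTfLiteOk) {
        fprintf(stderr, "Failed to allocate tensors\n");
        return {};
    }

    // Copy the preprocessed frame into the first input tensor.
    std::memcpy(interpreter->typed_input_tensor<float>(0), frame_data,
                frame_floats * sizeof(float));

    // Run the model.
    if (interpreter->Invoke() != kTfLiteOk) {
        fprintf(stderr, "Failed to invoke the interpreter\n");
        return {};
    }

    // Read back the first output tensor.
    const TfLiteTensor* out = interpreter->output_tensor(0);
    const float* out_data = interpreter->typed_output_tensor<float>(0);
    return std::vector<float>(out_data, out_data + out->bytes / sizeof(float));
}

With the 224x224x3 preprocessing shown in the earlier post, frame_floats would be 224 * 224 * 3; in practice the input size should be read from the model's input tensor rather than assumed.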

submitted by /u/janissary2016

Categories
Misc

image classification problem

Why would a trained model that reaches around 95% accuracy perform worse after augmenting the original training data by duplicating and vertically flipping it, adding the flipped copies to the original data, and retraining?

submitted by /u/Nothemagain