Categories
Misc

Jetson Project of the Month: OpenDog, a Gesture Controlled Robot

The project uses the NVIDIA Jetson Nano Developer Kit to recognize hand gestures and control a robot dog without a controller.

James Bruton of XRobots was awarded the ‘Jetson Project of the Month’ for OpenDog V2. This project uses the NVIDIA Jetson Nano Developer Kit to recognize hand gestures and control a robot dog without a controller. 

James, a robot inventor, thought it’d be nice if his OpenDog robot responded to hand gestures. To make this happen, he used transfer learning to retrain an existing SSD-Mobilenet object detection model using PyTorch. During training, he defined five hand gestures to make the robot move forward, backward, left, right, and jump. Using the camera capture tool, he captured these gestures and assigned each to the appropriate class.

Capturing and labeling hand gesture data

He ensured that these images were captured at a specific distance from the camera, so that OpenDog doesn’t get distracted by hand gestures or similar patterns in the background.
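As a hypothetical illustration of the final step, a detected gesture class has to be mapped to a robot command. The class names, confidence threshold, and command strings below are invented for illustration and are not from the OpenDog code:

```python
# Hypothetical mapping from a detected gesture class to a robot command.
# All names here are illustrative, not from the actual OpenDog codebase.
COMMANDS = {
    "gesture_forward":  "walk_forward",
    "gesture_backward": "walk_backward",
    "gesture_left":     "turn_left",
    "gesture_right":    "turn_right",
    "gesture_jump":     "jump",
}

def dispatch(detected_class, confidence, threshold=0.8):
    """Return a command only for confident detections of known gestures."""
    if confidence < threshold:
        return None                      # ignore uncertain detections
    return COMMANDS.get(detected_class)  # None for unknown classes

print(dispatch("gesture_jump", 0.95))  # → jump
print(dispatch("gesture_jump", 0.50))  # → None
```

Filtering on a confidence threshold matters here because a live detector fires on every frame, and acting on low-confidence detections would make the robot twitch.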

James notes that the project could be improved by adding more training data, including gestures captured against different indoor and outdoor backgrounds and from different users. He also plans to convert OpenDog to a ROS robot, similar to his Really Useful AI Robot. He created a series of videos showing his journey of building this project, and the code is available on GitHub.

Categories
Misc

The Human Genome Center at University of Tokyo Adopts NVIDIA Clara Parabricks for Rapid Genomic Analysis

NVIDIA Clara Parabricks will be available on SHIROKANE, HGC’s fastest supercomputer for life sciences in Japan.

The Human Genome Center (HGC) at the University of Tokyo announced a new genomics platform that accelerates genomic analysis by 40X, utilizing NVIDIA Clara Parabricks Pipelines genomics software powered by NVIDIA DGX A100 systems. The platform operates on SHIROKANE, HGC’s fastest supercomputer for life sciences in Japan, and will be available to users on April 1, 2021. SHIROKANE helps researchers quickly process massive amounts of genomic data: it is a large multi-node system with over 400 TFLOPS of compute and more than 12 PB of storage. The ultimate goal of analyzing so much genomic data is to glean insights about germline and somatic variants and move closer to precision medicine.

Today, patients are prescribed medicines that work for the majority of people but are often ineffective because they are not tailored to a specific patient’s genetic profile. Precision medicine aims to provide more specific therapeutics for patients, utilizing information from whole genome sequencing and other clinical data. As a national strategy, Japan’s Ministry of Health, Labour and Welfare formulated the Execution Plan for Whole Genome Analysis in December 2019, focusing on cancer and intractable diseases. The plan, which will take up to three years, aims to sequence 92,000 patients and will ultimately help create a database to be used by research institutions, pharmaceutical companies, and university hospitals for drug development and disease prevention.

Whole genome sequencing (WGS) has been widely recognized for its comprehensive analysis and its increasing usefulness in areas such as infectious diseases and cancer. WGS examines the complete DNA of an organism, while exome sequencing examines only the protein-coding regions, which make up about 1.5% of the human genome. WGS therefore requires several times the sequencing depth of exome sequencing, but it can be done quickly with accelerated genomic analysis such as NVIDIA Clara Parabricks Pipelines.

Professor Seiya Imoto, Director of HGC, said, “The Human Genome Center at the Institute of Medical Science has been working on refining whole-genome data analysis and shortening analysis times in cancer genomic medicine. We evaluated Parabricks for implementation on all GPU servers on SHIROKANE. Its high speed and functions are indispensable for the future of large-scale whole-genome analysis. Implemented on GPU servers, its whole-genome data analysis capability is equivalent to hundreds of conventional CPU servers. We will realize a state-of-the-art high-speed whole-genome data analysis environment that greatly accelerates genome research for SHIROKANE users.”

Clara Parabricks Pipelines accelerates genomic analysis by utilizing the parallel computing performance of GPUs. Many germline and somatic callers have been accelerated in Clara Parabricks Pipelines, including Google’s DeepVariant, which identifies genome variants in sequencing data using convolutional neural networks (CNNs). Previously, whole genome analysis typically took 20 hours or more per sample in a general CPU environment; on SHIROKANE, powered by NVIDIA DGX A100 systems, the analysis takes less than 30 minutes. HGC put Parabricks Pipelines in production on 16 of the 80 NVIDIA V100 GPUs installed on SHIROKANE in February 2020, and it is open to users from life science companies.
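The quoted 40X figure is consistent with those per-sample times, as a quick back-of-the-envelope check shows:

```python
# Back-of-the-envelope check of the reported acceleration:
# ~20 hours per whole genome on CPUs vs. under 30 minutes on SHIROKANE.
cpu_hours = 20
gpu_minutes = 30

speedup = (cpu_hours * 60) / gpu_minutes
print(f"Implied speedup: {speedup:.0f}X")  # → Implied speedup: 40X
```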

The genomic analysis proved to be faster than expected, and with the increasing number of users accessing SHIROKANE, there was a need to further strengthen the system. Eight NVIDIA DGX A100 systems were added to SHIROKANE in 2021, bringing it to a total of 88 GPU servers coupled with Parabricks Pipelines to accelerate large-scale genomic workloads. In addition, SHIROKANE provides free access to researchers working on SARS-CoV-2, in an effort to expedite insights about the virus and those infected by it. A joint research group formed at HGC, “The Corona Suppression Task Force,” consists of experts from seven universities and research institutions focusing on novel coronavirus infections.

SHIROKANE, at the Human Genome Center (HGC), is the fastest supercomputer in the life-science research sector in Japan.

NVIDIA DGX A100 is the universal system for all AI workloads, offering unprecedented compute density, performance, and flexibility in the world’s first 5 petaFLOPS AI system. It features NVIDIA A100 Tensor Core GPUs, enabling customers to consolidate training, inference, and analytics into a unified, easy-to-deploy infrastructure.

“NVIDIA has been investing for several years in anticipation of the coming era of large-scale whole-genome analysis,” commented Masataka Osaki, NVIDIA Japan Country Manager and VP Corporate Sales. “The greatest achievement, Parabricks, along with the latest DGX A100 system, is greatly helping Japan’s premier cancer genome research center. NVIDIA’s platform will be the foundation that supports whole-genome research in Japan, and it is expected that the elucidation of genes associated with cancer and intractable diseases will progress dramatically.”

Seiya Imoto, Director of the Institute of Medical Science at the University of Tokyo, is presenting a talk titled “Realization of Genomic Medicine Based on Whole Genome Information” at GTC21, April 12-16, which is free this year. Register here.

Categories
Misc

New Jetson Nano 2GB Developer Kit Grant Program Launches

NVIDIA recently launched the Jetson Nano 2GB Developer Kit Grant Program which offers limited quantities of Jetson Developer Kits to professors, educators and trainers across the globe.

Ideal for hands-on teaching, the Jetson Nano 2GB Developer Kit is a great tool for introducing AI and robotics to all kinds of learners, from high school students to post-graduates. We provide all of the resources educators need to get started, including free tutorials, an active developer community, and ready-to-build open-source projects.

New to AI? Teachers possessing a basic familiarity with Python and Linux can get up to speed quickly by taking advantage of our online Jetson AI Courses and Certifications. We’re here to help you get fully prepared to teach AI to your students.

This program is available to educators, including professors, advisors, club organizers, and other relevant faculty members. In order to be considered for the program, applicants must share a detailed proposal including the purpose of their request and the expected impact of their planned project or curriculum. 

The NVIDIA Jetson Nano 2GB Developer Kit is ideal for learning, building, and teaching AI and robotics.

Jetson Nano 2GB Developer Kit Grant recipients are currently using Jetson to build everything from introductory robotics courses and basic autonomous vehicles to lifeguard drones and applications for monitoring aquatic diseases.

We’re on a mission to bring AI to classrooms everywhere and there’s no better way to start.

Apply today >

Categories
Misc

NVIDIA’s Marc Hamilton on Building Cambridge-1 Supercomputer During Pandemic

Since NVIDIA announced construction of the U.K.’s most powerful AI supercomputer — Cambridge-1 — Marc Hamilton, vice president of solutions architecture and engineering, has been (remotely) overseeing its building across the pond. The system, which will be available for U.K. healthcare researchers to work on pressing problems, is being built on the NVIDIA DGX SuperPOD architecture. Read article >

The post NVIDIA’s Marc Hamilton on Building Cambridge-1 Supercomputer During Pandemic appeared first on The Official NVIDIA Blog.

Categories
Misc

Tensorflow Object Detection API pycocotools Error

Hi guys,

I need help setting up pycocotools for my training. I have installed it through git, pip, and even conda, and I’ve been stuck on it for the past three days. When I run my main Python file, I keep getting this error:

I am using

Windows 10 64-bit, Python 3.7 (Anaconda), TensorFlow 2.4.1, CUDA 11.0.2, and cuDNN 8.0.2.

python model_main_tf2.py --model_dir=models/ssd_mobilenet_v2_fpnlite --pipeline_config_path=models/ssd_mobilenet_v2_fpnlite/pipeline.config

Any help on this??

https://preview.redd.it/ri0w25ojpkk61.png?width=1441&format=png&auto=webp&s=1cb0b8ddacd815a5d823de92f738dd267c30fd7a

submitted by /u/jason_rims

Categories
Misc

TF Beginner, i made a little TFJS Web App

submitted by /u/DonRedditor
Categories
Misc

Starting with AI

Hi all,

It’s been some time since I started flirting with the idea of joining the AI developer community. I’m a .NET developer with 10 years of experience, and the main thing I want to use AI for is video detection, tracking, stats, etc.

After some digging, I’ve found that TensorFlow might be exactly what I’m looking for, but I wanted to get some advice on which training I should do first.

Python? TensorFlow? Maybe start with other theoretical concepts first?

Thanks!

submitted by /u/argenstark

Categories
Misc

Image classification FCN training

I have a dataset of posts from a website (which I have downloaded as data/category1 and data/category2); all are PNG or JPG files, each with unique dimensions. Is there a way to train the neural network without resizing the images? I already know I would have to train them in separate batches, but I cannot for the life of me figure out how to get them all into individual batches for training. Thank you in advance for your help 😀
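One common workaround is to group the images by their exact dimensions so every batch is internally uniform (batch size 1 also works, but wastes throughput). A minimal sketch of that grouping using only the standard library; reading the real sizes, e.g. with PIL's `Image.open(path).size`, is left out here:

```python
from collections import defaultdict

def bucket_by_size(images):
    """Group (path, (width, height)) pairs into same-size buckets.

    Each bucket can then be fed to the network as one batch, since
    every image in it shares the same dimensions.
    """
    buckets = defaultdict(list)
    for path, size in images:
        buckets[size].append(path)
    return dict(buckets)

imgs = [("a.png", (640, 480)), ("b.jpg", (640, 480)), ("c.png", (800, 600))]
print(bucket_by_size(imgs))
# → {(640, 480): ['a.png', 'b.jpg'], (800, 600): ['c.png']}
```

Buckets with only one image simply become batches of size 1, so nothing is discarded.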

submitted by /u/Yo1up

Categories
Misc

Error when returning tf.keras.Model

I want to create a Python program for neural style transfer based on this tutorial: https://medium.com/tensorflow/neural-style-transfer-creating-art-with-deep-learning-using-tf-keras-and-eager-execution-7d541ac31398. It uses TensorFlow 1.x, but I use TensorFlow 2.x (GPU), so I had to change a few things. Both my version and the original version of the program raise a ValueError when I try to return a VGG19 model. Can someone explain this error or tell me how to fix it?

```
def get_model():
    vgg = tf.keras.applications.vgg19.VGG19(include_top=False, weights='imagenet')
    vgg.trainable = False
    style_outputs = [vgg.get_layer(name) for name in style_layers]
    content_outputs = [vgg.get_layer(name) for name in content_layers]
    model_outputs = style_outputs + content_outputs
    return tf.keras.Model(vgg.input, model_outputs)
```

```
Traceback (most recent call last):
  File "NST_V2.py", line 155, in <module>
    main()
  File "NST_V2.py", line 152, in main
    best, best_loss = run_style_transfer(args['content'], args['style'])
  File "NST_V2.py", line 101, in run_style_transfer
    model = get_model()
  File "NST_V2.py", line 48, in get_model
    return tf.keras.Model(vgg.input, model_outputs)
  File "C:\Users\fredd\AppData\Local\Programs\Python\Python38\lib\site-packages\tensorflow\python\training\tracking\base.py", line 517, in _method_wrapper
    result = method(self, *args, **kwargs)
  File "C:\Users\fredd\AppData\Local\Programs\Python\Python38\lib\site-packages\tensorflow\python\keras\engine\functional.py", line 120, in __init__
    self._init_graph_network(inputs, outputs)
  File "C:\Users\fredd\AppData\Local\Programs\Python\Python38\lib\site-packages\tensorflow\python\training\tracking\base.py", line 517, in _method_wrapper
    result = method(self, *args, **kwargs)
  File "C:\Users\fredd\AppData\Local\Programs\Python\Python38\lib\site-packages\tensorflow\python\keras\engine\functional.py", line 157, in _init_graph_network
    self._validate_graph_inputs_and_outputs()
  File "C:\Users\fredd\AppData\Local\Programs\Python\Python38\lib\site-packages\tensorflow\python\keras\engine\functional.py", line 727, in _validate_graph_inputs_and_outputs
    raise ValueError('Output tensors of a ' + cls_name + ' model must be '
ValueError: Output tensors of a Functional model must be the output of a TensorFlow `Layer` (thus holding past layer metadata). Found: <tensorflow.python.keras.layers.convolutional.Conv2D object at 0x000002A05C2F9D60>
```
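The ValueError itself points at the likely cause: `vgg.get_layer(name)` returns a Layer object, while `tf.keras.Model` expects the layer's output tensor. A sketch of the corrected function (note the added `.output`; the layer names are just examples, and `weights=None` is used here only to avoid downloading the ImageNet weights, where the tutorial uses `weights='imagenet'`):

```python
import tensorflow as tf

style_layers = ["block1_conv1"]    # example layer names
content_layers = ["block5_conv2"]

def get_model():
    vgg = tf.keras.applications.vgg19.VGG19(include_top=False, weights=None)
    vgg.trainable = False
    # .output yields the layer's output tensor, which is what
    # tf.keras.Model expects; passing the Layer object itself is
    # what triggers the "Output tensors of a Functional model" error.
    style_outputs = [vgg.get_layer(name).output for name in style_layers]
    content_outputs = [vgg.get_layer(name).output for name in content_layers]
    return tf.keras.Model(vgg.input, style_outputs + content_outputs)
```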

submitted by /u/Jirne_VR

Categories
Misc

Tensorflow DQN execution time keeps on increasing

Hello, I have a question regarding TensorFlow. I was working on a Deep Q Network problem; the code is as follows:

```
g = tf.Graph()
with g.as_default():
    w_1 = tf.Variable(tf.truncated_normal([n_input, n_hidden_1], stddev=0.1))
    w_1_p = tf.Variable(tf.truncated_normal([n_input, n_hidden_1], stddev=0.1))
    # There are other parameters too but they are excluded for simplicity

def update_target_q_network(sess):
    """Update target q network once in a while"""
    sess.run(w_1_p.assign(sess.run(w_1)))

for i_episode in range(n_episode):
    ...  # Code removed for simplicity
    if i_episode % 10 == 0:
        update_target_q_network(centralsess)
    ...
```

Basically, after every specific number of episodes (10 in this case), the parameter w_1 is copied to w_1_p.

The issue is that the time it takes to run update_target_q_network keeps increasing as the episodes progress. For example, it takes 0-1 seconds at the 100th episode, but increases to 220 seconds by the 7500th episode. Can anyone kindly tell me how the running time can be improved? I have read that the reason may be that the graph keeps getting larger, but I am not sure about that or how to change the code to reduce the time. Thank you for your help.
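The growth the poster suspects is the usual cause: `w_1_p.assign(sess.run(w_1))` inside `update_target_q_network` adds a new assign op to the graph on every call, so each update is slower than the last. The standard fix is to build the copy op once and reuse it. A minimal sketch in TF1-style graph mode (variable names from the post; the sizes and episode loop are placeholders):

```python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

n_input, n_hidden_1 = 4, 16   # placeholder sizes for illustration

g = tf.Graph()
with g.as_default():
    w_1 = tf.Variable(tf.truncated_normal([n_input, n_hidden_1], stddev=0.1))
    w_1_p = tf.Variable(tf.truncated_normal([n_input, n_hidden_1], stddev=0.1))

    # Build the copy op ONCE, outside the training loop. Calling
    # w_1_p.assign(...) inside the loop adds a new op to the graph
    # every time, which is why each update gets slower.
    update_target_op = w_1_p.assign(w_1)

    init = tf.global_variables_initializer()

with tf.Session(graph=g) as sess:
    sess.run(init)
    for i_episode in range(100):
        if i_episode % 10 == 0:
            sess.run(update_target_op)  # reuses the same op; no graph growth
```

With the op created once, the number of operations in the graph stays constant no matter how many episodes run.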

submitted by /u/FarzanUllah