James Bruton of XRobots was awarded the ‘Jetson Project of the Month’ for OpenDog V2. This project uses the NVIDIA Jetson Nano Developer Kit to recognize hand gestures and control a robot dog without a controller.
James, a robot inventor, thought it’d be nice if his OpenDog robot responded to hand gestures. To make this happen, he used transfer learning to retrain an existing SSD-Mobilenet object detection model using PyTorch. During the training process, he identified five hand gestures for the robot to move forward, backward, left, right and to jump. Using the camera capture tool, he captured these gestures and assigned them to the appropriate class.
He ensured that these images were captured at a specific distance from the camera to make sure the OpenDog doesn’t get distracted by hand gestures or similar patterns in the background.
James notes that the project can be improved by adding more training data which includes gestures in different indoor and outdoor backgrounds and from different users. Furthermore, he plans to convert OpenDog to a ROS robot similar to his Really Useful AI Robot. He created a series of videos to show his journey of building this project and the code is available on GitHub.
NVIDIA Clara Parabricks will be available on SHIROKANE, HGC’s fastest supercomputer for life sciences in Japan.
The Human Genome Center (HGC) at the University of Tokyo announced a new genomics platform that accelerates genomic analysis by 40X, utilizing NVIDIA Clara Parabricks Pipelines genomics software powered by NVIDIA DGX A100 systems. The platform operates on SHIROKANE, HGC’s fastest supercomputer for life sciences in Japan, and will be available to users on April 1, 2021. SHIROKANE helps researchers quickly process massive amounts of genomic data; with many nodes, it delivers over 400 TFLOPS of compute and more than 12 PB of storage. The ultimate goal of analyzing so much genomic data is to glean insights about germline and somatic variants and move closer to precision medicine.
Today, patients are prescribed medicines that work for the majority of people, but are often ineffective because they are not tailored to a specific patient’s genetic profile. Precision medicine aims to provide more specific therapeutics for patients, utilizing information from whole genome sequencing and other clinical data. As a national strategy, Japan’s Ministry of Health, Labour and Welfare formulated the Execution Plan for Whole Genome Analysis in December 2019, focusing on the areas of cancer and intractable diseases. The plan will take up to three years, aims to sequence 92,000 patients, and will ultimately help create a database to be utilized by research institutions, pharmaceutical companies, and university hospitals for drug development and disease prevention.
Whole genome sequencing (WGS) has been widely recognized for its comprehensive analysis, and it is increasingly useful in areas such as infectious diseases and cancer. WGS examines the complete DNA of an organism, while exome sequencing examines only the protein-coding regions, or genes, which make up about 1.5% of the human genome. WGS requires several times the sequencing depth of exome sequencing, but it can still be completed quickly with accelerated genomic analysis such as NVIDIA Clara Parabricks Pipelines.
Professor Kiyoya Imoto, Director of HGC, said, “The Human Genome Center at the Institute of Medical Science has been working on refining whole-genome data analysis and shortening analysis time in cancer genomic medicine. We evaluated Parabricks for implementation on all GPU servers on SHIROKANE; its high speed and functionality are indispensable for the future of large-scale whole-genome analysis. Implemented on GPU servers, its whole-genome data analysis capability is equivalent to that of hundreds of conventional CPU servers. We will realize a state-of-the-art, high-speed whole-genome data analysis environment that greatly accelerates genome research for SHIROKANE users.”
Clara Parabricks Pipelines accelerates genomic analysis by utilizing the parallel computing performance of GPUs. Many germline and somatic callers have been accelerated in Clara Parabricks Pipelines, including Google’s DeepVariant, which identifies genome variants in sequencing data using convolutional neural networks (CNNs). Previously, whole genome analysis would typically take 20 hours or more per sample in a general CPU environment; on SHIROKANE, powered by NVIDIA DGX A100 systems, the analysis takes less than 30 minutes. HGC put Parabricks Pipelines into production on 16 of the 80 NVIDIA V100 GPUs installed on SHIROKANE in February 2020, and it is open to users from life science companies.
The genomic analysis proved to be faster than expected, and with a growing number of users accessing SHIROKANE, there was a need to expand it further. Eight NVIDIA DGX A100 systems were added to SHIROKANE in 2021, for a total of 88 GPU servers coupled with Parabricks Pipelines to accelerate large-scale genomic workloads. In addition, SHIROKANE provides free access to researchers working on SARS-CoV-2, in an effort to expedite insights about the virus and those infected by it. A joint research group called “The Corona Suppression Task Force,” formed at HGC, consists of experts from seven universities and research institutions focused on novel coronavirus infections.
NVIDIA DGX A100 is the universal system for all AI workloads, offering unprecedented compute density, performance, and flexibility in the world’s first 5 petaFLOPS AI system. It features NVIDIA A100 Tensor Core GPUs, enabling customers to consolidate training, inference, and analytics into a unified, easy-to-deploy infrastructure.
“NVIDIA has been investing for several years in anticipation of the coming era of large-scale whole-genome analysis,” commented Masataka Osaki, NVIDIA Japan Country Manager and VP Corporate Sales. “The greatest achievement, Parabricks, along with the latest DGX A100 system, is greatly helping Japan’s premier cancer genome research center. NVIDIA’s platform will be the foundation that supports whole-genome research in Japan, and it is expected that the elucidation of genes associated with cancer and intractable diseases will progress dramatically.”
Ideal for hands-on teaching, the Jetson Nano 2GB Developer Kit is the perfect tool for introducing AI and robotics to all kinds of learners, from high school students to post-graduates. We provide all of the resources that educators need to get started, including free tutorials, an active developer community and ready-to-build open-source projects.
New to AI? Teachers possessing a basic familiarity with Python and Linux can get up to speed quickly by taking advantage of our online Jetson AI Courses and Certifications. We’re here to help you get fully prepared to teach AI to your students.
This program is available to educators, including professors, advisors, club organizers, and other relevant faculty members. In order to be considered for the program, applicants must share a detailed proposal including the purpose of their request and the expected impact of their planned project or curriculum.
Jetson Nano 2GB Developer Kit Grant recipients are currently using Jetson to build everything from introductory robotics courses and basic autonomous vehicles to lifeguard drones and applications for monitoring aquatic diseases.
We’re on a mission to bring AI to classrooms everywhere and there’s no better way to start.
Since NVIDIA announced construction of the U.K.’s most powerful AI supercomputer — Cambridge-1 — Marc Hamilton, vice president of solutions architecture and engineering, has been (remotely) overseeing its building across the pond. The system, which will be available for U.K. healthcare researchers to work on pressing problems, is being built on NVIDIA DGX SuperPOD architecture.
I’ve been flirting with the idea of joining the AI developer community for some time now. I’m a .NET developer with 10 years of experience, and the main thing I want to use AI for is video detection, tracking, stats, etc.
After some digging, I found that TensorFlow might be exactly what I’m looking for, but I wanted some advice on which training I should do first.
Python? TensorFlow? Maybe start with other theoretical concepts first?
I have a dataset of posts from a website (downloaded into folders data/category1 and data/category2); all are PNG or JPG files, each with unique dimensions. Is there a way to train the neural network without resizing the images? I already know I would have to train them in separate batches, but I cannot for the life of me figure out how to get them all into individual batches to be trained. Thank you in advance for your help 😀
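One common workaround, if resizing is off the table, is to make the network fully convolutional so it accepts any input size, then feed each image as its own batch of one (all images within a batch must share a size). A minimal sketch, with illustrative layer sizes and random tensors standing in for the real files:

```python
# Sketch: training on variable-sized images without resizing.
# A fully convolutional network ending in GlobalAveragePooling2D accepts
# any spatial dimensions; each image is its own batch of one.
import tensorflow as tf

NUM_CLASSES = 2  # e.g. category1 vs category2

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(None, None, 3)),   # variable height/width
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),       # collapses spatial dims
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Two dummy images of different sizes stand in for the real dataset;
# each becomes its own batch of one, so no resizing is needed.
for image, label in [(tf.random.uniform((1, 120, 80, 3)), [0]),
                     (tf.random.uniform((1, 300, 200, 3)), [1])]:
    model.train_on_batch(image, tf.constant(label))

# Prediction also works at any input size.
probs = model.predict(tf.random.uniform((1, 64, 64, 3)), verbose=0)
```

Batches of one are slower than grouped batches, so in practice people often bucket images by size; but this is the simplest way to avoid resizing entirely.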
```
def get_model():
    vgg = tf.keras.applications.vgg19.VGG19(include_top=False, weights='imagenet')
    vgg.trainable = False
    style_outputs = [vgg.get_layer(name) for name in style_layers]
    content_outputs = [vgg.get_layer(name) for name in content_layers]
    model_outputs = style_outputs + content_outputs
    return tf.keras.Model(vgg.input, model_outputs)

Traceback (most recent call last):
  File "NST_V2.py", line 155, in <module>
    main()
  File "NST_V2.py", line 152, in main
    best, best_loss = run_style_transfer(args['content'], args['style'])
  File "NST_V2.py", line 101, in run_style_transfer
    model = get_model()
  File "NST_V2.py", line 48, in get_model
    return tf.keras.Model(vgg.input, model_outputs)
  File "C:\Users\fredd\AppData\Local\Programs\Python\Python38\lib\site-packages\tensorflow\python\training\tracking\base.py", line 517, in _method_wrapper
    result = method(self, *args, **kwargs)
  File "C:\Users\fredd\AppData\Local\Programs\Python\Python38\lib\site-packages\tensorflow\python\keras\engine\functional.py", line 120, in __init__
    self._init_graph_network(inputs, outputs)
  File "C:\Users\fredd\AppData\Local\Programs\Python\Python38\lib\site-packages\tensorflow\python\training\tracking\base.py", line 517, in _method_wrapper
    result = method(self, *args, **kwargs)
  File "C:\Users\fredd\AppData\Local\Programs\Python\Python38\lib\site-packages\tensorflow\python\keras\engine\functional.py", line 157, in _init_graph_network
    self._validate_graph_inputs_and_outputs()
  File "C:\Users\fredd\AppData\Local\Programs\Python\Python38\lib\site-packages\tensorflow\python\keras\engine\functional.py", line 727, in _validate_graph_inputs_and_outputs
    raise ValueError('Output tensors of a ' + cls_name + ' model must be '
ValueError: Output tensors of a Functional model must be the output of a TensorFlow `Layer` (thus holding past layer metadata). Found: <tensorflow.python.keras.layers.convolutional.Conv2D object at 0x000002A05C2F9D60>
```
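The traceback points at the usual culprit here: the lists contain Layer objects, but tf.keras.Model expects each layer's output tensor. A hedged sketch of the likely fix follows; the layer names are illustrative, and weights=None replaces 'imagenet' only to keep the sketch download-free.

```python
# Likely fix: use each layer's .output tensor, not the Layer object itself.
import tensorflow as tf

style_layers = ['block1_conv1', 'block2_conv1']   # illustrative names
content_layers = ['block5_conv2']

def get_model():
    # weights=None avoids a download here; the original used 'imagenet'.
    vgg = tf.keras.applications.vgg19.VGG19(include_top=False, weights=None)
    vgg.trainable = False
    # .output is the symbolic tensor the Functional API expects.
    style_outputs = [vgg.get_layer(name).output for name in style_layers]
    content_outputs = [vgg.get_layer(name).output for name in content_layers]
    return tf.keras.Model(vgg.input, style_outputs + content_outputs)

model = get_model()
```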
Hello. I have a question regarding TensorFlow. I was working on a Deep Q Network problem, and the code is as follows:
```
g = tf.Graph()
with g.as_default():
    w_1 = tf.Variable(tf.truncated_normal([n_input, n_hidden_1], stddev=0.1))
    w_1_p = tf.Variable(tf.truncated_normal([n_input, n_hidden_1], stddev=0.1))
    ## There are other parameters too but they are excluded for simplicity

def update_target_q_network(sess):
    """Update target q network once in a while"""
    sess.run(w_1_p.assign(sess.run(w_1)))

for i_episode in range(n_episode):
    ........  # Code removed for simplicity
    if i_episode % 10 == 0:
        update_target_q_network(centralsess)
    ........
```
Basically, after every fixed number of episodes (10 in this case), the parameter w_1 is copied to w_1_p.
The issue is that the time it takes to run update_target_q_network keeps increasing as the episodes progress. For example, it takes under a second around the 100th episode, but by the 7500th episode it takes 220 seconds. Can anyone kindly tell me how the running time of the code can be improved? I have read that the cause might be that the graph keeps growing, but I am not sure about that, or how to change the code to reduce the time. Thank you for your help.
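That growth pattern usually means new ops are being added to the graph on every call: w_1_p.assign(sess.run(w_1)) builds a fresh assign op each time it runs, so the graph, and each subsequent sess.run, keeps getting slower. The standard fix is to build the assign op once, outside the loop, and only run it inside. A minimal sketch (tf.compat.v1 is used so it runs under TensorFlow 2; sizes are illustrative):

```python
import tensorflow as tf

tf1 = tf.compat.v1
tf1.disable_eager_execution()

n_input, n_hidden_1 = 4, 8  # illustrative sizes
g = tf1.Graph()
with g.as_default():
    w_1 = tf1.Variable(tf1.truncated_normal([n_input, n_hidden_1], stddev=0.1))
    w_1_p = tf1.Variable(tf1.truncated_normal([n_input, n_hidden_1], stddev=0.1))
    # Built ONCE: reusing this op adds nothing to the graph at run time.
    update_target_op = w_1_p.assign(w_1)
    equal_op = tf1.reduce_all(tf1.equal(w_1, w_1_p))
    init = tf1.global_variables_initializer()

n_ops_before = len(g.get_operations())
with tf1.Session(graph=g) as sess:
    sess.run(init)
    for i_episode in range(100):
        if i_episode % 10 == 0:
            sess.run(update_target_op)   # no new ops created per episode
    copied = sess.run(equal_op)          # target now matches the source
```

Because update_target_op is a fixed node in the graph, the per-episode cost stays constant no matter how many episodes run.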