Categories
Misc

NEW on NGC: Simplify and Unify Biomedical Analytics with Vyasa

Vyasa, a leading provider of tools in the field of biomedical analytics, developed a suite of products that efficiently integrate with existing compute infrastructure via an extensive RESTful API architecture.

Data is the backbone of building state-of-the-art, accurate AI models, and easy access to high-quality datasets can significantly reduce overall development time. However, the required data may be siloed, may come from different sources (for example, sensors, images, and documents), and may be in structured as well as unstructured formats. Manually moving and transforming data from different sources and formats to derive meaningful insights can be tedious and time-consuming.

Vyasa, a leading provider of tools in the field of biomedical analytics, developed a suite of products that efficiently integrate with existing compute infrastructure via an extensive RESTful API architecture, allowing users to derive insights from analytical modules, including question answering, named entity recognition, PDF table extraction, and image classification, irrespective of where the data resides. Vyasa technologies can integrate external data sources (for example, PubMed, patents, and clinical trials) with a client’s internal data sources, including documents, images, and database content.

Figure 1. Vyasa Layar’s Unified Interface

Vyasa’s solutions are used by data scientists, researchers, and IT managers across life sciences and healthcare, from pharmaceutical and biotechnology companies to consulting firms and healthcare organizations.

Available through the NGC catalog, NVIDIA’s GPU-optimized hub of HPC and AI software, Vyasa’s product suite includes the following:

Layar – A secure, highly scalable data fabric solution that can be added to existing enterprise data architectures to augment analytics capabilities, or operate as a standalone data fabric for text, image, and data stream integration and analytics.

Axon – A knowledge graph application that dynamically generates knowledge graphs directly from the data and document sources integrated in a Layar data fabric.

Retina – An image analytics application that supports a wide range of image-related tasks, including image management, annotation, and deep learning analytics.

Synapse – Provides “Smart Table Technology” that directly connects a user’s spreadsheet content to the analytical capabilities of Layar Data Fabrics.

Trace – A geospatial application that leverages structured data to plot businesses, assets, and intellectual property in relation to trends and document content derived from Layar data fabrics.

Get started with Vyasa by pulling the Helm chart from the NGC catalog.

Categories
Misc

Japan’s University of Aizu Uses NVIDIA Jetson to Nurture AI and Robotics Talent

University of Aizu, a premier computer science and engineering institution in Japan, conducted a two-week intensive learning program based on the NVIDIA Jetson Nano edge AI platform and Jetson AI Certification.

During the university’s annual Silicon Valley Learning Program, teams of six students worked on projects in robotics and intelligent IoT. Students were awarded Jetson AI Specialist certificates for their work during the program, which included several unique projects listed below.

The NVIDIA Jetson AI Certification program is designed to facilitate robotics and AI learning. Two certification tracks are offered: Jetson AI Specialist for anyone, and Jetson AI Ambassador for educators and instructors.

New AI Avenues to the Future

University of Aizu is one of the first schools in Japan to focus on Jetson AI Certification. The university received free developer kits through the NVIDIA Jetson Nano 2GB Developer Kit Grant Program. With the performance and capability to run a diverse set of AI models and frameworks, the Jetson Nano 2GB is designed for hands-on teaching and learning while providing a scalable platform for creating AI applications as they evolve in the future.

Yuji Mitsunaga, Senior Associate Professor, Promotion Office, Super Global University, University of Aizu, who is in charge of this training, said:

“We believe Jetson AI Certification is the best way for students to experience edge IoT and understand the importance of AI in the future. Since 2019, we’ve been using the Jetson platform to teach AI as part of our pre-training program for undergraduate students. Using the new NVIDIA program, not only did the students efficiently learn the fundamentals of AI, but just a few days later they used their hands-on knowledge to develop a variety of systems that utilized Jetson’s capabilities.”

“For us, the aim of this training is to build a high level of confidence for students learning AI and IoT manufacturing, and NVIDIA’s Jetson AI Certification has become an important milestone that lays the foundation for our students,” Mitsunaga added. “I am confident that this experience will have a positive impact on the active development and entrepreneurship activities of the students in the future.”

NVIDIA’s Japan and US teams shared insights with the students and provided additional guidance on how to get started with Jetson. The Jetson Nano 2GB proved invaluable as the students collaborated on ideas for their AI projects over three days.

The students created the following projects:

Portable Coronavirus Diagnostic Device by Heihao Weng
A simple coronavirus diagnostic device for healthcare professionals that analyzes chest X-ray images

Application to Prevent from Pet’s Mischief by Hiroshi Tasaki
Prevents pet mischief for a more fulfilling life with a pet

Automatic Car Windshield Wiper by Keigo Fukasa
Learns when water lands on the glass and automatically wipes away the droplets

Reaction When Learning Plant Images in Perspective by Banri Yasui
Identifies plant type from the pattern of a leaf

AFK Minecraft by Tarun Sreepada
Converts body posture into keyboard strokes and plays games in VR-like situations

Hermit Purple by Eri Miyaoka
Displays character effects inspired by JoJo’s Bizarre Adventure when the user strikes signature poses

The students who published their projects on GitHub and successfully completed their applications for certification were certified as Jetson AI Specialists. 

Chitoku Yato, Jetson Product Marketing Manager at NVIDIA, said:

“Each student’s published project uses AI at an advanced level, reaffirming that NVIDIA’s training and sample projects are being fully utilized in education. Following the example of University of Aizu, I hope more students build projects on the Jetson platform and get certified as Jetson AI Specialists. And I’m confident these initiatives will inspire youth to see themselves as builders of our AI and robotics future.”

Learn more about curriculum, grants and other offerings on the Jetson for AI Education page.

Categories
Misc

Trash to Cash: Recyclers Tap Startup with World’s Largest Recycling Network to Freshen Up Business Prospects

Matanya Horowitz smelled a problem in 2014. Fresh out of Caltech with a Ph.D., he saw that recycling centers lacked the robotics and computer vision needed to pick through heaps of garbage-contaminated recyclables. Horowitz founded AMP Robotics that year to harness AI run on NVIDIA GPUs to turn sorting out the trash into cash. It’s a ripe… Read article >

The post Trash to Cash: Recyclers Tap Startup with World’s Largest Recycling Network to Freshen Up Business Prospects appeared first on The Official NVIDIA Blog.

Categories
Misc

Any idea on how to fix this error I’m receiving?

I downloaded software that takes images of people and creates 3D models. I’m having an issue where the encodings fail, and I’m left with this message from a tf.ConfigProto() call: AttributeError: module ‘tensorflow’ has no attribute ‘ConfigProto’

I have ZERO experience working with code/Python, so I’m utterly confused. I can post the full text if necessary. I’ve been trying to fix this for hours.
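For what it’s worth, tf.ConfigProto was removed from the top-level API in TensorFlow 2.x, so software written against TensorFlow 1.x typically needs the compatibility module instead. A minimal sketch of the usual workaround, assuming the failing script can be edited (the config and session lines below are illustrative, not the software’s actual code):

    import tensorflow as tf

    # TensorFlow 2.x removed tf.ConfigProto; the TF1-style API now lives under tf.compat.v1.
    config = tf.compat.v1.ConfigProto()
    config.gpu_options.allow_growth = True
    sess = tf.compat.v1.Session(config=config)

    # Alternatively, run the whole script in TF1-compatibility mode:
    # import tensorflow.compat.v1 as tf
    # tf.disable_v2_behavior()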

submitted by /u/r_hove
[visit reddit] [comments]

Categories
Misc

Loading fashion mnist test data only

I’m working in a memory-constrained environment and I’m trying to optimize memory usage as much as I can. Can I load the train data alone or the test data alone in a Jupyter notebook?
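For reference, one option: the Keras loader always returns both splits, but the unneeded one can be dropped immediately, and tensorflow_datasets can stream a single split. A minimal sketch:

    import tensorflow as tf

    # load_data() still materializes both splits briefly, but the train arrays are
    # discarded right away and garbage-collected, keeping only the test set in memory.
    (_, _), (x_test, y_test) = tf.keras.datasets.fashion_mnist.load_data()

    # Alternatively, tensorflow_datasets loads only the split you ask for:
    # import tensorflow_datasets as tfds
    # test_ds = tfds.load("fashion_mnist", split="test", as_supervised=True)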

submitted by /u/aaqi2
[visit reddit] [comments]

Categories
Misc

Why does my custom cosine similarity loss lead to NaNs when it is equivalent and largely identical to Keras’ implementation?

I need to implement CosineSimilarity myself because I need to work on the individual losses before calculating the batch-wide mean.

I do it like this:

    a_n = tf.math.l2_normalize(a, axis=-1)
    b_n = tf.math.l2_normalize(b, axis=-1)
    d = -tf.math.reduce_sum(a_n * b_n, axis=-1)
    # Above is _identical_ to Keras' implementation.
    return d, tf.math.reduce_mean(d)

I already compared the output to Keras’ implementation by repeatedly printing

 print(tf.math.reduce_sum(tf.math.abs(my_loss - keras_loss))) 

However, even though this outputs straight zeros (and never any NaNs), I still encounter NaNs, while with Keras’ implementation I do not. I already tried a higher epsilon in the l2_normalize, and using multiply_no_nan, to no avail.
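As a sanity check that sidesteps the custom op entirely, Keras’ built-in loss can return per-sample values when its reduction is disabled; a minimal sketch with dummy stand-ins for the real tensors:

    import tensorflow as tf

    # Reduction.NONE yields one loss value per sample, so individual losses can be
    # inspected or reweighted before taking the batch mean manually.
    cosine = tf.keras.losses.CosineSimilarity(
        axis=-1, reduction=tf.keras.losses.Reduction.NONE)

    y_true = tf.random.normal([8, 128])   # placeholders for the real embeddings
    y_pred = tf.random.normal([8, 128])

    per_sample = cosine(y_true, y_pred)   # shape: (batch,)
    batch_mean = tf.math.reduce_mean(per_sample)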

Update: This comment.

submitted by /u/tfhwchoice
[visit reddit] [comments]

Categories
Misc

What can you do with the confidence score of a detection?

Edit: Sorry, should have read the rules first. Mods, if you take this down because it’s not TensorFlow-specific, I understand.

I’m just starting to play with neural networks, object detection, and tracking. I’m wondering what people use the confidence score of a detection for. Are there any common uses beyond simple confidence thresholding (i.e., output a detection if conf > 0.5, otherwise don’t)? Papers that use the confidence value in interesting ways are welcome!

For my own project, I was wondering how I might use the confidence score in the context of object tracking. For fun, and because it’s a super common application, I’ve been playing around with a traffic sign detector and deploying it in a simulation. In the simulation, I get consistent and accurate predictions for real signs, and then frequent but short-lived (i.e., 1-3 frame lifetime) false positives. I was thinking I could do some sort of tracking that uses the confidence values over a series of predictions to compute some kind of detection probability. That is, if I look at a series of 30 frames, and in 20 I have 0.3 confidence of a detection, where the bounding boxes all belong to the same tracked object, then I’d argue there is more evidence that an object is there than if I look at a series of 30 frames and have 2 detections that belong to a single object, but with a higher confidence, e.g., conf=0.6. How can I leverage the confidence scores to create a more robust detection and tracking pipeline? Or am I already way off base? (I’ve been trying to come up with a formula for how to do it, but probability and stochastics were never my strong suit, and I know that the formulas I’ve been trying to write down implicitly assume independence, which I don’t know is the case here.)

Anyway, how do you use confidence values in your own projects?
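For illustration only, one naive heuristic along those lines: keep a per-track score that is bumped toward 1 by each detection (weighted by its confidence) and decays on frames without a detection. The function, gains, and threshold choice below are hypothetical, not taken from any paper:

    def update_track_score(score, conf=None, gain=0.3, decay=0.1):
        """Bump the track score toward 1 on a detection, decay it toward 0 otherwise."""
        if conf is not None:
            score += gain * conf * (1.0 - score)
        else:
            score -= decay * score
        return score

    # 20 weak detections over a 30-frame window (values from the post above):
    s = 0.0
    for frame in range(30):
        s = update_track_score(s, conf=0.3 if frame < 20 else None)
    print(s)   # accumulated evidence from many low-confidence hits

Thresholding such a score plays a similar role to the "minimum hits before confirming a track" rule used by simple trackers to suppress 1-3 frame false positives; how fast the score should decay on missed frames is a design choice.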

submitted by /u/ItsAnApe
[visit reddit] [comments]

Categories
Misc

Converting tensorflow model and checkpoint to onnx.

I am trying to convert a pretrained model (EfficientNet) that I have fine-tuned on some custom images and new labels. But when using tf2onnx to convert it to ONNX format, it asks for a checkpoint .meta file, which I can’t find anywhere. I only see .index and .data files from the model after training.
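One note, as an assumption about the setup: the .meta file only exists for TF1-style checkpoints, while TF2/Keras checkpoints produce just the .index/.data shards. With a TF2 model, tf2onnx can convert directly from a Keras model or a SavedModel instead; a rough sketch with hypothetical paths:

    import tensorflow as tf
    import tf2onnx

    # Rebuild the architecture and load the TF2 checkpoint weights (path assumed).
    model = tf.keras.applications.EfficientNetB0(weights=None, classes=10)
    model.load_weights("path/to/checkpoint")   # the .index/.data files

    # Convert the in-memory Keras model straight to ONNX.
    model_proto, _ = tf2onnx.convert.from_keras(model, output_path="efficientnet.onnx")

    # Alternatively, export a SavedModel and use the CLI:
    #   model.save("saved_model_dir")
    #   python -m tf2onnx.convert --saved-model saved_model_dir --output efficientnet.onnx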

submitted by /u/uebyte
[visit reddit] [comments]

Categories
Misc

[Video] Running TensorFlow Lite Models on Raspberry Pi

Many deep learning models created using TensorFlow require high processing capabilities to perform inference. Fortunately, there is a lightweight version of TensorFlow called TensorFlow Lite (TFLite for short) that allows these models to run on devices with limited capabilities, with inference performed in less than a second.

This tutorial will go through how to prepare a Raspberry Pi (RPi) to run a TFLite model for classifying images. After that, the TFLite version of the MobileNet model will be downloaded and used for making predictions on-device.
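As a rough idea of what on-device inference looks like, a minimal sketch using the tflite-runtime interpreter; the model path and input below are placeholders, not the tutorial’s exact code:

    import numpy as np
    from tflite_runtime.interpreter import Interpreter  # or: tf.lite.Interpreter

    # Load the converted model and allocate its tensors (model path is hypothetical).
    interpreter = Interpreter(model_path="mobilenet_quant.tflite")
    interpreter.allocate_tensors()

    input_details = interpreter.get_input_details()
    output_details = interpreter.get_output_details()

    # Feed a single preprocessed image shaped to the model's expected input.
    image = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
    interpreter.set_tensor(input_details[0]["index"], image)
    interpreter.invoke()

    scores = interpreter.get_tensor(output_details[0]["index"])
    print("Top class index:", int(np.argmax(scores)))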

Tutorial video link: https://youtu.be/FdfxizUUQJI

Run the code on a free GPU: https://console.paperspace.com/ml-showcase/notebook/rljtgo7aadmiq7q?file=Raspberry%20Pi%20TF%20Lite%20Models.ipynb

submitted by /u/hellopaperspace
[visit reddit] [comments]

Categories
Misc

Supports creation of tf.data.Dataset (data generator) and image augmentation.

This package makes it easy to create efficient image Dataset generators; a generic tf.data sketch (not using this package) follows the augmentation list below.

github link

Supported Augmentations

  • standardize
  • resize
  • random_rotation
  • random_flip_left_right
  • random_flip_up_down
  • random_shift
  • random_zoom
  • random_shear
  • random_brightness
  • random_saturation
  • random_hue
  • random_contrast
  • random_crop
  • random_noise

submitted by /u/last_peng
[visit reddit] [comments]