Categories
Misc

Archaeologist Digs Into Photogrammetry, Creates 3D Models With NVIDIA Technology

Archaeologist Daria Dabal is bringing the past to life, with an assist from NVIDIA technology. Dabal works on various archaeological sites in the U.K., conducting field and post-excavation work. Over the last five years, photogrammetry — the use of photographs to create fully textured 3D models — has become increasingly popular in archaeology.

Categories
Misc

Ready for Prime Time: Plus to Deliver Autonomous Truck Systems Powered by NVIDIA DRIVE to Amazon

Your Amazon Prime delivery just got smarter. Autonomous trucking company Plus recently signed a deal with Amazon to provide at least 1,000 self-driving systems to retrofit on the e-commerce giant’s delivery fleet. These systems are powered by NVIDIA DRIVE Xavier for high-performance, energy-efficient and centralized AI compute. The agreement follows Plus’ announcement of going public.

Categories
Offsites

Improved Detection of Elusive Polyps via Machine Learning

With the increasing ability to consistently and accurately process large amounts of data, particularly visual data, computer-aided diagnostic systems are more frequently being used to assist physicians in their work. This, in turn, can lead to meaningful improvements in health care. One area where this could be especially useful is the diagnosis and treatment of colorectal cancer (CRC), which is especially deadly, causing over 900K deaths per year globally. CRC originates in small pre-cancerous lesions in the colon, called polyps, whose identification and removal is highly effective in preventing CRC-related deaths.

The standard procedure used by gastroenterologists (GIs) to detect and remove polyps is the colonoscopy, and about 19 million such procedures are performed annually in the US alone. During a colonoscopy, the gastroenterologist uses a camera-containing probe to check the intestine for pre-cancerous polyps and early signs of cancer, and removes tissue that looks worrisome. However, complicating factors, such as incomplete detection (in which the polyp appears within the field of view, but is missed by the GI, perhaps due to its size or shape) and incomplete exploration (in which the polyp does not appear in the camera’s field of view), can lead to a high fraction of missed polyps. In fact, studies suggest that 22%–28% of polyps are missed during colonoscopies, of which 20%–24% have the potential to become cancerous (adenomas).

Today, we are sharing progress made in using machine learning (ML) to help GIs fight colorectal cancer by making colonoscopies more effective. In “Detection of Elusive Polyps via a Large Scale AI System”, we present an ML model designed to combat the problem of incomplete detection by helping the GI detect polyps that are within the field of view. This work adds to our previously published work that maximizes colon coverage during the colonoscopy by flagging areas that may have been missed for GI follow-up. Using clinical studies, we show that these systems significantly improve polyp detection rates.

Incomplete Exploration
To help the GI detect polyps that are outside the field of view, we previously developed an ML system that reduces the rate of incomplete exploration by estimating the fractions of covered and non-covered regions of a colon during a colonoscopy. This earlier work uses computer vision and geometry in a technique we call colonoscopy coverage deficiency via depth, to compute segment-by-segment coverage for the colon. It does so in two phases: first computing depth maps for each frame of the colonoscopy video, and then using these depth maps to compute the coverage in real time.

The ML system computes a depth image (middle) from a single RGB image (left). Then, based on the computation of depth images for a video sequence, it calculates local coverage (right), and detects where the coverage has been deficient and a second look is required (blue indicates observed segments, while red indicates uncovered ones). You can learn more about this work in our previous blog post.

This segment-by-segment work yields the ability to estimate what fraction of the current segment has been covered. The helpfulness of such functionality is clear: during the procedure itself, a physician may be alerted to segments with deficient coverage, and can immediately return to review these areas, potentially reducing the rates of missed polyps due to incomplete exploration.
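To make the two-phase computation concrete, here is a minimal sketch in Python; the depth_model callable and the depth-histogram coverage heuristic are illustrative assumptions, not the actual networks and geometry used by the system.

import numpy as np

def compute_depth_maps(frames, depth_model):
    # Phase 1: predict a depth map for every RGB frame of the video.
    return [depth_model(frame) for frame in frames]

def coverage_fraction(depth_maps, near=0.0, far=10.0, bins=32):
    # Phase 2 (toy heuristic): mark which depth bins along the segment
    # have ever been observed, and report the fraction seen.
    seen = np.zeros(bins, dtype=bool)
    for depth in depth_maps:
        idx = ((np.asarray(depth) - near) / (far - near) * bins).astype(int)
        seen[np.unique(np.clip(idx, 0, bins - 1))] = True
    return seen.mean()  # 1.0 = fully covered; low values = deficient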

Incomplete Detection
In our most recent paper, we look into the problem of incomplete detection. We describe an ML model that helps the GI detect polyps within the field of view, reducing the rate of incomplete detection. We developed a system based on convolutional neural networks (CNNs), with an architecture that combines temporal logic with a single-frame detector, resulting in more accurate detection.
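The exact way the architecture fuses temporal logic with the single-frame detector is detailed in the paper; as a rough illustration of the general pattern, a per-frame detector's confidences can be smoothed over time, here with a simple exponential moving average (the detector callable, smoothing constant, and threshold are assumptions):

def detect_in_video(frames, detector, alpha=0.6, threshold=0.5):
    # detector(frame) returns a list of (box, score) pairs.
    smoothed = 0.0
    alert_frames = []
    for t, frame in enumerate(frames):
        score = max((s for _, s in detector(frame)), default=0.0)
        # Smoothing suppresses single-frame flickers, trading a little
        # latency for fewer false alarms.
        smoothed = alpha * smoothed + (1.0 - alpha) * score
        if smoothed > threshold:
            alert_frames.append(t)
    return alert_frames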

This new system has two principal advantages. The first is that the system improves detection performance by reducing the number of false negative detections of elusive polyps, those polyps that are particularly difficult for GIs to detect. The second is the system's very low false positive rate, which makes it more likely to be adopted in the clinic.

Examples of the variety of polyps detected by the ML system.

We trained the system on 3,600 procedures (86M video frames) and tested it on 1,400 procedures (33M frames). All the videos and metadata were de-identified. The system detected 97% of the polyps (i.e., it yielded 97% sensitivity) at 4.6 false alarms per procedure, a substantial improvement over previously published results. Follow-up review showed that some of the false alarms were, in fact, valid polyp detections, indicating that the system was able to detect polyps missed by the performing endoscopist and by those who annotated the data. The system's performance on these elusive polyps suggests it generalizes: it has learned to detect examples that were initially missed by everyone who viewed the procedure.
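For concreteness, the two headline metrics reduce to simple ratios; the counts below are invented to reproduce the reported numbers and are not the study's raw data:

# Illustrative counts only.
total_polyps = 1000
true_positives = 970
false_positives = 6440
procedures = 1400

sensitivity = true_positives / total_polyps        # 0.97 -> "97% sensitivity"
false_alarm_rate = false_positives / procedures    # 4.6 false alarms/procedure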

We also evaluated the system's performance on polyps that are in the field of view for less than five seconds, which makes them more difficult for the GI to detect and is a regime where models typically have much lower sensitivity. In this case the system attained a sensitivity about three times that achieved in the original procedure. When the polyps were present in the field of view for less than two seconds, the difference was even more stark: the system exhibited a 4x improvement in sensitivity.

It is also interesting to note that the system is fairly insensitive to the choice of neural network architecture. We used two architectures: RetinaNet and LSTM-SSD. RetinaNet is a leading technique for object detection on static images (applied to video by running it on consecutive frames). It is one of the top performers on a variety of benchmarks, given a fixed computational budget, and is known for balancing speed of computation with accuracy. LSTM-SSD is a true video object detection architecture that can explicitly account for the temporal character of the video (e.g., temporal consistency of detections, and the ability to deal with blur and fast motion). It is known for being robust and very computationally lightweight, and can therefore run on less expensive processors. Comparable results were also obtained on the much heavier Faster R-CNN architecture. The fact that results are similar across architectures implies that one can choose whichever network meets the available hardware specifications.
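One simple instance of the temporal consistency that a video architecture can exploit is a k-of-n voting rule over consecutive frames. The hand-rolled filter below is only a stand-in for what LSTM-SSD learns internally, with all constants chosen for illustration:

from collections import deque

def k_of_n_filter(frame_scores, k=3, n=5, threshold=0.5):
    # Fire only when at least k of the last n frames exceed the threshold,
    # rejecting detections that do not persist across frames.
    window = deque(maxlen=n)
    decisions = []
    for score in frame_scores:
        window.append(score > threshold)
        decisions.append(sum(window) >= k)
    return decisions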

Prospective Clinical Research Study
As part of the research reported in our detection paper, we ran a clinical validation on 100 procedures in collaboration with Shaare Zedek Medical Center in Jerusalem, where our system was used in real time to help GIs. The system helped detect an average of one polyp per procedure that would otherwise have been missed by the GI performing the procedure, while not missing any of the polyps detected by the GIs, and with 3.8 false alarms per procedure. The feedback from the GIs was consistently positive.

We are encouraged by the potential helpfulness of this system for improving polyp detection, and we look forward to working together with the doctors in the procedure room to further validate this research.

Acknowledgements
The research was conducted by teams from Google Health and Google Research, Israel with support from Verily Life Sciences, and in collaboration with Shaare Zedek Medical Center. Verily is advancing this research via a newly established center in Israel, led by Ehud Rivlin. This research was conducted by Danny Veikherman, Tomer Golany, Dan M. Livovsky, Amit Aides, Valentin Dashinsky, Nadav Rabani, David Ben Shimol, Yochai Blau, Liran Katzir, Ilan Shimshoni, Yun Liu, Ori Segol, Eran Goldin, Greg Corrado, Jesse Lachter, Yossi Matias, Ehud Rivlin, and Daniel Freedman. Our appreciation also goes to several institutions and GIs who provided advice along the way and tested our system prototype. We would like to thank all of our team members and collaborators who worked on this project with us, including: Chen Barshai, Nia Stoykova, and many others.

Categories
Misc

August Arrivals: GFN Thursday Brings 34 Games to GeForce NOW This Month

It’s a new month for GFN Thursday, which means a new month full of games on GeForce NOW. August brings a wealth of great new PC game launches to the cloud gaming service, including King’s Bounty II, Humankind and NARAKA: BLADEPOINT. In total, 13 titles are available to stream this week.

Categories
Misc

In TensorFlow, how to use a CNN on a stack of images

submitted by /u/Striking-Warning9533
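One common answer, assuming the stack is a fixed-length sequence of same-sized images: treat each sample as 4-D and convolve over the stack with Conv3D, or share one 2-D CNN across the stack with TimeDistributed. A minimal Keras sketch, with placeholder shapes and class count:

import tensorflow as tf

# Each sample: a stack of 8 RGB images, each 64x64.
inputs = tf.keras.Input(shape=(8, 64, 64, 3))

# Option A: convolve jointly over (stack, height, width).
x = tf.keras.layers.Conv3D(16, kernel_size=3, activation="relu")(inputs)
x = tf.keras.layers.GlobalAveragePooling3D()(x)

# Option B instead: apply the same 2-D CNN to every image in the stack:
# tf.keras.layers.TimeDistributed(tf.keras.layers.Conv2D(16, 3, activation="relu"))(inputs)

outputs = tf.keras.layers.Dense(10, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)
model.summary()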
Categories
Misc

Getting Started on Mobile Image Recognition

I’m a novice in ML and mobile development, and I’d like to practice by making a basic app that could recognize a class of objects (animals, food, anything with a freely available dataset) and return information about said object.

I see dozens of articles and public repositories on implementing image recognition on mobile, mostly using Tensorflow, and I’ve found a lot of image datasets on Kaggle to train on.

Now I’m confused about how to actually start. I was thinking of using React Native since I’m a bit more experienced with JavaScript and using npm packages, but a lot of articles say to just go native for better performance. I don’t know if what I’m doing would be considered “heavy,” so I’m a bit confused here.

Any advice is appreciated!
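One low-risk way to start, whichever way the React Native vs. native question goes: convert a trained model to TensorFlow Lite and verify the classification loop on a desktop with the Python interpreter before writing any mobile code. A sketch, where model.tflite is a placeholder for your converted classifier:

import numpy as np
import tensorflow as tf

# Load a converted TF Lite model (the path is a placeholder).
interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# A fake image shaped to whatever the model expects.
image = np.random.rand(*inp["shape"]).astype(np.float32)
interpreter.set_tensor(inp["index"], image)
interpreter.invoke()
probs = interpreter.get_tensor(out["index"])
print("predicted class:", int(np.argmax(probs)))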

submitted by /u/throwawayeksdee1

Categories
Misc

NVIDIA Sets Conference Call for Second-Quarter Financial Results

CFO Commentary to Be Provided in Writing Ahead of Call
SANTA CLARA, Calif., Aug. 04, 2021 (GLOBE NEWSWIRE) — NVIDIA will host a conference call on Wednesday, August 18, at 2 p.m. PT (5 p.m. …

Categories
Misc

AI Detects Gravitational Waves Faster than Real Time

Scientists searching the universe for gravitational waves just got a boost thanks to a new study and AI.

The research, recently published in Nature Astronomy, creates a deployable AI framework for detecting gravitational waves within massive amounts of data, orders of magnitude faster than real time.

Created by a group of scientists from Argonne National Laboratory, the University of Chicago, the University of Illinois at Urbana-Champaign, NVIDIA, and IBM, the work highlights how AI and supercomputing can accelerate reproducible, data-driven discoveries.

“As a computer scientist, what’s exciting to me about this project is that it shows how, with the right tools, AI methods can be integrated naturally into the workflows of scientists. Allowing them to do their work faster and better. Augmenting, not replacing, human intelligence,” said study senior author Ian Foster, director of Argonne’s Data Science and Learning division, in a press release.

In 2015, the Advanced Laser Interferometer Gravitational-Wave Observatory (LIGO) first detected gravitational waves when two black holes, 1.3 billion light-years away, collided and merged.

These waves occur when massive objects accelerate rapidly (such as a star exploding or two massive objects colliding), creating ripples through spacetime.

The notable discovery confirmed a prediction of Einstein’s theory of relativity, which holds that space and time are linked. It also marked the start of gravitational wave astronomy, which could yield a deeper understanding of the cosmos, including dark energy, gravity, and neutron stars.

It also holds potential for scientists to step back through time to the moments around the Big Bang.

Since 2015, many more gravitational wave sources have been detected by LIGO. As the observatory continues with sensor upgrades and refinements, the volume of the universe within reach of its detectors will also grow, creating large amounts of data for processing. Quickly computing these data streams remains key to advances and discoveries in gravitational wave astronomy.

In 2018, Eliu Huerta, lead for Translational AI and Computational Science at Argonne, demonstrated the capability of machine learning to detect gravitational waves from multiple LIGO detector data streams.

In this study, the researchers further refined the model, which uses a cuDNN-accelerated deep learning framework distributed over 64 NVIDIA GPUs. They tested the model against LIGO data from 2017 and found it accurately identified four binary black hole mergers, without any misclassifications. It also processed a month’s worth of data in under 7 minutes.
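The post doesn't spell out the model itself, but the deployment pattern, scanning a long strain time series with a convolutional classifier over batched, overlapping windows, can be sketched as follows; the window length, layers, and fake data are illustrative assumptions, not the paper's configuration:

import numpy as np
import tensorflow as tf

WINDOW = 4096  # samples per analysis window (illustrative)

# Toy 1-D CNN standing in for the trained detector.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, 1)),
    tf.keras.layers.Conv1D(16, 16, strides=4, activation="relu"),
    tf.keras.layers.Conv1D(32, 8, strides=4, activation="relu"),
    tf.keras.layers.GlobalMaxPooling1D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

strain = np.random.randn(100 * WINDOW).astype(np.float32)  # fake strain data

# Half-overlapping windows, batched so the GPU stays saturated.
ds = tf.data.Dataset.from_tensor_slices(strain)
ds = ds.window(WINDOW, shift=WINDOW // 2, drop_remainder=True)
ds = ds.flat_map(lambda w: w.batch(WINDOW))
ds = ds.map(lambda w: tf.reshape(w, (WINDOW, 1))).batch(64)
scores = model.predict(ds)  # per-window probability of a signal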

“In this study, we’ve used the combined power of AI and supercomputing to help solve timely and relevant big-data experiments. We are now making AI studies fully reproducible, not merely ascertaining whether AI may provide a novel solution to grand challenges,” Huerta said.

The team’s models are open source and readily available.


Read the full article in Nature Astronomy >>

Categories
Misc

Digital Footprint Demonstration

submitted by /u/howyamean

Categories
Misc

Tensorflow Graphics

Hello everybody. I’m asking for help because I can’t find the math attribute.

Any advice?

I would like to use this optimizer:

tfg.math.optimizer.levenberg_marquardt

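In some tensorflow-graphics builds the submodules are loaded lazily and don't show up as attributes of the top-level tfg package, so importing the optimizer module directly usually sidesteps the missing math attribute. A minimal sketch, assuming a recent tensorflow-graphics release (the toy residual is mine):

import tensorflow as tf
from tensorflow_graphics.math.optimizer import levenberg_marquardt

# Toy least-squares problem: find x minimizing ||x - 2||^2.
def residual(x):
    return x - 2.0

objective, variables = levenberg_marquardt.minimize(
    residuals=residual, variables=(tf.constant([5.0]),), max_iterations=10)
print(variables[0].numpy())  # should approach 2.0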

submitted by /u/Filippo9559