Categories
Misc

2 Powerful 2 Be Stopped: ‘Dying Light 2 Stay Human’ Arrives on GeForce NOW’s Second Anniversary

Great things come in twos. Techland’s Dying Light 2 Stay Human arrives with RTX ON and begins streaming from the cloud tomorrow, Feb. 4. Plus, in celebration of the second anniversary of GeForce NOW, February is packed full of membership rewards in Eternal Return, World of Warships and more. There are also 30 games joining the GeForce NOW library this month.

Categories
Misc

New to ML – Advice

I am looking to use ML to provide a single answer to a text input.

The idea is that I will have a dataset with two columns, a description column and a code column.

The user will enter a description via an API, and the result will be the code that is most relevant based on the previous descriptions and codes used.

Can someone point me in the right direction? I have zero experience with ML, but I do come from a programming background.
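
A common first approach here is nearest-neighbour retrieval rather than a trained neural network: vectorize the descriptions and return the code attached to the most similar one. A minimal sketch with scikit-learn, using toy rows in place of the real two-column dataset:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy stand-ins for the dataset's description and code columns.
descriptions = ["broken pipe under sink", "cracked roof tile", "faulty light switch"]
codes = ["PLUMB-01", "ROOF-03", "ELEC-07"]

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(descriptions)

def best_code(query: str) -> str:
    """Return the code whose description is most similar to the query."""
    scores = cosine_similarity(vectorizer.transform([query]), matrix)
    return codes[scores.argmax()]

print(best_code("leaking pipe in the kitchen"))  # -> PLUMB-01
```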

Thanks

submitted by /u/mattbatchelor14

Categories
Misc

Rain or Shine: Radar Vision Sees Through Clouds to Support Emergency Flood Relief

Flooding usually arrives amid bad weather: thick clouds, heavy rain and blustery winds. GPU-powered data science systems can now help researchers and emergency flood response teams see through it all. John Murray, visiting professor in the Geographic Data Science Lab at the University of Liverpool, developed cuSAR, a GPU-accelerated platform for processing satellite radar data, which penetrates cloud cover.

Categories
Misc

ModuleNotFoundError: No module named ‘tflearn’
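
This error usually just means the tflearn package is missing from the active Python environment; tflearn ships separately from TensorFlow (and targets the TF 1.x API, so it may need extra compatibility work on TF 2.x). A minimal sketch of a guarded install from inside Python, equivalent to running pip install tflearn:

```python
import subprocess
import sys

# Install tflearn into the current interpreter's environment if it is missing.
try:
    import tflearn
except ModuleNotFoundError:
    subprocess.check_call([sys.executable, "-m", "pip", "install", "tflearn"])
    import tflearn

print(tflearn.__version__)
```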

submitted by /u/Guacamole_is_good

Categories
Misc

How to scrape Google Local Results with Artificial Intelligence?

submitted by /u/Kagermanov
Categories
Misc

NVIDIA Sets Conference Call for Fourth-Quarter Financial Results

CFO Commentary to Be Provided in Writing Ahead of Call

SANTA CLARA, Calif., Feb. 02, 2022 (GLOBE NEWSWIRE) — NVIDIA will host a conference call on Wednesday, February 16, at 2:30 p.m. PT (5:30 p.m. ET) to discuss its fourth-quarter financial results.

Categories
Misc

Surgical Robot Performs First Solo Operation

Using machine learning and computer vision, a surgical robot successfully performs an anastomosis, demonstrating a notable step toward automated surgery.

In a medical first, a robot has performed laparoscopic surgery without the guidance of a surgeon’s hand. The study, recently published in Science Robotics, outlines the design of an enhanced version of the Smart Tissue Autonomous Robot (STAR) that completed the challenging surgery on the soft tissue of a pig. The accomplishment marks a milestone toward fully automated robotic surgeries.

“Our findings show that we can automate one of the most intricate and delicate tasks in surgery: the reconnection of two ends of an intestine. The STAR performed the procedure in four animals and it produced significantly better results than humans performing the same procedure,” Axel Krieger, senior author and assistant professor of mechanical engineering at Johns Hopkins’ Whiting School of Engineering, said in a press release.

In laparoscopic procedures, surgeons use small incisions and a camera to operate in the abdomen or pelvis. Anastomosis—which involves connecting two tubular structures such as blood vessels or intestines—is often performed laparoscopically. Despite being minimally invasive, the procedure carries a risk of serious complications for the patient if flawed suturing causes leakage.

Autonomous robotic surgery has the potential to improve medical efficiency, safety, and reliability. However, according to the study, autonomous anastomosis poses challenges in imaging, tissue tracking, and surgical planning, and such procedures often require quick adaptation if an issue arises during surgery.

The current STAR model improves on a 2016 iteration that could suture a pig’s intestine but required human intervention and created a larger incision.

With advanced robotic precision and suturing tools, along with a 3D imaging system and machine learning-based tracking algorithms, the latest STAR can adjust its surgical plan in real time.

“We developed machine learning, computer vision, and advanced control techniques to track the target tissue movement in response to patient breathing, detect the tissue deformations between different suturing steps, and operate the robot under motion constraints,” the researchers write in the study.

A machine-learning algorithm based on convolutional neural networks (CNNs) predicts tissue motion and guides suture plans. The researchers trained the CNNs on 9,294 motion profiles from anastomosis procedures, so the models could learn how tissue moves with breathing and other movement during surgery.

The robot synchronizes with a camera to scan and create suture plans while the tissue is stationary. Using enhanced computer vision and a CNN-based landmark detection algorithm, STAR generates two initial suture plans to connect adjacent tissue. Once an operator selects a plan, the robot applies a suture to the tissue and reimages the area for tissue deformation. 

If a change in tissue position is greater than 3 mm compared with the surgical plan, it notifies the operator to initiate a new suture planning and approval step. This process repeats for every suture.
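
An illustrative sketch of that plan-suture-reimage loop follows; every object and method name below is hypothetical, invented only to make the described workflow concrete:

```python
REPLAN_THRESHOLD_MM = 3.0  # from the study: replan if tissue moves more than 3 mm

def run_anastomosis(robot, camera, operator):
    """Hypothetical control loop mirroring the workflow described above."""
    plans = robot.generate_suture_plans(camera.scan())  # two candidate plans
    queue = list(operator.select(plans))                # operator approves one plan
    while queue:
        target = queue.pop(0)
        robot.apply_suture(target)
        displacement_mm = camera.measure_deformation(target)
        if displacement_mm > REPLAN_THRESHOLD_MM:
            # Tissue shifted too far: re-plan and seek operator approval again.
            plans = robot.generate_suture_plans(camera.scan())
            queue = list(operator.select(plans))
```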

According to Krieger, an NVIDIA GeForce GTX GPU was used for training and running the CNNs, which comprise four convolutional layers, three dense layers, and two outputs that track tissue motion. Training and testing of the landmark detection algorithm, which uses a cascaded U-Net architecture, was performed on an NVIDIA T4 GPU.
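
As a purely speculative reconstruction, a Keras model matching that layer count might look like the sketch below; the input shape, filter counts, and layer widths are assumptions, not values from the paper:

```python
import tensorflow as tf

inputs = tf.keras.Input(shape=(64, 64, 1))  # hypothetical image-patch input
x = inputs
for filters in (16, 32, 64, 128):           # four convolutional layers
    x = tf.keras.layers.Conv2D(filters, 3, activation="relu")(x)
    x = tf.keras.layers.MaxPooling2D()(x)
x = tf.keras.layers.Flatten()(x)
for units in (128, 64):                     # first two dense layers
    x = tf.keras.layers.Dense(units, activation="relu")(x)
outputs = tf.keras.layers.Dense(2)(x)       # third dense layer: two motion outputs

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="mse")
model.summary()
```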

The researchers examined the quality of the anastomosis, which includes needle placement corrections, suture spacing, size of suture bites, completion time, lumen patency, and leak pressure. They found the autonomous STAR outperformed the consistency and accuracy of both expert surgeons and robot-assisted surgeries.

“What makes the STAR special is that it is the first robotic system to plan, adapt, and execute a surgical plan in soft tissue with minimal human intervention,” Krieger said.

Categories
Misc

Jetson Project of the Month: Detecting Acute Lymphoblastic Leukemia with NVIDIA Jetson

NVIDIA Jetson Nano is paving the way to detect certain types of cancer sooner.

Adam Milton-Barker’s grandfather, Peter Moss, was diagnosed with a terminal illness, Acute Myeloid Leukemia, in 2018. Just one month earlier, doctors had given him the all clear during a routine blood test, which showed no signs of leukemia. Milton-Barker was convinced there must have been some early sign of the disease.

Milton-Barker had previous experience using AI for breast cancer detection and wanted to see if what he learned could be applied to leukemia detection. In memory of his grandfather, he established the Peter Moss Acute Myeloid & Lymphoblastic Leukemia AI Research Project, an open-source research project dedicated to creating free technologies focused on the early detection of leukemia.

Fast forward to August 2021, when Milton-Barker demonstrated a project testing the capabilities of the NVIDIA Jetson Nano for classifying Acute Lymphoblastic Leukemia (ALL) at the edge. The project was also his submission for the NVIDIA Jetson AI Specialist Certification.

A Nano solution to a big challenge

Using Jetson Nano, the project can detect and classify instances of ALL in images of tissue samples from the Acute Lymphoblastic Leukemia Image Database for Image Processing dataset.

The project walks developers through training custom convolutional neural networks (CNNs), using the Intel oneAPI AI Analytics Toolkit and Intel Optimization for TensorFlow to accelerate the training process. It also includes instructions for using TensorRT for high-performance inference on the Jetson Nano to classify ALL.

Developers can convert the trained model into TFRT, ONNX, and TensorRT formats to compare the inference times each format yields. As the results show, TensorRT cuts inference time from the original 16 seconds per image to just 0.07 seconds:

  • TensorFlow model: 16.36 seconds per image
  • TFRT model: 8.34 seconds per image
  • TensorRT model: 0.07 seconds per image

Milton-Barker summarized: “When comparing the performance of the TFRT model with the TensorRT model we see an improvement of [an additional] 8 seconds, demonstrating the pure power of TensorRT and the possibilities it brings to AI on the edge.”
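
For reference, a TF-TRT conversion of the kind involved here can be driven from TensorFlow’s bundled TensorRT converter. A minimal sketch, with hypothetical paths, assuming the trained model has already been exported as a SavedModel:

```python
from tensorflow.python.compiler.tensorrt import trt_convert as trt

# Convert a SavedModel to a TensorRT-optimized SavedModel (paths are placeholders).
converter = trt.TrtGraphConverterV2(
    input_saved_model_dir="all_classifier_saved_model",
    precision_mode=trt.TrtPrecisionMode.FP16,  # half precision suits Jetson Nano
)
converter.convert()
converter.save("all_classifier_saved_model_trt")
```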

In the GitHub repository for this work, he noted: “This project should be used for research purposes only… Although the model is accurate and shows good results both on paper and in real-world testing, it is trained on a small amount of data and needs to be trained on larger datasets to really evaluate its accuracy.”

Lend a hand

Interested in helping further this research? The project requires an NVIDIA Jetson Nano Developer Kit and the Jetson Nano Developer Kit SD Card Image. For information on how to set up your Jetson Nano, visit Getting Started with Jetson Nano Developer Kit.

You can access a Docker image for easy installation of the software needed to replicate this project on the Jetson Nano from this repository.

Additionally, you will need access to the Acute Lymphoblastic Leukemia Image Database for Image Processing dataset. Learn more about contributing to this project and apply to access the dataset.

For more information about Acute Lymphoblastic Leukemia, visit the Peter Moss Leukemia Medtech Research page.

Categories
Misc

Gesture recognition and tensorflow?

Hi guys, I’m new to TensorFlow and I am working on a computer vision project (using OpenCV and MediaPipe). I want to implement gesture recognition but don’t know how.

I thought about using a sequential model but don’t really know how to implement it for gesture recognition.
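
One common pattern, sketched below under assumed sizes: flatten the 21 MediaPipe hand landmarks of each frame into a feature vector, stack a fixed window of frames, and classify the window with a small Sequential LSTM model. Every dimension here (frames, gestures, layer widths) is a placeholder:

```python
import tensorflow as tf

NUM_FRAMES = 30     # frames per gesture window (assumption)
NUM_FEATURES = 63   # 21 MediaPipe hand landmarks x (x, y, z)
NUM_GESTURES = 5    # number of gesture classes (assumption)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(NUM_FRAMES, NUM_FEATURES)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(NUM_GESTURES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()  # train with model.fit on (landmark windows, gesture labels)
```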

All help is welcome.

Thanking you in advance

submitted by /u/Chuchu123DOTexe

Categories
Misc

TensorFlow Team Introduce BlazePose GHUM Posture Estimation Model and Selfie Segmentation For Body Segmentation Using MediaPipe and TensorFlow.js

Image segmentation is a computer vision method that groups the pixels of an image into semantic areas, typically to locate objects and boundaries. Body segmentation models do the same for a person and their twenty-four body parts. The technology has a variety of uses, including augmented reality, picture editing, and creative effects on photos and videos.

The TensorFlow team has recently released two new, highly optimized body segmentation models, both accurate and fast, as part of its improved body segmentation and pose APIs in TensorFlow.js.
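
The same underlying models are also exposed through MediaPipe’s Python bindings (model parity with the TF.js API is an assumption here). A minimal selfie-segmentation sketch, with placeholder file names, that blurs everything except the person:

```python
import cv2
import mediapipe as mp

# Load MediaPipe's selfie-segmentation solution (model 0 = general).
segmenter = mp.solutions.selfie_segmentation.SelfieSegmentation(model_selection=0)

image = cv2.imread("person.jpg")  # placeholder input file
result = segmenter.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
mask = result.segmentation_mask   # float32 HxW; higher values = person

# Keep the person sharp and blur the background.
is_person = mask[..., None] > 0.5
blurred = cv2.GaussianBlur(image, (55, 55), 0)
output = (is_person * image + ~is_person * blurred).astype("uint8")
cv2.imwrite("output.jpg", output)
```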

Github: https://github.com/tensorflow/tfjs-models/tree/master/pose-detection/src/blazepose_mediapipe

Demo: https://storage.googleapis.com/tfjs-models/demos/segmentation/index.html?model=blazepose

TensorFlow blog: https://blog.tensorflow.org/2022/01/body-segmentation.html

submitted by /u/ai-lover