Categories
Misc

Real-time Food Quality Prediction. Detect spoiled products using the Tiny Machine Learning approach.

Things used in this project

Hardware components:
Arduino Mega 2560
Software apps and online services:
Neuton Tiny ML

Story

With each passing year, the issue of food waste becomes more acute for the environment. A recent Food Waste Index Report by the United Nations Environment Program (UNEP) showed that, on average, consumers waste almost a billion tons of food per year (or 17 percent of all food purchased): https://www.unep.org/resources/report/unep-food-waste-index-report-2021

The fact that people produce more food than they consume has significant negative consequences. For example, an estimated 8-10% of global greenhouse gas emissions come from unused food. Conversely, reducing food waste would help cut greenhouse gas emissions and global pollution, as well as increase food availability for countries that suffer from hunger.

This situation suggests that in the near future, we will need to focus not on scaling food production, but on timely quality control so that fresh products can be sold and consumed. To fix the current situation, humanity will need smarter user-friendly technologies that can help them monitor product quality in real-time.

In this piece, I’ll explain an easy way to check food quality that can be implemented in an average store, and even in your own fridge. And the best part – it’s not rocket science at all!

Introduction:

Recently, I conducted a simple experiment, and I would like to share it with you, as I strongly believe that such practical solutions can make a great difference in solving global problems. Baby steps on the way to the global good.

My idea is to use the Tiny Machine Learning approach to forecast whether food is fresh or spoiled based on the data from gas sensors. I conducted my experiment with the use of 7 gas sensors.

In my tutorial, you will learn how you can automatically create a super tiny machine learning model, embed it into a sensor’s microcontroller, and check food quality with it.

So let’s get started!

Procedure:

Step 1: Create a TinyML model with Neuton

Create a new solution “Food Quality” on the Neuton platform, and upload the training dataset containing signals for food quality, labeled for two classes (fresh and spoiled). My dataset contained 784 rows.

Then, select the target (Label) and target metric (Accuracy), also enabling the Tiny Machine Learning mode. Additionally, select the 8-bit depth for calculations without float data types and click “Start Training”.

The model will be ready in several minutes. Next, download the model.

Create a TinyML model with Neuton

Step 2: Create the microcontroller’s firmware

Download an example: https://github.com/Neuton-tinyML/arduino-example

Project Description

The project contains:

  • code for receiving a dataset via the USB-UART serial port,
  • prediction execution,
  • result indication,
  • code for measuring prediction time.

The main sketch file “arduino-tiny-ml-neuton.ino” has functions for processing data packets.

The main process goes on in the user_app.c file:

static NeuralNet neuralNet = { 0 };

extern const unsigned char model_bin[];
extern const unsigned int model_bin_len;

uint8_t app_init()
{
    return (ERR_NO_ERROR != CalculatorInit(&neuralNet, NULL));
}

inline Err CalculatorOnInit(NeuralNet* neuralNet)
{
    memUsage += sizeof(*neuralNet);
    app_reset();
    timer_init();
    return CalculatorLoadFromMemory(neuralNet, model_bin, model_bin_len, 0);
}

Here, a NeuralNet object is created, and the model stored in the file model.c is loaded by calling:

CalculatorLoadFromMemory(neuralNet, model_bin, model_bin_len, 0);

The model is now ready to make predictions. To run one, call the CalculatorRunInference function, passing it a float array of size neuralNet.inputsDim.

The last element of the array is the BIAS and should be set to 1.

inline float* app_run_inference(float* sample, uint32_t size_in, uint32_t* size_out)
{
    if (!sample || !size_out)
        return NULL;

    if (size_in / sizeof(float) != app_inputs_size())
        return NULL;

    *size_out = sizeof(float) * neuralNet.outputsDim;

    if (app.reverseByteOrder)
        Reverse4BytesValuesBuffer(sample, app_inputs_size());

    return CalculatorRunInference(&neuralNet, sample);
}

When a prediction is performed, three callbacks are invoked: CalculatorOnInferenceStart before the prediction, CalculatorOnInferenceEnd after it, and CalculatorOnInferenceResult with the prediction result.

In the example, I used these functions to measure the prediction time.

The result callback receives an array of class probabilities of size neuralNet->outputsDim. It finds the class with the highest probability and, if that probability exceeds 0.5, turns on the corresponding LED (green for class 0, red for class 1).

inline void CalculatorOnInferenceResult(NeuralNet* neuralNet, float* result)
{
    if (neuralNet->taskType == TASK_BINARY_CLASSIFICATION && neuralNet->outputsDim >= 2)
    {
        float* value = result[0] >= result[1] ? &result[0] : &result[1];

        if (*value > 0.5)
        {
            if (value == &result[0])
            {
                led_green(1);
                led_red(0);
            }
            else
            {
                led_green(0);
                led_red(1);
            }
        }
        else
        {
            led_green(0);
            led_red(0);
        }
    }
}

Step 3: Copy the downloaded model to the sketch

Copy the model file model.c from the downloaded model archive into the MCU firmware project.

Copy the downloaded model to the sketch

Step 4: Compile the sketch and upload it to the board

Now, everything is ready for sketch compilation. I used a host-side program that emulates sensor data: it sends samples from the computer to the MCU and displays the prediction results.

Compile the sketch and upload it to the board.

To perform the prediction, download the utility: https://github.com/Neuton-tinyML/dataset-uploader
Depending on your OS, use the appropriate file in the bin folder.

You need to specify two parameters for the utility: USB port and dataset file.

Sample:

uploader -d ./food_quality_binary_test_spoiled.csv -s /dev/cu.usbmodem14411101

The utility reads a CSV file and sends the samples line by line to the microcontroller. Then, it outputs the results as a CSV file to the stdout stream. After sending all the samples, the utility requests a report that contains the prediction time and the amount of memory consumed.

Step 5: Check how the embedded model functions

Create two CSV files, containing one line each, with data corresponding to two classes: fresh and spoiled.

Then, send each of them to the microcontroller and see the result of the prediction.

Check how the embedded model functions

In this case, the food is fresh: the predicted class is zero, which means “fresh food”, with a very high probability of 100%. The prediction was made in 3844 microseconds, using 199 kB of Flash memory and 136 B of RAM. You can also see that the green LED is on, signaling a good outcome.

Check how the embedded model functions

Here are the results for another row of data. In this case, the model predicted class one, which indicates “spoiled food”. The prediction was again very fast, 3848 microseconds, with the same 199 kB of Flash memory and 136 B of RAM. This time the red LED is on, indicating that the food is spoiled.

Conclusion:

This experiment shows that in just 5 simple steps, you can create a working smart device that, despite its tiny size, can be of great help in monitoring food quality. I am absolutely sure that such technologies can help us make our planet a cleaner and healthier place.

submitted by /u/literallair

Categories
Misc

2 Powerful 2 Be Stopped: ‘Dying Light 2 Stay Human’ Arrives on GeForce NOW’s Second Anniversary

Great things come in twos. Techland’s Dying Light 2 Stay Human arrives with RTX ON and is streaming from the cloud tomorrow, Feb. 4. Plus, in celebration of the second anniversary of GeForce NOW, February is packed full of membership rewards in Eternal Return, World of Warships and more. There are also 30 games joining…

The post 2 Powerful 2 Be Stopped: ‘Dying Light 2 Stay Human’ Arrives on GeForce NOW’s Second Anniversary appeared first on The Official NVIDIA Blog.

Categories
Misc

New to ML – Advice

I am looking to use ML to provide a single answer to a text input.

The idea is that I will have a dataset with two columns, a description column and a code column.

The user will enter a description via an API, and the result will be the code that is most relevant based on the previous descriptions and codes used.

Can someone point me in the right direction. Zero experience with ML. Programming background.

Thanks

submitted by /u/mattbatchelor14

Categories
Misc

Rain or Shine: Radar Vision Sees Through Clouds to Support Emergency Flood Relief

Flooding usually comes with various bad weather conditions, such as thick clouds, heavy rain and blustering winds. GPU-powered data science systems can now help researchers and emergency flood response teams to see through it all. John Murray, visiting professor in the Geographic Data Science Lab at the University of Liverpool, developed cuSAR, a platform that…

The post Rain or Shine: Radar Vision Sees Through Clouds to Support Emergency Flood Relief appeared first on The Official NVIDIA Blog.

Categories
Misc

ModuleNotFoundError: No module named ‘tflearn’

submitted by /u/Guacamole_is_good

Categories
Misc

How to scrape Google Local Results with Artificial Intelligence?

submitted by /u/Kagermanov
Categories
Misc

NVIDIA Sets Conference Call for Fourth-Quarter Financial Results

CFO Commentary to Be Provided in Writing Ahead of CallSANTA CLARA, Calif., Feb. 02, 2022 (GLOBE NEWSWIRE) — NVIDIA will host a conference call on Wednesday, February 16, at 2:30 p.m. PT (5:30 …

Categories
Misc

Surgical Robot Performs First Solo Operation

Using machine learning and computer vision, a surgical robot successfully performs an anastomosis, demonstrating a notable step toward automated surgery.

In a medical first, a robot has performed laparoscopic surgery without the guidance of a surgeon’s hand. The study, recently published in Science Robotics, outlines the design of an enhanced version of the Smart Tissue Autonomous Robot (STAR) that completed the challenging surgery on the soft tissue of a pig. The accomplishment marks a milestone toward fully automated robotic surgeries.

“Our findings show that we can automate one of the most intricate and delicate tasks in surgery: the reconnection of two ends of an intestine. The STAR performed the procedure in four animals and it produced significantly better results than humans performing the same procedure,” Axel Krieger, senior author and assistant professor of mechanical engineering at Johns Hopkins’ Whiting School of Engineering, said in a press release.

In laparoscopic procedures, surgeons use small incisions and a camera to perform an operation in the abdomen or pelvis. Anastomosis, which involves connecting two tubular structures such as blood vessels or intestines, is often performed laparoscopically. Despite being minimally invasive, the surgery can cause serious complications for the patient if flawed suturing leads to leakage.

Autonomous robotic surgery has the potential to improve medical efficiency, safety, and reliability. However, according to the study, autonomous anastomosis poses challenges in intricate imaging, tissue tracking, and surgical planning. These procedures also often require quick adaptation if an issue arises during surgery.

The current STAR model improves on a 2016 iteration that could suture a pig’s intestine but required human intervention and created a larger incision.

With advanced robotic precision and suturing tools, along with a 3D imaging system and machine learning-based tracking algorithms, the latest STAR can adjust its surgical plan in real time.

“We developed machine learning, computer vision, and advanced control techniques to track the target tissue movement in response to patient breathing, detect the tissue deformations between different suturing steps, and operate the robot under motion constraints,” the researchers write in the study.

A machine-learning algorithm based on convolutional neural networks (CNNs) predicts tissue motion and guides suture plans. The researchers trained the CNNs on 9,294 motion profiles from anastomosis procedures to learn tissue motion driven by breathing patterns and other movement during surgery.

The robot synchronizes with a camera to scan and create suture plans while the tissue is stationary. Using enhanced computer vision and a CNN-based landmark detection algorithm, STAR generates two initial suture plans to connect adjacent tissue. Once an operator selects a plan, the robot applies a suture to the tissue and reimages the area for tissue deformation. 

If a change in tissue position is greater than 3 mm compared with the surgical plan, it notifies the operator to initiate a new suture planning and approval step. This process repeats for every suture.

According to Krieger, an NVIDIA GeForce GTX GPU was used for training and running the CNNs, which comprise four convolutional layers, three dense layers, and two outputs that track tissue motion. Training and testing of the landmark detection algorithm, which uses a cascaded U-Net architecture, was performed on an NVIDIA T4 GPU.

The researchers examined the quality of the anastomosis, which includes needle placement corrections, suture spacing, size of suture bites, completion time, lumen patency, and leak pressure. They found the autonomous STAR outperformed the consistency and accuracy of both expert surgeons and robot-assisted surgeries.

“What makes the STAR special is that it is the first robotic system to plan, adapt, and execute a surgical plan in soft tissue with minimal human intervention,” Krieger said.

Read the study in Science Robotics.