Categories
Misc

[Project] Generating questions from AskReddit Dataset

Hello, I wanted to share a project I’ve been working on using a dataset of questions from r/AskReddit. I fine-tuned the pretrained GPT-2 model to generate new questions. Some of the generated questions make no sense, and some are pretty funny. Check it out at the link below.

https://www.kaggle.com/camerinfigueroa/generate-reddit-posts-with-pretrained-gpt-2-model
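
For anyone who wants to try something similar, here is a minimal sketch of sampling questions from a pretrained GPT-2. It assumes the Hugging Face transformers package and the stock "gpt2" weights; the linked notebook may use a different GPT-2 wrapper and its own fine-tuned checkpoint, so treat the names and settings below as assumptions.

from transformers import GPT2Tokenizer, TFGPT2LMHeadModel

# Assumption: stock pretrained GPT-2; swap in a checkpoint fine-tuned on AskReddit titles if you have one.
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = TFGPT2LMHeadModel.from_pretrained("gpt2")

inputs = tokenizer("What is", return_tensors="tf")
outputs = model.generate(
    inputs["input_ids"],
    max_length=30,
    do_sample=True,        # sampling is what produces the varied (and funny) questions
    top_k=50,
    top_p=0.95,
    num_return_sequences=3,
    pad_token_id=tokenizer.eos_token_id,
)
for seq in outputs:
    print(tokenizer.decode(seq, skip_special_tokens=True))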

submitted by /u/YodaByteRAM
[visit reddit] [comments]

Categories
Misc

Need some advice on GPUs?

Brand new to machine learning. I followed a tutorial to build a chatbot that uses NLP to determine which response to send; it uses TensorFlow/Keras.

The training part seems to run fine, but when I actually run the chatbot it returns errors regarding nvcuda and cudart.

I had no idea that GPUs were relevant in machine learning. Is this something I can get around, or do I need to invest in a new laptop?

My current laptop is an MSI Modern 14 B10MW with a 10th Gen Core i3 and integrated Intel graphics.

Below is the code for the chatbot. Not sure if it’s relevant, but it seems like if the training.py file runs, then this should be fine?

Pretty confused, so any help would be great.

import json
import pickle

import numpy as np
import nltk
from nltk.stem import WordNetLemmatizer
from tensorflow.keras.models import load_model

lemmatizer = WordNetLemmatizer()  # must be instantiated, not just referenced
intents = json.loads(open('intents.json').read())
words = pickle.load(open('words.pkl', 'rb'))
classes = pickle.load(open('classes.pkl', 'rb'))
model = load_model('chatbot_model.model')

def clean_up_sentence(sentence):
    # Tokenize and lemmatize the input sentence
    sentence_words = nltk.word_tokenize(sentence)
    sentence_words = [lemmatizer.lemmatize(word) for word in sentence_words]
    return sentence_words

def bag_of_words(sentence):
    # Build a binary bag-of-words vector over the known vocabulary
    sentence_words = clean_up_sentence(sentence)
    bag = [0] * len(words)
    for w in sentence_words:
        for i, word in enumerate(words):
            if word == w:
                bag[i] = 1
    return np.array(bag)

def predict_class(sentence):
    bow = bag_of_words(sentence)
    res = model.predict(np.array([bow]))[0]  # the model expects a batch dimension
    error_threshold = 0.25
    results = [[i, r] for i, r in enumerate(res) if r > error_threshold]
    results.sort(key=lambda x: x[1], reverse=True)
    return_list = []
    for r in results:
        return_list.append({'intent': classes[r[0]], 'probability': str(r[1])})
    return return_list
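
On the GPU question: startup messages about nvcuda and cudart are usually TensorFlow reporting that it could not find a CUDA GPU, after which it falls back to the CPU. A quick hedged check of what TensorFlow can see, using standard TF 2.x calls (this is not code from the tutorial):

import os
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"   # optional: hide GPUs entirely before importing TF

import tensorflow as tf

# An empty GPU list just means TensorFlow will run on the CPU, which is fine for a small chatbot.
print("GPUs visible:", tf.config.list_physical_devices("GPU"))
print("CPUs visible:", tf.config.list_physical_devices("CPU"))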

submitted by /u/Zenemm
[visit reddit] [comments]

Categories
Misc

(Help) TensorFlow mentions "HadoopFileSystem load error: libhdfs.so: cannot open shared object file" when verifying installation. Is this normal?

Hello, this is my first time using TensorFlow. I am installing it via pip (running Python 3.7) using the official guide. Everything seems to have installed correctly, so I run the command to verify the installation. It returns:

2022-02-04 14:13:27.807610: E tensorflow/core/platform/hadoop/hadoop_file_system.cc:132] HadoopFileSystem load error: libhdfs.so: cannot open shared object file: No such file or directory
Tensor("Sum:0", shape=(), dtype=float32)

Is this normal?
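
For context, the libhdfs.so line is generally a harmless load-time message about optional Hadoop filesystem support rather than a failed install, and the Tensor(...) line is the expected result. The snippet below is a sketch of the kind of verification code that produces that output, assuming a TensorFlow 1.x install; the exact command in the official guide may differ.

import tensorflow as tf

# Assumption: TensorFlow 1.x graph mode. Printing the op shows the symbolic tensor,
# e.g. Tensor("Sum:0", shape=(), dtype=float32), without running a session.
x = tf.reduce_sum(tf.random_normal([1000, 1000]))
print(x)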

submitted by /u/RedditorOfRohan
[visit reddit] [comments]

Categories
Misc

Real-time Food Quality Prediction. Detect spoiled products using the Tiny Machine Learning approach.

Things used in this project

Hardware components:
Arduino Mega 2560
Software apps and online services:
Neuton Tiny ML

Story

With each passing year, the issue of food waste becomes more acute for the environment. A recent Food Waste Index Report by the United Nations Environment Program (UNEP) showed that, on average, consumers waste almost a billion tons of food per year (or 17 percent of all food purchased): https://www.unep.org/resources/report/unep-food-waste-index-report-2021

The fact that people produce more food than they consume has significant negative consequences. For example, an estimated 8-10% of global greenhouse gas emissions come from unused food. Conversely, reducing food waste would help cut greenhouse gas emissions and global pollution, as well as increase food availability for countries that suffer from hunger.

This situation suggests that in the near future, we will need to focus not on scaling food production but on timely quality control, so that fresh products can be sold and consumed. To fix the current situation, humanity will need smarter, user-friendly technologies that help monitor product quality in real time.

In this piece, I’ll explain an easy way to check food quality that can be implemented in an average store, and even in your own fridge. And the best part – it’s not rocket science at all!

Introduction:

Recently, I conducted a simple experiment, and I would like to share it with you, as I strongly believe that such practical solutions can make a great difference in solving global problems. Baby steps on the way to the greater good.

My idea is to use the Tiny Machine Learning approach to forecast whether food is fresh or spoiled based on the data from gas sensors. I conducted my experiment with the use of 7 gas sensors.

In my tutorial, you will learn how you can automatically create a super tiny machine learning model, embed it into a sensor’s microcontroller, and check food quality with it.

So let’s get it started!

Procedure:

Step 1: Create a TinyML model with Neuton

Create a new solution “Food Quality” on the Neuton platform, and upload the training dataset containing signals for food quality, labeled for two classes (fresh and spoiled). My dataset contained 784 rows.

Then, select the target (Label) and target metric (Accuracy), also enabling the Tiny Machine Learning mode. Additionally, select the 8-bit depth for calculations without float data types and click “Start Training”.

The model will be ready in several minutes. Next, download the model.

Step 2: Create the microcontroller’s firmware

Download an example: https://github.com/Neuton-tinyML/arduino-example

Project Description

The project contains:

  • code for receiving a dataset via the USB-UART serial port,
  • code for running predictions,
  • code for indicating the results,
  • code for measuring prediction time.

The main sketch file “arduino-tiny-ml-neuton.ino” has functions for processing data packets.

The main processing happens in the user_app.c file:

static NeuralNet neuralNet = { 0 };

extern const unsigned char model_bin[];
extern const unsigned int model_bin_len;

// Initialize the calculator with the model embedded in model.c
uint8_t app_init()
{
    return (ERR_NO_ERROR != CalculatorInit(&neuralNet, NULL));
}

inline Err CalculatorOnInit(NeuralNet* neuralNet)
{
    memUsage += sizeof(*neuralNet);
    app_reset();
    timer_init();
    return CalculatorLoadFromMemory(neuralNet, model_bin, model_bin_len, 0);
}

Here, we create a NeuralNet object and call the function that loads the model located in the file model.c:

CalculatorLoadFromMemory(neuralNet, model_bin, model_bin_len, 0);

The model is now ready to make predictions. For this, you need to call the CalculatorRunInference function, passing it a float array of size neuralNet.inputsDim.

The last value is BIAS and should be 1.

inline float* app_run_inference(float* sample, uint32_t size_in, uint32_t *size_out)
{
    if (!sample || !size_out)
        return NULL;

    if (size_in / sizeof(float) != app_inputs_size())
        return NULL;

    *size_out = sizeof(float) * neuralNet.outputsDim;

    // Swap byte order if the host sent the floats big-endian
    if (app.reverseByteOrder)
        Reverse4BytesValuesBuffer(sample, app_inputs_size());

    return CalculatorRunInference(&neuralNet, sample);
}

When performing a prediction, three callback functions are called: CalculatorOnInferenceStart before and CalculatorOnInferenceEnd after the prediction, as well as CalculatorOnInferenceResult with the prediction result.

In the example, I used these functions to measure the prediction time.

The result callback receives an array of class probabilities of size neuralNet.outputsDim. Here, we find the class with the highest probability and, if that probability is > 0.5, turn on the corresponding LED (green for class 0 and red for class 1).

inline void CalculatorOnInferenceResult(NeuralNet* neuralNet, float* result)
{
    if (neuralNet->taskType == TASK_BINARY_CLASSIFICATION && neuralNet->outputsDim >= 2)
    {
        // Pick the class with the higher probability
        float* value = result[0] >= result[1] ? &result[0] : &result[1];

        if (*value > 0.5)
        {
            if (value == &result[0])
            {
                // Class 0 ("fresh"): green LED on
                led_green(1);
                led_red(0);
            }
            else
            {
                // Class 1 ("spoiled"): red LED on
                led_green(0);
                led_red(1);
            }
        }
        else
        {
            // No confident prediction: both LEDs off
            led_green(0);
            led_red(0);
        }
    }
}

Step 3: Copy the downloaded model to the sketch

Copy the model file model.c from the downloaded model archive into the MCU firmware.

Step 4: Compile the sketch and upload it to the board

Now everything is ready for sketch compilation. I used a utility to send data from the computer to the MCU and display the prediction results (it emulates sensor data and sends it to the MCU).

To perform the prediction, download the utility: https://github.com/Neuton-tinyML/dataset-uploader
Depending on your OS, use the appropriate file in the bin folder.

You need to specify two parameters for the utility: USB port and dataset file.

Sample:

uploader -d ./food_quality_binary_test_spoiled.csv -s /dev/cu.usbmodem14411101

The utility reads a CSV file and sends the samples line by line to the microcontroller. Then, it outputs the results as a CSV file to the stdout stream. After sending all the samples, the utility requests a report that contains the prediction time and the amount of memory consumed.

Step 5: Check how the embedded model functions

Create two CSV files, containing one line each, with data corresponding to two classes: fresh and spoiled.

Then, send each of them to the microcontroller and see the result of the prediction.
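
A hedged sketch of how such one-line test files could be cut out of the training dataset with pandas. The dataset file name, the "Label" column name, and the class encoding (0 = fresh, 1 = spoiled) are assumptions based on the description above, not taken from the project files.

import pandas as pd

df = pd.read_csv("food_quality.csv")                 # assumed name of the full labeled dataset

# Take one row of each class and write it to its own single-line CSV for the uploader.
df[df["Label"] == 0].head(1).to_csv("food_quality_binary_test_fresh.csv", index=False)
df[df["Label"] == 1].head(1).to_csv("food_quality_binary_test_spoiled.csv", index=False)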

In this case, the food is fresh, as the predicted class is zero, which means “fresh food”. The probability of class zero is very high – 100%. The prediction was made in 3844 microseconds, with 199 kB of Flash memory usage and 136 B of RAM usage. You can also see that the green LED is on, which signifies a good outcome.

Here are the results for another row of data. In this case, the model predicted that the food is spoiled, as the predicted class is one, which indicates “spoiled food”. The prediction was also made very quickly, in 3848 microseconds, with the same 199 kB of Flash memory usage and 136 B of RAM usage. This time the red LED is on, indicating that the food is spoiled.

Conclusion:

This experiment shows that in just five simple steps, you can create a working smart device that, despite its tiny size, can be of great help in monitoring food quality. I am convinced that such technologies can help us make our planet a cleaner and healthier place.

submitted by /u/literallair
[visit reddit] [comments]

Categories
Misc

2 Powerful 2 Be Stopped: ‘Dying Light 2 Stay Human’ Arrives on GeForce NOW’s Second Anniversary

Great things come in twos. Techland’s Dying Light 2 Stay Human arrives with RTX ON and is streaming from the cloud tomorrow, Feb. 4. Plus, in celebration of the second anniversary of GeForce NOW, February is packed full of membership rewards in Eternal Return, World of Warships and more. There are also 30 games joining…

The post 2 Powerful 2 Be Stopped: ‘Dying Light 2 Stay Human’ Arrives on GeForce NOW’s Second Anniversary appeared first on The Official NVIDIA Blog.

Categories
Misc

New to ML – Advice

I am looking to use ML to provide a single answer to a text input.

The idea is that I will have a dataset with two columns, a description column and a code column.

The user will enter a description via an API, and the result will be the code that is most relevant based on the previous descriptions and codes used.

Can someone point me in the right direction? I have zero experience with ML, but a programming background.
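
Not an authoritative answer, but the usual starting point is to treat this as text classification: each historical description is a training example and its code is the class label. Below is a minimal hedged sketch in Keras; the file name, the "description" and "code" column names, the model size, and the preprocessing choices are all assumptions.

import pandas as pd
import tensorflow as tf
from tensorflow.keras import layers

df = pd.read_csv("history.csv")                       # assumed columns: "description", "code"
codes = sorted(df["code"].unique())
code_to_id = {c: i for i, c in enumerate(codes)}
labels = df["code"].map(code_to_id).values

# Turn raw description strings into integer token sequences inside the model.
vectorizer = layers.TextVectorization(max_tokens=20000, output_sequence_length=50)
vectorizer.adapt(df["description"].values)

model = tf.keras.Sequential([
    vectorizer,
    layers.Embedding(20000, 64),
    layers.GlobalAveragePooling1D(),
    layers.Dense(64, activation="relu"),
    layers.Dense(len(codes), activation="softmax"),   # one output per known code
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(df["description"].values, labels, epochs=10, validation_split=0.2)

# At serving time, return the most probable code for a new description.
probs = model.predict(tf.constant(["user supplied description"]))
print(codes[int(probs[0].argmax())])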

Thanks

submitted by /u/mattbatchelor14
[visit reddit] [comments]

Categories
Misc

Rain or Shine: Radar Vision Sees Through Clouds to Support Emergency Flood Relief

Flooding usually comes with various bad weather conditions, such as thick clouds, heavy rain and blustering winds. GPU-powered data science systems can now help researchers and emergency flood response teams to see through it all. John Murray, visiting professor in the Geographic Data Science Lab at the University of Liverpool, developed cuSAR, a platform that…

The post Rain or Shine: Radar Vision Sees Through Clouds to Support Emergency Flood Relief appeared first on The Official NVIDIA Blog.

Categories
Misc

ModuleNotFoundError: No module named ‘tflearn’
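
This error generally just means the tflearn package is not installed in the active Python environment; it is distributed separately from TensorFlow (installable with python -m pip install tflearn). A hedged sketch of a quick check:

import importlib.util

# If the spec is missing, the package simply isn't installed in this environment.
if importlib.util.find_spec("tflearn") is None:
    print("tflearn is not installed in this environment")
else:
    import tflearn
    print("tflearn imported successfully")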

submitted by /u/Guacamole_is_good
[visit reddit] [comments]