Brand new to machine learning. I followed a tutorial to build a chatbot that uses NLP to decide which response to send; it's built with TensorFlow/Keras.
The training part seems to run fine, but when I actually run the chatbot it returns errors about nvcuda and cudart.
I had no idea GPUs were relevant in machine learning. Is this something I can work around, or do I need to invest in a new laptop?
My current machine is an MSI Modern 14 B10MW with a 10th-gen Core i3 and integrated Intel graphics.
Below is the code for the chatbot. Not sure if it's relevant, but if training.py runs fine then this should too, right?
Pretty confused, so any help would be great.
```python
import json
import pickle

import numpy as np
import nltk
from nltk.stem import WordNetLemmatizer
from tensorflow.keras.models import load_model

# WordNetLemmatizer must be instantiated (the original assigned the class itself)
lemmatizer = WordNetLemmatizer()

intents = json.loads(open('intents.json').read())
words = pickle.load(open('words.pkl', 'rb'))
classes = pickle.load(open('classes.pkl', 'rb'))
model = load_model('chatbot_model.model')


def clean_up_sentence(sentence):
    # Tokenize the sentence and reduce each word to its lemma
    sentence_words = nltk.word_tokenize(sentence)
    sentence_words = [lemmatizer.lemmatize(word) for word in sentence_words]
    return sentence_words


def bag_of_words(sentence):
    # Encode the sentence as a 0/1 vector over the training vocabulary
    sentence_words = clean_up_sentence(sentence)
    bag = [0] * len(words)
    for w in sentence_words:
        for i, word in enumerate(words):
            if word == w:
                bag[i] = 1
    return np.array(bag)


def predict_class(sentence):
    bow = bag_of_words(sentence)
    # model.predict expects a batch, so wrap the single bag in a list
    res = model.predict(np.array([bow]))[0]
    error_threshold = 0.25
    results = [[i, r] for i, r in enumerate(res) if r > error_threshold]
    results.sort(key=lambda x: x[1], reverse=True)
    return_list = []
    for r in results:
        return_list.append({'intent': classes[r[0]], 'probability': str(r[1])})
    return return_list
```
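For anyone trying to follow what `bag_of_words` does, here's a minimal standalone sketch of the same encoding idea. The toy vocabulary and the plain `split()` tokenizer are assumptions for illustration; the real version uses the pickled `words` list from training.py and nltk's tokenizer/lemmatizer.

```python
import numpy as np

# Toy vocabulary standing in for the pickled `words` list (assumption)
vocab = ['hello', 'how', 'are', 'you', 'bye']


def bag_of_words(sentence, vocab):
    # Mark 1 for every vocabulary word that appears in the sentence
    tokens = sentence.lower().split()
    bag = [0] * len(vocab)
    for i, word in enumerate(vocab):
        if word in tokens:
            bag[i] = 1
    return np.array(bag)


print(bag_of_words('hello how are you', vocab))  # -> [1 1 1 1 0]
```

The model never sees words, only these fixed-length 0/1 vectors, which is why the vocabulary saved at training time has to match the one used at inference time.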
submitted by /u/Zenemm