
Wrong output from TensorFlow pose classification while test accuracy is 97%

Hi all, I’m running into a problem with a student project.

I’m trying to use the TensorFlow pose estimation library to create a script that recognizes different human gestures (specifically pointing up, pointing left, and pointing right) using MoveNet.

I followed the tutorial [ https://www.tensorflow.org/lite/tutorials/pose_classification ] to train my neural network on 3000+ pictures of gestures sourced from fellow students. The evaluation section of the tutorial shows that the model reaches 97% accuracy on the held-out test split.
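To rule out a problem with the TFLite conversion itself, one check is to re-run the exported .tflite classifier on the same test landmarks from the notebook. Here is a minimal sketch of that check; X_test and y_test are assumed to be the 51-value landmark vectors and integer class labels from the tutorial notebook (if yours are one-hot encoded like in the tutorial, take np.argmax first), and the model file name is just what I called mine:

import numpy as np
import tensorflow as tf

# X_test, y_test come from the tutorial notebook's train/test split.
interpreter = tf.lite.Interpreter(
    model_path='gesture_classifier_using_lighting.tflite')
interpreter.allocate_tensors()
input_index = interpreter.get_input_details()[0]['index']
output_index = interpreter.get_output_details()[0]['index']

correct = 0
for landmarks, label in zip(X_test, y_test):
    # The model expects a (1, 51) float32 tensor: 17 keypoints x (y, x, score).
    interpreter.set_tensor(input_index,
                           landmarks.astype(np.float32)[np.newaxis, :])
    interpreter.invoke()
    scores = interpreter.get_tensor(output_index)[0]
    correct += int(np.argmax(scores) == label)

print(f'TFLite accuracy on the test split: {correct / len(y_test):.2%}')

If this comes out near the notebook's 97%, the converted model is fine and the problem is somewhere downstream.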

The tutorial outputs a .tflite file and links to the [ https://github.com/tensorflow/examples/tree/master/lite/examples/pose_estimation/raspberry_pi ] GitHub repo as an example of how to use that .tflite to classify new input.

However, the classifications are completely off; not one gesture is recognized correctly. Suspicious of this result, I fed some of the original training videos back in as input. These are also classified completely wrongly, which leads me to think there is something wrong with my execution of the code.
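To narrow down whether the problem is in the classifier itself or in the video pipeline, it also helps to call the classifier directly on a single keypoint vector and print the per-label scores. A minimal sketch of that probe; the (1, 51) input layout is my assumption based on the tutorial, and the random vector is just a stand-in for one real MoveNet pose:

import numpy as np
import tensorflow as tf

# Probe the gesture classifier in isolation, without the video pipeline.
interpreter = tf.lite.Interpreter(
    model_path='gesture_classifier_using_lighting.tflite')
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()[0]
output_index = interpreter.get_output_details()[0]['index']
print('expected input shape:', input_details['shape'])  # should be [1, 51]

# Stand-in for one real (17, 3) MoveNet pose flattened to 51 floats.
keypoints = np.random.rand(1, 51).astype(np.float32)
interpreter.set_tensor(input_details['index'], keypoints)
interpreter.invoke()
scores = interpreter.get_tensor(output_index)[0]

with open('gesture_labels.txt') as f:
    labels = [line.strip() for line in f]
for label, score in zip(labels, scores):
    print(f'{label}: {score:.3f}')

While checking this, it is also worth confirming that the row order of gesture_labels.txt matches the class order used during training; a mismatch would make every prediction look wrong even if the model itself is fine.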

Has anyone run into a similar problem with TensorFlow pose classification before? Or does anyone have an idea of what I could be doing wrong? I have followed all the steps in the tutorials multiple times and am getting a bit hopeless…

The code I use to run the pose classification example from the GitHub repo:

import pose_estimation

pose_estimation.run(
    estimation_model='movenet_lightning',
    tracker_type='keypoint',  # apparently not needed when using singlepose
    classification_model='gesture_classifier_using_lighting',
    label_file='gesture_labels.txt',
    camera_id='Kaj7.mp4',  # declared as int; set here to an example video used in training
    width=600,
    height=600)
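Passing a video path where camera_id is declared as an int appears to work because the example hands it straight to cv2.VideoCapture, which also accepts a filename; that is my reading of the example code, so treat it as an assumption. To be safe, it is worth checking that OpenCV can actually open the file:

import cv2

# Quick check that OpenCV can open and read the input video at all.
cap = cv2.VideoCapture('Kaj7.mp4')
print('opened:', cap.isOpened(),
      'frames:', cap.get(cv2.CAP_PROP_FRAME_COUNT))
cap.release()

If opened comes back False, the problem is the input file, not the model.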

submitted by /u/newbroo
