
Bigger dataset resulting in NaN loss without exceeding RAM limits

I’m currently trying to build a model that can authenticate a person based on their movement data (acceleration etc.).

I built the dataset myself and stored it in a JSON file for training in Google Colab. Sample Notebook

Older versions of the dataset with fewer entries worked out fine. But with the new version, which has more entries, I suddenly only get a loss of NaN and an accuracy of 0.5, no matter what I do.

RAM seemed like the obvious culprit, but the RAM usage tracker in Colab shows normal levels (2–4 GB of the available 13 GB). I also mocked up dummy datasets of the same or even bigger size, and they worked fine.
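One thing I still want to rule out is bad values hiding in the new entries, since a single NaN or Inf in the inputs is enough to turn the whole loss into NaN. Something like this quick scan (the "features"/"label" keys and filename are just placeholders for my actual schema):

```python
import json
import numpy as np

# Placeholder structure: a list of samples, each with a "features" array and a "label".
with open("dataset.json") as f:
    data = json.load(f)

features = np.array([entry["features"] for entry in data], dtype=np.float32)
labels = np.array([entry["label"] for entry in data], dtype=np.float32)

# Any NaN/Inf, or an extreme value range, would explain the NaN loss.
print("NaN in features:", np.isnan(features).any())
print("Inf in features:", np.isinf(features).any())
print("feature range:", features.min(), features.max())
print("NaN in labels:", np.isnan(labels).any())
```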

Do you guys have any idea what is going on here? My only idea going forward is to move over to TFRecords instead of the JSON file.
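If I do end up switching, the conversion would look roughly like this (a minimal sketch, assuming the data is already parsed into `features`/`labels` arrays as above; the output filename is made up):

```python
import tensorflow as tf

# Sketch: serialize each (window, label) pair into a tf.train.Example.
def to_example(window, label):
    return tf.train.Example(features=tf.train.Features(feature={
        "features": tf.train.Feature(
            float_list=tf.train.FloatList(value=window.ravel().tolist())),
        "label": tf.train.Feature(
            float_list=tf.train.FloatList(value=[float(label)])),
    }))

with tf.io.TFRecordWriter("movement_data.tfrecord") as writer:
    for window, label in zip(features, labels):
        writer.write(to_example(window, label).SerializeToString())
```

Though from what I can tell, TFRecords would mostly help with memory and streaming, not with a NaN loss that comes from the data itself.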

submitted by /u/Cha-Dao_Tech
