Split input data into separate LSTM layers

I have 10k+ documents, each belonging to class 0 or class 1. I want to train a model by feeding each sentence in a document through a TextVectorization layer ➡️ embedding layer ➡️ LSTM layer, then concatenating the LSTM outputs of all sentences and feeding them to a dense layer with output size 128 ➡️ a dense layer with output size 1 and sigmoid activation (softmax on a single unit always outputs 1, so sigmoid is the right choice for binary classification).
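As a starting point, the per-sentence pipeline described above might be sketched as follows, assuming TensorFlow/Keras; the vocabulary size, embedding width, and LSTM units are illustrative assumptions, and `adapt` would be fit on the real corpus:

```python
# Hypothetical sketch of the per-sentence pipeline (TextVectorization ->
# Embedding -> LSTM -> Dense). All sizes are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers

MAX_WORDS = 5        # each sentence has 1-5 words
VOCAB_SIZE = 10_000  # assumed vocabulary size

vectorize = layers.TextVectorization(
    max_tokens=VOCAB_SIZE, output_sequence_length=MAX_WORDS)
vectorize.adapt(["example sentence", "another short sentence"])  # fit on real data

inputs = tf.keras.Input(shape=(1,), dtype=tf.string)
x = vectorize(inputs)                                # string -> int token ids
x = layers.Embedding(VOCAB_SIZE, 64, mask_zero=True)(x)
x = layers.LSTM(64)(x)                               # one vector per sentence
x = layers.Dense(128, activation="relu")(x)
outputs = layers.Dense(1, activation="sigmoid")(x)   # binary class 0/1
sentence_model = tf.keras.Model(inputs, outputs)
```

`mask_zero=True` lets the LSTM ignore padding positions in sentences shorter than `MAX_WORDS`.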

The number of sentences in each document is somewhere between 0 and 5,000, and each sentence has between 1 and 5 words.

I have managed to create a model that predicts which class a single sentence most likely belongs to. But I basically want to extend it to take all sentences from a document and classify the entire document. The sentences are not related to each other.
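One way the extension to whole documents might look, sketched under the assumption of TensorFlow/Keras: because the sentence count varies widely (0–5,000) and the sentences are unrelated, this version pools over per-sentence LSTM encodings with an order-independent max pooling instead of concatenating a fixed number of them. The padding length, layer sizes, and all names are illustrative assumptions, not the asker's code:

```python
# Hedged sketch: shared sentence encoder applied to every sentence of a
# document via TimeDistributed, then pooled to a single document vector.
# All sizes are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers

MAX_SENTS = 50       # pad/truncate documents to this many sentences (illustrative)
MAX_WORDS = 5        # each sentence has 1-5 words
VOCAB_SIZE = 10_000  # assumed vocabulary size

# Shared sentence encoder: token ids -> one vector per sentence.
sent_in = tf.keras.Input(shape=(MAX_WORDS,), dtype=tf.int32)
e = layers.Embedding(VOCAB_SIZE, 64, mask_zero=True)(sent_in)
sent_vec = layers.LSTM(64)(e)
sentence_encoder = tf.keras.Model(sent_in, sent_vec)

# Document model: encode every sentence, pool across sentences, classify.
doc_in = tf.keras.Input(shape=(MAX_SENTS, MAX_WORDS), dtype=tf.int32)
x = layers.TimeDistributed(sentence_encoder)(doc_in)  # (batch, sents, 64)
x = layers.GlobalMaxPooling1D()(x)                    # order-independent pooling
x = layers.Dense(128, activation="relu")(x)
doc_out = layers.Dense(1, activation="sigmoid")(x)    # document class 0/1
doc_model = tf.keras.Model(doc_in, doc_out)
```

Pooling (max or average) keeps the model independent of sentence order and count, which matches the statement that sentences are unrelated; concatenation would instead fix the number of sentences at build time.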

How do I go about this?

submitted by /u/jacobkodar
