
Map image to vector data using deep learning

I’m working on a project where, basically, I would like to take a relatively small binary image and train a DL model to produce a vector of 360 outputs, one for each angle (i.e. the amplitude response at every angle produced by the image in question). I’m trying to figure out how best to build this from a model perspective. I think some convolutional layers might make sense, but I’m not sure whether I should just add a dense layer with 360 outputs, one per angle, or do something else entirely (perhaps a convolutional AE or an RNN/LSTM, since the output looks like sequence data?). I’m also not sure how the fact that angles are involved might change anything. Any ideas are appreciated!
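For concreteness, here is a minimal Keras sketch of the "convolutional feature extractor plus dense 360-output head" idea described above. The 64×64 single-channel input size, filter counts, and MSE regression loss are placeholder assumptions, not anything specified in the post.

```python
# Minimal sketch: small CNN mapping a binary image to 360 regression outputs,
# one per angle. All sizes below are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_model(input_shape=(64, 64, 1), n_angles=360):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        # Convolutional feature extractor
        layers.Conv2D(16, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.GlobalAveragePooling2D(),
        layers.Dense(256, activation="relu"),
        # One linear output per angle (amplitude response treated as regression)
        layers.Dense(n_angles),
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

model = build_model()
model.summary()
```

A dense head like this is the simplest baseline; a decoder-style (e.g. 1-D transposed-convolution) head or a sequence model over the angle axis could be tried afterwards if the baseline underfits the angular structure.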

submitted by /u/engndesign74
