
Model was constructed with shape (None, 100) for input but it was called on an input with incompatible shape (None, 1).

Hello,

I’m working on a project where I aim to give an image of the night sky to my neural network and have it tell me how many satellites are in the frame.

I’ve trained my network to learn what a satellite looks like by giving it 10×10 px cropped images, so my model takes 100 inputs. I’m treating it as a classification problem, so it gives me the probability that a 10×10 px image contains a satellite.

I’ve trained the network with no issue, and I’ve done the testing with no issue, but when it comes to actually giving it a single 10×10 px image to get an answer, I get this error message:

ValueError: Input 0 of layer sequential is incompatible with the layer: expected axis -1 of input shape to have value 100 but received input with shape (None, 1)

This is what my code looks like for the model training and testing:

```python
visible = Input(shape=(100,))
Hidden1 = Dense(32, activation='relu')(visible)
Hidden2 = Dense(64, activation='relu')(Hidden1)
Hidden3 = Dense(128, activation='relu')(Hidden2)
Output = Dense(1, activation='softplus')(Hidden3)
model = Model(inputs=visible, outputs=Output)
model.compile(loss='huber', optimizer='adam', metrics=['accuracy'])

batch_size = 112
epochs = 200
model.fit(x=x_train, y=y_train, batch_size=batch_size, epochs=epochs)

test_loss, test_acc = model.evaluate(x_test, y_test)
# Test Loss: 0.06204257160425186, Test Accuracy: 0.8361972570419312
```

I have written a little piece of code that crops full-size frames (480×640) into a multitude of 10×10 crops, and I want to feed each crop to the network so it tells me which crops contain a satellite.

```python
# image size is 480*640
height = 480
length = 640
X_Try = []
for a in range(0, height, 10):
    for b in range(0, length, 10):
        # PIL's crop box is (left, upper, right, lower), so the
        # column index b is the x coordinate and the row index a is y
        img = image.crop((b, a, b + 10, a + 10))
        img = np.array(img) / 255.0  # normalize the values
        img = img.flatten()          # flatten the 10x10 matrix into a single array
        X_Try.append(img)

print(len(X_Try))     # 3072
print(len(X_Try[0]))  # 100
```

My X_Try is populated with flattened arrays of length 100, and my logic is that I should just give each of those arrays to my model to predict, which should produce a y_pred of shape (3072, 1) containing the probability of each crop containing a satellite.

However, my problem appears here:

```python
y_pred = []
for f in range(0, len(X_Try)):
    y_p = model.predict(X_Try[f])
    y_pred.append(y_p)
```

This is where the error message appears: the model seems to think I’m giving it something of shape (None, 1), while it only accepts input of shape (None, 100). However, I’ve checked beforehand, and all of the arrays in X_Try are indeed of size 100.
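For what it’s worth, my current guess (an assumption, not confirmed) is that `model.predict` treats the first axis as the batch axis, so a flat length-100 array is read as 100 samples with 1 feature each, i.e. (None, 1). A minimal NumPy sketch of the shapes involved:

```python
import numpy as np

# A flattened 10x10 crop is a 1-D array of shape (100,)
crop = np.zeros((10, 10)).flatten()
print(crop.shape)  # (100,)

# If axis 0 is interpreted as the batch axis, this 1-D array looks
# like 100 samples with 1 feature each -> (None, 1), not (None, 100).
# Adding an explicit batch dimension would give the expected shape:
batch = crop.reshape(1, -1)
print(batch.shape)  # (1, 100)
```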

I’ve also tried with a single crop, in case it is the for loop that’s causing issue:

y_pred = model.predict(X_Try[0]) 

but that gives me the same error.

I’ve looked online for similar issues, but most people seem to hit this problem during training or testing, not at prediction time.

Could someone guide me in the right direction?

EDIT: I forgot to add this earlier, but I find that doing

y_pred = model.predict(x_test) 

does work, where x_test has a shape of (4481,100,1) but for some reason doing

y_pred = model.predict(X_Try) 

doesn’t work, where X_Try has a shape of (3072, 100, 1), and gives me the error message:

ValueError: Layer model_2 expects 1 input(s), but it received 3072 input tensors.
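If it helps, my guess (again, an assumption) is that passing a plain Python list makes Keras treat each element as a separate input tensor, whereas a single stacked NumPy array is treated as one batch. A NumPy-only sketch of the difference:

```python
import numpy as np

# A Python list of 3072 separate arrays: Keras would see 3072 inputs
X_Try = [np.zeros(100) for _ in range(3072)]
print(type(X_Try), len(X_Try))  # <class 'list'> 3072

# Stacked into one ndarray, it is a single batch of shape (3072, 100)
X_batch = np.array(X_Try)
print(X_batch.shape)  # (3072, 100)
```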

submitted by /u/flaflou
