Issue
I have two images, img1 and img2, both with shape (20, 20). I expand_dims each to (1, 20, 20), with 1 being the batch size, and feed them to the network, but I get the following error:
ValueError: Negative dimension size caused by subtracting 3 from 1 for '{{node conv2d/Conv2D}} = Conv2D[T=DT_FLOAT, data_format="NHWC", dilations=[1, 1, 1, 1], explicit_paddings=[], padding="VALID", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true](Placeholder, conv2d/Conv2D/ReadVariableOp)' with input shapes: [?,1,20,20], [3,3,20,32].
import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Conv2D

def mean_squared_error(y_true, y_pred):
    return tf.keras.metrics.mean_squared_error(y_true, y_pred)

model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3),
                 activation='relu',
                 input_shape=(1, 20, 20)))
model.add(Conv2D(1, kernel_size=(3, 3),
                 activation='relu'))
model.compile(optimizer='adam', loss=mean_squared_error,
              metrics=[mean_squared_error, 'accuracy'])

# Train
model.fit(img1, img2)
Solution
Keras defaults to channels_last (NHWC), so input_shape=(1, 20, 20) is read as height 1, width 20, and 20 channels; a 3x3 kernel cannot slide over a height of 1, which is exactly the "subtracting 3 from 1" in the error. On top of that, convolution layers with 'valid' padding shrink the spatial dimensions, but if I understand correctly you want to apply MSE between the model's output and img2, so the output has to keep the (20, 20) shape.
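To see the shape interpretation in isolation, here is a minimal sketch (not from the original answer, using only standard TensorFlow calls):

import tensorflow as tf

# channels_last is the default, so each (1, 20, 20) sample is read as
# height=1, width=20, channels=20 -- a 3x3 kernel cannot fit into height 1.
conv = tf.keras.layers.Conv2D(32, kernel_size=(3, 3), activation='relu')
x = tf.random.normal((1, 1, 20, 20))  # batch of one (1, 20, 20) sample
# conv(x)  # raises the negative-dimension error shown above

So try something like this instead: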
import tensorflow as tf

model = tf.keras.Sequential()
model.add(tf.keras.layers.Conv2D(32, kernel_size=(2, 2),
                                 activation='relu',
                                 input_shape=(20, 20, 1)))
# Conv2DTranspose restores the (20, 20) spatial shape that Conv2D reduced,
# so the output matches the target image.
model.add(tf.keras.layers.Conv2DTranspose(1, kernel_size=(2, 2),
                                          activation='relu'))
model.compile(optimizer='adam', loss='mse', metrics=['mae'])

# Train (note the trailing channel dimension)
img1 = tf.random.normal((1, 20, 20, 1))
img2 = tf.random.normal((1, 20, 20, 1))
model.fit(img1, img2)
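If img1 and img2 are your original (20, 20) arrays, the reshaping could look like this (a small sketch reusing your variable names and assuming channels_last):

img1 = tf.expand_dims(tf.expand_dims(img1, 0), -1)  # (20, 20) -> (1, 20, 20, 1)
img2 = tf.expand_dims(tf.expand_dims(img2, 0), -1)
model.fit(img1, img2)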
Answered By - AloneTogether