Issue
I read the data from two files:
y = np.genfromtxt('dataY.txt', dtype=np.float32)  # target data
x = np.genfromtxt('dataX.txt', dtype=np.float32)  # input data
I split the data accordingly:
xtrain, xtest, ytrain, ytest = train_test_split(x, y, test_size=0.2)
train_data = tf.data.Dataset.from_tensor_slices((xtrain, ytrain))
valid_data = tf.data.Dataset.from_tensor_slices((xtest, ytest))
The dataY.txt file consists of 1000 rows; each row contains the 30 numbers I want the network to predict after training, given the input X.
The dataX.txt file consists of 1000 rows, one for each row of Y; each row contains 100 numbers.
Question: how do I make the following code work?
model = Sequential()
#what do I need to write in the following line(s)?
model.add(Conv2D(100,(7,7)))
model.add(LeakyReLU())
model.compile( loss='mse', metrics=['mse'])
model.fit(train_data, epochs=10, validation_data=valid_data)
ypred = model.predict(x)
ERROR:
ValueError: Input 0 of layer sequential is incompatible with the layer: : expected min_ndim=4, found ndim=2. Full shape received: (100, 1)
Solution
It looks like you're applying a 2-D convolutional layer to 1-D data: Conv2D expects 4-D input (batch, height, width, channels), but each of your samples is a flat vector of 100 features. Use Dense layers instead:
model = tf.keras.models.Sequential()
model.add(tf.keras.Input(shape=(100,)))
model.add(tf.keras.layers.Dense(128, activation='relu'))
model.add(tf.keras.layers.Dense(30))
[...]
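Putting it together, here is a minimal end-to-end sketch, with random synthetic arrays standing in for the contents of dataX.txt and dataY.txt (the file names, layer sizes, and optimizer choice are illustrative, not from the original post). Note one extra detail: a tf.data.Dataset built with from_tensor_slices yields one sample per element, so it must be batched before being passed to fit, otherwise Keras sees per-sample shapes rather than per-batch shapes.

```python
import numpy as np
import tensorflow as tf

# Synthetic stand-ins for the two text files:
# X has 1000 rows of 100 numbers, Y has 1000 rows of 30 numbers.
rng = np.random.default_rng(0)
x = rng.random((1000, 100), dtype=np.float32)
y = rng.random((1000, 30), dtype=np.float32)

# Simple 80/20 split (train_test_split from scikit-learn works equally well).
xtrain, xtest = x[:800], x[800:]
ytrain, ytest = y[:800], y[800:]

# Batch the datasets: fit() treats each dataset element as one batch.
train_data = tf.data.Dataset.from_tensor_slices((xtrain, ytrain)).batch(32)
valid_data = tf.data.Dataset.from_tensor_slices((xtest, ytest)).batch(32)

# Dense network mapping 100 input features to 30 regression targets.
model = tf.keras.models.Sequential([
    tf.keras.Input(shape=(100,)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(30),
])
model.compile(optimizer='adam', loss='mse', metrics=['mse'])
model.fit(train_data, epochs=1, validation_data=valid_data, verbose=0)

# Predictions come back with one 30-number row per input row.
ypred = model.predict(x, verbose=0)
print(ypred.shape)
```

The final Dense layer has no activation because this is a regression problem with mean-squared-error loss; the output shape matches the 30 targets per row.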
I'd also recommend printing the shapes of x, y, xtrain, xtest, etc. to make sure you're feeding in the correct dimensions.
(Source: https://keras.io/api/layers/core_layers/dense/)
Answered By - Skrt