Issue
I have written the following Keras model:
protein_input = Input(shape=(train.shape[1:]))
x = layers.Conv1D(filters=32, padding="valid", activation="relu", strides=1, kernel_size=1)(protein_input)
x = layers.Conv1D(filters=32, padding="valid", activation="relu", strides=1, kernel_size=1)(x)
x = layers.Conv1D(filters=32, padding="valid", activation="relu", strides=1, kernel_size=1)(x)
x = layers.GlobalMaxPooling1D()(x)
x = layers.Dense(1024, activation="relu")(x)
x = layers.Dropout(0.1)(x)
x = layers.Dense(1024, activation='relu')(x)
x = layers.Dropout(0.1)(x)
predictions = layers.Dense(1, kernel_initializer='normal')(x)
model = tf.keras.Model(inputs=[protein_input], outputs=[predictions])
plot_model(model,"model.png", show_dtype=True, show_shapes=True, show_layer_names=True)
model.summary()
Which produces this table:
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
Total params: 1,086,593
Trainable params: 1,086,593
Non-trainable params: 0
_________________________________________________________________
plot_model() also does not produce a diagram. The model compiles, and I can run model.fit:
model.compile(loss=tf.keras.losses.MeanSquaredError(), optimizer="adam", metrics=['mean_absolute_error'])
epochs = 100
history = model.fit(x=[train], y=log_training_Kd_labels,
                    validation_data=([val], log_validation_Kd_labels),
                    epochs=epochs,
                    batch_size=100)
But I am not sure if any learning is occurring. Can anyone advise why this is happening?
Solution
You are probably mixing the keras and tensorflow libraries. Since TensorFlow ships its own implementation of Keras, it is a common mistake for developers to import both keras and tensorflow and then use them interchangeably, which leads to weird behavior like this: the standalone keras Model does not properly track layers built with tensorflow.keras, so model.summary() lists no layers and plot_model() has nothing to draw.
Use either tensorflow.keras or keras consistently throughout your entire code, never both.
For example, if I write code like this (mixing both libraries):
import keras                                # standalone Keras package
import tensorflow as tf
from tensorflow.keras.layers import Dense   # layers from tensorflow.keras
from tensorflow.keras import Input
input = Input(shape=(20,))
x = Dense(30, name='dense1')(input)
x = Dense(20, name='dense2')(x)
output = Dense(1)(x)
model = keras.models.Model(inputs=input, outputs=output)   # Model from standalone keras
model.compile(loss='mse', optimizer='adam')
model.summary()
The output will be:
Model: "model"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
Total params: 1,271
Trainable params: 1,271
Non-trainable params: 0
_________________________________________________________________
But if I modify the imports to use only tensorflow.keras, without keras, like this:
import tensorflow as tf
from tensorflow.keras.layers import Dense
from tensorflow.keras import Input
from tensorflow.keras.models import Model
input = Input(shape=(20,))
x = Dense(30, name='dense1')(input)
x = Dense(20, name='dense2')(x)
output = Dense(1)(x)
model = Model(inputs=input, outputs=output)
model.compile(loss='mse', optimizer='adam')
model.summary()
I will get the full output, with every layer listed:
Model: "model"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_3 (InputLayer) [(None, 20)] 0
_________________________________________________________________
dense1 (Dense) (None, 30) 630
_________________________________________________________________
dense2 (Dense) (None, 20) 620
_________________________________________________________________
dense_2 (Dense) (None, 1) 21
=================================================================
Total params: 1,271
Trainable params: 1,271
Non-trainable params: 0
_________________________________________________________________
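Applied to the model in the question, a consistent tensorflow.keras version looks roughly like the sketch below. It is only a sketch: the question's train array and labels are replaced by a random dummy array so the snippet runs on its own, and plot_model additionally needs pydot and graphviz installed to write the PNG.
import numpy as np
import tensorflow as tf
from tensorflow.keras import Input, Model, layers
from tensorflow.keras.utils import plot_model

# Dummy stand-in for the question's `train` array: (samples, timesteps, features)
train = np.random.rand(200, 50, 4).astype("float32")

protein_input = Input(shape=train.shape[1:])
x = layers.Conv1D(filters=32, kernel_size=1, strides=1, padding="valid", activation="relu")(protein_input)
x = layers.Conv1D(filters=32, kernel_size=1, strides=1, padding="valid", activation="relu")(x)
x = layers.Conv1D(filters=32, kernel_size=1, strides=1, padding="valid", activation="relu")(x)
x = layers.GlobalMaxPooling1D()(x)
x = layers.Dense(1024, activation="relu")(x)
x = layers.Dropout(0.1)(x)
x = layers.Dense(1024, activation="relu")(x)
x = layers.Dropout(0.1)(x)
predictions = layers.Dense(1, kernel_initializer="normal")(x)

model = Model(inputs=protein_input, outputs=predictions)
model.compile(loss=tf.keras.losses.MeanSquaredError(),
              optimizer="adam",
              metrics=["mean_absolute_error"])
model.summary()                  # now lists every Conv1D, Dense and Dropout layer
plot_model(model, "model.png", show_shapes=True, show_dtype=True,
           show_layer_names=True)  # now renders the diagram (needs pydot + graphviz)
With every symbol coming from tensorflow.keras, the summary and the diagram show the full layer graph, and you can watch history.history['loss'] and history.history['val_loss'] after model.fit to confirm that learning is actually occurring.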
Answered By - Kaveh