Issue
I have a custom model that was initially trained on top of VGG16 using transfer learning, but on images with a smaller input size. Now I am working with larger images, so I'd like to take that first model and reuse what it has learned, but on a new dataset.
More specifically:
Layer (type) Output Shape Param #
=================================================================
block1_conv1 (Conv2D) (None, 128, 160, 64) 1792
block1_conv2 (Conv2D) (None, 128, 160, 64) 36928
block1_pool (MaxPooling2D) (None, 64, 80, 64) 0
block2_conv1 (Conv2D) (None, 64, 80, 128) 73856
block2_conv2 (Conv2D) (None, 64, 80, 128) 147584
block2_pool (MaxPooling2D) (None, 32, 40, 128) 0
block3_conv1 (Conv2D) (None, 32, 40, 256) 295168
block3_conv2 (Conv2D) (None, 32, 40, 256) 590080
block3_conv3 (Conv2D) (None, 32, 40, 256) 590080
block3_pool (MaxPooling2D) (None, 16, 20, 256) 0
block4_conv1 (Conv2D) (None, 16, 20, 512) 1180160
block4_conv2 (Conv2D) (None, 16, 20, 512) 2359808
block4_conv3 (Conv2D) (None, 16, 20, 512) 2359808
block4_pool (MaxPooling2D) (None, 8, 10, 512) 0
block5_conv1 (Conv2D) (None, 8, 10, 512) 2359808
block5_conv2 (Conv2D) (None, 8, 10, 512) 2359808
block5_conv3 (Conv2D) (None, 8, 10, 512) 2359808
block5_pool (MaxPooling2D) (None, 4, 5, 512) 0
flatten (Flatten) (None, 10240) 0
dense (Dense) (None, 16) 163856
output (Dense) (None, 1) 17
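(As a side note on the summary above: a Conv2D layer's parameter count depends only on the kernel size and the input/output channel counts, not on the spatial dimensions, which is why the convolutional blocks can be reused at a different input size. Only the flatten/dense head is tied to the input resolution. A quick check against the table, using the standard parameter formulas:

```python
# Conv2D params = (kernel_h * kernel_w * in_channels + 1) * out_channels
# (the +1 is the bias per output filter)
def conv2d_params(k, c_in, c_out):
    return (k * k * c_in + 1) * c_out

print(conv2d_params(3, 3, 64))     # block1_conv1: 1792
print(conv2d_params(3, 64, 64))    # block1_conv2: 36928
print(conv2d_params(3, 512, 512))  # block5_conv3: 2359808

# Dense params = (inputs + 1) * units -- this is what ties the head to
# the input size: flattening (4, 5, 512) gives 10240 features at 128x160,
# so the 16-unit dense layer has (10240 + 1) * 16 = 163856 params.
print((4 * 5 * 512 + 1) * 16)      # dense: 163856
```

Note how none of the conv formulas involve the 128x160 spatial size, while the dense count does.)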
The problem is that this model already has an input layer fixed at 128x160, and I'd like to change it to 384x288 for transfer learning.
The above is my first model. I now want to do transfer learning again, but on a different dataset whose images are 384x288, and I'd like the output to be a softmax over two classes instead.
In short: how can I do transfer learning from the custom model above onto a new dataset, with a different input size and a different classification layer in the output?
Solution
I found a very simple solution to my problem, and now I am able to train it with different data and different classification layers:
import tensorflow as tf
from tensorflow.keras.models import load_model, Model, Sequential

# Load the old model and cut off its head (flatten, dense and output layers),
# keeping only the convolutional blocks
old_model = load_model("/content/drive/MyDrive/old_model.h5")
old_model = Model(old_model.input, old_model.layers[-4].output)

# Rebuild the convolutional base without the old input layer, so it can
# accept the new input size (conv/pool layers are spatial-size agnostic)
base_model = Sequential()
for layer in old_model.layers[1:]:
    base_model.add(layer)

# Sanity check: list the transferred layers and their trainable flags
for layer_number, layer in enumerate(base_model.layers):
    print(layer_number, layer.name, layer.trainable)

# Perform transfer learning: new input size and a 2-class softmax head
model = tf.keras.Sequential([
    tf.keras.layers.InputLayer(input_shape=(384, 288, 3)),
    base_model,
    tf.keras.layers.Conv2D(filters=32, kernel_size=3, activation='relu'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(units=2, activation='softmax')
])
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
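One step the snippet above leaves out: when fine-tuning on a small dataset, it is common to freeze the transferred convolutional base first, so that only the new head is trained and the pretrained weights aren't destroyed by large early gradients. A minimal self-contained sketch (the one-layer `base_model` here is a hypothetical stand-in for the transferred base above):

```python
import tensorflow as tf

# Hypothetical stand-in for the transferred convolutional base
base_model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(8, 3, activation='relu'),
])
base_model.trainable = False  # freeze the transferred weights

model = tf.keras.Sequential([
    tf.keras.layers.InputLayer(input_shape=(384, 288, 3)),
    base_model,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation='softmax'),
])
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])

# Only the new Dense head has trainable weights now (kernel + bias)
print(len(model.trainable_weights))
```

After the new head converges, you can optionally set `base_model.trainable = True`, re-compile with a low learning rate, and fine-tune the whole network.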
Answered By - xnok