Issue
I am trying to use the VGG16 network for multiple input images.
Training this model using a simple CNN with 2 inputs gave me an acc. of about 50 %, which is why I wanted to try it using an established model like VGG16.
Here is what I have tried out:
# imports
from keras.applications.vgg16 import VGG16
from keras.models import Model
from keras.layers import Input, Conv2D, MaxPooling2D, Activation, Dropout, Flatten, Dense, concatenate

def def_model():
    model = VGG16(include_top=False, input_shape=(224, 224, 3))
    # mark loaded layers as not trainable
    for layer in model.layers:
        layer.trainable = False
    # return last pooling layer
    pool_layer = model.layers[-1].output
    return pool_layer
m1 = def_model()
m2 = def_model()
m3 = def_model()
# add classifier layers
merge = concatenate([m1, m2, m3])
# optinal_conv = Conv2D(64, (3, 3), activation='relu', padding='same')(merge)
# optinal_pool = MaxPooling2D(pool_size=(2, 2))(optinal_conv)
# flatten = Flatten()(optinal_pool)
flatten = Flatten()(merge)
dense1 = Dense(512, activation='relu')(flatten)
dense2 = Dropout(0.5)(dense1)
output = Dense(1, activation='sigmoid')(dense2)
inshape1 = Input(shape=(224, 224, 3))
inshape2 = Input(shape=(224, 224, 3))
inshape3 = Input(shape=(224, 224, 3))
model = Model(inputs=[inshape1, inshape2, inshape3], outputs=output)
I get this error while calling the Model function:

ValueError: Graph disconnected: cannot obtain value for tensor Tensor("input_21:0", shape=(?, 224, 224, 3), dtype=float32) at layer "input_21". The following previous layers were accessed without issue: []
I understand that the graph is disconnected, but I could not find out where.
Here are the compile and fit functions:
# compile model
model.compile(optimizer="Adam", loss='binary_crossentropy', metrics=['accuracy'])
model.fit([train1, train2, train3], train,
          validation_data=([test1, test2, test3], ytest))
I have commented out some lines: optinal_conv and optinal_pool. What would be the effect of applying Conv2D and MaxPooling2D after the concatenate function?
Solution
I recommend looking at this answer: Multi-input Multi-output Model with Keras Functional API. Here is one way you can achieve this:
import tensorflow as tf

# 3 inputs
input0 = tf.keras.Input(shape=(224, 224, 3), name="img0")
input1 = tf.keras.Input(shape=(224, 224, 3), name="img1")
input2 = tf.keras.Input(shape=(224, 224, 3), name="img2")
concate_input = tf.keras.layers.Concatenate()([input0, input1, input2])

# reduce the 9 concatenated channels to 3 feature maps of the same size (224, 224);
# the pretrained model expects 3-channel input
input = tf.keras.layers.Conv2D(3, (3, 3),
                               padding='same', activation="relu")(concate_input)

# pass that to the VGG16 model (use weights='imagenet' for pretrained weights)
vg = tf.keras.applications.VGG16(weights=None,
                                 include_top=False,
                                 input_tensor=input)

# do whatever
gap = tf.keras.layers.GlobalAveragePooling2D()(vg.output)
den = tf.keras.layers.Dense(1, activation='sigmoid')(gap)

# build the complete model
model = tf.keras.Model(inputs=[input0, input1, input2], outputs=den)
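This three-input model can then be compiled and trained just like in the question, passing one array per input in the same order as the Input layers. A minimal sketch, assuming train1, train2, train3, train, test1, test2, test3 and ytest are the same arrays used in the question's fit call:

# compile with the same settings as in the question
model.compile(optimizer="Adam", loss='binary_crossentropy', metrics=['accuracy'])

# the list order must match [input0, input1, input2]
model.fit([train1, train2, train3], train,
          validation_data=([test1, test2, test3], ytest),
          epochs=5)  # epoch count is illustrative

Because the inputs are named, you can also pass a dict keyed by those names, e.g. {"img0": train1, "img1": train2, "img2": train3}, instead of a list.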
Answered By - M.Innat