Issue
I am following this guide to learn image classification with a CNN, and I adapted the code to my own data set:
https://www.tensorflow.org/tutorials/images/classification
Updated code:
train_image_generator = ImageDataGenerator(rescale=1. / 255)       # Generator for our training data
validation_image_generator = ImageDataGenerator(rescale=1. / 255)  # Generator for our validation data

train_data_gen = train_image_generator.flow_from_directory(batch_size=batch_size,
                                                           directory=train_img_folder,
                                                           shuffle=True,
                                                           target_size=(IMG_HEIGHT, IMG_WIDTH),
                                                           class_mode='categorical',
                                                           color_mode='grayscale')
val_data_gen = validation_image_generator.flow_from_directory(batch_size=batch_size,
                                                               directory=valid_img_folder,
                                                               target_size=(IMG_HEIGHT, IMG_WIDTH),
                                                               class_mode='categorical',
                                                               color_mode='grayscale')
model = Sequential([
    Conv2D(16, 3, padding='same', activation='relu', input_shape=(IMG_HEIGHT, IMG_WIDTH, 1)),
    MaxPooling2D(),
    Conv2D(32, 3, padding='same', activation='relu'),
    MaxPooling2D(),
    Conv2D(64, 3, padding='same', activation='relu'),
    MaxPooling2D(),
    Flatten(),
    Dense(512, activation='relu'),
    Dense(3, activation='softmax')
])
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
history = model.fit_generator(
    train_data_gen,
    steps_per_epoch=total_train_value // batch_size,
    epochs=epochs,
    validation_data=val_data_gen,
    validation_steps=total_valid_value // batch_size
)
# Single prediction
img = []
temp = np.array(Image.open('path/to/pic.jpg').resize((256, 256), Image.ANTIALIAS))
temp.shape = temp.shape + (1,)  # now it's (256, 256, 1)
img.append(temp)
test = np.array(img) # (1, 1024, 1024, 1)
prediction = model.predict(test)
When I try the predict_generator function:
test_datagen = ImageDataGenerator(rescale=1 / 255.)
test_generator = test_datagen.flow_from_directory('test_images/',
                                                  classes=['0', '1', '2'],
                                                  color_mode='grayscale',
                                                  shuffle=True,
                                                  # use same size as in training
                                                  target_size=(256, 256))
preds = model.predict_generator(test_generator, steps=4)  # I don't know what steps does; I put it there because of an error.
My first question is: I can get training and validation accuracy, but I want to get a single picture's prediction result. How can I do that? Example:
foo = model.predict(path/to/pic.jpg)
# foo returns 0-> 0.70 | 1-> 0.30
Added: when I try to use model.predict like that, I get this error:
ValueError: Error when checking input: expected conv2d_1_input to have 4 dimensions, but got array with shape (1024, 1024)
Converting to a 2D (and also 3D) np.array gives the same error.
My second question is: is there any way to predict without the results adding up to a complete 100%? I mean, if we have 2 classes (cat and dog) and test a moon picture, I want to get results like:
15% cat | 10% dog
not
50% cat | 50% dog
Added: I tried to add a garbage class by making the suggested changes. When I run the history = model.fit_generator(...) line, I get the following error:
ValueError: Error when checking target: expected dense_2 to have shape (3,) but got array with shape (2,)
Thank you in advance
Solution
First question: I can get training and validation accuracy, but I want to get a single picture's prediction result. How can I do that?
As you can see in the doc, you can totally use model.predict(x), as long as your x is:
- A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).
- A dict mapping input names to the corresponding array/tensors, if the model has named inputs.
- A generator or keras.utils.Sequence returning (inputs, targets) or (inputs, targets, sample weights).
You just have to write the code that reads the .jpg image and feeds it to the model.
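For example, a minimal sketch (assuming the same grayscale, 1/255-rescaled preprocessing as your generators; 'path/to/pic.jpg' is just a placeholder):
import numpy as np
from PIL import Image

# Load the image in grayscale and resize it to the training size
img = Image.open('path/to/pic.jpg').convert('L').resize((IMG_WIDTH, IMG_HEIGHT))

# Rescale exactly like the ImageDataGenerator (rescale=1./255)
x = np.array(img) / 255.0

# The model expects a 4D batch: (samples, rows, cols, channels)
x = x.reshape(1, IMG_HEIGHT, IMG_WIDTH, 1)

probs = model.predict(x)[0]  # one probability per class, e.g. [0.70, 0.20, 0.10]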
Second question: is there any way to predict without the results adding up to a complete 100%? I mean, if we have 2 classes (cat and dog) and test a moon picture, I want to get results like 15% cat | 10% dog.
You could create a third class, 'garbage'. To do so, you'll need to change the last layer of your net to:
Dense(3, activation='softmax')
And change your loss to categorical_crossentropy:
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
And change class_mode to 'categorical' instead of 'binary'.
In that case you'll have dog: 15%, cat: 10%, garbage: 75%.
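A rough sketch of the matching generator setup (assuming train_img_folder now contains cat, dog and garbage subdirectories; the shape (2,) in the dense_2 error you added suggests the generator is still only finding two class folders):
from tensorflow.keras.preprocessing.image import ImageDataGenerator

train_image_generator = ImageDataGenerator(rescale=1. / 255)
train_data_gen = train_image_generator.flow_from_directory(directory=train_img_folder,
                                                           batch_size=batch_size,
                                                           shuffle=True,
                                                           target_size=(IMG_HEIGHT, IMG_WIDTH),
                                                           class_mode='categorical',  # one-hot targets of shape (3,) with 3 class folders
                                                           color_mode='grayscale')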
Edit on the Conv2D error:
ValueError: Error when checking input: expected conv2d_1_input to have 4 dimensions, but got array with shape (1024, 1024)
you have:
Conv2D(16, 3, padding='same', activation='relu', input_shape=(IMG_HEIGHT, IMG_WIDTH, 1)),
This means that an image is (height, width, channels).
As seen in the doc, since this is the input layer, you need to provide the input in 4D with the shape (samples, rows, cols, channels). If you want to give only one image, you need an array shaped as (1, rows, cols, channels).
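For instance, a minimal sketch that turns a single 2D grayscale array into such a 4D batch (the rows/cols must match the size used at training time):
import numpy as np

# temp is a 2D grayscale array of shape (rows, cols)
temp = temp.reshape(temp.shape[0], temp.shape[1], 1)  # (rows, cols, channels)
batch = np.expand_dims(temp, axis=0)                  # (1, rows, cols, channels)
prediction = model.predict(batch)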
Answered By - Orphee Faucoz