Issue
I am trying to build an image classification neural network using Keras that identifies whether a picture of a square on a chessboard contains a black piece or a white piece. I created 256 pictures of size 45 x 45 of all the pieces of a single chess set, both white and black, by flipping and rotating them. Since the number of training samples is relatively low and I am a newbie in Keras, I am having difficulty creating a model.
The structure of the image folders looks as follows:
-Data
---Training Data
--------black
--------white
---Validation Data
--------black
--------white
The zip file is linked here (Only 1.78 MB)
The code I have tried is based on this and can be seen here:
# Imports components from Keras
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
import numpy as np
from PIL import Image
import glob
# Initializes a sequential model
model = Sequential()
# First layer
model.add(Dense(10, activation='relu', input_shape=(45*45*3,)))
# Second layer
model.add(Dense(10, activation='relu'))
# Output layer
model.add(Dense(2, activation='softmax'))
# Compile the model
model.compile(optimizer='adam',loss='categorical_crossentropy', metrics=['accuracy'])
#open training data as np array
filelist = glob.glob('Data/Training Data/black/*.png')
train_dataBlack = np.array([np.array(Image.open(fname)) for fname in filelist])
filelist = glob.glob('Data/Training Data/white/*.png')
train_dataWhite = np.array([np.array(Image.open(fname)) for fname in filelist])
train_data = np.append(train_dataBlack,train_dataWhite)
#open validation data as np array
filelist = glob.glob('Data/Validation Data/black/*.png')
test_dataBlack = np.array([np.array(Image.open(fname)) for fname in filelist])
filelist = glob.glob('Data/Validation Data/white/*.png')
test_dataWhite = np.array([np.array(Image.open(fname)) for fname in filelist])
test_data = np.append(test_dataBlack,test_dataWhite)
test_labels = np.zeros(shape=(256,2))
#initializing training labels numpy array
train_labels = np.zeros(shape=(256,2))
i = 0
while(i < 256):
    if(i < 128):
        train_labels[i] = np.array([1,0])
    else:
        train_labels[i] = np.array([0,1])
    i+=1
#initializing validation labels numpy array
i = 0
while(i < 256):
    if(i < 128):
        test_labels[i] = np.array([1,0])
    else:
        test_labels[i] = np.array([0,1])
    i+=1
#shuffling the training data and training labels in the same way
rng_state = np.random.get_state()
np.random.shuffle(train_data)
np.random.set_state(rng_state)
np.random.shuffle(train_labels)
# Reshape the data to two-dimensional array
train_data = train_data.reshape(256, 45*45*3)
# Fit the model
model.fit(train_data, train_labels, epochs=10,validation_split=0.2)
#save/open model
model.save_weights('model_saved.h5')
model.load_weights('model_saved.h5')
# Reshape test data
test_data = test_data.reshape(256, 45*45*3)
# Evaluate the model
model.evaluate(test_data, test_labels)
#testing output for a single image
img = test_data[20]
img = img.reshape(1,45*45*3)
predictions = model.predict(img)
print(test_labels[20])
print(predictions*100)
The output doesn't seem to suggest that any 'learning' is being done, since the accuracy on the validation data is 0.5000, even though the model gets test image 20 right with 99% confidence (not sure what's going on there):
Epoch 1/10
7/7 [==============================] - 0s 22ms/step - loss: 76.1521 - accuracy: 0.4804 - val_loss: 34.4301 - val_accuracy: 0.6346
Epoch 2/10
7/7 [==============================] - 0s 3ms/step - loss: 38.9190 - accuracy: 0.4559 - val_loss: 19.3758 - val_accuracy: 0.3846
Epoch 3/10
7/7 [==============================] - 0s 3ms/step - loss: 18.7589 - accuracy: 0.5049 - val_loss: 35.1795 - val_accuracy: 0.3654
Epoch 4/10
7/7 [==============================] - 0s 3ms/step - loss: 18.5703 - accuracy: 0.5000 - val_loss: 4.7349 - val_accuracy: 0.5962
Epoch 5/10
7/7 [==============================] - 0s 3ms/step - loss: 6.5564 - accuracy: 0.5539 - val_loss: 10.1864 - val_accuracy: 0.4423
Epoch 6/10
7/7 [==============================] - 0s 3ms/step - loss: 6.8870 - accuracy: 0.5833 - val_loss: 11.2020 - val_accuracy: 0.4038
Epoch 7/10
7/7 [==============================] - 0s 3ms/step - loss: 7.3905 - accuracy: 0.5343 - val_loss: 17.9842 - val_accuracy: 0.3846
Epoch 8/10
7/7 [==============================] - 0s 3ms/step - loss: 6.3737 - accuracy: 0.6029 - val_loss: 13.0180 - val_accuracy: 0.4038
Epoch 9/10
7/7 [==============================] - 0s 3ms/step - loss: 6.2868 - accuracy: 0.5980 - val_loss: 14.8001 - val_accuracy: 0.3846
Epoch 10/10
7/7 [==============================] - 0s 3ms/step - loss: 5.0725 - accuracy: 0.6618 - val_loss: 18.7289 - val_accuracy: 0.3846
8/8 [==============================] - 0s 1ms/step - loss: 21.6894 - accuracy: 0.5000
[1. 0.]
[[99 1]]
I am clueless about pretty much everything:
- number of layers
- number of nodes in each layer
- the type of layers
- number of steps per epoch
- number of epochs
I have experimented a lot with all of those variables, but nothing I tried seems to help.
Thanks in advance for a response!
Solution
The first thing you should do is switch from an ANN/MLP to a shallow, very simple convolutional neural network (CNN).
You can have a look at the CNN tutorial on TensorFlow's official website (https://www.tensorflow.org/tutorials/images/cnn).
Your last layer's definition, the optimizer, the loss function and the metrics are all correct!
You only need a more powerful network that is able to learn from your dataset, and a CNN is well suited to image processing; a minimal sketch is shown below.
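For illustration, here is a minimal sketch of such a CNN, assuming 45 x 45 RGB inputs and keeping your two-class softmax output and compile settings; the filter counts and layer sizes are assumptions to be tuned, not values taken from the tutorial:
# A minimal CNN sketch; layer sizes here are assumptions, not tuned values
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

model = Sequential()
# Two small convolution/pooling blocks to learn local colour and shape features
model.add(Conv2D(16, (3, 3), activation='relu', input_shape=(45, 45, 3)))
model.add(MaxPooling2D((2, 2)))
model.add(Conv2D(32, (3, 3), activation='relu'))
model.add(MaxPooling2D((2, 2)))
# Flatten the feature maps and classify into the same two classes (black / white)
model.add(Flatten())
model.add(Dense(32, activation='relu'))
model.add(Dense(2, activation='softmax'))
# Same compile settings as in the question
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
Note that this model takes images of shape (45, 45, 3) rather than flattened vectors of length 45*45*3.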
Once you have a baseline established (based on the tutorial above), you can start playing around with the hyperparameters.
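As an aside (this is an assumption about how the question's data loading could be adapted, not part of the original answer): a CNN expects 4-D input of shape (samples, height, width, channels), so the per-class arrays should be stacked along a new sample axis and, ideally, scaled to [0, 1] instead of being flattened, assuming the PNGs decode to 3-channel RGB arrays:
import numpy as np

# Stack the per-class arrays along the sample axis instead of np.append (which flattens),
# then scale pixel values to [0, 1]; shapes assume 128 images per class and RGB data
train_data = np.concatenate([train_dataBlack, train_dataWhite], axis=0)
train_data = train_data.astype('float32') / 255.0   # shape: (256, 45, 45, 3)
test_data = np.concatenate([test_dataBlack, test_dataWhite], axis=0)
test_data = test_data.astype('float32') / 255.0

# model.fit and model.evaluate can then be called as before,
# just without the reshape to a flat 45*45*3 vector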
Answered By - Timbus Calin