Issue
I am implementing a decaying learning rate based on accuracy from the previous epoch.
Capturing Metrics:
import tensorflow as tf

class CustomMetrics(tf.keras.callbacks.Callback):
    def on_train_begin(self, logs={}):
        self.metrics = {'loss': [], 'accuracy': [], 'val_loss': [], 'val_accuracy': []}
        self.lr = []

    def on_epoch_end(self, epoch, logs={}):
        print(f"\nEPOCH {epoch} Calling from METRICS CLASS")
        self.metrics['loss'].append(logs.get('loss'))
        self.metrics['accuracy'].append(logs.get('accuracy'))
        self.metrics['val_loss'].append(logs.get('val_loss'))
        self.metrics['val_accuracy'].append(logs.get('val_accuracy'))
Custom Learning Decay:
from tensorflow.keras.callbacks import LearningRateScheduler
def changeLearningRate(epoch):
initial_learningrate=0.1
#print(f"EPOCH {epoch}, Calling from ChangeLearningRate:")
lr = 0.0
if epoch != 0:
if custom_metrics_dict.metrics['accuracy'][epoch] < custom_metrics_dict.metrics['accuracy'][epoch-1]:
print(f"Accuracy @ epoch {epoch} is less than acuracy at epoch {epoch-1}")
print("[INFO] Decreasing Learning Rate.....")
lr = initial_learningrate*(0.1)
print(f"LR Changed to {lr}")
return lr
Model Preparation:
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model

input_layer = Input(shape=(2,))
layer1 = Dense(32, activation='tanh', kernel_initializer=tf.random_uniform_initializer(0, 1, seed=30))(input_layer)
output = Dense(2, activation='softmax', kernel_initializer=tf.random_uniform_initializer(0, 1, seed=30))(layer1)
model = Model(inputs=input_layer, outputs=output)

custom_metrics_dict = CustomMetrics()
lrschedule = LearningRateScheduler(changeLearningRate, verbose=1)
optimizer = tf.keras.optimizers.SGD(learning_rate=0.1, momentum=0.9)
model.compile(optimizer=optimizer, loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(X_train, Y_train, epochs=4, validation_data=(X_test, Y_test), batch_size=16, callbacks=[custom_metrics_dict, lrschedule])
It's erroring out with an index out of range error. From what I noticed, the LearningRateScheduler code is being called more than once per epoch, and I am unable to figure out how to make the function calls line up. What can I try next?
Solution
The signature of the scheduler function is def scheduler(epoch, lr):, which means you should take the current lr from that parameter. You shouldn't hard-code initial_learningrate = 0.1: if you do, your lr will never actually decay, because you always return the same value whenever the accuracy decreases.

As for the out-of-range exception: you check that epoch is not 0, which means that at epoch = 1 you read both custom_metrics_dict.metrics['accuracy'][epoch] and custom_metrics_dict.metrics['accuracy'][epoch-1]. But only one accuracy value has been stored by then, so the array custom_metrics_dict.metrics['accuracy'] has a single element and index 1 is out of range.
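The root cause is the ordering of the callbacks: LearningRateScheduler fires at the start of each epoch, before that epoch's on_epoch_end has appended a new accuracy. A plain-Python sketch of the ordering (the hook names match the real Keras callbacks, but the loop itself is a simplification):

```python
def simulate(num_epochs):
    """Simulate the Keras callback ordering for one training run."""
    accuracy_history = []   # filled in by CustomMetrics.on_epoch_end
    seen_lengths = []       # how many accuracies the scheduler can see
    for epoch in range(num_epochs):
        # LearningRateScheduler fires here (epoch start)...
        seen_lengths.append(len(accuracy_history))
        # ... training runs ...
        # ... then CustomMetrics.on_epoch_end fires and stores the metric.
        accuracy_history.append(0.5 + 0.1 * epoch)  # dummy accuracy
    return seen_lengths
```

So when the scheduler runs for epoch 1, the history holds exactly one value, and indexing it with `epoch` (i.e. index 1) raises IndexError; the latest completed epoch is always at index `epoch - 1`.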
I've run your code successfully with this function:
from tensorflow.keras.callbacks import LearningRateScheduler

def changeLearningRate(epoch, lr):
    print(f"EPOCH {epoch}, Calling from ChangeLearningRate: {custom_metrics_dict.metrics['accuracy']}")
    if epoch > 1:
        # Compare the two most recently *completed* epochs.
        if custom_metrics_dict.metrics['accuracy'][epoch - 1] < custom_metrics_dict.metrics['accuracy'][epoch - 2]:
            print(f"Accuracy @ epoch {epoch - 1} is less than accuracy at epoch {epoch - 2}")
            print("[INFO] Decreasing Learning Rate.....")
            lr = lr * 0.1
            print(f"LR Changed to {lr}")
    return lr
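The decay rule itself can also be pulled out into a plain function that takes the accuracy history explicitly, which makes it testable without TensorFlow. A sketch (the function name and signature are my own; the 0.1 factor matches the code above):

```python
def decayed_lr(epoch, lr, accuracy_history, factor=0.1):
    """Return the learning rate for `epoch`, multiplied by `factor`
    when accuracy dropped between the last two completed epochs.

    accuracy_history holds one entry per completed epoch, so at the
    start of `epoch` it has `epoch` entries (indices 0..epoch-1).
    """
    if epoch > 1 and accuracy_history[epoch - 1] < accuracy_history[epoch - 2]:
        return lr * factor
    return lr
```

Note that Keras also ships tf.keras.callbacks.ReduceLROnPlateau, which implements this monitor-and-decay pattern natively and may save you the custom callback entirely.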
Answered By - Alexandre Catalano