Issue
My neural network training in PyTorch is behaving very strangely.
I am training on a known dataset that came pre-split into training and validation sets. I shuffle the data during training and apply data augmentation on the fly.
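A minimal sketch of such a pipeline, assuming an image dataset in torchvision's ImageFolder layout; the paths, sizes, and particular augmentations are illustrative, not the poster's actual setup:

```python
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Random transforms run each time a sample is fetched, i.e. on the fly.
train_transform = transforms.Compose([
    transforms.Resize((256, 256)),
    transforms.RandomCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])

train_set = datasets.ImageFolder("data/train", transform=train_transform)
# shuffle=True draws a fresh sample order at the start of every epoch.
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)
```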
I get these results:
Training accuracy starts at 80% and increases.
Training loss decreases and stays stable.
Validation accuracy starts at 30% but increases only slowly.
I have the following graphs to show:
[Graphs: training and validation accuracy/loss curves]
How can you explain that the validation loss and the validation accuracy both increase?
How can there be such a big difference in accuracy between the training and validation sets, 90% versus 40%?
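On the first question: this pattern is possible because accuracy depends only on which class scores highest, while cross-entropy loss also penalizes confidence on the examples that stay wrong. A toy illustration with made-up logits (not the poster's data):

```python
import torch
import torch.nn.functional as F

labels = torch.tensor([0, 0, 1, 1])

# Earlier epoch: 2/4 correct, mildly confident everywhere.
early = torch.tensor([[1.0, 0.0], [-1.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
# Later epoch: 3/4 correct, but very confident on the one remaining mistake.
late = torch.tensor([[1.0, 0.0], [1.0, 0.0], [6.0, 0.0], [0.0, 1.0]])

for name, logits in [("early", early), ("late", late)]:
    acc = (logits.argmax(dim=1) == labels).float().mean().item()
    loss = F.cross_entropy(logits, labels).item()
    print(f"{name}: accuracy={acc:.2f}  loss={loss:.3f}")
# early: accuracy=0.50  loss=0.813
# late:  accuracy=0.75  loss=1.736   <- accuracy and loss rose together
```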
Update:
I balanced the dataset. It is a binary classification problem. There are now 1,700 examples of class 1 and 1,200 examples of class 2: 600 for validation and 2,300 for training. I still see similar behavior:
**Can it be because I froze the weights in part of the network?** (A typical freezing pattern is sketched after these questions.)
**Can it be because of hyperparameters like the learning rate?**
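For reference on the first question, freezing part of a network usually looks something like the sketch below; the pretrained ResNet-18 backbone is only an assumption, since the original architecture isn't stated:

```python
import torch
import torch.nn as nn
from torchvision import models

# Illustrative pretrained backbone; the poster's model is unknown.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze every existing parameter, then replace the head; the new
# layer's parameters default to requires_grad=True, so only it trains.
for param in model.parameters():
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 2)  # binary classification head

# Hand the optimizer only the parameters that should update.
optimizer = torch.optim.SGD(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3
)
```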
Solution
I found the solution: I was applying different data augmentation to the training set and the validation set. Matching them also increased the validation accuracy!
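For anyone hitting the same problem: one way to avoid such a mismatch is to share the deterministic preprocessing (resize, tensor conversion, normalization) between both splits and keep only the random augmentations training-specific. A minimal sketch, with illustrative sizes and the common ImageNet normalization constants:

```python
from torchvision import transforms

# Deterministic preprocessing shared by both splits.
common = [
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
]

# Random augmentation only on top of the shared pipeline.
train_transform = transforms.Compose([transforms.RandomHorizontalFlip()] + common)
val_transform = transforms.Compose(common)  # identical preprocessing, no randomness
```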
Answered By - BestR