Issue
I have a question about evaluating my BERT model on the test set. During evaluation, should param.requires_grad be True or False, regardless of whether I did full fine-tuning during training? My model is in model.eval() mode, but I want to make sure I'm not forcing anything wrong in the Model() class when I call it for evaluation. Thanks!
if freeze_bert == 'True':
    for param in self.bert.parameters():
        param.requires_grad = False
        # logging.info('freeze_bert: {}'.format(freeze_bert))
        # logging.info('param.requires_grad: {}'.format(param.requires_grad))
if freeze_bert == 'False':
    for param in self.bert.parameters():
        param.requires_grad = True
Solution
If you freeze your model, then the parameters of the corresponding modules must not be updated, i.e. they should not require gradient computation: requires_grad=False.
Note that nn.Module also has a requires_grad_ method:
if freeze_bert == 'True':
    self.bert.requires_grad_(False)
elif freeze_bert == 'False':
    self.bert.requires_grad_(True)
Ideally, freeze_bert would be a boolean and you would simply do:
self.bert.requires_grad_(not freeze_bert)
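As for the evaluation question: requires_grad controls whether gradients are computed for a parameter (i.e. whether it can be trained), while model.eval() only switches layers such as dropout and batch-norm to inference behaviour. At evaluation time you don't need to touch requires_grad at all; wrapping the forward pass in torch.no_grad() disables gradient tracking regardless of the parameters' flags. A minimal sketch with a toy model (not the author's BERT wrapper, just a stand-in nn.Sequential):

```python
import torch
import torch.nn as nn

# Toy stand-in for a fine-tuned model (assumed for illustration).
model = nn.Sequential(nn.Linear(4, 4), nn.ReLU(), nn.Linear(4, 2))

model.eval()              # inference behaviour for dropout/batch-norm layers
with torch.no_grad():     # no autograd graph is built during this pass
    out = model(torch.randn(1, 4))

# The output carries no gradient history, yet the parameters' own
# requires_grad flags are left untouched, so training can resume later.
print(out.requires_grad)                                  # False
print(all(p.requires_grad for p in model.parameters()))   # True
```

This is why the freeze logic in the Model() class does not conflict with evaluation: whatever requires_grad was set to for training, torch.no_grad() makes the evaluation pass gradient-free.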
Answered By - Ivan