Issue
This might be a bit silly, but I need to make sure it's correct. Does it matter if I order my code like this:
model.eval()
with torch.no_grad():
Or can I get the same behaviour like this:
with torch.no_grad():
model.eval()
I'm just wondering because I have a function that calls model.eval() inside of it, and that function is called inside a loop where with torch.no_grad():
comes before it.
Solution
Both orderings are correct; what matters is that you call model.eval() before running inference.
You should put the model in eval mode both as a general practice and so that layers like batch norm don't cause you issues: in eval mode they use their running (eval) statistics instead of the current batch's statistics.
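A minimal sketch illustrating why the two orderings are equivalent (the model architecture and shapes here are illustrative assumptions, not from the question):

```python
import torch
import torch.nn as nn

# A model containing batch norm, whose behaviour differs between
# train and eval mode. Fix the seed so both runs see the same input.
torch.manual_seed(0)
model = nn.Sequential(nn.Linear(4, 8), nn.BatchNorm1d(8), nn.ReLU())
x = torch.randn(16, 4)

# Ordering 1: eval() before entering no_grad()
model.eval()
with torch.no_grad():
    out1 = model(x)

# Ordering 2: eval() inside the no_grad() block
with torch.no_grad():
    model.eval()
    out2 = model(x)

# Both orderings give identical results: no_grad() only disables
# gradient tracking, while eval() switches layers like BatchNorm
# and Dropout to their inference behaviour. The two settings are
# independent, so their relative order doesn't matter, as long as
# both are in effect when the forward pass runs.
print(torch.allclose(out1, out2))  # True
print(out1.requires_grad)          # False
```

The two mechanisms are orthogonal: torch.no_grad() is a context manager controlling autograd, while model.eval() flips a flag on the module (and its children) that certain layers read during the forward pass.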
Answered By - Mohamed Fathallah