Issue
I am using torch.nn.BCEWithLogitsLoss to train a binary classification model. As quoted from the docs, this loss function already has a sigmoid built in:
This loss combines a Sigmoid layer and the BCELoss in one single class. This version is more numerically stable than using a plain Sigmoid followed by a BCELoss as, by combining the operations into one layer, we take advantage of the log-sum-exp trick for numerical stability.
So my question is: after training this model, how should I use it on new inputs? Can I simply add an F.sigmoid() after the final layer of my model? But the docs say that is different from using a plain Sigmoid followed by BCELoss.
Solution
Assuming you have a model trained with BCEWithLogitsLoss, for inference you simply apply a sigmoid to the raw logits yourself:
model = your_model()
model.eval()                    # switch off dropout / batch-norm updates
with torch.no_grad():           # no gradients needed at inference
    out = model(test_data)      # raw logits
    probs = torch.sigmoid(out)  # probabilities in (0, 1)
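If you need hard class labels rather than probabilities, a common follow-up is to threshold the sigmoid output. A minimal sketch (the logits here are dummy values standing in for a model's outputs, and the 0.5 threshold is the conventional default, not something the original answer specifies):

```python
import torch

# Dummy logits standing in for a model's raw outputs.
logits = torch.tensor([[-2.0], [0.3], [4.1]])

probs = torch.sigmoid(logits)     # probabilities in (0, 1)
preds = (probs > 0.5).long()      # hard labels at the usual 0.5 threshold

print(preds.squeeze(1).tolist())  # [0, 1, 1]
```

Thresholding probabilities (rather than logits) keeps the cutoff interpretable, though comparing the raw logit against 0 is equivalent for a 0.5 threshold.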
You can check out this issue for further reference.
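To see why the fused loss matters during training (and why you only add the sigmoid at inference), here is a small sketch of the numerical-stability point the docs make: for a large logit, sigmoid saturates to exactly 1.0 in float32, so the naive log term blows up, while the fused version stays finite. The specific logit value 40.0 is just an illustrative choice:

```python
import torch
import torch.nn.functional as F

# Large positive logit with target 0: sigmoid(40) rounds to exactly 1.0
# in float32, so the naive log(1 - sigmoid(x)) term becomes log(0) = -inf.
logit = torch.tensor([40.0])
target = torch.tensor([0.0])

naive = -(target * torch.log(torch.sigmoid(logit))
          + (1 - target) * torch.log(1 - torch.sigmoid(logit)))

# The fused loss uses the log-sum-exp trick internally and remains finite.
stable = F.binary_cross_entropy_with_logits(logit, target)

print(naive)   # tensor([inf])
print(stable)  # tensor(40.)
```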
Answered By - Ro.oT