Issue
I am building a toy model to take in some images and give me a classification. My model looks like:
conv2d -> pool -> conv2d -> linear -> linear
My issue is that when we create the model, we have to calculate the in_features of the first linear layer based on the size of the input image. If we later get images of a different size, we have to recalculate in_features for that layer. Why do we have to do this? Can't it just be inferred?
Solution
As of v1.8, PyTorch has torch.nn.LazyLinear, which infers the input dimension. Per the documentation, it is "a torch.nn.Linear module where in_features is inferred."
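
Here is a minimal sketch of the architecture described in the question using LazyLinear; the channel counts, kernel sizes, hidden width, and class count are illustrative assumptions, not values from the original post:

```python
import torch
import torch.nn as nn

class ToyNet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 16, kernel_size=3)
        self.pool = nn.MaxPool2d(2)
        self.conv2 = nn.Conv2d(16, 32, kernel_size=3)
        # in_features is left unspecified; LazyLinear infers it from the
        # flattened conv output the first time the model sees real input.
        self.fc1 = nn.LazyLinear(64)
        self.fc2 = nn.Linear(64, num_classes)

    def forward(self, x):
        x = self.pool(torch.relu(self.conv1(x)))
        x = torch.relu(self.conv2(x))
        x = torch.flatten(x, 1)  # flatten everything except the batch dim
        x = torch.relu(self.fc1(x))
        return self.fc2(x)

model = ToyNet()
out = model(torch.randn(1, 3, 28, 28))  # fc1.in_features is fixed on this call
print(out.shape)  # torch.Size([1, 10])
```

One caveat: the shape is inferred only once. After the first forward pass the layer materializes into an ordinary nn.Linear, so feeding images of a different size later will still raise a shape error; a model that truly accepts varying image sizes needs something like adaptive pooling before the linear layers.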
Answered By - iacob