Issue
If I have a model with a linear layer lin = nn.Linear(in_dim, out_dim),
then lin.named_parameters()
produces a sequence like [('weight', Tensor), ('bias', Tensor)].
But if I run model.named_parameters(),
the sequence is [('lin.weight', Tensor), ('lin.bias', Tensor)].
Is it possible to get the full name of a tensor from the layer, i.e. the name it has inside the root module?
Solution
Modules containing other modules simply prepend their own name, separated by a period (.), to the parameter names of the contained modules. So to get the "final" (local) names, you just have to split off the last part:
import torch

a = torch.nn.Linear(1, 1)
a.b = torch.nn.Linear(1, 1)
a.b.c = torch.nn.Linear(1, 1)

for fullname, param in a.named_parameters():
    name = fullname.split('.')[-1]
    print(name, '\t', fullname)
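The naming scheme itself can be illustrated without torch. A minimal sketch, using some hypothetical dotted names of the kind named_parameters() yields, that splits each full name into the owning submodule's path and the local (leaf) parameter name:

```python
# Hypothetical full names of the shape produced by named_parameters()
fullnames = ["weight", "bias", "b.weight", "b.c.bias"]

pairs = []
for fullname in fullnames:
    # rpartition splits on the LAST '.', so nested module paths stay intact;
    # top-level parameters contain no dot and get an empty module path.
    module_path, _, leaf = fullname.rpartition('.')
    pairs.append((module_path, leaf))

print(pairs)
# Each entry is (path to owning submodule, local parameter name)
```

Using rpartition (or rsplit('.', 1)) instead of split('.')[-1] has the advantage of keeping the submodule path around, which is handy if you also want to look the submodule itself up later.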
Answered By - flawr