Issue
I’m implementing a neural network from a paper in PyTorch. Here is a screenshot of the paper: Here N_Psi is a neural network, and K is a decision matrix. The approach I came up with is to include an extra linear layer for K, but I’m wondering whether there’s any way to explicitly define K as a set of decision variables more directly?
Any hint would be very helpful. Thanks in advance!
Solution
You can define your decision matrix as a fully connected layer with no bias, using nn.Linear. Then you have to add this additional layer's parameters to your optimizer's parameter list. Given N your neural network, K your linear layer, and optim your torch.optim.Optimizer class, you can do:
optimizer = optim(list(N.parameters()) + list(K.parameters()))
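For concreteness, here is a minimal sketch of that setup. The layer sizes and the choice of Adam are assumptions for illustration, since the paper's actual architecture isn't shown:

```python
import torch
import torch.nn as nn

# Stand-in for N_Psi; the layer sizes here are arbitrary assumptions.
N = nn.Sequential(nn.Linear(4, 16), nn.Tanh(), nn.Linear(16, 8))

# The decision matrix K as a bias-free linear layer:
# K.weight is the matrix itself, registered as a trainable parameter.
K = nn.Linear(8, 8, bias=False)

# One optimizer over both parameter sets (Adam picked arbitrarily).
optimizer = torch.optim.Adam(list(N.parameters()) + list(K.parameters()), lr=1e-3)
```

Because K.weight is an ordinary nn.Parameter, it is updated by the optimizer exactly like the network weights, which is what makes it act as a matrix of decision variables.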
Then, when computing the loss, given x_np1 (the sample at step n+1) and x_n, do something like:
mse = F.mse_loss(N(x_np1), K(N(x_n)))
reg_1 = K.weight.pow(2).sum()
reg_2 = p2v(N.parameters()).sum()
loss = mse + lamb_1*reg_1 + lamb_2*reg_2
Where we imported:
import torch.nn.functional as F
from torch.nn.utils import parameters_to_vector as p2v
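Putting the pieces together, a complete single training step might look like the sketch below. The dimensions, dummy data, and values of lamb_1 and lamb_2 are illustrative assumptions, and the regularization terms follow the answer's formulas as written:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.nn.utils import parameters_to_vector as p2v

N = nn.Sequential(nn.Linear(4, 16), nn.Tanh(), nn.Linear(16, 8))  # stand-in for N_Psi
K = nn.Linear(8, 8, bias=False)                                   # decision matrix
optimizer = torch.optim.Adam(list(N.parameters()) + list(K.parameters()), lr=1e-3)

lamb_1, lamb_2 = 1e-3, 1e-4      # regularization weights (illustrative)
x_n = torch.randn(32, 4)         # dummy batch at step n
x_np1 = torch.randn(32, 4)       # dummy batch at step n+1

optimizer.zero_grad()
mse = F.mse_loss(N(x_np1), K(N(x_n)))
reg_1 = K.weight.pow(2).sum()        # penalty on K
reg_2 = p2v(N.parameters()).sum()    # penalty on N's parameters, as in the answer
loss = mse + lamb_1 * reg_1 + lamb_2 * reg_2
loss.backward()
optimizer.step()                 # updates both N and K together
```

Since K's parameters were passed to the optimizer alongside N's, a single optimizer.step() updates the decision matrix and the network jointly.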
Answered By - Ivan