Issue
I'm trying to implement a custom neural network model using PyTorch for a classification task. When I inspect the output probabilities, they don't sum up to 1. I've added a torch.nn.Softmax(dim=1) layer at the end of my model, which should normalize the output to probabilities, but it doesn't seem to be working.
import torch

def custom_model(input_size, output_size):
    model = torch.nn.Sequential(
        torch.nn.Linear(input_size, 64),
        torch.nn.ReLU(),
        torch.nn.Linear(64, output_size),
        torch.nn.Softmax(dim=1)  # Softmax layer for classification
    )
    return model

input_size = 10
output_size = 5

model = custom_model(input_size, output_size)
criterion = torch.nn.CrossEntropyLoss()
optimiser = torch.optim.SGD(model.parameters(), lr=0.001)

# one training step on a random batch of 32 samples
input_data = torch.randn(32, input_size)
target = torch.randint(0, output_size, (32,))

output = model(input_data)
loss = criterion(output, target)

optimiser.zero_grad()
loss.backward()
optimiser.step()

print(f'Loss: {loss.item()}')
Can anyone help?
Solution
I appended the following code at the end of yours:
# predict the output for one sample
sample = torch.randn(input_size)
sample = sample.unsqueeze(0)  # add a batch dimension: shape (1, input_size)
output = model(sample)
print(f'Output: {output}')

# print the sum of the output vector
print(f'Sum: {output.sum()}')
and the output is the following:
Loss: 1.6140714883804321
Output: tensor([[0.1783, 0.2104, 0.1990, 0.1492, 0.2631]], grad_fn=<SoftmaxBackward0>)
Sum: 0.9999999403953552
The sum is not exactly 1.0, but that is purely a 32-bit floating-point rounding effect: softmax normalizes its output exactly in exact arithmetic, and 0.9999999403953552 is about as close to 1 as a float32 sum can be expected to get. So your Softmax layer is working and the code is correct.
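If you want to convince yourself that the tiny deviation is a rounding artifact rather than a bug, a minimal sketch (the variable names here are my own, not from your code) is to compare the sum in float32 and in float64:

# sketch: softmax over random logits, summed in two precisions
logits = torch.randn(1, 5)
probs32 = torch.softmax(logits, dim=1)           # default float32
probs64 = torch.softmax(logits.double(), dim=1)  # float64
print(f'float32 sum: {probs32.sum().item()}')    # close to, but usually not exactly, 1.0
print(f'float64 sum: {probs64.sum().item()}')    # even closer to 1.0
# torch.allclose is the idiomatic way to test "equal up to rounding error"
print(torch.allclose(probs32.sum(), torch.tensor(1.0)))  # True

The float64 sum lands even closer to 1.0, which is exactly what you would expect if the only source of error is finite precision.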
Answered By - Marco Parola