Issue
I understand that Conv2d is used for downsampling and ConvTranspose2d for the opposite (upsampling). However, assuming we use neither stride nor padding here, is there a difference between the two?
Downsampling means reducing the size of the spatial dimensions. For example, if you have an input of shape (Batch Size = 5, Channels = 3, Height = 8, Width = 8) and you reduce the height and width using max pooling (stride=2, kernel_size=2), the output becomes (Batch Size = 5, Channels = 3, Height = 4, Width = 4). That's downsampling; the opposite is upsampling (increasing the height and width dimensions).
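A quick shape check of that max-pooling case (a minimal sketch):

import torch

x = torch.randn(5, 3, 8, 8)  # (batch, channels, height, width)
pool = torch.nn.MaxPool2d(kernel_size=2, stride=2)
print(pool(x).shape)  # torch.Size([5, 3, 4, 4])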
For example:

classifier1 = torch.nn.Conv2d(in_channels=10, out_channels=5, kernel_size=1)
classifier2 = torch.nn.ConvTranspose2d(in_channels=10, out_channels=5, kernel_size=1)
Solution
Operation-wise, there is no difference. ConvTranspose2d() inserts stride - 1 zeros between all rows and columns of the input, adds kernel_size - padding - 1 rows and columns of zero padding, then does exactly the same thing as Conv2d(). The default arguments result in no changes at all.
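A minimal sketch verifying that equivalence by hand, with assumed values kernel_size=3, stride=2, padding=1 (one subtlety: the kernel must be spatially flipped, because Conv2d actually computes a cross-correlation):

import torch
import torch.nn.functional as F

torch.manual_seed(0)
x = torch.randn(1, 3, 4, 4)

k, s, p = 3, 2, 1  # assumed example values
tconv = torch.nn.ConvTranspose2d(3, 5, kernel_size=k, stride=s, padding=p, bias=False)

# Insert stride - 1 zeros between all rows and columns of the input.
dilated = torch.zeros(1, 3, (x.shape[2] - 1) * s + 1, (x.shape[3] - 1) * s + 1)
dilated[:, :, ::s, ::s] = x

# Add kernel_size - padding - 1 zeros of padding on every side.
dilated = F.pad(dilated, [k - p - 1] * 4)

# Run a plain convolution; the channel axes are swapped and the kernel
# spatially flipped, since Conv2d computes a cross-correlation.
out = F.conv2d(dilated, tconv.weight.transpose(0, 1).flip(-2, -1))

print(torch.allclose(tconv(x), out, atol=1e-5))  # True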
Though if you actually run them back to back like this on the same input, the results will differ unless you explicitly equalize the initial weights, of course.
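For the kernel_size=1 layers from the question, a minimal sketch of that equalization (the two weight tensors lay out their channel axes in opposite order, hence the transpose):

import torch

torch.manual_seed(0)
conv = torch.nn.Conv2d(in_channels=10, out_channels=5, kernel_size=1)
tconv = torch.nn.ConvTranspose2d(in_channels=10, out_channels=5, kernel_size=1)

# Conv2d weights are (out, in, kH, kW); ConvTranspose2d weights are
# (in, out, kH, kW), so swap the first two axes when copying.
with torch.no_grad():
    tconv.weight.copy_(conv.weight.transpose(0, 1))
    tconv.bias.copy_(conv.bias)

x = torch.randn(5, 10, 8, 8)
print(torch.allclose(conv(x), tconv(x), atol=1e-6))  # True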
Answered By - dx2-66