Issue
Community,
I have a file of bytes organized into chunks of 16384 bytes. Each chunk contains an uncompressed 64x64-pixel image. Each pixel is stored as four bytes in ABGR order (8 bits per channel), so 64 x 64 x 4 = 16384 bytes per chunk.
Let's say I have successfully read the chunk into numpy.array:
buf = np.fromfile(dataFile, dtype=np.uint8, count=16384, offset=offs)
The question is: how can I convert this array of bytes into a PyTorch tensor so that I can perform a convolution (Conv2d) on it?
If I understand correctly, Conv2d expects the tensor to have separate channel planes (channels-first layout) rather than a single plane of multi-channel pixels.
And a follow-up question: how do I drop the alpha channel along the way?
Solution
In the following code, I'm assuming that your image is stored with interleaved pixels in row-major order (so your bytes are abgrabgrabgr... and not aaaa...bbbb...gggg...rrrr).
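That interleaved assumption determines the reshape: consecutive bytes belong to the same pixel, so the raw buffer maps onto a (height, width, channels) array. A minimal sketch with a hypothetical 1x2-pixel buffer (the byte values are made up for illustration):

```python
import numpy as np

# Hypothetical 1x2 image, interleaved ABGR bytes:
# pixel 0 = (10, 11, 12, 13), pixel 1 = (20, 21, 22, 23)
raw = np.array([10, 11, 12, 13, 20, 21, 22, 23], dtype=np.uint8)

# Reshaping to (height, width, 4) keeps each pixel's four bytes together
img = raw.reshape((1, 2, 4))
print(img[0, 0])  # the four ABGR bytes of pixel 0
print(img[0, 1])  # the four ABGR bytes of pixel 1
```

Reshaping interleaved data to (4, H, W) instead would scatter each pixel's bytes across what look like channel planes, which is why the channel axis has to come last here and be moved to the front afterwards.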
# First, reshape buf into a 64x64 image with 4 interleaved channels (H, W, C):
buf = buf.reshape((64, 64, 4))
# Then remove the alpha channel (channel 0 in ABGR) by keeping only the last three:
bgr_buf = buf[:, :, 1:]
# Make it a pytorch tensor (copy first, since slicing gives a non-contiguous view):
tensor = torch.from_numpy(bgr_buf.copy())
# Finally, Conv2d expects channels-first batches of shape (N, C, H, W),
# so move the channel axis to the front and add a batch dimension of 1:
batch = tensor.permute(2, 0, 1).unsqueeze(0)
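Putting the steps together, here is a self-contained sketch that feeds the result into Conv2d. The zero-filled `buf` stands in for one 16384-byte chunk read from your file, and the conversion to float (Conv2d operates on floating-point tensors, not uint8) with a /255 normalization is an assumption about how you want to scale the data:

```python
import numpy as np
import torch
import torch.nn as nn

# Hypothetical raw chunk standing in for one 16384-byte file chunk
buf = np.zeros(16384, dtype=np.uint8)

img = buf.reshape((64, 64, 4))           # (H, W, C), interleaved ABGR
bgr = img[:, :, 1:]                      # drop the alpha channel
tensor = torch.from_numpy(bgr.copy())    # copy: the slice is a non-contiguous view
batch = tensor.permute(2, 0, 1).unsqueeze(0).float() / 255.0  # (1, 3, 64, 64)

conv = nn.Conv2d(in_channels=3, out_channels=8, kernel_size=3, padding=1)
out = conv(batch)
print(out.shape)  # torch.Size([1, 8, 64, 64])
```

Note that `view(1, 3, 64, 64)` on a permuted (non-contiguous) tensor would raise an error; `unsqueeze` after `permute` avoids that, or you can call `.contiguous()` first.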
Answered By - trialNerror