Issue
I want to convert a list of pixel values to a tensor, but I get an error. My code calculates the pixel values (RGB) for each detected object in the image. How can we convert the list to a tensor?
My code:
import numpy as np
import PIL.Image
import torch

cropped_images = []
imgs = PIL.Image.open(img_path).convert('RGB')
#print(img_path)
image_width, image_height = imgs.size
imgArrays = np.array(imgs)
# xCenter, yCenter, Width, Height are arrays of normalized box coordinates,
# scaled here to pixel units
X = (xCenter * image_width)
Y = (yCenter * image_height)
W = (Width * image_width)
H = (Height * image_height)
cropped_image = np.zeros((image_height, image_width))

for i in range(len(X)):
    x1, y1, w, h = X[i], Y[i], W[i], H[i]
    x_start = int(x1 - (w / 2))
    y_start = int(y1 - (h / 2))
    x_end = int(x_start + w)
    y_end = int(y_start + h)
    temp = imgArrays[y_start:y_end, x_start:x_end]
    cropped_image_pixels = torch.as_tensor(temp)
    cropped_images.append(cropped_image_pixels)

stacked_tensor = torch.stack(cropped_images)
print(stacked_tensor)
The error:
RuntimeError Traceback (most recent call last)
<ipython-input-82-653a155c3b71> in <module>()
130
131 if __name__=="__main__":
--> 132 main()
2 frames
<ipython-input-80-670335a0656c> in __getitem__(self, idx)
76 cropped_image_pixels = torch.as_tensor(temp)
77 cropped_images.append(cropped_image_pixels)
---> 78 stacked_tensor = torch.stack(cropped_images)
79
80 print(stacked_tensor)
RuntimeError: stack expects each tensor to be equal size, but got [506, 343, 3] at entry 0 and [520, 334, 3] at entry 1
Solution
The list of tensors contains tensors that clearly do not have the same size: entry 0 is [506, 343, 3] and entry 1 is [520, 334, 3]. From the PyTorch documentation for torch.stack:
torch.stack(tensors, dim=0, *, out=None) → Tensor
Concatenates a sequence of tensors along a new dimension.
All tensors need to be of the same size.
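As a quick illustration of that requirement, here is a minimal sketch (with made-up shapes matching the ones in your traceback) showing that stacking only works once every tensor shares the same shape:

import torch

a = torch.zeros(506, 343, 3)
b = torch.zeros(520, 334, 3)
# torch.stack([a, b])  # would raise: stack expects each tensor to be equal size

b_resized = torch.zeros(506, 343, 3)    # same shape as a
stacked = torch.stack([a, b_resized])   # works
print(stacked.shape)                    # torch.Size([2, 506, 343, 3])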
You can use pseudocode like the following, which resizes every image to a common size before converting to tensors:
import cv2
import numpy as np
import torchvision.transforms as transforms

# ...

temp = []
for img_name in LIST:
    img = cv2.imread(img_name)        # read the image
    img = cv2.resize(img, (W, H))     # cv2.resize expects (width, height)
    temp.append(img)
train_x = np.asarray(temp)            # every image now has the same shape

transform = transforms.Compose(
    [transforms.ToTensor()])
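Applied to the loop in your question, one way to do this (a sketch only, assuming X, Y, W, H and imgArrays are defined as in your code, and using a hypothetical target crop size crop_h x crop_w that you choose yourself) is to resize each crop before converting it to a tensor:

import cv2
import numpy as np
import torch

crop_h, crop_w = 256, 256                      # assumed common size; pick what fits your model

cropped_images = []
for i in range(len(X)):
    x1, y1, w, h = X[i], Y[i], W[i], H[i]
    x_start = int(x1 - (w / 2))
    y_start = int(y1 - (h / 2))
    x_end = int(x_start + w)
    y_end = int(y_start + h)
    temp = imgArrays[y_start:y_end, x_start:x_end]
    temp = cv2.resize(temp, (crop_w, crop_h))  # every crop now has shape (crop_h, crop_w, 3)
    cropped_images.append(torch.as_tensor(temp))

stacked_tensor = torch.stack(cropped_images)   # shape: [num_objects, crop_h, crop_w, 3]
print(stacked_tensor.shape)

Because all crops share the same shape after the resize, torch.stack no longer raises the size mismatch error.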
Answered By - oussama Seffai