Issue
First, I trained a model on 224x224x3 images, and now I am working on visualization code taken from an MNIST codebase. The code below works fine on grayscale images, but when I used it on color images it did not work.
Code that works fine
with torch.no_grad():
    while True:
        image = cv2.imread("example.png", flags=cv2.IMREAD_GRAYSCALE)
        print(image.shape)
        input_img_h, input_img_w = image.shape
        image = scale_transformation(image, scale_factor=scale_factors[scale_idx_factor])
        image = rotation_transformation(image, angle=rotation_factors[rotation_idx_factor])
        scale_idx_factor = (scale_idx_factor + 1) % len(scale_factors)
        rotation_idx_factor = (rotation_idx_factor + 1) % len(rotation_factors)
        image_tensor = torch.from_numpy(image) / 255.
        print("image_tensor.shape:", image_tensor.shape)
        image_tensor = image_tensor.view(1, 1, input_img_h, input_img_w)
        image_tensor = T.Normalize((0.1307,), (0.3081,))(image_tensor)
        image_tensor = image_tensor.to(device)
        out = model(image_tensor)
        image = np.repeat(image[..., np.newaxis], 3, axis=-1)
        roi_y, roi_x = input_img_h // 2, input_img_w // 2
        plot_offsets(image, save_output, roi_x=roi_x, roi_y=roi_y)
        save_output.clear()
        image = cv2.resize(image, dsize=(224, 224))
        cv2.imshow("image", image)
        key = cv2.waitKey(30)
        if key == 27:
            break
Code with the problem: I only changed the image size
with torch.no_grad():
    while True:
        image = cv2.imread("image_06764.jpg")
        image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
        print('Original Dimensions : ', image.shape)
        width = 224
        height = 224
        dim = (width, height)
        image = cv2.resize(image, dim, interpolation=cv2.INTER_AREA)
        input_img_h = image.shape[0]
        input_img_w = image.shape[1]
        image = scale_transformation(image, scale_factor=scale_factors[scale_idx_factor])
        print(image.shape)
        image = rotation_transformation(image, angle=rotation_factors[rotation_idx_factor])
        scale_idx_factor = (scale_idx_factor + 1) % len(scale_factors)
        rotation_idx_factor = (rotation_idx_factor + 1) % len(rotation_factors)
        image_tensor = torch.from_numpy(image) / 255.
        print(image_tensor.size())
        image_tensor = image_tensor.view(32, 3, input_img_h, input_img_w)
        print("image_tensor.shape:", image_tensor.shape)
        image_tensor = T.Normalize((0.1307,), (0.3081,))(image_tensor)
        image_tensor = image_tensor.to(device)
        out = model(image_tensor)
        image = np.repeat(image[..., np.newaxis], 3, axis=-1)
        roi_y, roi_x = input_img_h // 2, input_img_w // 2
        plot_offsets(image, save_output, roi_x=roi_x, roi_y=roi_y)
        save_output.clear()
        image = cv2.resize(image, dsize=(224, 224))
        cv2.imshow("image", image)
        key = cv2.waitKey(30)
        if key == 27:
            break
Traceback
Traceback (most recent call last):
  File "/media/cvpr/CM_1/tutorials/Deformable_Convolutionv_V2/offset_visualization.py", line 184, in <module>
    image_tensor = image_tensor.view(32, 3, input_img_h, input_img_w)
RuntimeError: shape '[32, 3, 224, 224]' is invalid for input of size 50176
Solution
image_tensor has 50176 elements, which can be reshaped to 224x224. However, you're trying to reshape it to 32x3x224x224, which would require 32 * 3 * 224 * 224 elements.
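A minimal reproduction of the error (using a dummy tensor in place of the actual image): `view` can only rearrange the elements a tensor already has, so a 50176-element tensor cannot become a 32x3x224x224 one.

```python
import torch

# A grayscale 224x224 image has 224 * 224 = 50176 elements.
t = torch.zeros(224, 224)
print(t.numel())  # 50176

# view(32, 3, 224, 224) would need 32 * 3 * 224 * 224 = 4,816,896 elements,
# so it fails on a 50176-element tensor.
try:
    t.view(32, 3, 224, 224)
except RuntimeError as e:
    print(e)  # shape '[32, 3, 224, 224]' is invalid for input of size 50176
```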
Try this:
image_tensor = image_tensor.view(1, 1, input_img_h, input_img_w).repeat(1, 3, 1, 1)
The code above copies the grayscale image three times channel-wise, resulting in a tensor of size 1x3x224x224.
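A quick sketch of what that line produces, again with a dummy tensor standing in for the grayscale image:

```python
import torch

image_tensor = torch.rand(224, 224)  # stand-in for the 224x224 grayscale image
fixed = image_tensor.view(1, 1, 224, 224).repeat(1, 3, 1, 1)
print(fixed.shape)  # torch.Size([1, 3, 224, 224])

# All three channels are identical copies of the grayscale image.
assert torch.equal(fixed[0, 0], fixed[0, 1])
assert torch.equal(fixed[0, 0], fixed[0, 2])
```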
Additionally, why are you converting the color image to grayscale with image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)? If you remove that line and build the tensor from the three-channel image directly, there will be no channel problem (you will then need to reorder the HxWx3 array to 1x3xHxW instead of calling view(1, 1, ...)).
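If you keep the image in color, the conversion from OpenCV's HxWxC layout to PyTorch's NxCxHxW could look like the sketch below. The mean/std values are placeholders (the ImageNet statistics, used here only as an example); use whatever statistics the model was trained with, since the MNIST values (0.1307, 0.3081) are single-channel.

```python
import torch

# Stand-in for a color image as cv2.imread returns it: H x W x 3, uint8 (BGR).
image = torch.randint(0, 256, (224, 224, 3), dtype=torch.uint8)

image_tensor = image.float() / 255.
# Reorder H x W x C -> C x H x W and add a batch dimension.
image_tensor = image_tensor.permute(2, 0, 1).unsqueeze(0)  # 1 x 3 x 224 x 224

# Per-channel normalization; replace these values with the model's own stats.
mean = torch.tensor([0.485, 0.456, 0.406]).view(1, 3, 1, 1)
std = torch.tensor([0.229, 0.224, 0.225]).view(1, 3, 1, 1)
image_tensor = (image_tensor - mean) / std
print(image_tensor.shape)  # torch.Size([1, 3, 224, 224])
```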
Any advice or corrections to this answer are welcome.
Answered By - Hayoung