Hello,
I am getting familiar with PyTorch.
I created a 3D network to classify images. The input shape is (1, 1, 48, 48, 48) and the network's output shape is torch.Size([1, 256, 3, 3, 3]).
Now I want to apply a Sobel filter for edge detection. I used the following code:
import torch
import torch.nn.functional as F

inputs = torch.randn(1, 1, 48, 48, 48)
x = net(inputs)
print(x.shape)  # torch.Size([1, 256, 3, 3, 3])

sobel = [[1, 2, 1], [0, 0, 0], [-1, -2, -1]]
depth = x.size(1)     # 256 (this is the channel dimension)
channels = x.size(2)  # 3
sobel_kernel = torch.FloatTensor(sobel).expand(depth, channels, 3, 3).unsqueeze(0)
print(sobel_kernel.shape)  # torch.Size([1, 256, 3, 3, 3])

malignacy = F.conv3d(x, sobel_kernel, stride=1, padding=1)
print(malignacy.shape)  # torch.Size([1, 1, 3, 3, 3])
The output is torch.Size([1, 1, 3, 3, 3]), but I want the second dimension to stay 256, not collapse to 1.
How can I solve this?
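From the F.conv3d documentation, the weight's first dimension is out_channels, so my (1, 256, 3, 3, 3) kernel mixes all 256 channels into a single output channel. One thing I am considering is a grouped (depthwise) convolution, which filters each channel independently and keeps the channel count. A minimal sketch, assuming a random stand-in for the network output and the same 2D Sobel filter repeated on every depth slice of the kernel:

```python
import torch
import torch.nn.functional as F

# Hypothetical stand-in for the network output described above.
x = torch.randn(1, 256, 3, 3, 3)

# 2D Sobel filter; expand it to weight shape (out_channels, in_channels/groups, kD, kH, kW).
# With groups=256, each of the 256 channels is convolved with its own copy of the
# filter (a depthwise convolution), so no channels are mixed or collapsed.
sobel = torch.tensor([[1., 2., 1.], [0., 0., 0.], [-1., -2., -1.]])
sobel_kernel = sobel.expand(256, 1, 3, 3, 3).contiguous()

out = F.conv3d(x, sobel_kernel, stride=1, padding=1, groups=256)
print(out.shape)  # torch.Size([1, 256, 3, 3, 3])
```

Here groups=256 matches the number of input channels, and the per-group in_channels in the weight is 1, which is what preserves all 256 channels in the output.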
Thanks in advance!