How can I convolve an image with a Gaussian kernel?

My current approach is to convert each image in the batch to a NumPy array, apply SciPy's Gaussian filtering function, and then convert the result back to a tensor.
Are there any built-in PyTorch functions or cleverer ways to achieve this? Thanks
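One way to stay entirely inside PyTorch is to build the Gaussian kernel yourself and apply it with `torch.nn.functional.conv2d`, using `groups` so each channel is filtered independently. The sketch below is a minimal, hypothetical implementation (the helper names `gaussian_kernel2d` and `gaussian_blur` are my own, not PyTorch API):

```python
import torch
import torch.nn.functional as F

def gaussian_kernel2d(ksize: int, sigma: float) -> torch.Tensor:
    # 1-D Gaussian, then outer product -> 2-D kernel, normalized to sum to 1
    ax = torch.arange(ksize, dtype=torch.float32) - (ksize - 1) / 2.0
    g = torch.exp(-(ax ** 2) / (2 * sigma ** 2))
    k = torch.outer(g, g)
    return k / k.sum()

def gaussian_blur(batch: torch.Tensor, ksize: int = 5, sigma: float = 1.0) -> torch.Tensor:
    # batch: (N, C, H, W); groups=C applies the same kernel to every channel
    c = batch.shape[1]
    kernel = gaussian_kernel2d(ksize, sigma)[None, None].repeat(c, 1, 1, 1)
    return F.conv2d(batch, kernel, padding=ksize // 2, groups=c)

x = torch.rand(2, 3, 16, 16)
y = gaussian_blur(x)
print(y.shape)  # torch.Size([2, 3, 16, 16])
```

Because the kernel sums to 1, a constant image stays (approximately) constant away from the zero-padded borders, which is a quick sanity check for the normalization.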

Call .numpy() on the input batch and transform the data with SciPy as you wish. For a CPU tensor, the data pointer is shared between the torch.Tensor and its corresponding ndarray returned by .numpy(), so changes to one are visible in the other.

See for example:

x = torch.Tensor([1, 2, 3])
y = x.numpy()
print(x)
print(y)

x.add_(1)

print(x)
print(y)

# y has also changed, because x and y share the same memory

# Similarly,

y += 1
print(x)
print(y)

# changing y in-place also changes x


Thank you very much for the pointer. So after I use SciPy, since the data is shared, I don't need to convert the NumPy array back to a torch tensor, right?

That is correct, but remember it only holds for CPU tensors. For CUDA tensors there is no memory sharing with NumPy.
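In practice that means a CUDA tensor has to be moved to host memory first, e.g. via `.cpu().numpy()`, and that move is a copy. A small sketch (the helper name `to_numpy` is my own):

```python
import torch

def to_numpy(t: torch.Tensor):
    # .cpu() is a no-op for a tensor already on the CPU (memory stays shared),
    # but for a CUDA tensor it copies device memory to host (no sharing)
    return t.cpu().numpy()

x = torch.ones(3)
y = to_numpy(x)
x.add_(1)
print(y)  # [2. 2. 2.] -- still shared on CPU

if torch.cuda.is_available():
    g = torch.ones(3, device="cuda")
    z = to_numpy(g)  # independent copy
    g.add_(1)
    print(z)         # still [1. 1. 1.] -- the GPU update is not reflected
```

So for the GPU case you would filter the host copy and then move the result back with `torch.from_numpy(...).cuda()` (or `.to(device)`).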

I got it. Thanks for your kind answer!