I have been working with the library for a long time. Until now, my applications were fairly standard and sometimes already implemented elsewhere.
In my current case, I need to change the convolution. Ideally, I would like to:
Change the area of the image that gets convolved. For example, currently (abbreviated) we have conv(Tensor[:]).
What I’d like instead is something like: conv(Tensor[1:5, 1:5, :, :]).
Is there an already implemented solution for this? If not, is there any way to build it myself?
I could implement a convolution myself; my concern is GPU memory management, which is why I am asking whether there is a built-in solution.
Thanks in advance.
In your example you are applying the convolution to the 4 samples (indices 1 to 4) in the current batch and the 4 corresponding channels.
This should work as you’ve suggested:

import torch
import torch.nn as nn

batch_size = 10
channels = 10
h, w = 24, 24
x = torch.randn(batch_size, channels, h, w)
# in_channels=4 matches the 4 channels selected by the slice below
conv = nn.Conv2d(4, 1, 3, 1, 1)
# x[1:5, 1:5] selects samples 1-4 and channels 1-4, i.e. shape [4, 4, 24, 24];
# with kernel 3, stride 1, padding 1 the output has shape [4, 1, 24, 24]
output = conv(x[1:5, 1:5])
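Regarding the GPU memory concern: basic slicing with square brackets returns a view that shares storage with the original tensor, so the slice itself does not allocate a copy of the input. A quick check (illustrative values only):

```python
import torch

x = torch.randn(10, 10, 24, 24)
sliced = x[1:5, 1:5]

# Zeroing the view also zeroes the corresponding region of x,
# which shows that the slice shares memory with the original tensor.
sliced.zero_()
print(sliced.shape)
print(x[1:5, 1:5].abs().sum())
```

The same holds on the GPU, so passing a slice into the convolution should not duplicate the input in memory.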
I agree with this solution, which seems correct. My actual concern is: can I translate the convolution without such tensor slicing?
To be more precise, I’d like to run 4 convolutions at a time (4 different layers), but each of them should start at a different offset: (0,1), (0,0), (1,0) and (1,1). The idea was to find out whether there is a way to apply a prior translation to the convolution itself.
If no such solution exists, I’ll go for tensor slicing with the square-bracket accessor.
Anyhow, it’s just a matter of clean / generic code. If it’s not possible, thanks anyway for your answer.
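If it helps, the four-offset variant can be sketched with slicing alone; the layer sizes and offsets below are assumptions taken from your description, and each slice is a view, so the input is not copied:

```python
import torch
import torch.nn as nn

# Hypothetical setup: one input, four conv layers, each applied to the
# input shifted by a different (row, col) offset.
batch_size, channels, h, w = 10, 4, 24, 24
x = torch.randn(batch_size, channels, h, w)

offsets = [(0, 1), (0, 0), (1, 0), (1, 1)]
convs = nn.ModuleList(nn.Conv2d(channels, 1, 3, 1, 1) for _ in offsets)

outputs = []
for (dy, dx), conv in zip(offsets, convs):
    # x[:, :, dy:, dx:] is a view starting at the given spatial offset;
    # no new memory is allocated for the shifted input.
    outputs.append(conv(x[:, :, dy:, dx:]))

for (dy, dx), out in zip(offsets, outputs):
    print((dy, dx), out.shape)
```

Note that the shifted views have slightly different spatial sizes, so the outputs do too; if you need them aligned you would have to crop or pad them afterwards.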