Hello.

I need to implement a segmentation-aware convolution, in which the filter weights are multiplied by local segmentation-aware weights during the convolution.

Since the weights differ from location to location, I need to unfold the image so that the convolution can be expressed as a matrix multiplication.

Thanks.

You could either use `torch.unfold` from PyTorch, or im2col/col2im from https://github.com/szagoruyko/pyinn.
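As a minimal sketch of the first option: `Tensor.unfold(dim, size, step)` slides a window along one dimension, so applying it twice extracts 2D patches that can then be flattened for a matrix product. The shapes below are illustrative assumptions, not part of the original question.

```python
import torch

# Extract 3x3 patches from a 5x5 single-channel image with Tensor.unfold.
# unfold(dim, size, step) slides a window of `size` along `dim` with stride `step`.
x = torch.arange(25.).reshape(1, 1, 5, 5)

patches = x.unfold(2, 3, 1).unfold(3, 3, 1)        # (1, 1, 3, 3, 3, 3)
# Flatten each 3x3 patch so the convolution becomes a per-location dot product.
patches = patches.contiguous().view(1, 1, 3, 3, 9)  # (1, 1, 3, 3, 9)
print(patches.shape)
```

Each of the 3x3 output locations now holds a 9-element patch vector, which can be weighted per location before the product with the filter.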

Will unfold support dilated spacing in the future? With that, dilated convolution could be implemented this way too.

We now have `nn.functional.unfold`, which supports batches and dilation and is pretty much the same as im2col for 4d inputs: https://pytorch.org/docs/stable/nn.html#torch.nn.functional.unfold
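To connect this back to the original question, here is a sketch of convolution written as unfold + matmul, with a hypothetical per-location weight tensor (`seg`) multiplied into the unfolded columns; all shapes and the element-wise weighting scheme are my assumptions.

```python
import torch
import torch.nn.functional as F

B, C, H, W = 2, 3, 8, 8
k, C_out = 3, 4
x = torch.randn(B, C, H, W)
weight = torch.randn(C_out, C * k * k)   # filter flattened to a matrix

# im2col: each output location becomes a column of C*k*k values.
cols = F.unfold(x, kernel_size=k, padding=1)   # (B, C*k*k, H*W)

# Hypothetical segmentation-aware weights, one per filter tap per location.
seg = torch.rand(B, C * k * k, H * W)

# Weighted convolution as a batched matrix product.
out = (weight @ (cols * seg)).view(B, C_out, H, W)
print(out.shape)
```

With `seg` set to all ones this reduces to an ordinary `F.conv2d`, which is a handy sanity check for the reshaping.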

Sorry for not following the latest updates… Is it available in version 0.3? I can’t visit the version-selection website, so I can only see the 0.4 docs…

Another question is about the functional caller. Suppose I am customizing an operator that needs the `im2col` operation. For an input variable `v_input`, after unfold I get `v_unfold`; then `v_unfold` is multiplied with another variable `v_another`. According to my understanding of the autograd mechanism, to record the graph for backprop through the product operation, we need to save the operands `v_unfold` and `v_another`. However, after im2col, the variable `v_unfold` consumes too much memory.

I think we can compose the two operators, `im2col` and the product, into one: in the forward pass we only need to save the tensor of `v_input`, and in both the forward and the backward pass we do the `unfold` manually, which saves a lot of memory. However, when I feed the functional a tensor, it returns a Variable — is there a way to do pure tensor operations with nn.functional? Thanks.
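The composition described above can be sketched with a custom `torch.autograd.Function`: the forward pass saves only the small input tensor and the weight, and the backward pass recomputes the unfold on demand instead of keeping the large unfolded tensor alive. This assumes a modern PyTorch where tensors and Variables are merged; the class name `UnfoldMatmul` and the fixed 3x3/padding=1 setup are my own illustrative choices.

```python
import torch
import torch.nn.functional as F

class UnfoldMatmul(torch.autograd.Function):
    @staticmethod
    def forward(ctx, inp, weight):
        cols = F.unfold(inp, kernel_size=3, padding=1)   # (B, C*9, H*W)
        ctx.save_for_backward(inp, weight)               # cols is NOT saved
        ctx.hw = inp.shape[2:]
        return weight @ cols                             # (B, C_out, H*W)

    @staticmethod
    def backward(ctx, grad_out):
        inp, weight = ctx.saved_tensors
        # Recompute the unfold instead of having stored it in forward.
        cols = F.unfold(inp, kernel_size=3, padding=1)
        grad_cols = weight.t() @ grad_out                # (B, C*9, H*W)
        # fold is the transpose of unfold: it sums overlapping patch gradients.
        grad_inp = F.fold(grad_cols, ctx.hw, kernel_size=3, padding=1)
        grad_weight = (grad_out @ cols.transpose(1, 2)).sum(0)
        return grad_inp, grad_weight

# Verify the hand-written backward against numerical gradients.
x = torch.randn(2, 3, 5, 5, dtype=torch.double, requires_grad=True)
w = torch.randn(4, 27, dtype=torch.double, requires_grad=True)
print(torch.autograd.gradcheck(UnfoldMatmul.apply, (x, w)))
```

The trade-off is recomputation time in backward for a much smaller memory footprint, since only `v_input` and the weight are retained for the graph.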