Cross convolution layer

Is there a PyTorch implementation of the ‘cross convolution’ layer used in the Visual Dynamics paper? Specifically, I would like to implement a network that predicts a set of conv filters (i.e. their weights are the network’s output) that are later convolved with an image. I could imagine doing something like

filter = nn.Conv2d(...)
filter.weight.data = network_output

Will the gradients be backpropagated properly in this case?

Don’t use .data here. Anything that goes through the .data attribute won’t be backpropagated.
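A tiny illustration of why: operations routed through .data are not recorded by autograd, so nothing upstream of them receives gradients.

import torch

x = torch.randn(3, requires_grad=True)
y = (x.data * 2).sum()
print(y.requires_grad)  # False: the multiplication went through .data,
                        # so calling y.backward() would raise an error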

I’d recommend writing it using the torch.nn.functional interface:

import torch.nn.functional as F
# network_output must have shape (out_channels, in_channels, kH, kW)
p = F.conv2d(image, weight=network_output, ...)

See torch.nn.functional.conv2d
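Putting it together, a minimal runnable sketch (the predictor, shapes, and names below are illustrative placeholders, not from the paper):

import torch
import torch.nn as nn
import torch.nn.functional as F

image = torch.randn(1, 3, 32, 32)  # (N, C_in, H, W)
feat = torch.randn(1, 128)         # stand-in for an encoder's output

# Hypothetical predictor: maps the feature vector to 16 filters of size 3x3 over 3 channels.
predictor = nn.Linear(128, 16 * 3 * 3 * 3)
network_output = predictor(feat).view(16, 3, 3, 3)  # (C_out, C_in, kH, kW)

p = F.conv2d(image, weight=network_output, padding=1)
p.sum().backward()  # gradients flow through the predicted weights
print(predictor.weight.grad is not None)  # True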

If you used the nn.Conv2d module instead, you would need to change its weight attribute on every forward pass:

filter = nn.Conv2d(...)
filter.weight = network_output
p = filter(image)
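One caveat with that approach (a sketch, with hypothetical sizes): nn.Module won't accept a plain tensor where a Parameter is registered, and wrapping network_output in nn.Parameter would detach it from the graph that produced it, so you would have to remove the Parameter first:

import torch
import torch.nn as nn

conv = nn.Conv2d(3, 16, 3, padding=1, bias=False)
network_output = torch.randn(16, 3, 3, 3, requires_grad=True)  # stand-in for the predictor's output

# Assigning a plain tensor over the registered Parameter raises a TypeError,
# so drop the Parameter and set the tensor as an ordinary attribute instead.
del conv.weight
conv.weight = network_output

p = conv(torch.randn(1, 3, 32, 32))
p.sum().backward()  # gradients reach the tensor behind the weights
print(network_output.grad.shape)  # torch.Size([16, 3, 3, 3])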

Makes sense. Thanks for the reply!