I’d like to build a CNN whose conv filters depend on the input (similar to Dynamic Filter Networks, https://arxiv.org/abs/1605.09673 ).
I modified the cifar10 example, but I noticed that if I try to access a conv layer’s weights and copy a Variable into them, I can’t (line 52).
What is the correct way to do this?
(Also notice that I’ve changed batch size to 1, is there a way to do this with bigger batches?)
I think it would be much simpler with the functional interface: just call F.conv2d(input, weight), where weight is generated by some other part of the network. This should work with arbitrary batch sizes; just be careful to provide weights with the correct dimensions.
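A minimal sketch of the suggestion above. The shapes here (6 input channels, 16 output channels, 5x5 kernels on a 14x14 feature map) are assumptions matching the cifar10-style network later in the thread; random tensors stand in for a real filter-generating branch.

```python
import torch
import torch.nn.functional as F

# Stand-ins: `inp` is a feature map, `weight` would come from another
# part of the network in a dynamic-filter setup (assumed shapes).
inp = torch.randn(1, 6, 14, 14)      # (batch, in_channels, H, W)
weight = torch.randn(16, 6, 5, 5)    # (out_channels, in_channels, kH, kW)

# F.conv2d takes the weights explicitly instead of owning them as a module.
out = F.conv2d(inp, weight)          # no padding: 14 - 5 + 1 = 10
print(out.shape)                     # torch.Size([1, 16, 10, 10])
```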
Thanks a lot apaszke, I was not aware of these “functionals”.
Is this the correct way to do this convolution in a batched manner? (it runs very slowly…)
y = self.pool(F.relu(self.conv1(y)))
z = Variable(torch.Tensor(x.size()[0], 16, 10, 10))
for i in range(x.size()[0]):
    z[i, :] = F.conv2d(y[i, :].unsqueeze(0), x[i, :]).squeeze(0)
z = self.pool(F.relu(z))
z = z.view(-1, 16*5*5)
(x contains the convolutional weights, y contains the image I want to convolve over, and z is where I put the result)
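One common way to avoid the slow Python loop (not from this thread, but a standard trick) is a grouped convolution: fold the batch into the channel dimension and pass groups=batch, so each sample is convolved only with its own filters. Shapes below are assumptions matching the code above.

```python
import torch
import torch.nn.functional as F

B, Cin, Cout, K, H, W = 4, 6, 16, 5, 14, 14
y = torch.randn(B, Cin, H, W)         # per-sample inputs
x = torch.randn(B, Cout, Cin, K, K)   # per-sample filter banks

# Fold batch into channels; groups=B keeps each sample's channels
# paired with its own weights.
y_flat = y.view(1, B * Cin, H, W)
w_flat = x.view(B * Cout, Cin, K, K)
z = F.conv2d(y_flat, w_flat, groups=B).view(B, Cout, H - K + 1, W - K + 1)

# Sanity check against the explicit per-sample loop.
z_loop = torch.stack([F.conv2d(y[i:i + 1], x[i]).squeeze(0) for i in range(B)])
print(torch.allclose(z, z_loop, atol=1e-5))  # True
```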
Your input has an invalid size. For weights of shape (out_channels, in_channels, kT, kH, kW), it should be 1x16x6x14x14. You probably swapped the in_channels and out_channels dimensions of the weights.
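To illustrate the swapped-dimensions mistake in the 2d case from the code above (shapes are assumptions matching that snippet): conv2d expects the weight's second dimension to equal the input's channel count, and raises a RuntimeError otherwise.

```python
import torch
import torch.nn.functional as F

inp = torch.randn(1, 6, 14, 14)   # 6 input channels

good = torch.randn(16, 6, 5, 5)   # (out_channels, in_channels, kH, kW)
print(F.conv2d(inp, good).shape)  # torch.Size([1, 16, 10, 10])

bad = torch.randn(6, 16, 5, 5)    # in/out swapped -> channel mismatch
try:
    F.conv2d(inp, bad)
except RuntimeError as e:
    print("shape error:", e)
```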