Why can't I change the padding and stride in Conv2d?

Hi, perhaps I am missing something, but I cannot seem to figure out how to change the padding and stride amounts in the Conv2d module.

From here I see the nn.SpatialConvolution API, and here it mentions what the PyTorch API is, but nowhere can I see how to change the stride and padding amounts… please tell me this exists?

Thanks

Oh, how did you even find that link :smiley: Those are the very old Lua Torch docs; use only docs.pytorch.org. When do you want to change them?

You can pass them as arguments to the module constructor, e.g.

nn.Conv2d(16, 32, kernel_size=3, stride=1, padding=(5, 3))
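
For instance, a quick shape check (a minimal sketch; the channel counts and input size here are just illustrative):

import torch
import torch.nn as nn

# 3x3 kernel, stride 2, asymmetric padding: 5 rows top/bottom, 3 cols left/right
conv = nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=(5, 3))

x = torch.randn(1, 16, 28, 28)   # one 16-channel 28x28 input
print(conv(x).shape)             # torch.Size([1, 32, 18, 16])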

Alternatively, if you need to change them at runtime, I’d suggest using the functional interface:

import torch.nn.functional as F

...

# F.conv2d has no kernel_size argument; the kernel size is implied
# by the shape of self.weight
F.conv2d(input, self.weight, self.bias, stride=1, padding=(x, y))
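
If it helps, here is a minimal sketch of a module that picks its padding per forward call (the class name and weight shapes are made up for illustration):

import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicPadConv(nn.Module):
    def __init__(self):
        super().__init__()
        # weight shape is (out_channels, in_channels, kH, kW); F.conv2d
        # infers the kernel size from this shape
        self.weight = nn.Parameter(torch.randn(32, 16, 3, 3) * 0.01)
        self.bias = nn.Parameter(torch.zeros(32))

    def forward(self, input, padding):
        # padding can be different on every call
        return F.conv2d(input, self.weight, self.bias, padding=padding)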

Sweet, thank you! I should not watch Netflix while searching for PyTorch docs, it seems. :stuck_out_tongue:

Changing them at runtime is intriguing… I didn’t even know people did that. How could you guarantee the same output dimensionality if you changed the padding amounts at runtime, though?

The whole point of run-time graph construction is that you don’t have to provide the framework any guarantees about dimensionality :wink: As long as you do ops that match the sizes, they can vary all the time, and the same applies to module parameters. That’s the whole beauty of dynamic graphs.

I don’t know if people do that, but it might be reasonable if you have variably sized inputs, or are doing something completely new.

Very interesting. I know that in the context of RNNs we can have variable-length inputs, since the backprop can be unrolled N times, but I didn’t realize that we could have variable-sized weights/parameters. I’ll read more about it (any sources on this are appreciated), but the top question in my mind is: in a dynamic graph, when we change the dimensionality of the weights, how do you guarantee that those new weights are even trained? Does that make sense?

For example, conv layers can be applied to variably sized inputs with a fixed set of weights.
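
To make that concrete (a quick sketch; the sizes are arbitrary):

import torch
import torch.nn as nn

conv = nn.Conv2d(3, 8, kernel_size=3, padding=1)   # one fixed set of weights

print(conv(torch.randn(1, 3, 32, 32)).shape)   # torch.Size([1, 8, 32, 32])
print(conv(torch.randn(1, 3, 64, 48)).shape)   # torch.Size([1, 8, 64, 48])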

I didn’t mean to say that you necessarily want to alter the weights, but there have been some papers on having one network approximate the parameters of another network (e.g. this one), so it could probably be used for that.
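
Roughly, the idea looks something like this (just a toy sketch under my own assumptions, not the paper’s actual architecture; all the names here are made up):

import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyHyperConv(nn.Module):
    # a small net predicts the conv weights from an embedding,
    # so the "weights" are generated at runtime rather than stored
    def __init__(self, in_ch=3, out_ch=8, k=3, emb_dim=16):
        super().__init__()
        self.shape = (out_ch, in_ch, k, k)
        self.hyper = nn.Linear(emb_dim, out_ch * in_ch * k * k)

    def forward(self, input, emb):
        weight = self.hyper(emb).view(self.shape)
        # gradients flow into self.hyper through the generated weight
        return F.conv2d(input, weight, padding=1)

out = TinyHyperConv()(torch.randn(1, 3, 32, 32), torch.randn(16))
print(out.shape)   # torch.Size([1, 8, 32, 32])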

Thank you, will take a look!