Hi,
I have two convolutions that share the same weight: one uses dilation=1, the other dilation=2.
Let x be an input tensor and w the shared weight.
Is there anything I can do to save memory when calculating:
y = F.conv2d(x, weight=w, dilation=1) + F.conv2d(x, weight=w, dilation=2)
and doing the backward pass?
Right now the memory consumption is doubled, which feels unnecessary since both branches read the same input and weight.
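For reference, here is a minimal runnable sketch of the setup. The shapes and the padding values are assumptions added for illustration (padding is needed so the two dilations produce equal output sizes and the sum is valid; the original snippet omits it):

```python
import torch
import torch.nn.functional as F

# Assumed shapes for illustration; real sizes may differ.
x = torch.randn(1, 8, 32, 32, requires_grad=True)  # input tensor
w = torch.randn(8, 8, 3, 3, requires_grad=True)    # shared 3x3 weight

# Two convolutions sharing w, differing only in dilation.
# padding=dilation keeps the spatial size at 32x32 for a 3x3 kernel,
# so the two outputs can be summed elementwise.
y = (F.conv2d(x, weight=w, padding=1, dilation=1)
     + F.conv2d(x, weight=w, padding=2, dilation=2))

# Backward pass: w accumulates gradients from both branches.
y.sum().backward()
print(y.shape, w.grad.shape)
```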