I want to create my own conv2d layer, say myConv2d. The only change is to the weight matrix; everything else should be exactly the same as PyTorch defines it.
I noticed that Conv2d inherits from _ConvNd, as stated in the documentation, so I wrote something similar, like below:
```python
# import all other needed libraries
import torch.nn.functional as F
from torch.nn.modules.conv import _ConvNd

class myConv2d(_ConvNd):
    def __init__(...):
        ...

    def conv2d_forward(self, input, weight):
        # here I want to modify the behaviour of the weight, say, weight * 10
        weight = weight * 10
        ...
        return F.conv2d(input, weight, self.bias, self.stride,
                        self.padding, self.dilation, self.groups)

    def forward(self, input):
        return self.conv2d_forward(input, self.weight)
```
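As a point of comparison, here is a minimal runnable sketch of the same idea that subclasses `nn.Conv2d` directly (which itself inherits `_ConvNd`, so its `__init__` can be reused as-is). The factor of 10 matches the example above; the layer sizes and input shape are made up:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MyConv2d(nn.Conv2d):
    """Conv2d that scales its weight by 10 before the convolution."""

    def forward(self, input):
        # self.weight is a registered nn.Parameter (created by nn.Conv2d's
        # __init__), so model.to(device) moves it with the rest of the module
        return F.conv2d(input, self.weight * 10, self.bias, self.stride,
                        self.padding, self.dilation, self.groups)

conv = MyConv2d(3, 8, kernel_size=3, padding=1)
x = torch.randn(1, 3, 16, 16)
y = conv(x)
print(y.shape)  # torch.Size([1, 8, 16, 16])
```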
I saved the above code in a file and imported it in my network file:

```python
from .myConv2d import myConv2d

class Net(nn.Module):
    def __init__(...):
        ...
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(...)  # normal conv
        self.conv2 = myConv2d(...)   # my conv
```
When I run the network, this error pops up:

```
RuntimeError: Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same
```
This means the weight in myConv2d was not moved to CUDA. I suspect this is because myConv2d does not even belong to my network graph: its weight was not moved to CUDA, while the other parts of the network are on CUDA.
So if I want to customize a conv layer's behaviour and make it part of the graph, what should I do, or what did I miss?
Thanks for your help.
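For what it's worth, one way to check whether a custom layer's parameters are registered with the parent module is to list `net.named_parameters()`: any `nn.Parameter` created through the `nn.Conv2d`/`_ConvNd` machinery moves together with `net.cuda()` or `net.to(device)`. A minimal standalone sketch (class names and layer sizes are made up):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Scaled(nn.Conv2d):
    """Hypothetical custom conv that scales its weight by 10."""

    def forward(self, x):
        return F.conv2d(x, self.weight * 10, self.bias, self.stride,
                        self.padding, self.dilation, self.groups)

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 8, 3)
        self.conv2 = Scaled(8, 8, 3)

net = Net()
# both conv layers' weights are registered with the parent module,
# so net.cuda() / net.to(device) would move all of them together
names = [n for n, _ in net.named_parameters()]
print(names)  # ['conv1.weight', 'conv1.bias', 'conv2.weight', 'conv2.bias']
net.double()  # casts every registered parameter, custom layer included
print(net.conv2.weight.dtype)  # torch.float64
```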
Update: by manually setting the weight's device I managed to run the code, but I still need someone to confirm that this approach is valid and working:

```python
def conv2d_forward(self, input, weight):
    device = weight.device
    new_weight = weight * 10
    new_weight = new_weight.to(device)
    ...
    return F.conv2d(input, new_weight, self.bias, self.stride,
                    self.padding, self.dilation, self.groups)
```
I also printed out the weight, and it is updating during training.
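For what it's worth, a quick standalone check (layer sizes made up) suggests the multiplication itself already produces a tensor on the weight's device, and that gradients still flow back to the original parameter through the scaling:

```python
import torch
import torch.nn as nn

conv = nn.Conv2d(3, 8, kernel_size=3)
w = conv.weight

# multiplying a parameter keeps the result on the same device,
# so the extra .to(device) call above is effectively a no-op
scaled = w * 10
assert scaled.device == w.device

# autograd propagates through the scaling back to the original weight
scaled.sum().backward()
assert conv.weight.grad is not None
```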