How to convert NoneType into Variable of bias term relative to Conv layer?

Hi everyone,
I'm defining a new layer that inherits from the Conv module. When the data flows forward, I get this error:
"RuntimeError: expected a Variable argument, but got NoneType".
The reason is that bias == None:

if bias:
    self.bias = Parameter(torch.Tensor(out_channels))
    self.mask_bias = torch.ones(out_channels).cuda()
else:
    self.register_parameter('bias', None)

but the forward function's inputs should be Variables. However, I find that the bias in this function can be None. How can I deal with it?

The simplest approach I could think of is

if bias:
    self.register_parameter("bias", Parameter(torch.Tensor(out_channels)))
    self.register_buffer("mask_bias", torch.ones(out_channels))
else:
    self.register_buffer("bias", torch.zeros(out_channels))
    self.register_buffer("mask_bias", torch.zeros(out_channels))

since a zero bias should be equivalent to no bias at all.
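To support the claim above, here is a minimal sanity check (a sketch with made-up tensor shapes) showing that a convolution with a zero bias produces the same output as one with bias=None:

```python
import torch
import torch.nn.functional as F

# Made-up shapes: batch of 1, 3 input channels, 4 output channels, 3x3 kernel.
x = torch.randn(1, 3, 8, 8)
weight = torch.randn(4, 3, 3, 3)
zero_bias = torch.zeros(4)

out_none = F.conv2d(x, weight, bias=None)      # no bias at all
out_zero = F.conv2d(x, weight, bias=zero_bias)  # explicit zero bias

print(torch.allclose(out_none, out_zero))  # True
```

So registering a zero-valued buffer in place of a None bias changes nothing numerically, while keeping the forward arguments as real tensors.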

It works, many thanks! But I still have a question: the error emerged in my function `_fun(input, weight, bias)`. When I printed the input shape, I found that the data had been split according to the number of GPUs. I also found that the bias can be None in the Conv module, so why couldn't the bias be None here?

I haven't looked through the conv code, but are you sure it is really None, or is there an if statement that tests the bias for None and sets it to zero (as done above)?

Are you training with multiple GPUs and/or `torch.nn.DataParallel`?
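For context, `DataParallel` scatters the input along the batch dimension, one chunk per device, which would explain the split input shapes observed above. A rough sketch of that scatter step (assuming 2 devices, done here on CPU with `torch.chunk`):

```python
import torch

# A made-up batch of 8 samples; DataParallel would scatter it along dim 0.
batch = torch.randn(8, 3, 8, 8)
chunks = torch.chunk(batch, 2, dim=0)  # one chunk per (hypothetical) device

print([c.shape[0] for c in chunks])  # [4, 4]
```

Each replica of the module then runs its forward pass on its own chunk, so any shape you print inside forward reflects the per-device batch, not the full one.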

Thank you for pointing me in the right direction! I reread the code and found that parameters registered as None cannot pass through the net. But here my parameter 'bias' has been put into the net, so it should be a tensor.
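This behaviour can be seen directly: a parameter registered as None is skipped by the module's parameter machinery, while the attribute itself still reads as None. A minimal sketch (the module name `MyConv` is made up for illustration):

```python
import torch
import torch.nn as nn
from torch.nn import Parameter

class MyConv(nn.Module):
    def __init__(self, in_channels, out_channels, bias=True):
        super().__init__()
        self.weight = Parameter(torch.randn(out_channels, in_channels, 3, 3))
        if bias:
            self.bias = Parameter(torch.zeros(out_channels))
        else:
            # A parameter registered as None is not returned by .parameters()
            self.register_parameter('bias', None)

m = MyConv(3, 4, bias=False)
print([name for name, _ in m.named_parameters()])  # ['weight']
print(m.bias)  # None
```

So a None bias never reaches the optimizer or gets replicated across devices, which is why registering a real zero tensor (as a buffer) is the workable alternative.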