Parameter passing to registered linear module without bias

Hey, I’m a beginner. I want to compute a linear unit, x dot w.T, without a bias term. I followed this instantiation pattern from the tutorial:

struct Net : torch::nn::Module {
  Net(int64_t N, int64_t M)
      : linear(register_module("linear", torch::nn::Linear(N, M))) {}

  torch::nn::Linear linear;
};

But according to this in Functions.h:

static inline Tensor linear(const Tensor & input, const Tensor & weight, const Tensor & bias) {
  static auto table = globalATenDispatch().getOpTable("aten::linear(Tensor input, Tensor weight, Tensor? bias=None) -> Tensor");
  return table->getOp<Tensor (const Tensor &, const Tensor &, const Tensor &)>(at::detail::infer_backend(input), at::detail::infer_is_variable(input))(input, weight, bias);
}

It doesn’t seem to let me leave out the bias parameter. Am I implementing this wrong? Can anyone tell me the right approach?

Hi,

The function you found in Functions.h is the "functional" version that implements the forward pass, not the nn layer that you create with torch::nn::Linear().
You can see the class doc here and the options doc (which shows how to disable the bias) here.

Thank you :slight_smile: