PReLU in C++: how does it work?

Hi,

I'm trying to use the prelu function in PyTorch C++, but I can't understand how to use it. The function takes two Tensors as parameters. Why do I need two Tensors? I would have expected to pass only the weights.

In the Python version I can pass only the weights. How is this function related to the Python version?

Thanks

I guess the C++ API is similar to the functional one: you need to give the layer's input and the weights: https://pytorch.org/docs/stable/nn.functional.html#prelu

@erict - As explained by @albanD, the torch::nn::functional::prelu function is equivalent to the Python torch.nn.functional.prelu function, and they both take two tensors (input and weight).
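For reference, here is a minimal sketch of calling it from C++ (the shapes and the 0.25 init are just example values):

#include <torch/torch.h>
#include <iostream>

int main() {
	namespace F = torch::nn::functional;

	// Example input: a batch of 8 samples with 16 channels.
	torch::Tensor input = torch::randn({ 8, 16 });

	// One slope per channel, initialized to 0.25 (the same default init PyTorch uses).
	torch::Tensor weight = torch::ones({ 16 }) * 0.25;

	// Both the C++ and Python functional variants take (input, weight).
	// torch::prelu(input, weight) is the equivalent lower-level op.
	torch::Tensor output = F::prelu(input, weight);

	std::cout << output.sizes() << std::endl; // [8, 16]
}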

In the Python version I can pass only the weights. How is this function related to the Python version?

Do you mind elaborating how you pass only the weights in the Python version?

Hi @albanD, @yf225

thanks for your answers. Yes, I had a look at the docs; I will try to explain better. In the Python implementation, inside the PReLU class here https://pytorch.org/docs/stable/_modules/torch/nn/modules/activation.html#PReLU you can find self.weight = Parameter(torch.Tensor(num_parameters).fill_(init)), and by default self.weight is added to the model's parameters, if I have understood correctly.

In PyTorch C++ I have some doubts about how to create the equivalent of the self.weight Tensor and pass it to the prelu function. Right now I'm using register_parameter in the constructor of the model like this: register_parameter("prelu1", prelu1.fill_(0.25)); where prelu1 is torch::Tensor prelu1 = torch::ones({1});

In the forward I use the function like this: x = torch::prelu(inputs, prelu1);

Is this the correct way to register the Tensor with the model, like the Python version does?
This is what I have done:

struct TestImpl : nn::Module {

	TestImpl()
		: conv1(register_module("conv1", nn::Conv2d(nn::Conv2dOptions(1, 32, 4).stride(2).padding(1)))) {
		// Fill prelu1 with 0.25 and register it with the module under the name "prelu1".
		register_parameter("prelu1", prelu1.fill_(0.25));
	}

	torch::Tensor forward(torch::Tensor x) {
		return torch::prelu(x, prelu1);
	}

	nn::Conv2d conv1;

	torch::Tensor prelu1 = torch::ones({ 1 });
};

Thanks

@erict Thanks for the explanation - If your goal is to find an equivalent of the torch.nn.PReLU layer in the C++ API, you can use the torch::nn::PReLU layer, which behaves exactly like the Python version. We can construct it like auto m = torch::nn::PReLU(torch::nn::PReLUOptions().num_parameters(42).init(0.25)) (num_parameters and init are optional), and we can access its weight attribute using m->weight, just like we do in Python.
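For example, a quick sketch of that usage (the shapes and option values below are just placeholders):

#include <torch/torch.h>
#include <iostream>

int main() {
	// One learnable slope per channel (32 here), initialized to 0.25.
	auto m = torch::nn::PReLU(torch::nn::PReLUOptions().num_parameters(32).init(0.25));

	// The weight tensor is registered as a parameter of the module automatically.
	std::cout << m->weight.sizes() << std::endl; // [32]

	// Apply it like any other module.
	torch::Tensor x = torch::randn({ 8, 32, 28, 28 });
	torch::Tensor y = m->forward(x);
	std::cout << y.sizes() << std::endl; // [8, 32, 28, 28]
}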

If your goal is to make prelu1 in your C++ module point to the parameter registered with register_parameter("prelu1", prelu1.fill_(0.25));, we should replace it with prelu1 = register_parameter("prelu1", prelu1.fill_(0.25));, and then it should be properly registered. :smiley:
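For example, your snippet with just that one change applied (keeping the nn namespace alias from your code):

struct TestImpl : nn::Module {

	TestImpl()
		: conv1(register_module("conv1", nn::Conv2d(nn::Conv2dOptions(1, 32, 4).stride(2).padding(1)))) {
		// Assigning the result back makes prelu1 refer to the registered parameter,
		// so the optimizer can see it and it is saved/loaded with the module.
		prelu1 = register_parameter("prelu1", prelu1.fill_(0.25));
	}

	torch::Tensor forward(torch::Tensor x) {
		return torch::prelu(x, prelu1);
	}

	nn::Conv2d conv1;

	torch::Tensor prelu1 = torch::ones({ 1 });
};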

Ok perfect. Thank you for the explanation.