where $prob_i$ is a 1x32 vector and $alpha_i$ is a parameter that should be learned during training. So my code is:

```python
class parameter_learning(nn.Module):
    def __init__(self):
        super(parameter_learning, self).__init__()
        self.alpha_list = []
        for i in range(3):
            alpha = Parameter(torch.ones(1)).cuda()
            self.alpha_list.append(alpha)

    def forward(self, prob):
        prob_vector = 0
        for i in range(len(prob)):
            prob_vector += self.alpha_list[i] * prob[i]
        y = prob_vector
        return y
```

However, the plain Python list of parameters is not registered by the module, so every alpha kept its initial value (1) throughout training.

You should store alpha_list as a ParameterList to ensure it is detected properly by the rest of the nn code.
You could also create a single vector whose size is the number of alphas and then do a dot product to compute your weighted sum.
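A minimal sketch of the second suggestion (the class name `ParameterLearningVec` and the use of `torch.stack`/`torch.einsum` are my choices, not from the thread): one Parameter vector holds all three alphas, and a single contraction replaces the Python loop.

```python
import torch
import torch.nn as nn

class ParameterLearningVec(nn.Module):
    def __init__(self, num_alphas=3):
        super().__init__()
        # one vector of size num_alphas instead of a list of scalar Parameters
        self.alpha = nn.Parameter(torch.ones(num_alphas))

    def forward(self, prob):
        # prob: list of num_alphas tensors, each of shape (32,)
        stacked = torch.stack(prob)          # shape (num_alphas, 32)
        # weighted sum over the alpha axis == the dot product albanD mentions
        return torch.einsum('i,ij->j', self.alpha, stacked)
```

Because `self.alpha` is a single registered Parameter, it appears in `model.parameters()` automatically, with no ParameterList needed.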

Great, it worked! Here is the solution; I think it will be useful for someone else:

```python
import torch
import torch.nn as nn
from torch.nn import Parameter

class parameter_learning(nn.Module):
    def __init__(self):
        super(parameter_learning, self).__init__()
        self.alpha_list = nn.ParameterList([])
        for i in range(3):
            alpha = Parameter(torch.ones(1))
            self.alpha_list.append(alpha)

    def forward(self, prob):
        prob_vector = 0
        for i in range(len(prob)):
            prob_vector += self.alpha_list[i] * prob[i]
        y = prob_vector
        return y
```
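A quick way to confirm the fix (the check below is my addition, reusing the class just defined): with a ParameterList, each alpha shows up in `named_parameters()`, so the optimizer will actually update it; with a plain Python list, this would print an empty list.

```python
import torch
import torch.nn as nn
from torch.nn import Parameter

# Same module as in the post above, reproduced so this snippet runs standalone.
class parameter_learning(nn.Module):
    def __init__(self):
        super(parameter_learning, self).__init__()
        self.alpha_list = nn.ParameterList(
            [Parameter(torch.ones(1)) for _ in range(3)])

    def forward(self, prob):
        prob_vector = 0
        for i in range(len(prob)):
            prob_vector += self.alpha_list[i] * prob[i]
        return prob_vector

model = parameter_learning()
# ParameterList registers each alpha under names like 'alpha_list.0'
print([name for name, _ in model.named_parameters()])
```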

@albanD: If prob[i] is a single vector, one alpha learns its coefficient. What happens if prob[i] is a batch of vectors — how can I make a batch of alphas, where each alpha learns the coefficient for one vector in the batch? For example, if prob[i] has size 16x32 (where each 1x32 row is a prob vector with its own coefficient alpha), how do I make 16 alpha coefficients? Do I also need to change the equation? Is it alpha = Parameter(torch.ones([batch_size, 1]))?
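Shape-wise, the proposed `Parameter(torch.ones([batch_size, 1]))` does what the question asks via broadcasting: a (16, 1) alpha multiplied by a (16, 32) prob gives each row its own coefficient. A sketch (my own, not from the thread — note it ties each alpha to a fixed position in the batch, which only makes sense if the sample order is stable across iterations):

```python
import torch
from torch.nn import Parameter

batch_size = 16
# One learned coefficient per row of the batch, as the question proposes.
alpha = Parameter(torch.ones(batch_size, 1))

prob_i = torch.rand(batch_size, 32)   # one 1x32 prob vector per sample
weighted = alpha * prob_i             # (16, 1) * (16, 32) -> (16, 32)
```

The equation itself does not change; the elementwise product simply broadcasts alpha across the 32 columns of each row.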