I made a customized layer for `scalar x vector`, where the scalar is a Variable and the vector is a fixed Tensor.

```
class mul_scalar(torch.autograd.Function):
    """
    Customized autograd.Function of
        f(T, s) = s * T,
    where T is a fixed Tensor and s is a Variable.
    """
    def forward(self, T, s_var):
        self.save_for_backward(T, s_var)
        return T.mul(s_var[0])

    def backward(self, grad_output):
        T, s_var = self.saved_tensors
        # grad w.r.t. s must have the same shape as s_var (a 1-element tensor)
        return grad_output.mul(s_var[0]), torch.Tensor([grad_output.dot(T)])
```
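For comparison, here is a sketch of the same function written against the current static-method autograd API (the name `MulScalar` is my own); with `@staticmethod` forward/backward and `Function.apply`, no instance state is reused between calls:

```python
import torch

class MulScalar(torch.autograd.Function):
    """f(T, s) = s * T, where T is a fixed tensor and s is a learnable scalar."""

    @staticmethod
    def forward(ctx, T, s):
        ctx.save_for_backward(T, s)
        return T * s

    @staticmethod
    def backward(ctx, grad_output):
        T, s = ctx.saved_tensors
        # df/dT = s (elementwise), df/ds = <grad_output, T>
        return grad_output * s, (grad_output * T).sum()

T = torch.tensor([1.0, 2.0, 3.0])          # fixed vector
s = torch.tensor(2.0, requires_grad=True)  # learnable scalar
out = MulScalar.apply(T, s)
out.sum().backward()
# s.grad is now T.sum() == 6.0
```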

In my `nn.Module`, I declare `self.ms = mul_scalar()` in `__init__`:

```
class Net(nn.Module):
    def __init__(self, var=1):
        super(Net, self).__init__()
        self.ms = mul_scalar()

    def forward(self, x):
        ...
        self.ms(x, w)
        ...
```

However, when I backpropagate, I get an error about *retain variables*.

How do I declare my own function properly in this case?

As an alternative, I can create the function inside `forward` as follows, but I would rather declare `mul_scalar()` once in `__init__`:

```
def forward(self, x):
    c = Variable(torch.FloatTensor([1]), requires_grad=True)
    ms = mul_scalar()
    z1 = ms(x, c)
```
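For what it's worth, if a custom Function isn't strictly required here, a sketch that sidesteps per-instance state entirely keeps the learnable scalar as an `nn.Parameter` and lets autograd differentiate the plain multiply (the module below is my own illustration, not the code from the question):

```python
import torch
import torch.nn as nn

class ScaleNet(nn.Module):
    """Multiplies the input by a single learnable scalar."""
    def __init__(self):
        super().__init__()
        self.s = nn.Parameter(torch.tensor(1.0))

    def forward(self, x):
        # Plain tensor ops: autograd computes the gradient w.r.t. self.s,
        # so no Function instance is created or reused per call.
        return x * self.s

net = ScaleNet()
x = torch.tensor([1.0, 2.0, 3.0])
out = net(x)
out.sum().backward()
# net.s.grad is x.sum() == 6.0
```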