# Trainable Variable in Loss Function and Matrix Multiplication

Hi, could you check my examples with trainable variables? I was thinking about two cases:

• implementing a Linear layer without torch.nn.
• implementing a loss layer with a learnable parameter (e.g. Center Loss, where the class centers are trainable parameters).

Such things are easy in TensorFlow, but I’m not sure if I coded them correctly in PyTorch.

Here is my attempt at implementing the Linear layer. I’m not sure whether I should implement `parameters()` this way or differently, or whether PyTorch has a function that gathers all trainable variables into a single list.
http://pastebin.com/NkFZJkBW

Here is my attempt at using a learnable variable in a loss function. It seems to work, but I’m not sure whether I used the PyTorch functions correctly (e.g. adding the additional variable to the optimizer), since I want one loss applied to the final layer and a second one to an intermediate layer.
http://pastebin.com/X35jEE54

Could you check whether my use of PyTorch here is correct, or whether I should change something?


Hi!

There are some issues with your first script:

• you need to define `linear1` and `linear2` as `nn.Parameter` instead of `Variable`; otherwise `model.parameters()` won’t see them
• just for information, `nn.Linear` implements it as `torch.mm(input, weight.t())`, as in torch7 BTW
• you might want to divide the loss by the batch size; that’s the default in PyTorch
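To illustrate the first point, here is a minimal sketch of such a layer (the class name and the initialization scale are my own illustrative choices, not taken from the linked pastebin): assigning an `nn.Parameter` as a module attribute registers it, so `model.parameters()` finds it with no extra bookkeeping.

```python
import torch
import torch.nn as nn

class MyLinear(nn.Module):
    """Minimal Linear layer without torch.nn.Linear.

    nn.Parameter (unlike a plain tensor/Variable) is registered by
    nn.Module, so it shows up in model.parameters() automatically.
    """
    def __init__(self, in_features, out_features):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.01)
        self.bias = nn.Parameter(torch.zeros(out_features))

    def forward(self, x):
        # same layout nn.Linear uses internally: x @ weight.t() + bias
        return torch.mm(x, self.weight.t()) + self.bias

layer = MyLinear(4, 3)
params = list(layer.parameters())   # weight and bias, found automatically
```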

At first sight, the second example looks fine to me. But note that you can write `y.size(0)` instead of `y.size()[0]`.
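On the question of adding the extra variable to the optimizer: once the loss is an `nn.Module`, its learnable center is exposed through `parameters()`, so one optimizer can update both the network and the loss. A sketch under assumed placeholder shapes (the module here is a stand-in, not the pastebin code):

```python
import itertools
import torch
import torch.nn as nn

model = nn.Linear(8, 2)         # stand-in for the real network
loss_module = nn.Module()       # stand-in loss with one learnable parameter
loss_module.center = nn.Parameter(torch.zeros(2))

# a single optimizer updates the network weights and the loss parameter
optimizer = torch.optim.SGD(
    itertools.chain(model.parameters(), loss_module.parameters()),
    lr=0.1,
)
```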

I’ve implemented the same loss; however, the centers learned are nearly all the same. I don’t know where the problem is.

I have implemented the center loss and it looks correct. See this repo @waitwaitforget @melgor

A simpler implementation of center loss:

```python
import torch
import torch.nn as nn

class CenterLoss(nn.Module):
    def __init__(self, center_num, feature_dim):
        super(CenterLoss, self).__init__()
        self.center_num = center_num
        self.feature_dim = feature_dim
        # one learnable center per class
        self.center_features = nn.Parameter(torch.Tensor(self.center_num, self.feature_dim))
        nn.init.normal_(self.center_features, mean=0, std=0.1)

    def forward(self, x, label):
        B = x.size(0)
        # pick the center belonging to each sample's label
        center = torch.index_select(self.center_features, 0, label)
        diff = x - center
        loss = diff.pow(2).sum() / B
        return loss
```
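For completeness, one possible way to combine such a module with a classification loss, matching the original question (one loss on the final layer, one on the intermediate features). The weighting factor 0.5 and all sizes are illustrative assumptions; the class is repeated compactly so the snippet runs standalone:

```python
import torch
import torch.nn as nn

class CenterLoss(nn.Module):
    # compact version of the module above, repeated so this runs standalone
    def __init__(self, center_num, feature_dim):
        super().__init__()
        self.center_features = nn.Parameter(torch.randn(center_num, feature_dim) * 0.1)

    def forward(self, x, label):
        center = self.center_features.index_select(0, label)
        return (x - center).pow(2).sum() / x.size(0)

criterion = nn.CrossEntropyLoss()
center_loss = CenterLoss(center_num=10, feature_dim=2)

features = torch.randn(4, 2)                 # intermediate embeddings
logits = torch.randn(4, 10, requires_grad=True)  # final-layer outputs
labels = torch.randint(0, 10, (4,))

# weighted sum: classification loss on the final layer,
# center loss on the intermediate features (0.5 is an arbitrary weight)
loss = criterion(logits, labels) + 0.5 * center_loss(features, labels)
loss.backward()   # gradients also flow into center_features
```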

I had the same issue: my learned parameters are all the same. Did you solve it, and if so, how? Thanks!