PyTorch general question

Hi :D,

I want to binarize a neural network (BNN) for a project, so I am reading the work of Rastegari et al. [1] and the accompanying code on GitHub [2].

One part of the code is the following:

    def updateBinaryGradWeight(self):
        for index in range(self.num_of_params):
            weight = self.target_modules[index].data
            n = weight[0].nelement()
            s = weight.size()
            # per-filter mean of |w|, broadcast back to the full weight shape
            m = weight.norm(1, 3, keepdim=True)\
                    .sum(2, keepdim=True).sum(1, keepdim=True).div(n).expand(s)
            # zero the scaling factor where the weight falls outside [-1, 1]
            m[weight.lt(-1.0)] = 0
            m[weight.gt(1.0)] = 0
            # m = m.add(1.0/n).mul(1.0-1.0/s[1]).mul(n)
            # self.target_modules[index].grad.data = \
            #         self.target_modules[index].grad.data.mul(m)
            m = m.mul(self.target_modules[index].grad.data)
            m_add = weight.sign().mul(self.target_modules[index].grad.data)
            m_add = m_add.sum(3, keepdim=True)\
                    .sum(2, keepdim=True).sum(1, keepdim=True).div(n).expand(s)
            m_add = m_add.mul(weight.sign())
            self.target_modules[index].grad.data = m.add(m_add).mul(1.0-1.0/s[1]).mul(n)

I am just starting with PyTorch :slight_smile: . I don't understand what the " \ " means, or how ".sum(2, keepdim=True)" works, in:

    m = weight.norm(1, 3, keepdim=True)\
            .sum(2, keepdim=True).sum(1, keepdim=True).div(n).expand(s)

Thanks for your help,
Regards,

.sum(2, keepdim=True) performs the sum over dimension 2 (dimensions are 0-indexed). keepdim=True means that after the reduction that dimension is kept in the output, but with size 1; with keepdim=False (the default) it is removed entirely.

In [1]: import torch

In [2]: a = torch.rand(2, 3, 4)

In [3]: a.sum(2, keepdim=False).size()
Out[3]: torch.Size([2, 3])

In [4]: a.sum(2, keepdim=True).size()
Out[4]: torch.Size([2, 3, 1])
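
Applied to the snippet you quoted, that chain of reductions computes the mean absolute value of each filter's weights and broadcasts it back to the full weight shape. A minimal sketch, with a made-up conv-weight shape (out_channels, in_channels, kH, kW), not the one from the repo:

    import torch

    weight = torch.rand(8, 4, 3, 3)   # dummy conv weight, shape is just an example
    n = weight[0].nelement()          # elements per filter: 4 * 3 * 3 = 36
    s = weight.size()

    # L1 norm over dim 3, then sums over dims 2 and 1, divided by n:
    # the per-filter mean of |w|, expanded back to the weight's shape
    m = weight.norm(1, 3, keepdim=True)\
            .sum(2, keepdim=True).sum(1, keepdim=True).div(n).expand(s)

    # equivalent, arguably more readable formulation
    m2 = weight.abs().sum(dim=(1, 2, 3), keepdim=True).div(n).expand(s)
    print(torch.allclose(m, m2))  # True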

Thanks! And do you know how the " \ " works in:

    m = weight.norm(1, 3, keepdim=True)\
            .sum(2, keepdim=True).sum(1, keepdim=True).div(n).expand(s)

This is a Python thing, not a PyTorch one :slight_smile: . A backslash at the end of a line is a line-continuation character: it lets you split something that logically belongs on a single line across multiple physical lines.
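
For example, a minimal self-contained sketch with a dummy tensor:

    import torch

    weight = torch.rand(2, 3, 4, 5)  # dummy tensor, just for illustration

    # these two statements are equivalent; the trailing backslash merely
    # continues the logical line onto the next physical line
    a = weight.norm(1, 3, keepdim=True).sum(2, keepdim=True)
    b = weight.norm(1, 3, keepdim=True)\
            .sum(2, keepdim=True)
    print(torch.equal(a, b))  # True

An alternative that avoids the backslash is to wrap the whole expression in parentheses, which many Python style guides prefer.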


Hi, do you know why I get this warning:

/pytorch/torch/csrc/autograd/python_function.cpp:622: UserWarning: Legacy autograd function with non-static forward method is deprecated and will be removed in 1.3. Please use new-style autograd function with static forward method. (Example: https://pytorch.org/docs/stable/autograd.html#torch.autograd.Function)

with this code:

class BinActive(torch.autograd.Function):
    '''
    Binarize the input activations and calculate the mean across channel dimension.
    '''

    def forward(self, input):
        self.save_for_backward(input)
        size = input.size()
        mean = torch.mean(input.abs(), 1, keepdim=True)
        input = input.sign()
        return input, mean

    def backward(self, grad_output, grad_output_mean):
        input, = self.saved_tensors
        grad_input = grad_output.clone()
        # straight-through estimator: zero the gradient where |input| >= 1
        grad_input[input.ge(1)] = 0
        grad_input[input.le(-1)] = 0
        return grad_input

Hi,

As mentioned in the warning, this is the old style of autograd.Function, which is deprecated. The link given in the warning shows the new style, with static forward and backward methods.
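
For reference, here is a sketch of BinActive rewritten in the new style: forward and backward become static methods that receive a ctx object instead of self. This follows the pattern from the docs linked in the warning; I haven't run it against the rest of the repo, so treat it as a starting point:

    import torch

    class BinActive(torch.autograd.Function):
        '''
        Binarize the input activations and calculate the mean across channel dimension.
        '''

        @staticmethod
        def forward(ctx, input):
            ctx.save_for_backward(input)
            mean = torch.mean(input.abs(), 1, keepdim=True)
            input = input.sign()
            return input, mean

        @staticmethod
        def backward(ctx, grad_output, grad_output_mean):
            input, = ctx.saved_tensors
            grad_input = grad_output.clone()
            # straight-through estimator: zero the gradient outside [-1, 1]
            grad_input[input.ge(1)] = 0
            grad_input[input.le(-1)] = 0
            return grad_input

The other change is at the call sites: instead of instantiating the class like BinActive()(x), you call BinActive.apply(x).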