Extending torch.autograd: Unable to find 'Linear' function from 'torch.nn'

Hi everyone!

I want to add a custom function, and I have read the document at https://pytorch.org/docs/stable/notes/extending.html

The document uses a ‘Linear’ function from ‘torch.nn’ as its example of adding a custom function, but I’m unable to find that ‘Linear’ function. I’m using PyTorch 1.0.0. Please help me.

Hi,

Linear has been there for a long time.
How did you install PyTorch?
Can you find the other elements of nn?
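For example, a quick check from a Python interpreter (a minimal sketch; the exact version string depends on your install):

import torch
import torch.nn as nn

print(torch.__version__)   # e.g. '1.0.0'
print(nn.Linear)           # <class 'torch.nn.modules.linear.Linear'>
print(nn.Linear(20, 30))   # Linear(in_features=20, out_features=30, bias=True)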

I have the following ‘Linear’ class (from torch/nn/modules/linear.py):

import math

import torch
from torch.nn.parameter import Parameter
from .. import functional as F
from .. import init
from .module import Module
from .._jit_internal import weak_module, weak_script_method

@weak_module
class Linear(Module):
r"""Applies a linear transformation to the incoming data: :math:y = xA^T + b

Args:
    in_features: size of each input sample
    out_features: size of each output sample
    bias: If set to False, the layer will not learn an additive bias.
        Default: ``True``

Shape:
    - Input: :math:`(N, *, \text{in\_features})` where :math:`*` means any number of
      additional dimensions
    - Output: :math:`(N, *, \text{out\_features})` where all but the last dimension
      are the same shape as the input.

Attributes:
    weight: the learnable weights of the module of shape
        :math:`(\text{out\_features}, \text{in\_features})`. The values are
        initialized from :math:`\mathcal{U}(-\sqrt{k}, \sqrt{k})`, where
        :math:`k = \frac{1}{\text{in\_features}}`
    bias:   the learnable bias of the module of shape :math:`(\text{out\_features})`.
            If :attr:`bias` is ``True``, the values are initialized from
            :math:`\mathcal{U}(-\sqrt{k}, \sqrt{k})` where
            :math:`k = \frac{1}{\text{in\_features}}`

Examples::

    >>> m = nn.Linear(20, 30)
    >>> input = torch.randn(128, 20)
    >>> output = m(input)
    >>> print(output.size())
    torch.Size([128, 30])
"""
__constants__ = ['bias']

def __init__(self, in_features, out_features, bias=True):
    super(Linear, self).__init__()
    self.in_features = in_features
    self.out_features = out_features
    self.weight = Parameter(torch.Tensor(out_features, in_features))
    if bias:
        self.bias = Parameter(torch.Tensor(out_features))
    else:
        self.register_parameter('bias', None)
    self.reset_parameters()

def reset_parameters(self):
    init.kaiming_uniform_(self.weight, a=math.sqrt(5))
    if self.bias is not None:
        fan_in, _ = init._calculate_fan_in_and_fan_out(self.weight)
        bound = 1 / math.sqrt(fan_in)
        init.uniform_(self.bias, -bound, bound)

@weak_script_method
def forward(self, input):
    return F.linear(input, self.weight, self.bias)

def extra_repr(self):
    return 'in_features={}, out_features={}, bias={}'.format(
        self.in_features, self.out_features, self.bias is not None
    )

@weak_module
class Bilinear(Module):
r"""Applies a bilinear transformation to the incoming data:
:math:y = x_1 A x_2 + b

Args:
    in1_features: size of each first input sample
    in2_features: size of each second input sample
    out_features: size of each output sample
    bias: If set to False, the layer will not learn an additive bias.
        Default: ``True``

Shape:
    - Input: :math:`(N, *, \text{in1\_features})`, :math:`(N, *, \text{in2\_features})`
      where :math:`*` means any number of additional dimensions. All but the last
      dimension of the inputs should be the same.
    - Output: :math:`(N, *, \text{out\_features})` where all but the last dimension
      are the same shape as the input.

Attributes:
    weight: the learnable weights of the module of shape
        :math:`(\text{out\_features} x \text{in1\_features} x \text{in2\_features})`.
        The values are initialized from :math:`\mathcal{U}(-\sqrt{k}, \sqrt{k})`, where
        :math:`k = \frac{1}{\text{in1\_features}}`
    bias:   the learnable bias of the module of shape :math:`(\text{out\_features})`
            If :attr:`bias` is ``True``, the values are initialized from
            :math:`\mathcal{U}(-\sqrt{k}, \sqrt{k})`, where
            :math:`k = \frac{1}{\text{in1\_features}}`

Examples::

    >>> m = nn.Bilinear(20, 30, 40)
    >>> input1 = torch.randn(128, 20)
    >>> input2 = torch.randn(128, 30)
    >>> output = m(input1, input2)
    >>> print(output.size())
    torch.Size([128, 40])
"""
__constants__ = ['in1_features', 'in2_features', 'out_features', 'bias']

def __init__(self, in1_features, in2_features, out_features, bias=True):
    super(Bilinear, self).__init__()
    self.in1_features = in1_features
    self.in2_features = in2_features
    self.out_features = out_features
    self.weight = Parameter(torch.Tensor(out_features, in1_features, in2_features))

    if bias:
        self.bias = Parameter(torch.Tensor(out_features))
    else:
        self.register_parameter('bias', None)
    self.reset_parameters()

def reset_parameters(self):
    bound = 1 / math.sqrt(self.weight.size(1))
    init.uniform_(self.weight, -bound, bound)
    if self.bias is not None:
        init.uniform_(self.bias, -bound, bound)

@weak_script_method
def forward(self, input1, input2):
    return F.bilinear(input1, input2, self.weight, self.bias)

def extra_repr(self):
    return 'in1_features={}, in2_features={}, out_features={}, bias={}'.format(
        self.in1_features, self.in2_features, self.out_features, self.bias is not None
    )

# TODO: PartialLinear - maybe in sparse?

But the link (mentioned in my first post) discusses a ‘LinearFunction’, which is not present in torch.nn.

LinearFunction is just the name of the example function in that tutorial; the code for it is given in the tutorial itself.
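For reference, the tutorial's example is roughly along these lines (a sketch; see the tutorial page for the exact code):

import torch

class LinearFunction(torch.autograd.Function):
    # Custom autograd Function: both forward and backward are static methods.

    @staticmethod
    def forward(ctx, input, weight, bias=None):
        # Save the tensors needed for the backward pass
        ctx.save_for_backward(input, weight, bias)
        output = input.mm(weight.t())
        if bias is not None:
            output += bias.unsqueeze(0).expand_as(output)
        return output

    @staticmethod
    def backward(ctx, grad_output):
        input, weight, bias = ctx.saved_tensors
        grad_input = grad_weight = grad_bias = None
        # Only compute the gradients that are actually needed
        if ctx.needs_input_grad[0]:
            grad_input = grad_output.mm(weight)
        if ctx.needs_input_grad[1]:
            grad_weight = grad_output.t().mm(input)
        if bias is not None and ctx.needs_input_grad[2]:
            grad_bias = grad_output.sum(0)
        return grad_input, grad_weight, grad_bias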

Okay. Then we can modify the function according to our needs and save the file in torch.nn. After that, we can call ‘LinearFunction’ inside ‘Linear’ with the help of ‘apply’ (LinearFunction.apply). Am I correct?

Hi,

No, you should not modify the torch library. You can just make your changes in your own Python code; you never have to modify any torch code :slight_smile:
The tutorial shows you how to re-implement the same Module as the torch.nn.Linear Module, but in your own code. It should live inside your own Python project, something along these lines:
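A minimal sketch, assuming the LinearFunction from the tutorial (as above) is defined in the same file; ‘MyLinear’ is just an illustrative name:

import math
import torch
import torch.nn as nn

class MyLinear(nn.Module):
    """Lives in your own project; calls the custom LinearFunction via .apply
    instead of modifying anything inside the installed torch package."""

    def __init__(self, in_features, out_features, bias=True):
        super(MyLinear, self).__init__()
        self.in_features = in_features
        self.out_features = out_features
        self.weight = nn.Parameter(torch.Tensor(out_features, in_features))
        if bias:
            self.bias = nn.Parameter(torch.Tensor(out_features))
        else:
            self.register_parameter('bias', None)
        # Same initialization scheme as torch.nn.Linear
        nn.init.kaiming_uniform_(self.weight, a=math.sqrt(5))
        if self.bias is not None:
            bound = 1 / math.sqrt(in_features)
            nn.init.uniform_(self.bias, -bound, bound)

    def forward(self, input):
        # Call the custom autograd Function through .apply
        return LinearFunction.apply(input, self.weight, self.bias)

Used like the built-in module, but gradients flow through your own backward:

m = MyLinear(20, 30)
out = m(torch.randn(128, 20))   # out.size() == torch.Size([128, 30])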

Okay. Let me try.
And thank you for your help.
Thank you !!!