How to build a view layer in PyTorch for Sequential Models?

Is this ok:

class View(nn.Module):
    def forward(self, input, shape):
        return input.view(*shape)

I tried writing it based on the Flatten layer, but I couldn't even make the Flatten layer work:

import torch
import torch.nn as nn

## Q: why doesn't the Flatten layer work?
class Flatten(nn.Module):
    def forward(self, input):
        print(input.size())
        out = input.view(input.size(0),-1)
        print(out.size())
        return out

class View(nn.Module):
    def forward(self, input, shape):
        return input.view(*shape)

def main():
    x = torch.arange(0,6).view(3,2)
    print(x)
    print(x.size())
    flatten = Flatten()
    flt_x = flatten(x)
    print(flt_x)
    print(flt_x.size())

if __name__ == '__main__':
    main()

Look at the weird output; the size doesn't change:

tensor([[0, 1],
        [2, 3],
        [4, 5]])
torch.Size([3, 2])
torch.Size([3, 2])
torch.Size([3, 2])
tensor([[0, 1],
        [2, 3],
        [4, 5]])
torch.Size([3, 2])

Hi,

This seems to work, no? You keep the first dimension and collapse all the others, but your tensor had only 2 dimensions to begin with, so there is nothing extra to collapse and the shape stays (3, 2).
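
For example, a quick check (this is the same view logic, just applied to tensors of different rank):

import torch

x = torch.rand(3, 2)
print(x.view(3, -1).size())   # torch.Size([3, 2]) -- nothing left to collapse
y = torch.rand(3, 2, 4)
print(y.view(3, -1).size())   # torch.Size([3, 8]) -- trailing dims merged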

By the way, for use within a Sequential, you can define a custom __init__() function on your View module that takes the shape as input.

OK, perhaps this clarifies my confusion:

class Flatten(nn.Module):
    def forward(self, input):
        '''
        Note that input.size(0) is usually the batch size.
        Given any input with input.size(0) batches, this flattens
        each sample into a single dimension of nb_elements.
        '''
        batch_size = input.size(0)
        out = input.view(batch_size, -1)
        return out # (batch_size, nb_elements)

class View(nn.Module):
    def forward(self, input, shape):
        '''
        TODO: the first dimension is the data batch_size,
        so we need to decide what the input shape should look like
        '''
        return input.view(*shape)

Oh, I see… you're saying I can't pass "shape" as an input to forward…

So this is correct:

class View(nn.Module):

    def __init__(self, shape):
        self.shape = shape

    def forward(self, input):
        '''
        TODO: the first dimension is the data batch_size
        so we need to decide how the input shape should be like
        '''
        return input.view(*self.shape)

If you want to use the View in a Sequential, yes, you have to do this, because Sequential only passes each module the output of the previous one, so forward() receives a single argument.
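
For instance, a minimal sketch (the layer sizes here are made up, not from your model): Sequential calls each module as module(input) with exactly one argument, so anything extra, like a target shape, has to be stored at construction time:

import torch
import torch.nn as nn

class Flatten(nn.Module):
    def forward(self, input):
        # collapse everything except the batch dimension
        return input.view(input.size(0), -1)

# Sequential feeds each module a single argument (the previous output),
# so Flatten.forward can only accept `input`.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    Flatten(),
    nn.Linear(8 * 32 * 32, 10),
)
out = model(torch.randn(2, 3, 32, 32))
print(out.size())  # torch.Size([2, 10])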

As for your Flatten layer, it seems to work fine, no?

import torch
from torch import nn

class Flatten(nn.Module):
    def forward(self, input):
        '''
        Note that input.size(0) is usually the batch size.
        Given any input with input.size(0) batches, this flattens
        each sample into a single dimension of nb_elements.
        '''
        batch_size = input.size(0)
        out = input.view(batch_size, -1)
        return out # (batch_size, nb_elements)

print("2D input")
foo = torch.rand(10, 20)
print("Input size:")
print(foo.size())
bar = Flatten()(foo)
print("Output size:")
print(bar.size())

print("3D input")
foo = torch.rand(10, 20, 30)
print("Input size:")
print(foo.size())
bar = Flatten()(foo)
print("Output size:")
print(bar.size())

print("8D input")
foo = torch.rand(10, 2, 3, 4, 5, 6, 7, 8)
print("Input size:")
print(foo.size())
bar = Flatten()(foo)
print("Output size:")
print(bar.size())
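
Running that prints torch.Size([10, 20]) → torch.Size([10, 20]) for the 2D input, torch.Size([10, 20, 30]) → torch.Size([10, 600]) for 3D, and torch.Size([10, 2, 3, 4, 5, 6, 7, 8]) → torch.Size([10, 40320]) for 8D: the batch dimension is preserved and everything else is collapsed.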

Correction:

class View(nn.Module):
    def __init__(self, shape):
        super().__init__()
        self.shape = shape

    def forward(self, input):
        '''
        Reshapes the input according to the shape saved in the view data structure.
        '''
        batch_size = input.size(0)
        shape = (batch_size, *self.shape)
        out = input.view(shape)
        return out

I made my layer but I get a weird error:

  File "/Users/pinocchio/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 488, in __call__
    for hook in self._forward_pre_hooks.values():
  File "/Users/pinocchio/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 539, in __getattr__
    type(self).__name__, name))
AttributeError: 'View' object has no attribute '_forward_pre_hooks'

Do you know why?

Make sure to properly call the parent __init__() function when creating your own nn.Module subclass.
Also make sure that you don’t have any other class/function called View that could conflict with your new one.
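
For reference, here is a minimal sketch of that failure mode (the Broken class is hypothetical, and the exact message can vary across PyTorch versions):

import torch
import torch.nn as nn

class Broken(nn.Module):
    def __init__(self):
        # super().__init__() is deliberately skipped, so nn.Module never
        # creates internal state such as _forward_pre_hooks
        pass

    def forward(self, input):
        return input

Broken()(torch.zeros(1))
# AttributeError: 'Broken' object has no attribute '_forward_pre_hooks'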


Hmmm, I will try debugging a little longer. Meanwhile, let me paste the code I ran (that raised the error) for reference:

        ##
        batch_size = 1
        CHW = (3, 32, 32)
        out = torch.randn(batch_size,*CHW)
        print(f'{out.size()}')
        ##
        conv2d_shape = (-1, 8, 8)
        view = View(shape=(batch_size,*conv2d_shape))
        ##
        out = view(out)
        print(f'{out.size()}')

Darn it! I was using the wrong version of my View layer. Oops! Fixed:

class View(nn.Module):
    def __init__(self, shape):
        super().__init__()
        self.shape = shape

    def __repr__(self):
        return f'View{self.shape}'

    def forward(self, input):
        '''
        Reshapes the input according to the shape saved in the view data structure.
        '''
        batch_size = input.size(0)
        shape = (batch_size, *self.shape)
        out = input.view(shape)
        return out
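
As a quick sanity check (a minimal sketch reusing the View class above; the sizes are made up):

import torch
import torch.nn as nn

# assumes the View class defined above is in scope
x = torch.randn(4, 3 * 32 * 32)   # a batch of 4 flattened images
view = View(shape=(3, 32, 32))
print(view)                        # View(3, 32, 32)
print(view(x).size())              # torch.Size([4, 3, 32, 32])

# and it drops straight into a Sequential
decoder = nn.Sequential(
    nn.Linear(100, 3 * 32 * 32),
    View(shape=(3, 32, 32)),
)
print(decoder(torch.randn(4, 100)).size())  # torch.Size([4, 3, 32, 32])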