Non-legacy View module?

There doesn’t currently appear to be a non-legacy View module in PyTorch’s torch.nn. Any reason for this? While obviously not essential, it’s convenient when porting over existing Torch networks. I’d be happy to submit a PR.

I might be wrong, but I believe it is available in examples/mnist/main.py, line 64.

As @gsp-27 pointed out, nn.View is no longer necessary, because you can just call .view() on a Variable and have it recorded by autograd. It should be straightforward to map old networks that use nn.View to new ones that simply call the method.
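
For instance (a minimal sketch, not taken from any existing network), the reshape is recorded and gradients flow back through it:

import torch
from torch.autograd import Variable

x = Variable(torch.randn(4, 10, 5, 5), requires_grad=True)
y = x.view(4, -1)        # reshape on a Variable is recorded by autograd
y.sum().backward()       # gradients flow back through the view
print(x.grad.size())     # torch.Size([4, 10, 5, 5])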

That method directly resizes the associated tensor instance; having a View module would enable one to directly add it to the sequence of modules in a network rather than having to explicitly call it. For example:

import torch
import torch.nn as nn


class MyNetwork(torch.nn.Module):
    def __init__(self):
        super(MyNetwork, self).__init__()
        # View(100) stands for the module being proposed here; it would
        # reshape the pooled feature maps before the linear layer.
        modules = [nn.Conv2d(1, 10, 3),
                   nn.MaxPool2d(2, 2),
                   ...,
                   View(100),
                   nn.Linear(100, 2)]
        for i, m in enumerate(modules):
            self.add_module(str(i), m)

    def forward(self, x):
        for child in self.children():
            x = child(x)
        return x

Just to clarify: .view() doesn’t resize the tensor; it returns a new tensor that shares the same memory but has different sizes.
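
A minimal illustration of the sharing:

import torch

a = torch.zeros(6)
b = a.view(2, 3)      # new tensor object, same underlying storage
b[0, 0] = 42
print(a[0])           # 42.0 -- the change is visible through a as well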

Is your use case that you want to have a single Sequential? We discussed having nn.View in the new nn, but we decided that it’s better to keep it simple, since it is easily achievable using autograd. If you have a conv part and an fc classifier, you can take the approach with two Sequentials, just like in torchvision.models (the link points directly to the AlexNet implementation).
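
A rough sketch of that pattern (simplified, not the actual AlexNet code; it assumes 3x32x32 inputs):

import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        # conv part
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2, 2),
        )
        # fc classifier; 16 * 16 * 16 assumes 3x32x32 inputs
        self.classifier = nn.Sequential(
            nn.Linear(16 * 16 * 16, 10),
        )

    def forward(self, x):
        x = self.features(x)
        x = x.view(x.size(0), -1)   # flatten between the two Sequentials
        x = self.classifier(x)
        return x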

I guess it’s ultimately a matter of taste, but an nn.View module seems to enable a more declarative definition of models that require a change in tensor size between layers; having to explicitly call the view() method in forward() effectively spreads the model definition across __init__() and forward(). In any event, the nn.View module I was thinking about contributing just wraps the torch.autograd.variable.View() function and implicitly handles the batch size, so the underlying functionality is (presumably) the same as explicitly calling the tensor view() method.
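
For reference, roughly what I had in mind (a sketch that just delegates to the tensor view() method and keeps the batch dimension, rather than wrapping the autograd function directly):

import torch.nn as nn

class View(nn.Module):
    def __init__(self, *shape):
        super(View, self).__init__()
        self.shape = shape

    def forward(self, x):
        # keep the batch dimension implicit, reshape the rest
        return x.view(x.size(0), *self.shape)

With that, View(100) in the earlier example would reshape each sample to a flat vector of length 100.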

Of course. That’s a valid, yet not recommended, solution 🙂

The main idea is that the model structure should be defined by forward, and that’s why we decided not to include any containers. We only kept Sequential because it can simplify some cases, but we didn’t want to implement modules that would be redundant given all the autograd functions we already have.

Duly noted. Thanks for the feedback.

Sorry to come back to this discussion after a month or so, but I do miss a convenient way of flattening a layer’s output inside a Sequential. I am breaking this operation into two pieces as suggested: a Sequential model that outputs an image-shaped tensor, and then flattening the result in the forward() method.

You could define a simple Flatten Module like this:

import torch
import torch.nn as nn
from torch.autograd import Variable


class Flatten(nn.Module):
    def __init__(self):
        super(Flatten, self).__init__()

    def forward(self, x):
        # collapse everything except the batch dimension
        x = x.view(x.size(0), -1)
        return x


model = nn.Sequential(
            nn.Conv2d(3, 1, 3, 1, 1),
            Flatten(),
            nn.Linear(24*24, 1)
        )

x = Variable(torch.randn(10, 3, 24, 24))
model(x)

Very true @ptrblck, thanks for sharing. 🙂
