Why is my nn.Identity module changing the input shape?


Code:
import torch
import torch.nn as nn
import torchvision.models
from torchinfo import summary

class Identity(nn.Module):
    def __init__(self):
        super(Identity, self).__init__()

    def forward(self, x):
        print(x.shape)
        return x

model1 = torchvision.models.vit_h_14(weights='IMAGENET1K_SWAG_E2E_V1')
for index in range(12, 32):
    model1.encoder.layers[index] = nn.Identity()
model1.encoder.ln = Identity()
model1.head = Identity()
model1.heads = Identity()
model1.fc = Identity()
print(model1)

If the original modules you are replacing with nn.Identity changed the activation shape, the behavior you are seeing is expected: nn.Identity simply returns its input unchanged, so downstream layers receive activations in whatever shape the preceding layer produced.
Did you check which layers you are replacing? And if so, did you check whether these layers change the activation shape?
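
For illustration, here is a minimal sketch (with hypothetical layer sizes) of the effect: a Linear layer transforms the last dimension of the activation, while nn.Identity passes the input through as-is.

    import torch
    import torch.nn as nn

    x = torch.randn(2, 1280)

    # A Linear layer maps the last dimension from 1280 to 1000 features
    layer = nn.Linear(1280, 1000)
    print(layer(x).shape)   # torch.Size([2, 1000])

    # nn.Identity returns its input unchanged, so the shape stays (2, 1280)
    layer = nn.Identity()
    print(layer(x).shape)   # torch.Size([2, 1280])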

Hi 🙂 Thank you. I have a similar problem: how do I know if the layers I’m replacing change the activation shape, and how do I correct it? I’m replacing the classifier with nn.Identity in a VGG model.

You could either manually check the input and output activation shapes inside the forward method of your model:

def forward(self, x):
    ...
    print(x.shape)
    x = self.classifier(x)
    print(x.shape)
    return x

and compare them, or alternatively register forward hooks and print the input/output shapes there.
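
Here is a minimal sketch of the forward-hook approach, assuming a standard torchvision VGG16 (its classifier submodule is the one being replaced in your case):

    import torch
    import torchvision.models

    model = torchvision.models.vgg16()

    def print_shapes(module, inputs, output):
        # inputs is a tuple of the positional arguments passed to forward
        print(f"{module.__class__.__name__}: input {inputs[0].shape}, output {output.shape}")

    # Register the hook on the submodule you plan to replace
    handle = model.classifier.register_forward_hook(print_shapes)
    model(torch.randn(1, 3, 224, 224))
    handle.remove()

If the printed input and output shapes differ, replacing that layer with nn.Identity will change the shape that any downstream layers receive.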
