How to convert a pre-trained model from NCHW to NHWC format

I am using a pre-trained PyTorch model (NCHW format), but my acceleration platform requires the model in NHWC format.

Is there an easy way to convert a PyTorch model to NHWC format?

I have permuted the weights by fetching them from PyTorch's state_dict() method like this:


Updated_params = {}
for key, value in model.state_dict().items():
    # permute only the 4-D conv weights from NCHW-style (OIHW) to NHWC-style
    Updated_params[key] = value.permute(0, 2, 3, 1) if 'conv' in key and value.dim() == 4 else value

But I am unable to repopulate the model with the permuted dictionary; model.load_state_dict(Updated_params) gives:

size mismatch for stage4.2.branches.2.3.conv1.weight: copying a param with shape torch.Size([128, 3, 3, 128]) from checkpoint, the shape in current model is torch.Size([128, 128, 3, 3]).
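
For reference, the mismatch can be reproduced with a toy layer (same shapes as in the error above; the layer itself is hypothetical):

import torch.nn as nn

conv = nn.Conv2d(128, 128, 3)                # weight shape: [128, 128, 3, 3] (OIHW)
permuted = conv.weight.permute(0, 2, 3, 1)   # shape: [128, 3, 3, 128]
# raises the size-mismatch error, because the Conv2d module still expects OIHW
conv.load_state_dict({"weight": permuted, "bias": conv.bias})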

To resolve this, how can I define the layers of the new model in NHWC format in PyTorch?

The link below doesn’t seem to solve this issue.
Channels Last Memory Format in PyTorch

Thanks!

Hi,
You are using the wrong permutation order.

See this example:

x = torch.randn(1, 128, 128, 3)
# your order
x.permute(0,2,3,1).shape # torch.Size([1, 128, 3, 128])

# correct order
x = x.permute(0, 3, 1, 2)
x.shape  # torch.Size([1, 3, 128, 128])

And the error corresponds to this issue.

I am also not sure whether the model would still work correctly after the channel permutation, because of the forward method. Operations such as concatenation, squeezing, and anything else that takes a dim argument may cause issues if they appear in the forward function, since the channel dimension has moved. You may need to override the forward function to account for the channel changes.
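
For instance (a toy illustration, not taken from your model), a torch.cat written for NCHW concatenates over the wrong axis once the tensors are NHWC:

import torch

a = torch.randn(1, 3, 8, 8)   # NCHW
b = torch.randn(1, 3, 8, 8)
torch.cat([a, b], dim=1).shape  # torch.Size([1, 6, 8, 8]) - concatenated over channels

# after permuting to NHWC, dim=1 is the height axis, not channels
a_nhwc, b_nhwc = a.permute(0, 2, 3, 1), b.permute(0, 2, 3, 1)
torch.cat([a_nhwc, b_nhwc], dim=1).shape  # torch.Size([1, 16, 8, 3]) - wrong axis; dim=3 is needed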

Bests

Hi,
Thanks for your reply. If you look at it again, you are recommending the solution the other way around, i.e. NHWC to NCHW. I want the opposite.

I want to go from (1, 3, 128, 128) to (1, 128, 128, 3), for which

value.permute(0,2,3,1)

is the correct order.

Thanks for the insight regarding forward function though.

Oh sorry! I confused the weights with the input.

What was the issue with this tutorial? I think it is OK, as you can move all layers to channels-last or channels-first mode. I am not sure if I am missing something here.

import torch
import torch.nn as nn

# define a channels-first (default) conv layer
conv = nn.Conv2d(128, 128, 3)  # replicates the [128, 128, 3, 3] weight tensor
print(conv.weight.shape)
print(conv.weight.stride())

# convert to channel last
conv = conv.to(memory_format=torch.channels_last)
print(conv.weight.shape)
print(conv.weight.stride())

# convert back to channel first
conv = conv.to(memory_format=torch.contiguous_format)
print(conv.weight.shape)
print(conv.weight.stride())

# output
# torch.Size([128, 128, 3, 3])
# (1152, 9, 3, 1)
# torch.Size([128, 128, 3, 3])
# (1152, 1, 384, 128)
# torch.Size([128, 128, 3, 3])
# (1152, 9, 3, 1)

This is the correct way to convert an existing model or layer. Please also make sure you are converting the inputs as well:

input = input.to(memory_format=torch.channels_last)

Please see the code snippet below and the comments on the print lines:

import torch

device = 'cuda'
input = torch.randint(1, 10, (2, 8, 4, 4), dtype=torch.float32, device=device, requires_grad=True)
model = torch.nn.Conv2d(8, 4, 3)

print(input.shape)  # torch.Size([2, 8, 4, 4])
input = input.contiguous(memory_format=torch.channels_last)
print(input.shape)  # still torch.Size([2, 8, 4, 4]) - need [2, 4, 4, 8] (NHWC) here
model = model.to(memory_format=torch.channels_last)
print(model)    # output= Conv2d(8, 4, kernel_size=(3, 3), stride=(1, 1))
model = model.to(device)
out = model(input)
print(out.shape)    # output= torch.Size([2, 4, 2, 2])  | need [2, 2, 2, 4] (NHWC) here 
print(out.is_contiguous(memory_format=torch.channels_last)) # Output: True

Please see this comment.

It appears memory_format=torch.channels_last is not converting the layers/input to NHWC format; it does something different. The PyTorch Channels Last Memory Format page also doesn't mention NHWC anywhere.

I hope I have stated my requirement correctly. Thanks.

Can you please clarify what you mean by acceleration platform in this case? PyTorch operators (and modules) require CV tensors to be in a specific indexing order, NCHW. To use accelerated NHWC kernels, we preserve the dimension order but lay the tensor out in memory differently.
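
To illustrate (the shapes here are arbitrary), a channels_last tensor is still indexed as NCHW, but its underlying memory is ordered NHWC:

import torch

x = torch.randn(1, 3, 4, 4)                      # indexed as NCHW
x_cl = x.to(memory_format=torch.channels_last)   # same indexing, NHWC in memory

print(x_cl.shape)     # torch.Size([1, 3, 4, 4]) - dimension order unchanged
print(x_cl.stride())  # (48, 1, 12, 3) - channels are now the fastest-varying dimension

# permuting the logical view to NHWC lines up with the memory layout
nhwc_view = x_cl.permute(0, 2, 3, 1)
print(nhwc_view.is_contiguous())  # True - the data is already stored in NHWC order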

The acceleration platform is a custom processor. I am currently taking the PyTorch model, converting it to ONNX/Keras, and then porting it to the processor for inference. But I face this challenge of NCHW vs NHWC.

If possible, can you please shed some light on the possibility of updating the model definition after permuting the layers, as mentioned in my original comment?

Thanks!

Any updates? I have the same issue and I'm totally stuck.

All PyTorch operators are written to take NCHW as the dimension order. There is no way to change it (you can only change the memory format, i.e. how the tensor is laid out in memory).

If you really want to change the order of dimensions, you would need to permute each model parameter manually. Keep in mind that your model will not work in PyTorch anymore.
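
A minimal sketch of that manual permutation, intended purely for exporting the weights to an external NHWC runtime (the function name and the assumption that every 4-D parameter is a conv weight are mine, not part of PyTorch):

import torch

def export_weights_nhwc(model):
    """Permute 4-D conv weights from OIHW to OHWI for an external NHWC runtime."""
    exported = {}
    for name, param in model.state_dict().items():
        if param.dim() == 4:
            # assumption: every 4-D parameter is a conv weight stored as [out, in, kH, kW]
            exported[name] = param.permute(0, 2, 3, 1).contiguous()
        else:
            exported[name] = param.clone()
    return exported

# usage sketch: the resulting dict can no longer be loaded back into the PyTorch model
# nhwc_weights = export_weights_nhwc(model)
# torch.save(nhwc_weights, "weights_nhwc.pt")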