Why are memory formats mixed?

I’m running the following code snippet on PyTorch version 1.6.0:

import torch
import torch.nn as nn


class Model(nn.Module):
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Conv2d(3, 16, 1),
            nn.ReLU(),
        )

    def forward(self, model_inputs):
        return self.layers(model_inputs)


device = torch.device('cuda:0')
torch.cuda.set_device(device)

model = Model()
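# Move the model to the GPU and convert its parameters to channels_last.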
model = model.to(device=device, memory_format=torch.channels_last)

x = torch.zeros((1, 3, 32, 32), dtype=torch.float, device=device)
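# Convert the input to channels_last as well, to match the model.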
x = x.contiguous(memory_format=torch.channels_last)

loss = model(x).mean()
loss.backward()

During the backward pass, it generates the following warning:

[W TensorIterator.cpp:924] Warning: Mixed memory format inputs detected while calling the operator. The operator will output channels_last tensor even if some of the inputs are not in channels_last format. (function operator())

If I remove the backward() call, no warning is raised; the same happens if I remove the ReLU from the model.

Where might the formats not match? Can someone provide a deeper understanding of what’s happening here?
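To narrow it down, a sketch like the following can report the memory format of each layer's output during forward and of the incoming gradient during backward. The fmt helper is hypothetical (not part of PyTorch), and register_backward_hook is the pre-1.8 API (newer releases prefer register_full_backward_hook):

import torch
import torch.nn as nn


def fmt(t):
    # Hypothetical helper: classify a tensor's memory layout.
    if t.is_contiguous(memory_format=torch.channels_last):
        return 'channels_last'
    if t.is_contiguous():
        return 'contiguous'
    return 'non-contiguous'


model = nn.Sequential(nn.Conv2d(3, 16, 1), nn.ReLU()).to(
    device='cuda:0', memory_format=torch.channels_last)
x = torch.zeros(1, 3, 32, 32, device='cuda:0').contiguous(
    memory_format=torch.channels_last)

for name, module in model.named_children():
    # Print the format of each layer's output in forward ...
    module.register_forward_hook(
        lambda m, inp, out, n=name: print(n, 'forward out:', fmt(out)))
    # ... and the format of the gradient arriving in backward.
    module.register_backward_hook(
        lambda m, gin, gout, n=name: print(n, 'grad out:', fmt(gout[0])))

model(x).mean().backward()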

I don’t get this warning in the nightly release. Could you update to 1.7.1 or the nightly build? The warning might have been spurious, and it seems to be fixed. As a guess at the mechanism: ReLU’s backward (threshold_backward) takes two tensor inputs, the incoming gradient and the saved activation, so TensorIterator probably warned because the incoming gradient was not (yet) channels_last while the saved activation was.
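After upgrading, a quick check that things behave as expected is to confirm that the output and the input gradient both come back as channels_last; a minimal sketch (both prints should show True on the fixed versions):

import torch
import torch.nn as nn

print(torch.__version__)  # 1.7.1 or a nightly build

model = nn.Sequential(nn.Conv2d(3, 16, 1), nn.ReLU()).to(
    device='cuda:0', memory_format=torch.channels_last)

# Make the input a channels_last leaf so that x.grad is populated.
x = torch.zeros(1, 3, 32, 32, device='cuda:0')
x = x.contiguous(memory_format=torch.channels_last).requires_grad_()

out = model(x)
out.mean().backward()

# Both the output and the input gradient should be channels_last.
print(out.is_contiguous(memory_format=torch.channels_last))
print(x.grad.is_contiguous(memory_format=torch.channels_last))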