[Closed] BatchNorm Double Backward error, requires 'save_mean'?

I’m using the double backward capability for BatchNorm that was recently added (thanks so much!). But I’m getting an error that I’m struggling to track down. I’ve tried to reduce it to a minimal example:

import torch
import torch.nn as nn
from torch.autograd import Variable

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.classifier = nn.Sequential(
                nn.Linear(4,4),
                nn.BatchNorm1d(4, momentum=0.1, affine=False),
                nn.ReLU(inplace=True)
        )

    def forward(self, x):
        return self.classifier(x)


# BatchNorm1d expects a 2D (batch, features) input
input_var = Variable(torch.ones(4, 4)).cuda()
net = Net()
net.cuda()
out = net(input_var)
loss = out.sum()
loss.backward(create_graph=True)
loss.backward(create_graph=True)

Running this code produces the following error:

RuntimeError: missing required argument 'save_mean'

For some reason, I’m not able to trace the error any further than the torch.autograd.backward() call, and the only place save_mean seems to appear is in the legacy batch norm code.
Can anyone see what I’m doing wrong?
Thanks!

Closing this as the bug was resolved in this PR: https://github.com/pytorch/pytorch/pull/2277
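
For anyone who finds this later: with a build that includes that fix, the usual way to exercise double backward through BatchNorm is to take gradients with torch.autograd.grad(..., create_graph=True) and then backpropagate through a function of those gradients. A minimal sketch (the gradient-penalty-style scalar here is just for illustration, not the original poster's actual loss):

import torch
import torch.nn as nn
from torch.autograd import Variable

net = nn.Sequential(
    nn.Linear(4, 4),
    nn.BatchNorm1d(4, momentum=0.1, affine=False),
    nn.ReLU(),
).cuda()

x = Variable(torch.ones(4, 4).cuda(), requires_grad=True)
loss = net(x).sum()

# First-order gradients, kept in the graph so they can be differentiated again
grads, = torch.autograd.grad(loss, x, create_graph=True)

# Backpropagating through a function of the gradients is the double backward
grad_penalty = grads.pow(2).sum()
grad_penalty.backward()

torch.autograd.grad returns a tuple (hence the unpacking), and the .pow(2).sum() penalty is only there to give a scalar to backpropagate through a second time.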