Initializing a member tensor after creation with DataParallel (repost)

Ah, good to know on the second part.

But no, your suggestion above was to initialize it as a buffer so it gets picked up by DataParallel, and then resize it later before using it, correct? When I try that, the resize does not seem to get picked up by the buffer that was registered, and when it gets called, one (or both) of the copies still has the original size.

Just to put all the code in the same place:

def __init__(self):
    ...
    # registered as a buffer so DataParallel will pick it up and broadcast it to replicas
    self.register_buffer('currentGrad', torch.zeros(1).to(gf.device).double())

def memberFunction(self):
    self.otherFunction(self.currentGrad)

def saveGrad(self, grad_input, grad_output):
    # grow the 1-element placeholder to the real gradient shape on the first call
    if len(self.currentGrad) == 1:
        self.currentGrad = self.currentGrad.repeat(grad_output[0].shape)
    # this rebinds the attribute rather than writing into the registered buffer
    self.currentGrad = grad_output[0].detach()

What I had noticed in the earlier thread is that setting things with self.currentGrad = ... would cause the problem where it is not found, but in-place modifications with self.currentGrad += ... did work. Is there a way to do an in-place resize, maybe?
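
In case it helps, this is roughly what I mean by an in-place version, assuming resize_ followed by copy_ keeps the same underlying tensor object that was registered as the buffer (I haven't verified that this survives DataParallel's replication):

def saveGrad(self, grad_input, grad_output):
    grad = grad_output[0].detach()
    if self.currentGrad.shape != grad.shape:
        # resize_ changes the tensor's size in place, so the Python object
        # registered as the buffer stays the same
        self.currentGrad.resize_(grad.shape)
    # copy_ writes into the existing storage instead of rebinding the attribute
    self.currentGrad.copy_(grad)

The idea is that both calls mutate the existing tensor, which is the same pattern that made the += modifications work.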