Question about '_rebuild_tensor_v2'

Hi,

I have a model trained using the newest master branch of PyTorch (0.4.0a0+18a76f5).
Now I want to load the model with an older, stable release of PyTorch (0.3.x).

It reports an error: 'module' object has no attribute '_rebuild_tensor_v2'

I noticed that '_rebuild_tensor_v2' was added recently.
Is there any way to load a model trained with 0.4.0a0+18a76f5 using the older 0.3.x release?

bowen

I have the same problem: I develop and train using the latest PyTorch built from source, save a model, and then want to deploy it on a robot that runs the stable PyTorch 0.3.1 from the repo.

It seems that this is intended behaviour, though. It is really annoying, actually.

Thanks to Python's incredible flexibility, we can actually monkey-patch our way to making this work. It is very ugly, but it gets the job done. Add the following at the top of your file, after import torch:

# Monkey-patch because I trained with a newer version.
# This can be removed once PyTorch 0.4.x is out.
# See https://discuss.pytorch.org/t/question-about-rebuild-tensor-v2/14560
import torch._utils
try:
    # Newer PyTorch already defines this; only add the shim when it is missing.
    torch._utils._rebuild_tensor_v2
except AttributeError:
    def _rebuild_tensor_v2(storage, storage_offset, size, stride, requires_grad, backward_hooks):
        tensor = torch._utils._rebuild_tensor(storage, storage_offset, size, stride)
        tensor.requires_grad = requires_grad
        tensor._backward_hooks = backward_hooks
        return tensor
    torch._utils._rebuild_tensor_v2 = _rebuild_tensor_v2
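
For completeness, the order matters: the patch has to run before torch.load ever touches the checkpoint. Here is a minimal sketch of a loading script, where my_model.pth and MyModel are just placeholders for your own checkpoint path and model class:

import torch

# ... put the monkey-patch from above right here, after `import torch` ...

# Only after the patch is in place, load the checkpoint that was saved
# with the newer PyTorch build. The lambda keeps all tensors on the CPU,
# which is handy when the target machine has no GPU.
state_dict = torch.load('my_model.pth',
                        map_location=lambda storage, loc: storage)

# model = MyModel()                # your own model definition
# model.load_state_dict(state_dict)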

Thanks man! This worked for me!

It works for me! Thanks!

That works for me, thanks!

How do I use this? Do I need to rebuild PyTorch from source, or do I just put it somewhere in the code that loads the model?
Sorry for the newbie question.

Just put it in your script before you load the checkpoint.

Thank you so much!! These few lines saved my afternoon!

This worked for me! Thanks a lot!

This worked for me! Thank you very much!