Resnet18 throws exception on conversion to half floats

Hey,

I have tried to launch the following code:

from torchvision import models
resnet = models.resnet18(pretrained=True).cpu()
resnet.half()

and have got an exception:
libc++abi.dylib: terminating with uncaught exception of type std::invalid_argument: Unsupported tensor type

Sounds like the HalfTensor type is not registered properly, but I'm not sure why that's the case.

Using PyTorch 0.2.0 (py36_1cu75, soumith channel)

Any advice on how to fix it?

The problem is that you're doing it on the CPU. Here's a reply to a similar problem from another post on the PyTorch forums. It should work if you do a .cuda() and move everything to the GPU.

link - Variable failed to wrap HalfTensor - #2 by albanD

Hey,

But they do exist; check the following code:

import torch
t1 = torch.ones([4, 4]).cpu().half()

It works.

They exist, but no math operations are defined on them.


It's ok. I just want to save some memory by storing all data in float16 and doing computations in float32. Is there an easy way to achieve this?

HalfTensor is more or less fully supported on the GPU. On the CPU it’s just a dummy tensor for storage.

Not sure that I understand your answer. "On the CPU it's just a dummy tensor for storage": does that mean a half CPU tensor actually stores its data in float32, making it a fully dummy class without any useful logic at all?

HalfTensor on the CPU is equivalent to HalfTensor on the GPU, but it does not implement any mathematical operations; on the CPU it only has copy and serialization operations. (A half CPU tensor doesn't store data in float32, it stores it in float16.)
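The storage point above (float16 really means 16 bits per element, not float32 behind the scenes) can be checked independently of PyTorch; here's a minimal sketch using numpy's IEEE float16 dtype purely as an illustration, not as a claim about torch internals:

```python
import numpy as np

# A float16 array holds genuine 16-bit values: 2 bytes per element,
# half the footprint of an equivalent float32 array.
half = np.ones((4, 4), dtype=np.float16)
full = half.astype(np.float32)  # explicit widening copy for doing math

print(half.itemsize)  # 2 bytes per float16 element
print(full.itemsize)  # 4 bytes per float32 element
print(half.nbytes, full.nbytes)  # total storage: 32 vs 64 bytes
```

The same idea applies to a half tensor: the memory savings are real, you just pay a cast when you need arithmetic.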

OK, then my question is still valid: assume we have a pretrained resnet18 and we want to run the model on the CPU for prediction, storing all weights and intermediate data in float16 (doing the conversion to float32 when necessary) and performing all computations in float32. Can this be done with PyTorch? And if yes, how?
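The store-in-float16 / compute-in-float32 cycle described in the question can at least be sketched in plain numpy; `predict` below is a hypothetical helper invented for illustration (it is not a PyTorch API), showing the widen-compute-narrow pattern on a single linear layer:

```python
import numpy as np

def predict(weights_f16, x_f16):
    """Hypothetical helper: parameters and inputs live in float16,
    and are widened to float32 only for the actual arithmetic."""
    w = weights_f16.astype(np.float32)   # widen weights for computation
    x = x_f16.astype(np.float32)         # widen input for computation
    y = x @ w                            # all math happens in float32
    return y.astype(np.float16)          # narrow the result back for storage

rng = np.random.default_rng(0)
weights = rng.standard_normal((8, 4)).astype(np.float16)  # stored in half precision
x = rng.standard_normal((2, 8)).astype(np.float16)
out = predict(weights, x)
print(out.dtype)   # float16 in, float16 out; float32 in the middle
```

Whether this maps cleanly onto a full resnet18 forward pass depends on the framework doing the per-layer casts for you; the sketch only shows the precision discipline being asked about.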