Serializing HalfTensor Nets?

I’ve been digging through the issues and can’t find anything on this particular case:

I’m trying to serialize a network that I’ve moved to the GPU and cast to HalfTensor. When I call torch.save(net, filename), I get the following error:

  File "<stdin>", line 1, in <module>
  File "torch/serialization.py", line 123, in save
    return _save(obj, f, pickle_module, pickle_protocol)
  File "torch/serialization.py", line 218, in _save
    _add_to_tar(save_storages, tar, 'storages')
  File "torch/serialization.py", line 31, in _add_to_tar
    fn(tmp_file)
  File "torch/serialization.py", line 189, in save_storages
    storage_type = normalize_storage_type(type(storage))
  File "torch/serialization.py", line 99, in normalize_storage_type
    return getattr(torch, storage_type.__name__)
AttributeError: 'module' object has no attribute 'HalfStorage'

I’ve tried casting back to float, moving the network back onto the CPU, and casting to a few other datatypes, but it looks like a call to float() doesn’t change the underlying storage type in a way that would make this possible. If I don’t half() the tensor I can still save it just fine, but once I’ve called half(), nothing I do seems to change the underlying storage type back.

Any tips (or if this has been fixed in a recent PR that I’m not seeing) would be appreciated. Thanks again for all the help, this has been an extremely pleasant experience thus far.

Best,

Andy

So, it turns out we haven’t implemented CPU HalfTensor in PyTorch yet. I’m sorry for the breakage; is it an option to typecast the model to float() for now?
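The suggested workaround can be sketched roughly like this: cast the model back to float before saving, then re-half it after loading if needed. This is a minimal sketch, not the thread’s exact code; the toy nn.Linear model and the in-memory buffer are stand-ins for the poster’s actual network and file.

```python
import io

import torch
import torch.nn as nn

# Toy model standing in for the network from the thread.
net = nn.Linear(4, 2)

# Suppose the model had been cast to half precision for the GPU;
# casting back to float gives the storages a type the serializer knows.
net.half()
net.float()

# Saving now works, since the parameters are backed by FloatStorage again.
buf = io.BytesIO()
torch.save(net, buf)
```

After loading, calling .half() (and .cuda(), if appropriate) restores the half-precision copy for inference.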
I’ve opened an issue and we’ll fix it soon: https://github.com/pytorch/pytorch/issues/838

It is; I’m currently just calling .float().cpu().numpy() on my weights and dumping them to .npz files like I used to. Thanks for the response!
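The .npz fallback described above might look something like this. The parameter names and array shapes here are made up for illustration; in practice each array would come from calling .float().cpu().numpy() on the corresponding parameter tensor.

```python
import io

import numpy as np

# Hypothetical weights, standing in for the net's parameters after
# .float().cpu().numpy() has been called on each half-precision tensor.
weights = {
    "fc1.weight": np.ones((2, 4), dtype=np.float16).astype(np.float32),
    "fc1.bias": np.zeros(2, dtype=np.float16).astype(np.float32),
}

# Dump all arrays into a single .npz archive (an in-memory buffer here,
# but a filename works the same way).
buf = io.BytesIO()
np.savez(buf, **weights)
buf.seek(0)

# Loading gives back a dict-like object keyed by parameter name.
restored = np.load(buf)
```

The arrays can then be copied back into a freshly constructed model, which sidesteps torch.save entirely.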