Well, at least it is not implemented in my PyTorch version:
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'module' object has no attribute 'empty_cache'
I have just installed the latest PyTorch version from the webpage: Python 2.7, CUDA 8, pip installation.
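For reference, a defensive way to probe for the call instead of assuming it exists (in recent releases it lives under `torch.cuda`; older builds like mine lack it):

```python
import torch

# Guard the call so older PyTorch builds without empty_cache don't crash.
if hasattr(torch.cuda, "empty_cache"):
    # Releases cached blocks back to the driver; a no-op if CUDA was
    # never initialized, so this is safe even on a CPU-only machine.
    torch.cuda.empty_cache()
else:
    print("no empty_cache in torch %s" % torch.__version__)
```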
I think this should be fixed quickly, as I am running out of memory on an 8 GB NVIDIA GPU. I have two networks sharing some parameters. The problem is that I run out of memory after some batches, which does not make much sense, since I do the same operations on every batch: forward for network 1, forward for network 2, and backward. Each batch from the data iterator is copied into a Variable so I can use nn.Module, but I don't think this should allocate more memory. After all the batches, nvidia-smi does not show more allocation.
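To make the setup concrete, here is a minimal sketch of the loop described above, with hypothetical layer sizes and data (the shared layer, the optimizer setup, and the loss are stand-ins, not my actual code). The comments mark the one common way such a loop leaks memory despite doing the same work every batch: keeping a reference to the loss Variable across iterations keeps its whole graph alive.

```python
import torch
import torch.nn as nn
from torch.autograd import Variable  # required on old PyTorch; a thin wrapper today

# Hypothetical two-network setup sharing one layer, mirroring the loop:
# forward through net1, forward through net2, then a single backward.
shared = nn.Linear(10, 10)
net1 = nn.Sequential(shared, nn.ReLU(), nn.Linear(10, 1))
net2 = nn.Sequential(shared, nn.ReLU(), nn.Linear(10, 1))

# Deduplicate the shared parameters so the optimizer sees each tensor once.
params = list({id(p): p for p in
               list(net1.parameters()) + list(net2.parameters())}.values())
opt = torch.optim.SGD(params, lr=0.01)

for step in range(3):  # stands in for the real data iterator
    data = Variable(torch.randn(4, 10))  # each batch is wrapped in a Variable
    out1 = net1(data)
    out2 = net2(data)
    loss = (out1 - out2).pow(2).mean()
    opt.zero_grad()
    loss.backward()  # frees this batch's graph after the backward pass
    opt.step()
    # Storing `loss` itself (e.g. in a running-total Variable or a list)
    # would keep its graph alive batch after batch; log a plain float
    # via loss.item() so each graph can be freed.
    print(step, loss.item())
```

If memory grows batch over batch, the usual culprit in this pattern is exactly that kind of lingering reference to a Variable with history, not the Variable wrapping of the input data itself.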