Could I downgrade PyTorch, or should I do something more after upgrading?

Hello everyone. I have my own loss function, and it worked well until I upgraded PyTorch to the latest version by following the post Updating PyTorch. Since then my code no longer works: there is no warning or error, but the program gets stuck at line 146 of "variable.py":

self._execution_engine.run_backward((self,), (gradient,), retain_variables)

I cannot even step into it, and if I stop the program manually, it always shows:

Traceback (most recent call last):
  File "/home/lii/anaconda2/lib/python2.7/multiprocessing/process.py", line 258, in _bootstrap
    self.run()
  File "/home/lii/anaconda2/lib/python2.7/multiprocessing/process.py", line 114, in run
    self._target(*self._args, **self._kwargs)
  File "/home/lii/anaconda2/lib/python2.7/site-packages/torch/utils/data/dataloader.py", line 28, in _worker_loop
    r = index_queue.get()
  File "/home/lii/anaconda2/lib/python2.7/multiprocessing/queues.py", line 378, in get
    return recv()
  File "/home/lii/anaconda2/lib/python2.7/site-packages/torch/multiprocessing/queue.py", line 21, in recv
    buf = self.recv_bytes()
KeyboardInterrupt

I am using Anaconda 2 on my workstation with CPU only, but the same code works well on GPU on a server with an older PyTorch version. So, could I downgrade PyTorch, or should I do something more?

Thank you!

I don't think it's a problem with the newer version. You probably have very tight limits on the amount of shared memory in your system, and that's probably deadlocking everything. Are you trying to load very large images/videos?
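
Here is a minimal sketch of two common workarounds when DataLoader workers hang on shared memory (the dataset below is just a stand-in for yours):

import torch
import torch.multiprocessing
from torch.utils.data import DataLoader, TensorDataset

# Stand-in dataset: 1000 samples of low-dimensional data.
dataset = TensorDataset(torch.randn(1000, 220), torch.zeros(1000))

# Workaround 1: no worker processes, so batches never cross process boundaries.
loader = DataLoader(dataset, batch_size=32, num_workers=0)

# Workaround 2: keep the workers, but share tensors through the file system
# instead of file descriptors, which avoids hitting low shared-memory limits.
torch.multiprocessing.set_sharing_strategy('file_system')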

Thank you so much for your reply. I am not loading large data; I tried loading 220-dimensional data and reduced the batch size to 32, but it still does not work. Is it possible to downgrade PyTorch, or do you have any other ideas? Thank you!

If you wish to downgrade, you can:

# for example to install the previous version v0.1.10
conda install pytorch=0.1.10 -c soumith 

Thank you so much. After downgrading to 0.1.10, my code works again.

@lili.ece.gwu If there is a way to share your code with me, I can look at why it is not working in the current version. Thank you.

Hi @smth, can you please share how to downgrade to a lower version using pip? I tried:

pip install http://download.pytorch.org/whl/cu75/torch-0.1.10.post3-cp27-cp27mu-manylinux1_x86_64.whl

However, this gives a "Forbidden" error.

I had been wanting to do this because, after updating to the latest version, I haven't been able to run my previously working script on Linux platforms. Also, the same script works fine with the latest version of PyTorch on Mac.

I have also tried increasing the memory segment size and making it equivalent to that available on the Mac. However, even that didn't help.
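
For reference, a quick way to check how much space the shared-memory mount has (assuming a typical Linux layout where it is mounted at /dev/shm):

import os

# Inspect the shared-memory mount that DataLoader workers use for batches.
stats = os.statvfs('/dev/shm')
free_mib = stats.f_bavail * stats.f_frsize // 2**20
total_mib = stats.f_blocks * stats.f_frsize // 2**20
print('shm: %d MiB free of %d MiB' % (free_mib, total_mib))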

Thank you

How do we do this via pip?

You can probably

wget  http://download.pytorch.org/whl/cu80/torch-0.1.12.post2-cp27-none-linux_x86_64.whl
pip install torch-0.1.12.post2-cp27-none-linux_x86_64.whl

if you want to install an older version of PyTorch (0.1.12). That being said, I'm not sure I would recommend downgrading. If you let us know how your code is breaking in the current version of PyTorch (0.2.0), we'd be glad to help.
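
As a quick sanity check after the downgrade, you can confirm which version is actually installed:

import torch
print(torch.__version__)   # e.g. 0.1.12.post2 after installing the wheel above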

Hi,

So, there are several breaking points:

  1. Previously I could be a little less explicit with the expand_as calls; now I am required to use [:, None].expand_as() (see the sketch after this list).
  2. Asserting on the value of a Variable is also failing now.
  3. Besides these obvious errors, a particular module of my code now fails completely, when it did not before.
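
A minimal sketch of the change in point 1, with hypothetical shapes (not my original code):

import torch

scores = torch.randn(4, 3)    # hypothetical (N, C) tensor
weights = torch.randn(4)      # hypothetical per-sample weights, shape (N,)

# Older releases accepted weights.expand_as(scores) directly; newer ones need
# an explicit singleton dimension before expanding:
weighted = weights[:, None].expand_as(scores) * scores   # (N, 1) -> (N, C)
print(weighted.size())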

I've been working on other things, and I am now trying to migrate my code to PyTorch 0.4, but so far without success.

I want to know how I can downgrade to 0.1.10, including torchvision and everything else (CUDA, etc.), in case the problem cannot be solved in time.

I am implementing an SSD model, but at the point of converting the tensor into a torch Variable, I am facing deprecation issues with the torch version on my machine.
Could you tell me whether downgrading the torch version will solve the issue, or help me with the coding mistake I am making?
Initially I was doing this: x = torch.autograd.Variable(x.unsqueeze(0))

But due to the deprecation of the Variable function, I am facing issues, which I am trying to solve as:

x = torch.autograd.Function.forward(x.unsqueeze(0))

I am implementing an SSD model from the amdegroot repo on GitHub, and upon running the file I am getting an issue with the conversion of a tensor into a Variable, as the direct Variable function is deprecated in the new release of torch. How should I proceed?

I was getting an error due to this: x = torch.autograd.Variable(x.unsqueeze(0))
So I tried this instead: x = torch.autograd.backward(x),
but I am still getting an error:
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn.

I will be very thankful for your help.

Variables have been deprecated since PyTorch 0.4, and you can use tensors directly now.
Instead of

x = torch.autograd.Variable(x.unsqueeze(0))

just use

x = x.unsqueeze(0)
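
For completeness, here is a minimal sketch of that pattern for inference; the net below is just a placeholder module, not the actual SSD model from the repo:

import torch
import torch.nn as nn

net = nn.Conv2d(3, 4, kernel_size=3, padding=1)   # placeholder, not the real SSD

x = torch.randn(3, 300, 300)   # a single image tensor (C, H, W)
x = x.unsqueeze(0)             # add the batch dimension: (1, 3, 300, 300)

with torch.no_grad():          # no Variable wrapper needed since 0.4
    detections = net(x)
print(detections.shape)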