Is PyTorch 0.4 compatible with PyTorch 0.3?

I have a lot of code that runs fine in 0.3, but when I run it directly in 0.4, GPU memory usage grows every time I call backward. I have already switched to `.item()` instead of `.data[0]`, and I use tensors directly rather than `Variable`; since I am training, `torch.no_grad()` should not apply. Are there any other possibilities?
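For reference, a common cause of this in 0.4 is accumulating the loss tensor itself instead of its Python value: in 0.4 the loss is a 0-dim tensor that still holds its autograd graph, so `total_loss += loss` keeps every iteration's graph alive. A minimal sketch (the model, optimizer, and data here are hypothetical, only the accumulation pattern matters):

```python
import torch
import torch.nn as nn

# Hypothetical toy model and data, just to illustrate the pattern.
model = nn.Linear(10, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
criterion = nn.MSELoss()

total_loss = 0.0
for _ in range(5):
    x = torch.randn(8, 10)
    y = torch.randn(8, 1)

    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()

    # `loss` is a 0-dim tensor that references the whole autograd graph.
    # Writing `total_loss += loss` would retain each iteration's graph
    # and steadily grow GPU memory; `.item()` extracts a plain float.
    total_loss += loss.item()
```

The same applies to anything stored across iterations for logging: detach it or convert it to a Python number first.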


Your changes sound good. Does the memory keep increasing until you get an OOM error?
If so, could you post a small runnable code snippet?