Tensor addition inside a loop fills GPU memory

Hello everyone, I’m new to PyTorch and I’m running into a problem: I have to iteratively add two tensors inside a loop.

x_ = torch.zeros_like(x)

for i in range(my_range):
	x = do_something(x)
	x_ = x_ + x   # here is the problem
	x = x - x_    # here is the problem

Adding tensors seems to allocate new GPU memory at every iteration. Since I’m working with large data, my model fills all of the GPU memory immediately. Is there a way to avoid that?

If you can change the values of x_ and x in place, you can replace these two lines with:

x_ += x
x -= x_

That will use existing memory and not allocate anything new.
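For example, here is a self-contained sketch of that pattern (do_something and my_range are just stand-ins for whatever you actually run):

import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

def do_something(t):
	# stand-in for your real operation; returns a new tensor
	return t * 0.5

my_range = 100
x = torch.randn(1024, 1024, device=device)
x_ = torch.zeros_like(x)

for i in range(my_range):
	x = do_something(x)
	x_ += x   # writes into x_'s existing storage
	x -= x_   # writes into x's existing storage

With the in-place ops, the allocated memory should stay roughly flat across iterations.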

Hi @albanD, thanks for the fast reply 🙂
I tried the in-place additions, but my model still allocates new memory at every iteration 🙁
I don’t really know how to fix it. Maybe I should use a buffer?

The GPU memory can increase a bit during the first two or three forward passes. If it keeps increasing steadily after that, it means that your code is doing something wrong 😃
Could you give us a minimal sample to reproduce this so that we can check?
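In the meantime, you can watch the allocation yourself. A rough sketch, assuming you run on a CUDA device and reusing the placeholder names from your snippet (x, x_, do_something, my_range):

import torch

for i in range(my_range):
	x = do_something(x)
	x_ += x
	x -= x_
	if i % 10 == 0:
		# current memory occupied by tensors on the GPU
		print(f"iter {i}: {torch.cuda.memory_allocated() / 2**20:.1f} MiB")

If the printed number stabilizes after the first few iterations, the loop itself is fine.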

@albanD I actually realized that I was doing my calculations on a tensor that requires gradients (x), so my code kept growing the autograd graph and storing intermediate results at every iteration.
So I changed my code to this:

x_ng = x.detach()   # shares storage with x, but is cut off from the autograd graph
x_ = torch.zeros_like(x_ng)

for i in range(my_range):
	x_ng = do_something(x_ng)
	x_ += x_ng
	x_ng -= x_

x.data = x_ng.data   # put the result back into x without recording anything

I don’t know if it’s a smart strategy, but it certainly uses less memory.

This is the right way to do it if you don’t need to get gradients for x through this operation.
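An alternative you might prefer is to run the loop under torch.no_grad(), which also prevents autograd from recording anything inside it. A sketch with the same placeholder names as your snippet:

import torch

with torch.no_grad():
	x_ng = x              # no detach() needed: nothing is recorded in this block
	x_ = torch.zeros_like(x_ng)
	for i in range(my_range):
		x_ng = do_something(x_ng)
		x_ += x_ng
		x_ng -= x_

You can keep your x.data = x_ng.data line after the loop if you still need the result written back into x.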

I don’t think I do, there are no parameters to be trained in that layer… Thanks for the help 😉