I have a standard dataloader which loads images.

On top of every image I want to add a static tensor.

But I want to clamp this to (0,1).

This new image is used to train a model.

The following code roughly shows the important steps.

(everything is on the GPU)

```
static_tensor = torch.load(path)  # loaded once, already on the GPU
for img, label in dataloader:
    img = img.cuda()
    label = label.cuda()
    addition_tensor = img + static_tensor      # add the static tensor on top of the image
    clamped_tensor = addition_tensor.clamp(0, 1)
    output = model(clamped_tensor)
    loss = criterion(output, label)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

This produces out-of-memory errors.

But if I change the clamping to

```
clamped_tensor = addition_tensor.data.clamp(0, 1)
```

it no longer creates this error.

What is the reason behind this?
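
For reference, here is a minimal standalone sketch (with hypothetical shapes instead of my real data) of how the two variants differ as far as autograd tracking goes:

```
import torch

# hypothetical shapes, just standing in for my real image and static tensor
static_tensor = torch.rand(3, 8, 8, requires_grad=True)
img = torch.rand(3, 8, 8)

addition_tensor = img + static_tensor

# variant 1: clamp on the tensor itself stays attached to autograd
clamped = addition_tensor.clamp(0, 1)
print(clamped.requires_grad, clamped.grad_fn)            # True, a ClampBackward node

# variant 2: clamp on .data is not tracked by autograd
clamped_data = addition_tensor.data.clamp(0, 1)
print(clamped_data.requires_grad, clamped_data.grad_fn)  # False, None
```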

Edit: I noticed that my static tensor has `requires_grad=True`.

Am I correct in assuming that this tensor keeps a reference to the autograd graph built through the model, and that this graph grows with every iteration of the for loop?
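
To check that assumption, I could log the allocated GPU memory once per iteration, roughly like this (just a sketch that mirrors the loop above; `torch.cuda.memory_allocated` is the only new call):

```
import torch

for i, (img, label) in enumerate(dataloader):
    img = img.cuda()
    label = label.cuda()

    addition_tensor = img + static_tensor
    clamped_tensor = addition_tensor.clamp(0, 1)

    output = model(clamped_tensor)
    loss = criterion(output, label)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # if something (e.g. the graph around static_tensor) is retained across
    # iterations, this number should keep climbing instead of levelling off
    print(i, torch.cuda.memory_allocated() / 1024**2, "MiB allocated")
```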