CUDA out of memory when updating dataset

I get a CUDA out-of-memory error when I update the dataset while iterating over a DataLoader.
This function is defined inside a custom Dataset:

RuntimeError: CUDA out of memory. Tried to allocate 2.00 MiB (GPU 0; 14.73 GiB total capacity; 13.77 GiB already allocated; 3.88 MiB free; 13.79 GiB reserved in total by PyTorch)

    def updateindex(self, index, new):  # why is this leaking memory?
        for i in range(len(index)):
            self.targets[index[i]] = new[i]  # also tried: self.targets[index]*0.7 + (1-0.7)*new
        del index
        del new

where `index` is a tensor of n indices and `new` is an n-by-512 tensor used to update the dataset's targets.
`new` is defined by

new = target*β + (1-β)*momentumout

where `target` is the target from the DataLoader, β is a float, and `momentumout` is the network's output for the image.
I'm not sure why it keeps allocating memory; can anybody help?
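For what it's worth, my current guess is that `new` is still attached to the autograd graph through `momentumout`, so every slice stored into `self.targets` keeps the whole graph and its GPU activations alive. A minimal sketch of that hypothesis, with a hypothetical `DummyDataset` standing in for my dataset, would detach and move to CPU before storing:

```python
import torch

class DummyDataset:
    """Stand-in for the real dataset; `targets` lives on the CPU."""
    def __init__(self, n, dim):
        self.targets = torch.zeros(n, dim)

    def updateindex(self, index, new):
        # Detach from the autograd graph and copy to CPU so the stored
        # slices no longer reference GPU memory or saved activations.
        new = new.detach().cpu()
        for i in range(len(index)):
            self.targets[index[i]] = new[i]

ds = DummyDataset(10, 4)
idx = torch.tensor([1, 3])
# stands in for momentumout: a non-leaf tensor with a grad_fn
out = torch.ones(2, 4, requires_grad=True) * 0.5
ds.updateindex(idx, out)
```

If that's the cause, the stored targets no longer require grad and the graph behind `momentumout` can be freed after each batch. (Untested against my actual training loop.)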