I was computing B = torch.fft.rfft2(A) in a for loop on the CPU. Although it is much faster than numpy.fft.rfft2, the CPU memory keeps increasing. This only happens with PyTorch 2.8.0; the same code works fine with PyTorch 2.7.1. Does anyone happen to know where the problem might be? Many thanks!
I cannot reproduce any increase in memory using:
import torch
t = torch.rand(10, 10)
for _ in range(10000000000):
    rfft2 = torch.fft.rfft2(t)
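If you still see growth on your machine, it may help to measure it directly rather than watching a system monitor. A minimal sketch using the standard-library `resource` module (Unix-only; on Windows you would need something like `psutil` instead) to track the process's peak resident set size across the loop:

```python
import resource
import torch

def peak_rss():
    # Peak resident set size of this process so far
    # (kilobytes on Linux, bytes on macOS).
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss

t = torch.rand(10, 10)
before = peak_rss()
for _ in range(100_000):
    # rfft2 on a (10, 10) input returns shape (10, 6),
    # since the last dimension becomes n // 2 + 1.
    out = torch.fft.rfft2(t)
after = peak_rss()
print(f"peak RSS grew by {after - before} during the loop")
```

If the reported growth keeps climbing as you raise the iteration count, that points to a real leak worth reporting on the PyTorch issue tracker, along with the exact version (`torch.__version__`) and platform.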