Concat or append makes CUDA run out of memory

self.mynet is a pre-trained network, and x1.size() = [10, 128, 32, 32]. I want to pass each of the 128 feature maps through self.mynet() separately and then concatenate the outputs.
The following code works:

for i in range(x1.size()[1]):
    x = x1[:, i, :, :].unsqueeze(1)
    x = self.mynet(x)

but when I use cat or append, CUDA runs out of memory.

for i in range(x1.size()[1]):
    x = x1[:, i, :, :].unsqueeze(1)
    x = self.mynet(x)
    if i == 0:
        x_concat = x
    else:
        x_concat = torch.cat([x_concat, x], dim=1)

or append

mylist = []
for i in range(x1.size()[1]):
    x = x1[:, i, :, :].unsqueeze(1)
    x = self.mynet(x)
    mylist.append(x)

I do not know what is going on.

Running a for loop over the same network is equivalent to running a siamese network with 128 branches. Anyway, you are doing something strange: concatenating at every iteration of the loop, which is incorrect. The second way (appending to a list) is better.
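
For reference, here is a minimal sketch of that pattern, using the shapes and names from the original post (the list name outputs is just illustrative):

outputs = []
for i in range(x1.size()[1]):
    x = x1[:, i, :, :].unsqueeze(1)       # [10, 1, 32, 32]
    outputs.append(self.mynet(x))
x_concat = torch.cat(outputs, dim=1)      # single concatenation after the loop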

If you run out of memory… it means you need more memory, that’s all

[quote=“JuanFMontesinos, post:2, topic:32434”]
Computing a for loop over the same network is equivalent to do a siamese network of 128 branches.
[/quote]

Why does storing the variable x take up so much memory?

Because you are not only storing x but also all the information required to compute backpropagation. Besides, the information required for a siamese-style architecture is even larger.
You are saving all the information corresponding to self.mynet; if that network is big, you will require a lot of space.
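
A rough way to see this (a sketch assuming a CUDA device and the names from the post; torch.cuda.memory_allocated() reports the bytes currently held by tensors on the GPU):

import torch

outputs = []
for i in range(x1.size()[1]):
    x = self.mynet(x1[:, i, :, :].unsqueeze(1))
    outputs.append(x)  # keeps x and its autograd graph alive
    print(i, torch.cuda.memory_allocated() / 1e6, "MB")  # grows every iteration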

Thank you for your reply.
mylist.append(x.detach()) might solve this problem, because I do not need backpropagation.

If you are in an inference stage, you should wrap the loop in with torch.no_grad() so that none of this is stored.
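
For example, a sketch of the loop above wrapped in no_grad (names taken from the original post):

with torch.no_grad():                 # autograd does not record the graph here
    outputs = []
    for i in range(x1.size()[1]):
        x = x1[:, i, :, :].unsqueeze(1)
        outputs.append(self.mynet(x))
    x_concat = torch.cat(outputs, dim=1)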

Ok. Thank you very much!!