Changing the number of channels in a tensor

Hi,
I want to change the number of channels in my tensor:
for example, from 128 to 256. I used the code below:
a = torch.randn(2, 128, 221, 221)
a = a.expand(-1, 256, -1, -1)

but I got this error
RuntimeError: The expanded size of the tensor (256) must match the existing size (128) at non-singleton dimension 1. Target sizes: [-1, 256, -1, -1]. Tensor sizes: [2, 128, 221, 221]

You can only expand dimensions with a size of 1, so you would have to use repeat in your use case.
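For example, here is a minimal sketch showing that expand only works when the expanded dimension has size 1 (the shapes are borrowed from your post):

import torch

a = torch.randn(2, 1, 221, 221)
b = a.expand(-1, 256, -1, -1)  # valid: dim 1 has size 1 and is broadcast, no copy is made
print(b.shape)                 # torch.Size([2, 256, 221, 221])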

Thanks for the response. I did it:
a = torch.randn(2, 128, 221, 221)
a = a.repeat(1, 256, 1, 1)
but I got the message below:
your session crashed after using all available RAM.

Thanks, I solved it.
Can you help me with this error:

RuntimeError: CUDA out of memory. Tried to allocate 192.00 MiB (GPU 0; 15.90 GiB total capacity; 14.80 GiB already allocated; 77.06 MiB free; 15.01 GiB reserved in total by PyTorch)

I used:
import gc
gc.collect()
and
torch.cuda.empty_cache()

but I still got the error.

I don’t think you’ve solved the issue as you are now running out of GPU memory instead of system RAM.
torch.Tensor.repeat expects the factors by which each dimension's size is multiplied to produce the output tensor.
Based on the initial description you should use a.repeat(1, 2, 1, 1).
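For example:

import torch

a = torch.randn(2, 128, 221, 221)
b = a.repeat(1, 2, 1, 1)  # each factor multiplies the corresponding dim: 128 * 2 = 256
print(b.shape)            # torch.Size([2, 256, 221, 221])

a.repeat(1, 256, 1, 1) would instead create 128 * 256 = 32768 channels, i.e. roughly 2 * 32768 * 221 * 221 * 4 bytes ≈ 12.8 GB in float32, which explains why you ran out of host RAM.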

Yes, I corrected it, but this is another problem.

If a.repeat(1, 2, 1, 1) is creating the “out of memory” issue, you would have to use a smaller tensor or free some memory beforehand.

No, I get this error when I train the model.

This error indicates that you are running out of memory on your GPU, so you would still need to reduce the memory usage, e.g. by lowering the batch size or by using torch.utils.checkpoint to trade compute for memory.
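As a minimal sketch (the blocks and shapes here are made up for illustration, not your actual model), torch.utils.checkpoint can wrap parts of the model so their activations are recomputed in the backward pass instead of being stored:

import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        # hypothetical blocks; replace these with your own layers
        self.block1 = nn.Sequential(nn.Conv2d(128, 256, 3, padding=1), nn.ReLU())
        self.block2 = nn.Sequential(nn.Conv2d(256, 256, 3, padding=1), nn.ReLU())

    def forward(self, x):
        # activations inside checkpointed blocks are not stored; they are
        # recomputed during backward, trading compute for memory
        # (use_reentrant=False is recommended in newer PyTorch releases;
        # drop the argument on older versions)
        x = checkpoint(self.block1, x, use_reentrant=False)
        x = checkpoint(self.block2, x, use_reentrant=False)
        return x

model = Net()
x = torch.randn(1, 128, 221, 221, requires_grad=True)
out = model(x)
out.mean().backward()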

Thanks a lot. You are very helpful and guide everyone through their problems.
I set the batch size to 1 and even reduced the dataset, but I still have the problem. I also still need to add more layers to this network.