RuntimeError: DefaultCPUAllocator: not enough memory: you tried to allocate 12726195328 bytes

train_data_one_func = train_data_one_func.float()

I get this error when I run the line above. I tried batching the data, but I still run out of memory.

Note: My computer has 48 GB of RAM.

train_data_one_func is a [120000, 41, 616] tensor.

Can someone tell me how to solve this problem?

Your .float() call tries to allocate ~12 GB for a new float32 copy of the tensor, and it seems your host's 48 GB of RAM doesn't have that much free. Note that .float() does not convert in place: it returns a new tensor, so the original (roughly twice that size if it is float64) stays in memory while the copy is created.
You would thus need to reduce the memory usage, e.g. by decreasing the batch size of your input or by freeing data you no longer need.
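For reference, the sizes involved can be computed directly (a rough sketch, assuming the usual 4 bytes per float32 element and 8 bytes per float64 element):

```python
# Rough memory math for a [120000, 41, 616] tensor.
n = 120000 * 41 * 616            # number of elements: 3,030,720,000

print(n * 4 / 1e9)               # float32 copy: ~12.1 GB
print(n * 8 / 1e9)               # float64 original: ~24.2 GB
print((n * 4 + n * 8) / 1e9)     # both alive during .float(): ~36.4 GB
```

The ~12.1 GB float32 copy lines up with the ~12.7 GB allocation in the error, and if the source tensor is float64, the peak during conversion approaches ~36 GB, which leaves little headroom on a 48 GB machine once the OS and other processes are counted.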

Thank you for answering my question.
But when I run the following code, I run out of memory no matter how small batch_size is. Here is the code for batch_size=1000 and the error message:

batch_size = 1000
batches = [train_data_one_func[i:i+batch_size, :] for i in range(0, train_data_one_func.size(0), batch_size)]

for i in range(len(batches)):
    batches[i] = batches[i].float()

Error on the line batches[i] = batches[i].float():
RuntimeError: DefaultCPUAllocator: not enough memory: you tried to allocate 101024000 bytes.

You would need to check how much free RAM your system actually has, e.g. by watching htop while the script runs, and make sure your data processing doesn't fill it all up. The failing allocation here is only ~100 MB (one batch of 1000 samples in float32), which suggests memory was already nearly exhausted before that line ran. Note that your loop keeps the full original tensor alive (the slices are views into it) while also accumulating a float32 copy of every batch, so the total footprint grows rather than shrinks.
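If the goal is just to feed float32 batches to a model, one way to avoid materializing a converted copy of the whole dataset is to convert lazily with a generator, so only one float32 batch is alive at a time. This is a sketch under that assumption, not your exact pipeline; float_batches is a hypothetical helper:

```python
import torch

def float_batches(data, batch_size):
    # Yield one float32 batch at a time instead of converting the whole
    # dataset up front; each converted batch can be freed after use.
    for i in range(0, data.size(0), batch_size):
        yield data[i:i + batch_size].float()

# Tiny stand-in for train_data_one_func (the real one is [120000, 41, 616]).
data = torch.zeros(1200, 4, 6, dtype=torch.float64)

total = 0
for batch in float_batches(data, 100):
    assert batch.dtype == torch.float32
    total += batch.size(0)   # e.g. feed the batch to the model here
assert total == data.size(0)
```

With this pattern the peak extra memory is one batch (about 100 MB at batch_size=1000 for your shape) rather than a full second copy of the dataset.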