Avoiding memory errors in a *simple* autoencoder (by batches?)

Hey guys,

I have a simple, linear autoencoder of the following shape:

128**3 -> 512 -> 128**3

so

model = [nn.Linear(1283, 512, bias=False), nn.Linear(512, 1283, bias=False)]
my_ae = nn.Sequential(*model).to('cuda')

which I would like to train with an L2 loss. The problem is, I don't have enough memory to even instantiate the model this way.

Is there a good way to approach this problem? Can I train it in batches somehow?

I assume you are running out of memory when you try to pass the whole dataset into the model?
If so, have a look at the data loading tutorial to see how Dataset and DataLoader work and how batches are created.
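For illustration, a minimal sketch of that approach: random placeholder data (the tensor name and all sizes below are made up, not taken from the original post), wrapped in a TensorDataset, with MSELoss as the L2 reconstruction loss:

import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

device = 'cuda' if torch.cuda.is_available() else 'cpu'

# Placeholder data: 1000 flattened samples of size 4096 (made-up numbers)
data = torch.randn(1000, 4096)
loader = DataLoader(TensorDataset(data), batch_size=32, shuffle=True)

model = nn.Sequential(nn.Linear(4096, 512, bias=False),
                      nn.Linear(512, 4096, bias=False)).to(device)
criterion = nn.MSELoss()  # L2 reconstruction loss
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(5):
    for (batch,) in loader:             # TensorDataset yields 1-tuples
        batch = batch.to(device)
        recon = model(batch)
        loss = criterion(recon, batch)  # reconstruct the input itself
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

Note this only batches the data; it does not shrink the model itself.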

The model itself is quite small: as written (in/out features of 1283), its two weight matrices hold about 1.3M float32 parameters, i.e. approx. 5MB of memory.

I think the model is:

model = nn.Sequential(nn.Linear(128**3, 512, bias=False), nn.Linear(512, 128**3, bias=False))

So it would actually take over 8GB of memory just for the weights: 2 x 128**3 x 512 ≈ 2.15 billion float32 parameters x 4 bytes ≈ 8.6GB, before gradients and optimizer state are even allocated.
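A quick back-of-the-envelope check of that number:

# Parameter memory for nn.Linear(128**3, 512) + nn.Linear(512, 128**3), no biases
n_params = 2 * 128**3 * 512    # 2,147,483,648 weights
n_bytes = n_params * 4         # float32 -> 8,589,934,592 bytes
print(n_bytes / 1024**3)       # 8.0 GiB for the weights alone

Training needs considerably more: gradients take a second full copy of the parameters, and an optimizer like Adam keeps two additional buffers per parameter on top of that.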