I use this CNN architecture to classify images (input: 224×224, output: 2 classes):
import torch
import torch.nn as nn

class ConvNet(nn.Module):
    def __init__(self, num_classes=2):
        super(ConvNet, self).__init__()
        self.layer1 = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=1, padding=2),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2))    # 224x224 -> 112x112
        self.layer2 = nn.Sequential(
            nn.Conv2d(16, 32, kernel_size=5, stride=1, padding=2),
            # nn.BatchNorm2d(64),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=5, stride=5))    # 112x112 -> 22x22
        self.drop_out = nn.Dropout()
        self.fc1 = nn.Linear(15488, 1000)             # 32 * 22 * 22 = 15488
        self.fc2 = nn.Linear(1000, 2)

    def forward(self, x):
        x = x.float()
        # x = x.permute(0, 3, 1, 2)
        out = self.layer1(x)
        out = self.layer2(out)
        out = out.reshape(out.size(0), -1)
        out = self.drop_out(out)
        out = self.fc1(out)
        out = self.fc2(out)
        return out
and it gives me:
RuntimeError: [enforce fail at ..\c10\core\CPUAllocator.cpp:72] data. DefaultCPUAllocator: not enough memory: you tried to allocate 38816153600 bytes. Buy new RAM!
If your main RAM is already filled before executing this script, how much swap do you have?
Note that using swap will hit your performance pretty hard.
Could you post a code snippet I could run on my machine to reproduce the large memory allocation?
At the moment I’m using your model definition and a random input of shape [5, 1, 224, 224] in the forward pass, which only uses ~120MB, far from the ~36GB reported in the error.
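For reference, a minimal sketch of that check, assuming the ConvNet class defined above (the batch size of 5 simply matches the shape mentioned here):

import torch

model = ConvNet(num_classes=2)        # model definition from above
x = torch.randn(5, 1, 224, 224)       # random grayscale batch of shape [5, 1, 224, 224]
out = model(x)
print(out.shape)                      # expected: torch.Size([5, 2])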
I couldn’t tell exactly which piece of code should be sent, so here is the full repo on GitHub.
Please, before you run it, make sure to put the “covid” folder alongside the “other” folder in a folder called “data”.
Depending on the number of images and their size, this might take a lot of memory.
If you are dealing with a large dataset, it’s recommended to lazily load the batches using a Dataset and DataLoader as described in this tutorial.
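For illustration, a minimal sketch of such a lazily loading Dataset. The folder layout (“data” with “covid” and “other” subfolders) is taken from the description above, but the class name, file listing, and transform pipeline are assumptions, not code from the repo. Only file paths are kept in memory; each image is opened in __getitem__.

import os
from PIL import Image
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms

class CovidDataset(Dataset):
    # Hypothetical dataset: expects data/covid and data/other subfolders.
    def __init__(self, root="data", transform=None):
        self.samples = []
        for label, cls in enumerate(["covid", "other"]):
            cls_dir = os.path.join(root, cls)
            for fname in os.listdir(cls_dir):
                self.samples.append((os.path.join(cls_dir, fname), label))
        self.transform = transform

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        path, label = self.samples[idx]
        img = Image.open(path).convert("L")   # force single-channel grayscale
        if self.transform is not None:
            img = self.transform(img)
        return img, label

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),                    # tensor of shape [1, 224, 224]
])

dataset = CovidDataset(root="data", transform=transform)
loader = DataLoader(dataset, batch_size=16, shuffle=True, num_workers=2)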
How many images does your dataset have and how large is each image after preprocessing?
1075 images, averaging about 400KB per image. I am using this so I can make sure the picture is grayscale, not RGB. What can we use instead for this?
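One option, as a hedged sketch rather than a drop-in replacement for whatever conversion is currently used: handle the grayscale conversion inside the preprocessing pipeline, either with PIL’s convert("L") as in the Dataset sketch above, or with torchvision’s transforms.Grayscale. The resize step below is an assumption to match the model’s 224×224 input.

from torchvision import transforms

transform = transforms.Compose([
    transforms.Grayscale(num_output_channels=1),  # ensure a single channel
    transforms.Resize((224, 224)),                # match the model's expected input
    transforms.ToTensor(),
])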