I use this CNN architecture to classify images (input: 224×224, output: 2 classes):
def __init__(self, num_classes=2):
    super().__init__()
    self.layer1 = nn.Sequential(
        nn.Conv2d(1, 16, kernel_size=5, stride=1, padding=2))
    self.layer2 = nn.Sequential(
        nn.Conv2d(16, 32, kernel_size=5, stride=1, padding=2))
    self.drop_out = nn.Dropout()
    self.fc1 = nn.Linear(15488, 1000)  # in_features must match the flattened conv output
    self.fc2 = nn.Linear(1000, 2)
def forward(self, x):
    # x = x.permute(0, 3, 1, 2)
    out = self.layer1(x)
    out = self.layer2(out)
    out = out.reshape(out.size(0), -1)
    out = self.drop_out(out)
    out = self.fc1(out)
    out = self.fc2(out)
    return out
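One way to find the right `in_features` for `fc1` is to push a dummy tensor through the conv stack and read off the flattened size. The sketch below is my own reconstruction: the posted `nn.Sequential` blocks are truncated, so the `ReLU` and 2×2 max-pool layers are assumptions, not taken from the thread.

```python
import torch
import torch.nn as nn

# Hypothetical reconstruction of the conv stack; the ReLU/MaxPool layers are
# assumed, since the original snippet is cut off.
features = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=5, stride=1, padding=2),
    nn.ReLU(),
    nn.MaxPool2d(kernel_size=2, stride=2),
    nn.Conv2d(16, 32, kernel_size=5, stride=1, padding=2),
    nn.ReLU(),
    nn.MaxPool2d(kernel_size=2, stride=2),
)

with torch.no_grad():
    dummy = torch.randn(1, 1, 224, 224)  # one grayscale 224x224 image
    out = features(dummy)

flat = out.view(1, -1).size(1)
print(out.shape, flat)  # torch.Size([1, 32, 56, 56]) 100352
```

Under these assumptions the flattened size for a 224×224 input is 100352, so the posted `nn.Linear(15488, 1000)` would not match; whatever value this probe prints is the one `fc1` needs.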
and it gives me:
RuntimeError: [enforce fail at ..\c10\core\CPUAllocator.cpp:72] data. DefaultCPUAllocator: not enough memory: you tried to allocate 38816153600 bytes. Buy new RAM!
CPU: Intel Xeon E5-2696 v3, 2.30GHz
RAM: 110GB
How large is your batch size to create this error and how much of your RAM is already in use when running this script?
If your main RAM is already filled before executing this script, how much swap do you have?
Note that using the swap will hit your performance pretty hard.
Before running this script, RAM usage is 5 of 110GB; after running it, 110 of 110GB.
Memory: 96GB, with swap: 120GB.
For an input of [5, 1, 224, 224], your forward pass will use ~120MB.
Could you post an executable code snippet to reproduce the >105GB memory allocation?
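As a back-of-the-envelope check of that order of magnitude, the forward activations for a [5, 1, 224, 224] batch can be summed by hand. The shapes below are my assumptions (float32 activations, and a conv/pool stack where each block halves the spatial size, since the posted model is truncated); parameters and gradients add on top of this.

```python
# Estimate forward-activation memory for a batch of 5 grayscale 224x224
# images, assuming float32 (4 bytes) and a conv/ReLU/pool stack.
batch = 5
shapes = [
    (batch, 1, 224, 224),   # input
    (batch, 16, 224, 224),  # after conv1 (padding=2 keeps the spatial size)
    (batch, 16, 112, 112),  # after pool1
    (batch, 32, 112, 112),  # after conv2
    (batch, 32, 56, 56),    # after pool2
    (batch, 100352),        # flattened
    (batch, 1000),          # fc1 output
    (batch, 2),             # fc2 output
]

def numel(shape):
    n = 1
    for d in shape:
        n *= d
    return n

total_bytes = sum(numel(s) for s in shapes) * 4
print(f"~{total_bytes / 2**20:.1f} MB of activations")
```

That lands in the tens of megabytes, nowhere near 105GB, which is why the allocation must come from somewhere other than a single forward pass.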
Can you specify what you mean by an executable code snippet to reproduce the >105GB memory allocation?
A code snippet I could run on my machine to reproduce the large memory allocation.
At the moment I'm using your model definition and a random input in the shape [5, 1, 224, 224] in the forward pass, which only uses 120MB, far off the reported >105GB.
I couldn't figure out exactly which piece of code to send, so here is the full repo on GitHub.
Before you run it, please make sure to put the "covid" folder alongside the "other" folder inside a folder called "data".
Your help is very much appreciated.
It seems you are directly loading all images to your RAM in this cell:
print("[INFO] loading images...")
imagePaths = list(paths.list_images(args["dataset_1"]))
data = []
labels = []

for imagePath in imagePaths:
    label = imagePath.split(os.path.sep)[-2]
    image = cv2.imread(imagePath)
    image = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)
    image = cv2.resize(image, (224, 224))
    data.append(image)
    labels.append(label)

data = np.array(data) / 255.0
labels = np.array(labels)
Depending on the number of images and their size, this might take a lot of memory.
If you are dealing with a large dataset, it’s recommended to lazily load the batches using a
DataLoader as described in this tutorial.
How many images does your dataset have and how large is each image after preprocessing?
1075 images, with an average of 400KB per image. I am using this so I can make sure that the picture is in grayscale, not RGB. So what can we use instead for this matter?