How to load all data onto the GPU for training

This is the printed output:

```
Done.
torch.Size([3, 128, 128]) 0
Label : 0
Length of Train Data : 273800
Length of Validation Data : 2679
Traceback (most recent call last):
  File "CNNFinal.py", line 135, in <module>
    print(nn.Module.conv_layer1.weight.device)
AttributeError: type object 'Module' has no attribute 'conv_layer1'
```

Access the layer from the model object, not from the nn.Module type.
That is, in your code you've created a model, e.g. via:

```
model = ConvNeuralNet()
```

then access the layer from model.
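For example (`ConvNeuralNet` and `conv_layer1` here are stand-ins for your own class and layer names):

```python
import torch.nn as nn

class ConvNeuralNet(nn.Module):
    # minimal stand-in for your model; the layer name is an assumption
    def __init__(self):
        super().__init__()
        self.conv_layer1 = nn.Conv2d(3, 16, kernel_size=3)

    def forward(self, x):
        return self.conv_layer1(x)

model = ConvNeuralNet()
# query the instance, not the nn.Module class
print(model.conv_layer1.weight.device)
```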

Yes, I made a mistake.

I ran it again, and it is on the CPU!

How can I solve this?

Did you call model.to(device) before?
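The usual pattern looks like this (sketched with a placeholder model; the input tensors have to be moved as well):

```python
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Sequential(nn.Conv2d(3, 16, 3), nn.ReLU())  # stand-in model
model.to(device)  # moves all parameters and buffers in place

# the inputs must be moved too, typically inside the training loop
images = torch.randn(8, 3, 128, 128).to(device)
output = model(images)
print(next(model.parameters()).device)
```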

Haha, I did not.
I have just called it. It is running now.
Thank you for the response and help. Another thing: could you suggest a tutorial to improve my knowledge of CNNs?

Good to hear it’s working now!
Maybe the CIFAR10 tutorial would be a good starter, but I’m unsure what exactly you are looking for.

I will take a look at the CIFAR10 tutorial, but at this step I specifically want to create a CNN model for the Herbarium dataset. There are several datasets from the yearly Kaggle competitions. I started with the 2019 one, which was simple, but my goal is the 2022 Kaggle competition dataset. For the 2019 dataset I resized the images to 128x128 and used data augmentation to increase the dataset size 7 times. The accuracy is 65.83%. How can I improve the accuracy on this dataset? I should also mention that the number of classes is almost 700.

For the 2022 dataset, I have a problem figuring out the dataset labels. They are in JSON files, and it seems the classes have a hierarchy. I am confused about how to create the labels for this dataset.
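If the metadata follows a COCO-style layout (an assumption; check the actual keys in your JSON file), a common approach is to ignore the hierarchy at first and train on the leaf `category_id` per image, remapped to contiguous indices. A sketch with a tiny stand-in for the metadata file:

```python
import json

# tiny stand-in for the competition's metadata file; the structure is an
# assumption modelled on COCO-style annotations, so verify against your JSON
meta = json.loads("""
{
  "annotations": [
    {"image_id": 1, "category_id": 42},
    {"image_id": 2, "category_id": 7}
  ],
  "categories": [
    {"id": 42, "name": "species_a", "family": "fam_x"},
    {"id": 7,  "name": "species_b", "family": "fam_y"}
  ]
}
""")

# flat training labels: map image_id -> leaf category_id
image_to_label = {a["image_id"]: a["category_id"] for a in meta["annotations"]}

# remap category ids to contiguous indices 0..N-1 for the classifier head
cat_ids = sorted({c["id"] for c in meta["categories"]})
cat_to_index = {cid: i for i, cid in enumerate(cat_ids)}
labels = {img: cat_to_index[cid] for img, cid in image_to_label.items()}
print(labels)  # {1: 1, 2: 0}
```

The hierarchy (e.g. family or genus fields) can be added later as auxiliary targets once the flat labels work.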

It makes sense to “cache” the data after the collation, doesn’t it?
If so, this should be done in the data loader.
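A minimal sketch of that idea, assuming the whole dataset fits in GPU memory: run the DataLoader once, concatenate the collated batches, and keep the resulting tensors on the device for all subsequent epochs.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# stand-in dataset; replace with your actual Dataset
data = TensorDataset(torch.randn(64, 3, 128, 128), torch.randint(0, 10, (64,)))
loader = DataLoader(data, batch_size=16)

# one pass through the loader caches everything after collation
xs, ys = [], []
for x, y in loader:
    xs.append(x)
    ys.append(y)
all_x = torch.cat(xs).to(device)  # stays on the GPU across epochs
all_y = torch.cat(ys).to(device)

# afterwards, iterate over GPU-resident slices instead of the DataLoader
for i in range(0, len(all_x), 16):
    batch_x, batch_y = all_x[i:i + 16], all_y[i:i + 16]
```

Note that this skips per-epoch random augmentation, since the transformed tensors are frozen at caching time.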

What is the current recipe?