Hi, I'm sure this topic is well known and has been asked before, but I couldn't solve my issue, which is getting my labels onto the GPU.
I'm using a deit_base_patch16_224 model from the timm library and I would like to train it on the GPU. Here is a quick look at my code:
```python
import timm
import torch
import torch.nn as nn
import torch.optim as optim
from tqdm import tqdm

# Load the pre-trained ViT model from timm
model = timm.create_model("deit_base_patch16_224", pretrained=True)
next(model.parameters()).is_cuda
model = model.to(device)
model.reset_classifier(20)

# Define the loss function and optimizer
criterion = nn.BCEWithLogitsLoss()
optimizer = optim.Adam(model.parameters(), lr=1e-4)

# Train the model
for epoch in tqdm(range(num_epochs)):
    # Set the model to training mode
    model.train()

    # Loop over the training dataset in batches
    for i, (inputs, labels) in enumerate(train_loader):
        # Move the batch to the GPU and zero the gradients
        labels_ = labels.to(device)
        inputs = inputs.to(device)
        print(labels.is_cuda)
        optimizer.zero_grad()

        # Forward pass
        outputs = model(inputs)
        loss = criterion(outputs, labels.to(device))

        # Backward pass and optimization step
        loss.backward()
        optimizer.step()

    # Set the model to evaluation mode
    model.eval()

    # Compute the validation accuracy
    correct = 0
    total = 0
    with torch.no_grad():
        for inputs, labels in val_loader:
            outputs = model(inputs)
            predicted = (torch.sigmoid(outputs) > 0.5).int()
            total += labels.size(0)
            correct += (predicted == labels).all(dim=1).sum().item()
    val_accuracy = 100 * correct / total

    # Print the epoch number and validation accuracy
    print("Epoch {}/{}: Validation Accuracy = {:.2f}%".format(epoch + 1, num_epochs, val_accuracy))
```
Even though my inputs and my model are already on the GPU, I'm getting this error:
```
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument mat1 in method wrapper_addmm)
```
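For what it's worth, here is a small diagnostic sketch I can run before the training loop to see which device every model parameter and one data batch actually end up on. It is not part of my script above; the helper name `report_devices` is just something I made up, and it assumes the same `model`, `device`, and `train_loader` as in the code above:

```python
# Diagnostic sketch: report the device of every model parameter and of one batch.
# Assumes `model`, `device`, and `train_loader` are defined as in the code above.
def report_devices(model, loader, device):
    # Group parameter names by the device they currently live on
    per_device = {}
    for name, param in model.named_parameters():
        per_device.setdefault(str(param.device), []).append(name)
    for dev, names in per_device.items():
        print(f"{dev}: {len(names)} parameters, e.g. {names[:3]}")

    # Move one batch the same way as in the training loop and print where it lands
    inputs, labels = next(iter(loader))
    inputs, labels = inputs.to(device), labels.to(device)
    print("inputs on:", inputs.device, "| labels on:", labels.device)

report_devices(model, train_loader, device)
```

If everything were really on the GPU, every line of that output should say cuda:0, so any cpu entry would point at the tensor that triggers the RuntimeError.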