DataLoader with num_workers > 0 not working in Jupyter

Hi all, I have read several posts and I already know there are issues with using Jupyter notebooks with multiprocessing. Since those posts are a few years old, I am wondering whether this issue has been fixed. Below are my machine details and a snippet of my code (nested for loop). It runs indefinitely without giving any errors if I set num_workers > 0 in the DataLoader (the default is zero):

Is CUDA supported by this system? True
CUDA version: 12.1
Using device: cuda
ID of current CUDA device: 0

Name of current CUDA device: NVIDIA GeForce RTX 3060
NVIDIA GeForce RTX 3060

It works on a single thread, but it is extremely slow and I need to speed up the execution.
Below is a snippet of my code: X_PM10_train_pt and Y_PM10_train_pt are CUDA tensors, but nothing changes if I move them to the CPU.

train_loader = DataLoader(TensorDataset(X_PM10_train_pt, Y_PM10_train_pt),
                          batch_size=params['batch_size'], shuffle=False)
.....
for epoch in range(num_epochs):
    model_mlp2_pm10.train()
    for inputs, labels in train_loader:
        optimizer.zero_grad()  # zero the gradients before starting a new optimization step
        outputs = model_mlp2_pm10(inputs)  # forward pass with our model
        loss = criterion(outputs, labels)  # compute the loss
        loss.backward()  # compute gradients
        optimizer.step()  # update weights
# Evaluation of the model on the validation set
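For reference, here is a minimal self-contained sketch of the same kind of training loop with num_workers > 0. The tensor shapes and model are hypothetical stand-ins (not the original X_PM10_train_pt data or model_mlp2_pm10); the point is that the dataset tensors stay on the CPU, batches are moved to the GPU inside the loop, and everything runs under the `if __name__ == '__main__':` guard that spawned worker processes require:

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

if __name__ == '__main__':  # guard required for DataLoader workers on Windows
    # Hypothetical stand-ins for X_PM10_train_pt / Y_PM10_train_pt: keep them
    # on the CPU, since DataLoader workers cannot safely serve CUDA tensors.
    X = torch.randn(256, 8)
    Y = torch.randn(256, 1)
    device = 'cuda' if torch.cuda.is_available() else 'cpu'

    loader = DataLoader(TensorDataset(X, Y), batch_size=32, shuffle=False,
                        num_workers=2, pin_memory=(device == 'cuda'))

    model = nn.Linear(8, 1).to(device)  # placeholder for model_mlp2_pm10
    criterion = nn.MSELoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

    model.train()
    for inputs, labels in loader:
        # move each batch to the device; non_blocking pairs with pin_memory
        inputs = inputs.to(device, non_blocking=True)
        labels = labels.to(device, non_blocking=True)
        optimizer.zero_grad()
        loss = criterion(model(inputs), labels)
        loss.backward()
        optimizer.step()
```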

Please give me a hint on how to overcome this issue.

Hello, is your system Windows or Linux? If Windows, I'm afraid the PyTorch DataLoader can only use num_workers=0.

Windows 11 Professional

That's not strictly true, and it should still work if, e.g., the needed if-clause protection is used as explained here.
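The protection referred to is Python's standard multiprocessing guard: on Windows, DataLoader workers are started with the `spawn` method, which re-imports the main script, so any code that iterates the loader must be reachable only from under `if __name__ == '__main__':`. A minimal sketch with a hypothetical toy dataset:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

def main():
    # hypothetical toy dataset; on Windows, any DataLoader use with
    # num_workers > 0 must only run from under the guard below
    ds = TensorDataset(torch.arange(10.0).unsqueeze(1), torch.zeros(10, 1))
    loader = DataLoader(ds, batch_size=5, num_workers=2)
    for inputs, labels in loader:
        print(inputs.shape)  # each batch is served by a worker process

if __name__ == '__main__':  # prevents recursive re-execution when workers spawn
    main()
```

Note that in a Jupyter notebook on Windows there is an additional wrinkle: with the `spawn` start method, anything the workers need to unpickle (custom Dataset classes, collate functions) must be importable from a module file, not defined in a notebook cell. Built-in classes like TensorDataset are fine, which is why a setup like the one above can work even from a notebook.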
