Calling nn modules in a DataLoader

Hello guys,

I came here with a particular question about the DataLoader: can I call CUDA nn modules inside a transform's __call__?
I am running into a CUDA initialization error and couldn't find an answer. Thank you so much in advance. The error occurs whenever I use .cuda() or torch.cuda.FloatTensor(); I describe the problem below:

RuntimeError: CUDA error: initialization error

DataLoader and data-reading code:

from torchvision import transforms
from torchvision.datasets import ImageNet

class SomeNetwork(object):
    def __init__(self, parama=1, paramb=2, paramc=3):
        # SomeModel stands in for the actual nn.Module being wrapped
        self.net = SomeModel(parama, paramb, paramc).cuda()  # CUDA is initialized here
    def __call__(self, x):
        x = self.net(x[None, :].cuda())  # runs on the GPU inside the transform
        return x[0].cpu()                # back to the CPU for ToPILImage

transformations = []
transformations.extend([transforms.ToTensor(),
                        SomeNetwork(),
                        transforms.ToPILImage()])

trainset = ImageNet(root='./Database/',
                    split='train',
                    transform=transforms.Compose(transformations),
                    target_transform=None)

You’ll most likely encounter this issue if you are using multiple workers in your DataLoader, as each will try to initialize CUDA.
Could you try to set num_workers=0 and run your code again?
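For example (a minimal sketch; trainset is the dataset defined above and the batch size is arbitrary):

from torch.utils.data import DataLoader

# num_workers=0 loads batches in the main process, so CUDA is
# initialized only once instead of in every forked worker
trainloader = DataLoader(trainset, batch_size=32, shuffle=True, num_workers=0)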

Thank you so much, it works now.

Unfortunately, the other traditional augmentations no longer work with 16 workers. I hope this can be improved in a future release.

You could try to add this code snippet to your code.
However, the data processing is usually performed on the CPU, while the data is pushed onto the device in the training loop.
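With the CUDA module removed from the transforms, the training loop could look like this (a minimal sketch; SomeModel is a hypothetical stand-in for your network, and the loss and optimizer are arbitrary choices):

import torch
import torch.nn as nn

device = torch.device('cuda')
model = SomeModel().to(device)  # hypothetical model, created once in the main process
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

for inputs, targets in trainloader:
    # the workers only run CPU-side transforms; push each batch here
    inputs, targets = inputs.to(device), targets.to(device)
    optimizer.zero_grad()
    loss = criterion(model(inputs), targets)
    loss.backward()
    optimizer.step()

This way the DataLoader workers never touch CUDA, so multiple workers can be used again.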


Thank you again @ptrblck. I checked the link (and the function documentation) but I did not understand how it works; could you share a link explaining why it should work?

I added it at the beginning of the main script in Python 3, but I did not see a speed improvement.