With torch.no_grad(): can I include optimizer, backward, etc.?

Can I do the following?

    with torch.no_grad():
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

No, that won’t work, as this disables gradient calculation.
From the docs:

Context-manager that disables gradient calculation.
Disabling gradient calculation is useful for inference, when you are sure that you will not call Tensor.backward(). It will reduce memory consumption for computations that would otherwise have requires_grad=True. In this mode, the result of every computation will have requires_grad=False, even when the inputs have requires_grad=True.

What is your intention?
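
For reference, here is a minimal sketch of the usual pattern, assuming a hypothetical model, loss function, and optimizer: the training step (forward, backward, optimizer update) runs with gradients enabled, and torch.no_grad() is reserved for inference only.

    import torch
    import torch.nn as nn

    # Hypothetical setup for illustration
    model = nn.Linear(10, 2)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss()

    x = torch.randn(4, 10)
    y = torch.randint(0, 2, (4,))

    # Training step: must run with gradients enabled
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()   # would fail if the forward pass ran under torch.no_grad()
    optimizer.step()

    # Inference: torch.no_grad() saves memory; outputs have requires_grad=False
    with torch.no_grad():
        preds = model(x)
        print(preds.requires_grad)  # False

If the forward pass ran inside torch.no_grad(), the loss would have no grad_fn and loss.backward() would raise an error, which is why the training step cannot be wrapped this way.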

I just wanted to implement a dataloader on the fly, but I found a better solution without much headache. It’s a loader from PyTorch that handles all the work of creating inputs and targets: you provide a folder name, and its subfolders become the categories.
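
For anyone landing here later, this sounds like torchvision.datasets.ImageFolder combined with DataLoader. A minimal sketch, assuming a hypothetical directory with one subfolder per class:

    import torch
    from torch.utils.data import DataLoader
    from torchvision import datasets, transforms

    # Hypothetical layout: data/train/cat/xxx.png, data/train/dog/yyy.png, ...
    transform = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
    ])

    # ImageFolder infers class labels from the subfolder names
    dataset = datasets.ImageFolder("data/train", transform=transform)
    loader = DataLoader(dataset, batch_size=32, shuffle=True)

    for images, targets in loader:
        # images: [32, 3, 224, 224]; targets: class indices from folder names
        break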


Yeah, I know the one you’re talking about. DL4J also has a similar data-loading capability if you ever need it.