Set up the model for gradient calculation

Is it possible to set up a network consisting of only standard layers (conv2d, batchnorm2d, relu, avg pooling, fc) for the backward pass?
Assume I have already done a forward pass, but under torch.no_grad(), so no autograd graph was built; however, I saved the output of each layer. Is it possible to feed each layer the saved output of the previous one so that I can just call PyTorch's backward function?
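One possible approach (a sketch, not a definitive answer): since no graph exists for the no_grad forward, you can replay one layer at a time, using each saved output as a detached, grad-requiring input, and chain the gradients manually from the last layer back to the first. The network and shapes below are made up for illustration; note that replaying a BatchNorm layer in training mode will update its running statistics a second time.

```python
import torch
import torch.nn as nn

# Hypothetical network with the layer types mentioned in the question.
layers = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1),
    nn.BatchNorm2d(8),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 10),
)

x = torch.randn(2, 3, 16, 16)

# Forward under no_grad: no graph is built, but each layer's output is saved.
saved = [x]
with torch.no_grad():
    for layer in layers:
        saved.append(layer(saved[-1]))

# Upstream gradient dL/d(output); ones stands in for a real loss gradient.
grad = torch.ones_like(saved[-1])

# Backward: replay one layer at a time, chaining gradients manually.
# Layer i's input is saved[i], so reversed(layers) pairs with reversed(saved[:-1]).
for layer, inp in zip(reversed(layers), reversed(saved[:-1])):
    inp = inp.detach().requires_grad_(True)
    with torch.enable_grad():
        out = layer(inp)          # rebuild the graph for this layer only
    out.backward(grad)            # also accumulates into layer parameters
    grad = inp.grad               # pass the gradient to the previous layer

# grad now holds the gradient w.r.t. the network input.
```

Each out.backward(grad) call populates the .grad attributes of that layer's parameters, so after the loop the whole network has parameter gradients just as if a single end-to-end backward had run, at the cost of one extra forward per layer.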