Implementing batch gradient descent in a neural network

Hi, I am trying to train a neural network using batch gradient descent. I am not sure if I am doing it correctly, because I am running into a lot of errors and getting incorrect output. The code is as follows:

import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self, in_features):
        super().__init__()
        self.disc = nn.Sequential(
            nn.Linear(in_features, 4),
            nn.Sigmoid(),
            nn.Linear(4, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return self.disc(x)

net = Net(image)

for epoch in range(num_epochs):
    totalloss = torch.empty_like(lossNet)
    net.zero_grad()
    for sample in dataset:
        lossNet = xxxxx
        totalloss = totalloss + lossNet

    totalloss = torch.div(totalloss, 5)  # since there are 5 samples in the dataset, take the average
    net.zero_grad()
    totalloss.backward(retain_graph=True)
    opt_net.step()

I would like to know whether this is the correct way of performing batch gradient descent. If not, what is the right way to implement batch gradient descent in PyTorch?

I believe the typical approach is to register the model's parameters with the optimizer and to call zero_grad() on the optimizer. Please take a look at the MNIST example for more details:

examples/main.py at main · pytorch/examples (https://github.com/pytorch/examples)
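
For reference, here is a minimal sketch of that pattern, not your exact setup: it assumes a toy full-batch dataset of 5 samples (features, labels), BCELoss as the criterion (to match the final Sigmoid), and SGD as the optimizer, all of which are placeholders of my own. Because the whole dataset goes through the network in a single forward pass, each optimizer step uses the gradient of the full-batch loss, which is batch gradient descent:

import torch
import torch.nn as nn
import torch.optim as optim

# Hypothetical full-batch data: 5 samples with in_features features each.
in_features = 10
features = torch.randn(5, in_features)
labels = torch.randint(0, 2, (5, 1)).float()

net = Net(in_features)                          # Net as defined above
criterion = nn.BCELoss()                        # matches the Sigmoid output of the model
opt_net = optim.SGD(net.parameters(), lr=0.01)  # optimizer holds the model's parameters
num_epochs = 100

for epoch in range(num_epochs):
    opt_net.zero_grad()                   # zero gradients on the optimizer
    output = net(features)                # one forward pass over the entire dataset
    loss = criterion(output, labels)      # BCELoss averages over the batch by default
    loss.backward()                       # single backward pass, no retain_graph needed
    opt_net.step()                        # one parameter update per epoch

If the dataset does not fit in memory as a single tensor, the same pattern works with a DataLoader whose batch_size equals the dataset length, so that each iteration yields the full batch.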