Using autograd to compute partial derivatives of network outputs with respect to inputs

I apologize if this question is obvious or trivial. I am very new to PyTorch and I am trying to understand the autograd.grad function. I have a neural network G that takes inputs (x, t) and outputs (u, v). Here is the code for G:

import torch
import torch.nn as nn

class GeneratorNet(torch.nn.Module):
    """
    A three hidden-layer generative neural network
    """

    def __init__(self):
        super(GeneratorNet, self).__init__()
        self.hidden0 = nn.Sequential(
            nn.Linear(2, 100),
            nn.LeakyReLU(0.2)
        )

        self.hidden1 = nn.Sequential(
            nn.Linear(100, 100),
            nn.LeakyReLU(0.2)
        )

        self.hidden2 = nn.Sequential(
            nn.Linear(100, 100),
            nn.LeakyReLU(0.2)
        )

        self.out = nn.Sequential(
            nn.Linear(100, 2),
            nn.Tanh()
        )

    def forward(self, x):
        x = self.hidden0(x)
        x = self.hidden1(x)
        x = self.hidden2(x)
        x = self.out(x)
        return x

Or, more simply, G(x,t) = (u(x,t), v(x,t)), where u(x,t) and v(x,t) are scalar valued. Goal: compute $\frac{\partial u(x,t)}{\partial x}$ and $\frac{\partial u(x,t)}{\partial t}$. At every training step I have a minibatch of size $100$, so u(x,t) is a [100,1] tensor. Here is my attempt to compute the partial derivatives, where coords is the [100,2] input batch of (x,t) pairs, on which I set the requires_grad_(True) flag:

from torch import autograd

G = GeneratorNet()
coords.requires_grad_(True)  # the (x, t) inputs need gradients so we can differentiate w.r.t. them
tensor = G(coords)
u, v = torch.split(tensor, 1, dim=1)
du = autograd.grad(u, coords, grad_outputs=torch.ones_like(u), create_graph=True,
                   retain_graph=True, only_inputs=True, allow_unused=True)[0]

du is now a [100,2] tensor. Question: is this the tensor of the partial derivatives $\frac{\partial u}{\partial x}$ and $\frac{\partial u}{\partial t}$ evaluated at the 100 input points of the minibatch?
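For what it is worth, this is how I was planning to read the two partials out of du and sanity-check one of them with a finite difference. This is only a sketch under my assumption that column 0 of coords is x and column 1 is t; the names eps, p, p_shift, du_dx, du_dt and fd_du_dx are just mine for this check:

# Assuming coords[:, 0] is x and coords[:, 1] is t (my layout assumption).
du_dx = du[:, 0:1]   # hoped to be du/dx for each of the 100 points, shape [100, 1]
du_dt = du[:, 1:2]   # hoped to be du/dt for each of the 100 points, shape [100, 1]

# Rough finite-difference check of du/dx at the first point of the minibatch.
eps = 1e-4
p = coords[0:1].detach().clone()
p_shift = p.clone()
p_shift[0, 0] += eps                      # perturb x only
with torch.no_grad():
    u_base = G(p)[:, 0]
    u_pert = G(p_shift)[:, 0]
fd_du_dx = (u_pert - u_base) / eps        # should roughly match du_dx[0] if my reading is right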

There are similar questions, such as ones about computing derivatives of the output with respect to the inputs, but I could not really figure out what is going on there. I apologize once again if this is already answered or trivial. Thank you very much.