What will happen if I set create_graph=False when computing the second derivative?

I am trying to get the first and second derivatives of the network outputs w.r.t. the network inputs. Specifically, the input size is (Batch_size, 2), and the output sizes are (Batch_size, 70) and (Batch_size, 200) for out1 and out2 respectively. I want a result that has the same size as the network output, where in row i each entry satisfies $\mathrm{result}_{ij} = \partial\,\mathrm{output}_{ij} / \partial x_i$.
Here is my code:

    out1, out2 = model(torch.cat((col_x, col_t), dim=1))

    # First derivatives of out1 w.r.t. col_x and col_t, one output column at a time.
    out1_x = torch.zeros_like(out1)
    out1_t = torch.zeros_like(out1)
    for i in range(out1.shape[1]):
        r_g = torch.zeros_like(out1)
        r_g[:, i] = 1  # select column i of out1
        res = torch.autograd.grad(out1, [col_x, col_t], grad_outputs=r_g,
                                  retain_graph=True, create_graph=True, allow_unused=True)
        out1_x[:, i] = res[0].squeeze()
        out1_t[:, i] = res[1].squeeze()

    # print(torch.cuda.memory_summary(device=None, abbreviated=False))
    out2_x = torch.zeros_like(out2)
    out2_t = torch.zeros_like(out2)
    for i in range(out2.shape[1]):
        r_g = torch.zeros_like(out2)
        r_g[:, i] = 1
        res = torch.autograd.grad(out2, [col_x, col_t], grad_outputs=r_g,
                                  retain_graph=True, create_graph=True, allow_unused=True)
        out2_x[:, i] = res[0].squeeze()
        out2_t[:, i] = res[1].squeeze()
    # print(torch.cuda.memory_summary(device=None, abbreviated=False))
    # Second derivatives w.r.t. col_x.
    out1_xx = torch.zeros_like(out1)
    out2_xx = torch.zeros_like(out2)
    for i in range(out1.shape[1]):
        r_g = torch.zeros_like(out1)
        r_g[:, i] = 1
        partial_out1_xx = torch.autograd.grad(out1_x, col_x, grad_outputs=r_g,
                                              retain_graph=True, create_graph=False,
                                              allow_unused=True)[0]
        out1_xx[:, i] = partial_out1_xx.squeeze()
    # print(torch.cuda.memory_summary(device=None, abbreviated=False))
    for i in range(out2.shape[1]):
        r_g = torch.zeros_like(out2)
        r_g[:, i] = 1
        partial_out2_xx = torch.autograd.grad(out2_x, col_x, grad_outputs=r_g,
                                              retain_graph=True, create_graph=False,
                                              allow_unused=True)[0]
        out2_xx[:, i] = partial_out2_xx.squeeze()

In this code I set create_graph=False when computing the second derivative. Am I doing this right? And is there a quicker way to get the results I want, without looping over the columns?

Yes, there is no need for create_graph=True in the .grad() call that computes the second derivative: create_graph only matters if you intend to differentiate the result again, and here the second derivative is the last one you take.
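
For the second question, one option is the batched-gradient path of autograd. This is only a sketch, not a drop-in replacement: it assumes PyTorch 1.11 or newer (where torch.autograd.grad accepts the prototype is_grads_batched=True argument), that col_x and col_t each have shape (Batch_size, 1) with requires_grad=True, and the helper name column_grads is made up for the example. The idea is to stack one one-hot grad_outputs slice per output column and let autograd vmap over the stack instead of looping in Python:

    import torch

    def column_grads(out, inputs, create_graph):
        # For every column j of `out` and every tensor in `inputs`, compute
        # d(out[:, j]) / d(input) in one batched call instead of a Python loop.
        B, C = out.shape
        # One one-hot grad_outputs per output column, stacked along a leading dim: (C, B, C).
        eye = torch.eye(C, device=out.device, dtype=out.dtype)
        grad_outputs = eye.unsqueeze(1).expand(C, B, C)
        grads = torch.autograd.grad(out, inputs, grad_outputs=grad_outputs,
                                    retain_graph=True, create_graph=create_graph,
                                    is_grads_batched=True)
        # Each gradient has shape (C, B, 1); reshape to (B, C) to match `out`.
        return [g.squeeze(-1).t() for g in grads]

    # First derivatives: create_graph=True because they get differentiated again below.
    out1_x, out1_t = column_grads(out1, [col_x, col_t], create_graph=True)
    out2_x, out2_t = column_grads(out2, [col_x, col_t], create_graph=True)
    # Second derivatives: these are the last grad calls, so create_graph=False is enough.
    out1_xx = column_grads(out1_x, [col_x], create_graph=False)[0]
    out2_xx = column_grads(out2_x, [col_x], create_graph=False)[0]

The leading dimension of each returned gradient indexes the output column, so the transpose recovers the (Batch_size, num_columns) layout you build with your loop. One caveat: is_grads_batched uses vmap under the hood, so it only works if every op in your model is vmap-compatible; if it is not, the per-column loop you already have (or torch.func.jacrev in recent PyTorch) is the fallback.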