During evaluation, how do I apply torch.no_grad to only part of a model?

I have a large model where most of it is just a normal network structure, but there is one place where I need to manually calculate part of the gradients, something like this:

import torch
import torch.nn as nn

class Model(nn.Module):
    def __init__(self):
        super().__init__()
        # many other layers
        self.many_layers = ManyLayers()
        self.many_other_layers = ManyOtherLayers()
        self.some_other_complex_layers = SomeOtherComplexLayers()

    def forward(self, x, y, z):
        fc = self.many_layers(x)
        # only this little place where I need to manually calculate a gradient
        grad, = torch.autograd.grad(z, y, torch.ones_like(z))  # grad() returns a tuple
        grad_out = self.many_other_layers(grad)
        return self.some_other_complex_layers(grad_out, fc)

During evaluation, I want to use torch.no_grad() for performance, but I get this error:

RuntimeError: element 0 of variables does not require grad and does not have a grad_fn
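
To give more context, the evaluation code is roughly this (simplified; compute_z is just a stand-in for whatever actually produces z from y):

model.eval()
with torch.no_grad():
    z = compute_z(y)      # built without a graph, so z has no grad_fn
    out = model(x, y, z)  # torch.autograd.grad(z, y, ...) then fails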

How do I enable grad for only this specific small part of the computation while keeping torch.no_grad() for the rest of the model? I tried setting y.requires_grad = True, but it tells me that requires_grad can only be changed on leaf tensors.

You can use torch.no_grad() as a context manager and move into it the parts of your forward pass that don't need to build a computation graph and are therefore not part of the gradient computation.
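
A minimal sketch of that idea, reusing the placeholder submodules from your snippet: the bulk of the model runs under torch.no_grad(), and only the manual gradient stays outside. This assumes z was computed from y with autograd enabled, i.e. the model call itself is no longer wrapped in an outer torch.no_grad():

def forward(self, x, y, z):
    with torch.no_grad():
        # no graph is built for the bulk of the model
        fc = self.many_layers(x)

    # autograd is still enabled here, so the manual gradient works
    grad, = torch.autograd.grad(z, y, torch.ones_like(z))

    with torch.no_grad():
        grad_out = self.many_other_layers(grad)
        return self.some_other_complex_layers(grad_out, fc)

If those parts still need gradients during training, you could replace torch.no_grad() with torch.set_grad_enabled(self.training), which builds the graph in train mode and skips it in eval mode.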

Thanks, that is one solution, but I wonder if there are other ways besides manually splitting the model. Do you know how other frameworks like TensorFlow handle this situation?