With torch.no_grad() - context manager

It is common knowledge that with

with open(...) as fp:

fp.close() is called automatically once the indented block under the with statement finishes executing.
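
A quick demonstration (a minimal sketch; "example.txt" is just a placeholder file name):

```python
with open("example.txt", "w") as fp:
    fp.write("hello")  # fp is open inside the indented block

# Leaving the block invoked fp.close() via the file object's __exit__
print(fp.closed)  # True
```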

I came across:

with torch.no_grad():

Does it mean that at the end, gradient calculation is re-enabled? If so, how does the context manager know to turn it back on?

Yes, gradient calculation will be re-enabled when this context is left, assuming it was enabled before entering.
During its __enter__ call it saves the current value of torch.is_grad_enabled() and restores it in its __exit__ method, as seen here.
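
You can watch this happen with torch.is_grad_enabled(), and the restore logic boils down to the save/restore pattern sketched at the end of this example (an illustrative reimplementation, not PyTorch's actual source):

```python
import torch

x = torch.ones(3, requires_grad=True)
print(torch.is_grad_enabled())      # True

with torch.no_grad():
    print(torch.is_grad_enabled())  # False inside the context
    y = x * 2
    print(y.requires_grad)          # False: no graph is built here

print(torch.is_grad_enabled())      # True again after __exit__ ran

# Illustrative sketch of the same save/restore pattern
# (hypothetical class, not PyTorch's actual implementation):
class NoGradSketch:
    def __enter__(self):
        self.prev = torch.is_grad_enabled()  # save current state
        torch.set_grad_enabled(False)        # disable gradient tracking

    def __exit__(self, exc_type, exc_value, traceback):
        torch.set_grad_enabled(self.prev)    # restore the saved state

with NoGradSketch():
    z = x * 3
print(z.requires_grad, torch.is_grad_enabled())  # False True
```

Note that __exit__ restores whatever state was saved, which is why gradients only come back on if they were enabled before entering: inside a nested no_grad block, leaving the inner context leaves gradients disabled.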