@torch.no_grad() vs with torch.no_grad()

What is the difference between @torch.no_grad() and the usual with torch.no_grad()?
Are they the same? If not, when should each be used?

Hi @Rahul_pillai

with torch.no_grad():
is used to locally turn off gradient tracking.

@torch.no_grad() is another option to turn off gradients, used as a decorator on a function. Any time the decorated function is called, gradient tracking is turned off locally for the duration of that call. A few examples can be found at link
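To make the difference concrete, here is a minimal sketch of both forms side by side (the tensor `x` and the function `double` are illustrative names, not from the original post):

```python
import torch

x = torch.ones(3, requires_grad=True)

# Context-manager form: gradient tracking is off only inside the block
with torch.no_grad():
    y = x * 2
print(y.requires_grad)  # False

# Decorator form: gradient tracking is off for every call to this function
@torch.no_grad()
def double(t):
    return t * 2

z = double(x)
print(z.requires_grad)  # False

# Outside both, tracking behaves as usual
w = x * 2
print(w.requires_grad)  # True
```

Both forms have the same effect; the decorator is just a convenient way to apply the context manager to an entire function body.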

Thanks for answering @SANTOSH_S

@torch.no_grad()
def fn1():
    ...

def fn2():
    ...
Here, will it turn off gradients for fn1 alone, or for both fn1 and fn2?

Just fn1 alone.

Thank you @harsha_g :slightly_smiling_face: