```python
import torch
import torch.nn as nn

tensor = torch.randn((1, 2), requires_grad=True)
print(torch.tanh(tensor))  # a)
print(nn.Tanh()(tensor))   # b)
# nn.functional.tanh is deprecated
# print(torch.nn.functional.tanh(tensor))
```
They are actually the same, and so should be identical in terms of speed.
If you look at the source, nn.Tanh simply calls torch.tanh in its forward method.
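Since the module just delegates to the function, you can verify the outputs match exactly:

```python
import torch
import torch.nn as nn

x = torch.randn(4, requires_grad=True)

# nn.Tanh is a stateless Module whose forward delegates to torch.tanh,
# so the two calls produce identical tensors.
a = torch.tanh(x)
b = nn.Tanh()(x)
print(torch.equal(a, b))  # True
```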
I saw this just a moment ago, thanks! I would say torch.tanh(tensor) is just a tiny bit faster.
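Any gap should only come from the Module's `__call__` overhead, and it can be checked with a rough micro-benchmark along these lines (the tensor size and repeat count here are arbitrary choices):

```python
import timeit
import torch
import torch.nn as nn

x = torch.randn(1000, 1000)
tanh_module = nn.Tanh()  # construct once so module creation isn't timed

# Time the plain function vs. the module call on the same input
t_fn = timeit.timeit(lambda: torch.tanh(x), number=1000)
t_mod = timeit.timeit(lambda: tanh_module(x), number=1000)
print(f"torch.tanh: {t_fn:.4f}s")
print(f"nn.Tanh():  {t_mod:.4f}s")
```

On most machines the difference is in the noise, since the actual tanh kernel dominates the Module dispatch cost.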
BTW, do you have any idea what @weak_script_method is supposed to do?