Small typo in tutorial?

Hi,

I am a newcomer to the PyTorch world and started with the 60 Minute Blitz tutorial.

In the second paragraph of the loss function section, it is mentioned that the mean-squared error is computed between the “input” and the target.

Shouldn’t it be computed between the “output” and the target? Or did I misunderstand the point here?

Best

Yes, I think it’s a typo.


Just one additional question related to the same tutorial:

How can I use the ‘grad_fn’ attribute of the ‘loss’ variable to show the computational graph like this:

input -> conv2d -> relu -> maxpool2d -> conv2d -> relu -> maxpool2d
-> view -> linear -> relu -> linear -> relu -> linear
-> MSELoss
-> loss

Unfortunately, the tutorial does not describe how to generate such a graph trace using ‘grad_fn’.

Hi,

It depends on what you mean by “a graph”.
The code sample below (from the tutorial) shows you how to access the first elements:

print(loss.grad_fn)  # MSELoss
print(loss.grad_fn.next_functions[0][0])  # Linear
print(loss.grad_fn.next_functions[0][0].next_functions[0][0])  # ReLU

Basically, the grad_fn attribute of a Variable gives you the Function that created that Variable.
From a Function, you can use .next_functions to see which Functions came before it in the graph.
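
If you want the whole chain rather than just the first elements, you can walk the graph yourself. Here is a minimal sketch using a recent PyTorch version (where Variables and Tensors are merged); the small Sequential model and the print_graph helper are illustrative, not from the tutorial. It recursively follows .next_functions and prints one indented line per node:

import torch
import torch.nn as nn

# Illustrative stand-in model; the tutorial's LeNet would work the same way.
net = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))
out = net(torch.randn(1, 4))
loss = nn.MSELoss()(out, torch.randn(1, 1))

def print_graph(fn, depth=0):
    # Hypothetical helper: each node in the autograd graph is a Function
    # whose class name describes the backward op (e.g. MseLossBackward0).
    if fn is None:  # non-differentiable inputs show up as None
        return
    print("  " * depth + type(fn).__name__)
    # next_functions holds (Function, input_index) pairs for the preceding ops.
    for next_fn, _ in fn.next_functions:
        print_graph(next_fn, depth + 1)

print_graph(loss.grad_fn)

Note that this prints the backward ops (MseLossBackward0, AddmmBackward0, ReluBackward0, …) rather than the forward layer names, but it is the same graph read in reverse.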

Aha, I just thought there might be a way to generate such a pattern automatically (with the arrows, ‘layer1 -> layer2 -> …’).
Thanks so much!

If you want to generate a graph out of that, you can check out this small package and see how it’s implemented.
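
For reference, assuming the package referred to is torchviz (pytorchviz), a minimal usage sketch looks like this (the Sequential model is again illustrative):

# pip install torchviz  (the graphviz system package must also be installed)
import torch
import torch.nn as nn
from torchviz import make_dot

net = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))
loss = nn.MSELoss()(net(torch.randn(1, 4)), torch.randn(1, 1))

# make_dot walks grad_fn / next_functions and returns a graphviz.Digraph
dot = make_dot(loss, params=dict(net.named_parameters()))
dot.render("net_graph", format="png")  # writes net_graph.png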
