How to see what backward is doing?

Hi, how can I see here how backward computed the derivative? Is it possible to see the computational graph? Or is there any way to see how it does the backward of CustomCrossEntropy?
I want to reproduce it manually.

Thank you.

import torch
def CustomCrossEntropy(y_pred, y_target):
    # convert class ID to one hot encoding
    one_hot = torch.zeros_like(y_pred)
    one_hot[range(y_target.shape[0]), y_target] = 1.0

    # cross entropy from the logits (log-softmax written out explicitly)
    ce = -one_hot * (y_pred - y_pred.exp().sum(1).log().unsqueeze(1))

    # reduction mean
    result = ce.sum(dim=1).mean()
    return result


batch_size = 3
classes_count = 10
target_class_id = torch.randint(0, classes_count, (batch_size,))

# initial logits value
x_initial = torch.randn((batch_size, classes_count))
xc = torch.nn.Parameter(x_initial.clone(), requires_grad=True)

loss_func_c = CustomCrossEntropy

loss_c = loss_func_c(xc, target_class_id)
loss_c.backward()

print(loss_c)

Hi @Martina_Ragulikova,

Have a look at the torchviz library to visualize the computational graph.

The github repo is here: GitHub - szagoruyko/pytorchviz: A small package to create visualizations of PyTorch execution graphs
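For example, something along these lines should render the graph for your snippet (a minimal sketch, assuming torchviz is installed; make_dot only traverses the grad_fn chain of the loss, so it can run after backward()):

from torchviz import make_dot

# "custom_ce_graph" is just an example file name; render() writes custom_ce_graph.png
make_dot(loss_c, params={"xc": xc}).render("custom_ce_graph", format="png")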

Hi, thanks for the reply. I don't think that is enough. Isn't it possible to see the mathematical operations themselves?

The comp. graph won’t show you an explicit formula, but it will show you the forward pass of your function. It will also show the corresponding backward formula via the associated backward function; the GitHub repo has a figure showing this. If you want to find the explicit formula for the derivative, that is pretty hard to do.

You’ll need to identify every function in your forward pass, and then find the associated backward formula used during the backward pass. The backward formulae are defined here (in C++, not Python), I believe: pytorch/FunctionsManual.cpp at master · pytorch/pytorch · GitHub
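You can also walk that chain directly from Python via grad_fn / next_functions; a rough sketch, reusing loss_c from your snippet:

# print each backward node recorded for loss_c, e.g. MeanBackward0, SumBackward1, MulBackward0, ...
def print_graph(fn, indent=0):
    if fn is None:
        return
    print(" " * indent + type(fn).__name__)
    for next_fn, _ in fn.next_functions:
        print_graph(next_fn, indent + 2)

print_graph(loss_c.grad_fn)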

If you want an explicit expression for the backward, you’ll need to manually define the backward pass and represent it as a torch.autograd.Function object.
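A minimal sketch of such a Function (reusing x_initial, target_class_id and xc from your snippet): the analytical gradient of the mean cross entropy with respect to the logits is (softmax(y_pred) - one_hot) / batch_size.

class ExplicitCrossEntropy(torch.autograd.Function):
    @staticmethod
    def forward(ctx, y_pred, y_target):
        one_hot = torch.zeros_like(y_pred)
        one_hot[range(y_target.shape[0]), y_target] = 1.0
        # log-softmax written out explicitly, as in your forward pass
        log_softmax = y_pred - y_pred.exp().sum(1, keepdim=True).log()
        ctx.save_for_backward(log_softmax, one_hot)
        return -(one_hot * log_softmax).sum(dim=1).mean()

    @staticmethod
    def backward(ctx, grad_output):
        log_softmax, one_hot = ctx.saved_tensors
        batch_size = one_hot.shape[0]
        # gradient of the mean cross entropy w.r.t. the logits
        grad_y_pred = grad_output * (log_softmax.exp() - one_hot) / batch_size
        return grad_y_pred, None  # the integer targets get no gradient

# compare against the autograd result from CustomCrossEntropy
xm = torch.nn.Parameter(x_initial.clone(), requires_grad=True)
loss_m = ExplicitCrossEntropy.apply(xm, target_class_id)
loss_m.backward()
print(torch.allclose(xm.grad, xc.grad))  # should print True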

So you think that finding the derivative, or “backward”, of this function is hard to do?

ce = -x * (y - y.exp().sum(1).log().unsqueeze(1))
result = ce.sum(dim=1).mean()

Here is a similar topic: Backward of crossentropyloss - #5 by Martina_Ragulikova

And I showed the same thing there with the MSE function, but I cannot figure out the backward for this one…