Is the computation graph reconstructed in each epoch?

I am not sure whether I understand this correctly. I was reading the code here: https://github.com/jcjohnson/pytorch-examples/blob/master/nn/two_layer_net_module.py

In the training part:

for t in range(500):
  # Forward pass: Compute predicted y by passing x to the model
  y_pred = model(x)

  # Compute and print loss
  loss = loss_fn(y_pred, y)
  print(t, loss.item())

  # Zero gradients, perform a backward pass, and update the weights.
  optimizer.zero_grad()
  loss.backward()
  optimizer.step()

It seems to me that in each epoch the computation graph is reconstructed by these two lines:

y_pred = model(x)

# Compute and print loss
loss = loss_fn(y_pred, y)

Is it doing repeated graph construction here, or is it actually doing something similar to Keras’s “compile and run” approach, where the computation graph is constructed once and only the computation is run afterwards?

Thank you!

The computation graph is constructed anew during each forward pass. There is no separate “compile” step as in Keras: the graph is recorded on the fly while the forward computation runs, and it is freed as soon as loss.backward() has consumed it (unless you pass retain_graph=True). This define-by-run design is what allows you to build your model dynamically and use plain Python control-flow constructs such as for loops and if statements inside forward().
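
For illustration, here is a minimal sketch of a model whose graph genuinely changes from call to call, in the spirit of the dynamic-net example in the same repository (the class name DynamicNet, the layer sizes, and the random depth are just illustrative choices for this sketch, not part of the original code):

import random

import torch
import torch.nn as nn

class DynamicNet(nn.Module):
    def __init__(self, D_in, H, D_out):
        super().__init__()
        self.input_linear = nn.Linear(D_in, H)
        self.middle_linear = nn.Linear(H, H)
        self.output_linear = nn.Linear(H, D_out)

    def forward(self, x):
        # The number of hidden layers varies on every call, so the
        # autograd graph recorded for each forward pass varies too.
        h = torch.relu(self.input_linear(x))
        for _ in range(random.randint(0, 3)):
            h = torch.relu(self.middle_linear(h))
        return self.output_linear(h)

x = torch.randn(64, 1000)
y = torch.randn(64, 10)
model = DynamicNet(1000, 100, 10)
loss_fn = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4)

for t in range(5):
    y_pred = model(x)   # a fresh graph is recorded during this call
    loss = loss_fn(y_pred, y)
    optimizer.zero_grad()
    loss.backward()     # the graph is consumed and freed here
    optimizer.step()

Under a static compile-and-run framework this varying depth would force you to rebuild the graph yourself; in PyTorch it just works, at the cost of a small graph-construction overhead on every iteration.

A quick way to convince yourself that the graph really is per-pass rather than built once (this check is mine, not from the post above): calling backward() a second time on the same loss fails, because the first call freed the graph:

import torch

x = torch.randn(3, requires_grad=True)
loss = (x ** 2).sum()
loss.backward()         # works: consumes and frees this pass's graph
try:
    loss.backward()     # fails: the graph from this forward pass is gone
except RuntimeError as e:
    print(e)            # "Trying to backward through the graph a second time ..."

Running the forward pass again records a fresh graph, after which backward() works again.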