Is this mandatory in the training loop? For instance, I've built the following function to do my training.
def train(...):
    # Put the model in training mode
    model.train()
    # Load the data and move it to the device
    for (data, label) in loader:
        ...
        # Refresh the gradients
        optimizer.zero_grad(set_to_none=True)
        # Calculate the loss
        loss = model.objective(data)
        # Backprop
        loss.backward()
        # Optimizer step
        optimizer.step()
Should I keep it as it is, or am I supposed to call loss.item() on the loss to free some memory on my GPU?
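For example, if I did use .item(), I imagine the loop would look something like this (just a sketch of what I mean; train_with_item is a name I made up, the device transfer and the running-loss average are illustrative, and model.objective is my own function from above):

def train_with_item(model, loader, optimizer, device):
    model.train()
    running_loss = 0.0
    for (data, label) in loader:
        # Move the batch to the device (illustrative; my real code does this in the "...")
        data, label = data.to(device), label.to(device)
        optimizer.zero_grad(set_to_none=True)
        loss = model.objective(data)
        loss.backward()
        optimizer.step()
        # .item() copies the scalar into a plain Python float, so the value I keep
        # across iterations no longer references the loss tensor or its graph
        running_loss += loss.item()
    return running_loss / len(loader)

If I'm not accumulating or logging the loss at all, as in my original loop, I'm not sure the extra .item() call buys anything.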