Is there any issue if i create a new loss function and optimizer each time i run the training?

import torch
from torch import nn

for t in range(epochs):
    loss_fn = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
    print(f"Epoch {t+1}\n-------------------------------")
    train(train_dataloader, model, loss_fn, optimizer)

No, there's nothing wrong with it. It will be a tiny bit slower, but probably not significantly.

You shouldn't see any issues with SGD, as @Omroth mentioned, besides the additional overhead of creating the objects. Note, however, that optimizers with internal state, such as Adam, will reset that state after each reinitialization, which might create issues during model training.
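To make the state reset concrete, here is a minimal sketch (the toy model and data are invented for illustration) showing Adam's per-parameter state starting over whenever the optimizer is recreated inside the loop:

    import torch
    from torch import nn

    model = nn.Linear(4, 2)
    x = torch.randn(8, 4)
    y = torch.randint(0, 2, (8,))
    loss_fn = nn.CrossEntropyLoss()

    for t in range(3):
        # Recreating Adam here discards its running moment estimates
        # (exp_avg, exp_avg_sq) and its step counter.
        optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
        optimizer.zero_grad()
        loss_fn(model(x), y).backward()
        optimizer.step()
        state = next(iter(optimizer.state.values()))
        print(state["step"])  # stuck at 1 every iteration instead of 1, 2, 3

With a single Adam instance created before the loop, the step counter would count 1, 2, 3 and the moment estimates would keep accumulating across epochs.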

Yeah, my worry was that nn.CrossEntropyLoss might have an internal state. If that's the case, I would probably just create both the loss function and the optimizer once at the beginning and keep reusing them.
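For reference, a self-contained sketch of that "create once, reuse" pattern (the toy model, data, and inline training loop are stand-ins for the real ones). nn.CrossEntropyLoss itself holds only fixed configuration such as the class weight and reduction mode, not running state across forward calls, so recreating it is harmless either way, but creating both up front is the usual pattern:

    import torch
    from torch import nn
    from torch.utils.data import DataLoader, TensorDataset

    model = nn.Linear(4, 2)  # toy model, stand-in for the real one
    train_dataloader = DataLoader(
        TensorDataset(torch.randn(32, 4), torch.randint(0, 2, (32,))),
        batch_size=8,
    )

    # Create both once up front and reuse them; this also preserves any
    # optimizer state (e.g. momentum) across epochs.
    loss_fn = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

    epochs = 5
    for t in range(epochs):
        print(f"Epoch {t+1}\n-------------------------------")
        for xb, yb in train_dataloader:
            optimizer.zero_grad()
            loss_fn(model(xb), yb).backward()
            optimizer.step()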