Hi, after reading through multiple PyTorch implementations, I've noticed that there really isn't a standard signature for the train() and test() functions.
On one extreme, most of the time, I've seen people simply put the train() and test() functions inside main.py, then kick off training with something like:
python main.py --mode=train ...
Here, train() takes only a single parameter (usually epoch), or none at all. Basically, the model, dataset, DataLoader, loss and optimizer are global variables in the scope of main.py.
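To make the "globals" convention concrete, here's a minimal sketch. The model, data and learning rate are hypothetical plain-Python stand-ins (a dict and a list instead of an nn.Module and a DataLoader) so the shape of the API is visible without any torch code; in a real main.py they would be torch objects built at module level.

```python
# Module-level "globals" that train() closes over (hypothetical stand-ins):
model = {"w": 0.0}                    # stand-in for an nn.Module's parameters
dataset = [(1.0, 2.0), (2.0, 4.0)]    # (x, y) pairs; stand-in for a DataLoader
lr = 0.1                              # stand-in for the optimizer's settings

def train(epoch):
    """Only `epoch` is passed in; everything else comes from module scope."""
    total_loss = 0.0
    for x, y in dataset:
        pred = model["w"] * x
        loss = (pred - y) ** 2            # stand-in for a loss module (MSE)
        grad = 2 * (pred - y) * x         # manual gradient of the loss w.r.t. w
        model["w"] -= lr * grad           # stand-in for optimizer.step()
        total_loss += loss
    return total_loss

for epoch in range(20):
    train(epoch)
```

The call site stays clean (`train(epoch)`), but swapping in a second loss or DataLoader means either mutating the globals or duplicating the function.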
If I follow this convention, then in my case, where I'm comparing the performance of two different custom losses (which require two different custom DataLoaders and Samplers), I would have to write two different train() functions.
On the other extreme, if I pass everything into train(), the arguments become: epoch, model, train_set (or dataloader), loss, and optimizer. That makes the train() signature quite bloated.
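For contrast, here's a sketch of the fully parameterized style, again with hypothetical plain-Python stand-ins rather than real torch objects. The point is that one train() can serve both experiments because the loss and data are arguments, at the cost of a wider signature:

```python
def train(epoch, model, dataloader, loss_fn, optimizer_step):
    """Everything is passed explicitly; nothing is read from module scope."""
    total_loss = 0.0
    for x, y in dataloader:
        pred = model["w"] * x
        total_loss += loss_fn(pred, y)
        optimizer_step(model, pred, y, x)  # stand-in for loss.backward() + step()
    return total_loss

# Two different "custom losses" can reuse the same train():
mse = lambda p, y: (p - y) ** 2
mae = lambda p, y: abs(p - y)

def sgd_step(model, pred, y, x, lr=0.1):
    model["w"] -= lr * 2 * (pred - y) * x  # SGD update using the MSE gradient

data = [(1.0, 2.0), (2.0, 4.0)]            # stand-in for experiment-specific DataLoader
m = {"w": 0.0}
for epoch in range(20):
    train(epoch, m, data, mse, sgd_step)   # second experiment: pass mae + its loader
```

The bloat is real, but it buys reuse: the two loss/DataLoader combinations become two call sites instead of two copies of the loop.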
As such, I wanted to ask you guys: what is your preference for the train function? train(epoch), or train(epoch, model, dataloader, …)? Or some middle ground between these two extremes?