Skorch refits the model when I call load_params to load the saved parameters

So I'm using skorch for a project and I find it super cool; the API is straightforward to work with. I trained my NN with a Checkpoint callback that saves the best parameters, optimizer and history, as described in [skorch.callbacks — skorch 0.15.0 documentation]. Now I want to load the saved model, or rather its parameters, following the docs for load_params.

As I said, I saved the model using a Checkpoint object, so I thought: why not use the same Checkpoint object to load the model? But that feels messy because I must declare the Checkpoint object all over again. I also noticed that when I load the model, somehow the fit function is called again and my model starts a training process again! Can someone tell me why this is happening, or what is wrong with my code? One more thing: I find it bothersome to write a Checkpoint object again just to load the model after saving it, and I'm sure there is a better way. Is there any way to do this like in PyTorch, by simply calling torch.load(model_path)?
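For context, the training side looked roughly like this (simplified; BearingCoarseNetwork is from my models module, and X_train/y_train stand in for my actual dataset):

from skorch.callbacks import Checkpoint
from skorch.regressor import NeuralNetRegressor
from models import BearingCoarseNetwork

coarse_checkpoint = Checkpoint(dirname='results/bearing/coarse')
net = NeuralNetRegressor(
    module=BearingCoarseNetwork(n_hidden=32),
    callbacks=[coarse_checkpoint],  # saves best params, optimizer and history
)
net.fit(X_train, y_train)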

And this is my loading code:

from skorch.callbacks import Checkpoint, TrainEndCheckpoint, LoadInitState
from skorch.dataset import CVSplit
from skorch.regressor import NeuralNetRegressor
from models import BearingCoarseNetwork, BearingDifferenceNetwork
import torch
from torch import optim
from main import get_datasets

device = 'cuda' if torch.cuda.is_available() else 'cpu'

coarse_checkpoint = Checkpoint(dirname='results/bearing/coarse')
coarse_train_end_checkpoint = TrainEndCheckpoint(dirname='results/bearing/coarse')
bearing_coarse_model = NeuralNetRegressor(
    module=BearingCoarseNetwork(n_hidden=32),
)

diff_checkpoint = Checkpoint(dirname='results/bearing/difference')
diff_train_end_checkpoint = TrainEndCheckpoint(dirname='results/bearing/difference')
bearing_difference_model = NeuralNetRegressor(
    module=BearingDifferenceNetwork(n_hidden=32),
)

bearing_coarse_model.initialize()
bearing_difference_model.initialize()
bearing_coarse_model.load_params(checkpoint=coarse_checkpoint)
bearing_difference_model.load_params(checkpoint=diff_checkpoint)

trainset, testset = get_datasets()
coarse_preds = bearing_coarse_model.predict(trainset)
diff_preds = bearing_difference_model.predict(trainset)
y_final = coarse_preds + diff_preds
print(y_final.shape)

I noticed that when I load the model, somehow the fit function is called again and my model starts a training process again! Can someone tell me why this is happening, or what is wrong with my code?

Loading parameters from a checkpoint (or from anywhere else) does not start the fit loop; fit only runs when you call it explicitly. Your code doesn't seem complete, so it is hard for me to reproduce this issue. Can you confirm this is not a bug on your side, or post a more complete example?
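For reference, here is the minimal flow I would expect to load parameters without any training being triggered (a sketch; MyModule and X are placeholders for your own module and data, and the checkpoint files are assumed to already exist in that directory):

from skorch.callbacks import Checkpoint
from skorch.regressor import NeuralNetRegressor

net = NeuralNetRegressor(module=MyModule())
net.initialize()  # required before load_params
net.load_params(checkpoint=Checkpoint(dirname='results/bearing/coarse'))
y_pred = net.predict(X)  # predict only runs a forward pass; fit is never called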

Another thing: I find it bothersome to write a Checkpoint object again just to load the model after saving it, and I'm sure there is a better way. Is there any way to do this like in PyTorch, by simply calling torch.load(model_path)?

Once the Checkpoint callback has written a model (i.e., parameters, history and optimizer), you can pass the parameter file path to net.load_params directly via the f_params parameter, without re-creating the Checkpoint object. In your example you have

coarse_checkpoint = Checkpoint(dirname='results/bearing/coarse')

which writes the model parameters to 'results/bearing/coarse/params.pt' (documentation). You can then load the parameters directly with

net.load_params(f_params='results/bearing/coarse/params.pt')

The basic loading/saving workflow is documented here.
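For completeness, here is a minimal save/load round trip you can run end to end (a sketch with a toy module and random data, not your actual project code):

import numpy as np
import torch
from skorch.callbacks import Checkpoint
from skorch.regressor import NeuralNetRegressor

class ToyModule(torch.nn.Module):
    def __init__(self, n_hidden=32):
        super().__init__()
        self.seq = torch.nn.Sequential(
            torch.nn.Linear(10, n_hidden),
            torch.nn.ReLU(),
            torch.nn.Linear(n_hidden, 1),
        )

    def forward(self, X):
        return self.seq(X)

X = np.random.randn(200, 10).astype(np.float32)
y = np.random.randn(200, 1).astype(np.float32)

# Training: the Checkpoint callback writes params.pt, optimizer.pt and
# history.json into dirname whenever the validation loss improves.
cp = Checkpoint(dirname='results/toy')
net = NeuralNetRegressor(ToyModule, max_epochs=5, callbacks=[cp])
net.fit(X, y)

# Loading: no Checkpoint object needed, only the file path.
new_net = NeuralNetRegressor(ToyModule)
new_net.initialize()
new_net.load_params(f_params='results/toy/params.pt')
y_pred = new_net.predict(X)  # forward pass only; fit is never called here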