A little question about PyTorch training parameters

I’m learning PyTorch and deep learning, and I’m training a classifier on an image dataset. I have split the data into a training set and a validation set, and I completed training on the training set like this:

model = Classifier().cuda()
loss = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
num_epoch = 30
for epoch in range(num_epoch):
    ...  # training loop body omitted

After that, I want to merge the training set and validation set into one bigger dataset and train for 30 more epochs.
Could I code it like this?

train_val_x = np.concatenate((train_x, val_x), axis=0)
train_val_y = np.concatenate((train_y, val_y), axis=0)
train_val_set = ImgDataset(train_val_x, train_val_y, train_transform)
train_val_loader = DataLoader(train_val_set, batch_size=batch_size, shuffle=True)
model_best = Classifier().cuda()
loss = nn.CrossEntropyLoss() # CrossEntropyLoss, because this is a classification task
optimizer = torch.optim.Adam(model_best.parameters(), lr=0.001) # use the Adam optimizer
num_epoch = 30
for epoch in range(num_epoch):
    ...  # training loop body omitted
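For comparison, here is a minimal self-contained sketch of the *other* option: reusing the already-trained `model` object and only creating a fresh optimizer and loader for the merged data. A tiny `nn.Linear` and random tensors stand in for your `Classifier` and `ImgDataset` (those names are from your code; the stand-ins are my assumption), so the pattern runs on CPU as-is:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

torch.manual_seed(0)

# Toy stand-ins: a tiny model and random data in place of your
# Classifier and ImgDataset, just to show the fine-tuning pattern.
model = nn.Linear(8, 3)                 # pretend: already trained for 30 epochs
x = torch.randn(32, 8)                  # "merged" train+val inputs
y = torch.randint(0, 3, (32,))          # class labels
train_val_loader = DataLoader(TensorDataset(x, y), batch_size=8, shuffle=True)

loss_fn = nn.CrossEntropyLoss()
# Reuse the SAME model object; only the optimizer is created fresh,
# so training continues from the learned weights.
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)

for epoch in range(3):                  # 30 in your case
    model.train()
    for xb, yb in train_val_loader:     # add .cuda() on the batches on GPU
        optimizer.zero_grad()
        batch_loss = loss_fn(model(xb), yb)
        batch_loss.backward()
        optimizer.step()
```

Whether continuing from the trained weights or retraining from scratch on the merged data gives a better final model is a separate (empirical) question; the snippet only shows how to do the former.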

Or should I use the former variable `model` instead of `model_best`?
Actually, my question is: in PyTorch, does `nn.Module.parameters()` keep my pre-trained parameter values, or does constructing a new `Classifier()` reset them?
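To make the parameter question concrete, here is a small sketch (using a tiny `nn.Linear` as a stand-in for your `Classifier`, since I don't have its definition): constructing a new module re-initializes its weights randomly, while reusing the old object, or copying its `state_dict`, keeps the trained values. Creating a new optimizer does not touch the weights until `optimizer.step()` runs:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in for your Classifier (assumption: any nn.Module behaves the same way).
model = nn.Linear(4, 2)                  # pretend this was trained for 30 epochs
trained_weights = model.weight.detach().clone()

# A brand-new module gets freshly initialized (random) parameters:
model_best = nn.Linear(4, 2)
print(torch.equal(model_best.weight, trained_weights))  # False

# To continue from the trained weights, either reuse `model` directly,
# or copy its state_dict into the new module:
model_best.load_state_dict(model.state_dict())
print(torch.equal(model_best.weight, trained_weights))  # True

# A fresh optimizer over the reused parameters is fine; the parameters
# themselves only change once optimizer.step() is called.
optimizer = torch.optim.Adam(model_best.parameters(), lr=0.001)
print(torch.equal(model_best.weight, trained_weights))  # True
```

So `parameters()` just hands the optimizer references to the module's live tensors; what resets the weights in your second snippet is the line `model_best = Classifier().cuda()`, not the optimizer.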
Thanks a lot. ^.^