How to save weights after training in PyTorch, and then use the dumped weights for testing at any time?

How can I save the updated weights in PyTorch after the final training?
Actually, I have to test multiple datasets at different times. If I can dump my weights, I can load them at any time to run on a test dataset.

You can store the state_dict, which will include the parameters and buffers of the model as described in the serialization docs.

Later you can create an instance of your model and load this state_dict before testing the model.
Also, don’t forget to call model.eval() to switch the behavior of some layers, such as disabling dropout.
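
A minimal sketch of that workflow (MyModel, its constructor arguments, and the file name weights.pth are placeholders for your own model class and path):

import torch

# after training: save only the parameters and buffers
torch.save(model.state_dict(), "weights.pth")

# at test time: recreate the model and load the saved state_dict
model = MyModel(*args)                            # same constructor arguments as during training
model.load_state_dict(torch.load("weights.pth"))
model.eval()                                      # e.g. disables dropout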

I have used the idea you gave and implemented it. The code now runs without any errors, but I still have to check whether it works correctly. Could you have a look and tell me if the implementation is okay?

import torch
from numpy import vstack
from sklearn.metrics import mean_squared_error
from torch.nn import Module, Linear, Tanh, Sigmoid, MSELoss
from torch.nn.init import kaiming_uniform_, xavier_uniform_
from torch.optim import Adam

class Network(Module):
    def __init__(self, n_inputs):
        super(Network, self).__init__()
        # H is the hidden-layer width, assumed to be defined elsewhere
        self.hidden1 = Linear(n_inputs, H)                          # input to 1st hidden layer
        kaiming_uniform_(self.hidden1.weight, nonlinearity='relu')
        self.hidden2 = Linear(H, H)                                 # 1st hidden to 2nd hidden layer
        kaiming_uniform_(self.hidden2.weight, nonlinearity='relu')
        self.output = Linear(H, 1)                                  # 2nd hidden layer to output
        xavier_uniform_(self.output.weight)
        self.tanh = Tanh()                                          # activation used after every layer
        self.sigmoid = Sigmoid()                                    # defined but not used in forward

    def forward(self, X):
        X = self.tanh(self.hidden1(X))
        X = self.tanh(self.hidden2(X))
        X = self.tanh(self.output(X))
        return X

def train_model(train_dl, model):
    criterion = MSELoss()                                               # loss function
    optimizer = Adam(model.parameters(), lr=0.001, betas=(0.9, 0.999))  # optimizer
    for epoch in range(200):
        for i, (inputs, targets, sign_targets) in enumerate(train_dl):
            optimizer.zero_grad()
            yhat = model(inputs)
            loss = criterion(yhat, targets)
            loss.backward()
            optimizer.step()
            print("loss:", loss.item())
    print('Finished Training')

def evaluate_model(test_dl, model):                     # send the test dataset through the network
    model.eval()                                        # switch layers such as dropout to eval mode
    predictions, actuals = list(), list()
    with torch.no_grad():                               # no gradients needed during evaluation
        for i, (inputs, targets, sign_targets) in enumerate(test_dl):
            yhat = model(inputs)
            yhat = yhat.detach().numpy()
            actual = targets.numpy()
            actual = actual.reshape((len(actual), 1))
            predictions.append(yhat)
            actuals.append(actual)
    predictions, actuals = vstack(predictions), vstack(actuals)
    # compute the mean squared error (this is an error, not an accuracy)
    mse = mean_squared_error(actuals, predictions)
    return mse

train_model(train_dl, model)

torch.save(model, "model.pth")
the_model = torch.load("model.pth")

# evaluate the model
mse = evaluate_model(test_dl, the_model)

I would recommend storing and loading the state_dict, not the model directly, as saving the whole model may break in various ways (e.g. if you change the file structure of your project, etc.).
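
Applied to the code above, it would look roughly like this (a sketch; it assumes n_inputs is still available when the model is rebuilt at test time):

# save only the learned parameters after training
torch.save(model.state_dict(), "model_weights.pth")

# at test time: rebuild the architecture, then load the weights into it
the_model = Network(n_inputs)
the_model.load_state_dict(torch.load("model_weights.pth"))
the_model.eval()   # evaluate_model() also calls this, but it is a good habit

mse = evaluate_model(test_dl, the_model)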