How to run inference on an image using a .pth model file

Hello, I am using PyTorch for the first time and have no background in ML. I already have the images and the .pth model file. Pardon me if I ask very basic questions.
My doubts are:

  • How do I run inference on my images using the .pth file?

  • What will the output of the inference be?

Do you know how the .pth file was created?
I.e. does it contain one of the following? (Rough loading sketches for each case are shown after the list.)

  • a state_dict? In that case you would need to get the source code of the model definition, create a model instance and load the state_dict. Once this is done, you can pass the input tensor to the model to get the output.
  • a directly stored model? In that case, you would still need to get the source files and make sure they are in the same locations as described in the serialization docs. Note that this approach is not recommended, as it may break in various ways.
  • a scripted model? In that case you could load the model via torch.jit.load and pass the input tensor to the model to get the output.
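As a minimal sketch of those three cases, assuming a hypothetical MyModel class and placeholder file names and input shape (I don't know your actual model definition):

    import torch
    # MyModel is a placeholder: you need the original model definition
    # (e.g. from the author's source code) to create an instance.
    from my_model_code import MyModel  # hypothetical module/class name

    # Case 1: the .pth holds a state_dict
    model = MyModel()
    state_dict = torch.load("model.pth", map_location="cpu")
    model.load_state_dict(state_dict)
    model.eval()

    # Case 2: the .pth holds the whole pickled model object;
    # the source files must still be importable from their original locations
    model = torch.load("model.pth", map_location="cpu")
    model.eval()

    # Case 3: the .pth holds a scripted (TorchScript) model
    model = torch.jit.load("model.pth", map_location="cpu")
    model.eval()

    # In all three cases inference is a forward pass; the input shape
    # below is a dummy example and depends on the actual model
    with torch.no_grad():
        output = model(torch.randn(1, 3, 224, 224))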

I did not create the model; one of our ex-employees created it using YOLCAT++. Is this enough info?

Not really, as I don’t know how the model was stored based on this search term.

I am using the PyTorch object detection tutorial and have successfully started crafting a COCODataset… all is looking well. But now I want to save the model at the end of the training epochs and load it later in a subsequent scoring.py.

I see your advice and am attempting to save/load like so:

Training.py

    for epoch in range(num_epochs):
        # train for one epoch, printing every 10 iterations
        train_one_epoch(model, optimizer, data_loader, device, epoch, print_freq=10)
        # update the learning rate
        lr_scheduler.step()
        # evaluate on the test dataset
        evaluate(model, data_loader_test, device=device)

    print("That's it!")
    torch.save(model.state_dict(), "/somepath/faster_r_cnn.pth")

Scoring.py

import torch
from torch.utils.data import Dataset
from torchvision import transforms
from torchvision.datasets.folder import default_loader

from train import get_model_instance_segmentation


class ImageDataset(Dataset):
    def __init__(self, paths, transform=None):
        self.paths = paths
        self.transform = transform

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, index):
        image = default_loader(self.paths[index])
        if self.transform is not None:
            image = self.transform(image)
        return image


def score():
    # train on the GPU or on the CPU, if a GPU is not available
    device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')

    num_classes = 2

    model = get_model_instance_segmentation(num_classes, has_mask=False)

    path_to_model = "/mymodelfromtraining.pth"
    model = model.load_state_dict(torch.load(path_to_model))
    model.eval()
    model.to(device)

    transform = transforms.Compose([
        # transforms.Resize(224),
        # transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225])
    ])

    paths = ['/imagetoscore_01.png',
             '/imagetoscore_02.png']

    images = ImageDataset(paths, transform=transform)
    loader = torch.utils.data.DataLoader(images, batch_size=1, num_workers=1)

    all_predictions = []
    with torch.no_grad():
        for batch in loader:
            predictions = list(model(batch.to(device)).numpy())
            for prediction in predictions:
                all_predictions.append(prediction)

    #  Take predictions and masks and draw on the images


if __name__ == "__main__":
    score()

But this just crashes…

score.py
Traceback (most recent call last):
  File "/home/score.py", line 64, in <module>
    score()
  File "/home/score.py", line 32, in score
    model.eval()
AttributeError: '_IncompatibleKeys' object has no attribute 'eval'

EDIT:

Aahhh, I cracked it… I should not assign the return value of load_state_dict back to the model variable (it returns an _IncompatibleKeys object, not the model); instead I should simply call it like so:

    state_dict = torch.load(path_to_model)
    model.load_state_dict(state_dict)
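
For completeness, a minimal sketch of the corrected load-and-predict flow, reusing the names from Scoring.py above and assuming the model is a torchvision Faster R-CNN-style detector as in the tutorial (the 0.5 score threshold is just an illustrative value):

    model = get_model_instance_segmentation(num_classes, has_mask=False)
    state_dict = torch.load(path_to_model, map_location=device)
    model.load_state_dict(state_dict)  # returns _IncompatibleKeys, not the model
    model.to(device)
    model.eval()

    all_predictions = []
    with torch.no_grad():
        for batch in loader:
            # torchvision detection models take a list of 3D image tensors and
            # return one dict per image with 'boxes', 'labels' and 'scores'
            outputs = model([image.to(device) for image in batch])
            for output in outputs:
                boxes = output['boxes'].cpu().numpy()
                scores = output['scores'].cpu().numpy()
                all_predictions.append(boxes[scores > 0.5])  # illustrative threshold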