Imagenet classes

I am trying to use a pretrained ResNet model to test on an elephant image. How do we get the class name after getting the class id? Also, I am not sure I am doing the preprocessing correctly. Is this the right approach?

import torch
import torchvision.transforms as transforms
from torch.autograd import Variable
from torchvision.models import resnet50
from PIL import Image

net = resnet50(pretrained=True)
centre_crop = transforms.Compose([
    transforms.Scale(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
])
img = Image.open('elephant.jpg')
out = net(Variable(centre_crop(img).unsqueeze(0)))
print(out[0].sort()[1][-10:])

I found a map of id -> label https://gist.github.com/yrevar/942d3a0ac09ec9e5eb3a .
So for example

with open("imagenet1000_clsid_to_human.txt") as f:
    idx2label = eval(f.read())

for idx in out[0].sort()[1][-10:]:
    print(idx2label[idx.item()])

will work, though eval may not be the best way to do it.
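
If you want to avoid eval, ast.literal_eval parses the same dict literal safely. A minimal sketch, assuming the same gist file and the out tensor from above:

import ast

with open("imagenet1000_clsid_to_human.txt") as f:
    idx2label = ast.literal_eval(f.read())  # dict of int -> human-readable label

for idx in out[0].sort()[1][-10:]:
    print(idx2label[idx.item()])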


After downloading the file at this URL, used by Keras:
https://s3.amazonaws.com/deep-learning-models/image-models/imagenet_class_index.json

you could use:

import json
class_idx = json.load("imagenet_class_index.json")
idx2label = [class_idx[str(k)][1] for k in range(len(class_idx))]

for idx in out[0].sort()[1][-10:]:
    print(idx2label[idx])

Best regards

Thomas


@tom, thanks!
A tiny update :slight_smile:

import json
class_idx = json.load(open("imagenet_class_index.json"))

This works:

import json
idx2label = []
cls2label = {}
with open("../../data/imagenet_class_index.json", "r") as read_file:
    class_idx = json.load(read_file)
    idx2label = [class_idx[str(k)][1] for k in range(len(class_idx))]
    cls2label = {class_idx[str(k)][0]: class_idx[str(k)][1] for k in range(len(class_idx))}
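
A quick usage sketch (assuming out still holds the logits from the first post):

top5 = out[0].topk(5).indices.tolist()   # highest-scoring class ids
print([idx2label[i] for i in top5])      # human-readable labels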

I want to know where these imagenet_class_index.json files originate from and how to generate them. I find it frustrating that the ONNX model zoo does not publish these artifacts, and I find little documentation on how to create them myself from scratch.

I need to know how to generate them because I want to serve the model, and I currently see two options for this purpose. The first states it needs the index_to_name.json file, and the second needs the model definition file; in the case of the second, it also requires a signature.json file as input. This is all to convert an ONNX model from the model zoo to .MAR format for the model server.

1. PTH to MAR format using TorchServe

Pre-requisites to create a torch model archive (.mar):

serialized-file (.pt) : This file represents the state_dict in case of eager mode model.

model-file (.py) : This file contains model class extended from torch nn.modules representing the model architecture. This parameter is mandatory for eager mode models. This file must contain only one class definition extended from torch.nn.modules

index_to_name.json : This file contains the mapping of predicted index to class. The default TorchServe handlers return the predicted index and probability. This file can be passed to the model archiver using the --extra-files parameter.

version : Model’s version.

handler : TorchServe default handler's name or path to a custom inference handler (.py)

PTH to MAR format (TorchServe)

2. ONNX to MAR format using Multi-Model-Server

The downloaded model artifact files are:

Model Definition (json file) - contains the layers and overall structure of the neural network.

Model Params and Weights (params file) - contains the parameters and the weights.

Model Signature (json file) - defines the inputs and outputs that MMS is expecting to hand-off to the API.

assets (text files) - auxiliary files that support model inference such as vocabularies, labels, etc. These vary depending on the model.

ONNX to MAR format (Multi-Model-Server)
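
The closest I have come is deriving a mapping myself from the Keras imagenet_class_index.json linked above, something like this (just a sketch; I am not sure it matches the structure the default handlers expect):

import json

with open("imagenet_class_index.json") as f:
    class_idx = json.load(f)  # {"0": ["n01440764", "tench"], ...}

# Assumption: the serving handler wants a flat {index: label} mapping.
# If it expects the original {index: [wnid, label]} structure instead,
# the downloaded file can be passed through unchanged.
index_to_name = {k: v[1] for k, v in class_idx.items()}

with open("index_to_name.json", "w") as f:
    json.dump(index_to_name, f, indent=2)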

Hi Thomas,

I got this error:

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
Input In [47], in <module>
      1 import json
----> 2 class_idx = json.load("imagenet_class_index.json")
      3 idx2label = [class_idx[str(k)][1] for k in range(len(class_idx))]
      5 for idx in out[0].sort()[1][-10:]:

File /usr/lib/python3.8/json/__init__.py:293, in load(fp, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw)
    274 def load(fp, *, cls=None, object_hook=None, parse_float=None,
    275         parse_int=None, parse_constant=None, object_pairs_hook=None, **kw):
    276     """Deserialize ``fp`` (a ``.read()``-supporting file-like object containing
    277     a JSON document) to a Python object.
    278 
   (...)
    291     kwarg; otherwise ``JSONDecoder`` is used.
    292     """
--> 293     return loads(fp.read(),
    294         cls=cls, object_hook=object_hook,
    295         parse_float=parse_float, parse_int=parse_int,
    296         parse_constant=parse_constant, object_pairs_hook=object_pairs_hook, **kw)

AttributeError: 'str' object has no attribute 'read'
import json
class_idx = json.load("imagenet_class_index.json")
idx2label = [class_idx[str(k)][1] for k in range(len(class_idx))]

for idx in out[0].sort()[1][-10:]:
    print(idx2label[idx])

for this input:

! wget  'https://www.freeiconspng.com/thumbs/cat-png/cat-png-26.png' 
from PIL import Image

cat_img = Image.open('cat-png-26.png')
cat_img_preprocessed = test_transforms(cat_img)
batch_img_cat_tensor = torch.unsqueeze(cat_img_preprocessed, 0)
out = model_ft(batch_img_cat_tensor.cuda())

I have this when I print out:

That seems like a more general Python question and I'm not the best person for it, but the error is that json.load expects an open file object instead of a filename.
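
i.e., passing an open file object rather than the filename string should fix it:

import json

with open("imagenet_class_index.json") as f:
    class_idx = json.load(f)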

Best regards

Thomas

I am surprised PyTorch doesn't already have a dictionary from outputs to classes built in. Each image classification model should ship with such a dictionary.

I came across this same problem! This page in the docs was really helpful: Models and pre-trained weights — Torchvision main documentation

The resnet50(pretrained=True) call is deprecated in the latest version (v0.13 at the time of writing) and can be replaced by an imported weights object:

from torchvision.models import resnet50, ResNet50_Weights

weights = ResNet50_Weights.DEFAULT
net = resnet50(weights=weights)

The preprocessing transforms and categories are bundled into the weights object:

centre_crop = weights.transforms()
categories = weights.meta["categories"]
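
Putting it together with the original elephant example, a minimal sketch (assuming elephant.jpg is in the working directory):

import torch
from PIL import Image
from torchvision.models import resnet50, ResNet50_Weights

weights = ResNet50_Weights.DEFAULT
net = resnet50(weights=weights)
net.eval()

preprocess = weights.transforms()          # bundled inference transforms
categories = weights.meta["categories"]    # 1000 ImageNet class names

img = Image.open("elephant.jpg").convert("RGB")
batch = preprocess(img).unsqueeze(0)

with torch.no_grad():
    probs = net(batch).softmax(dim=1)

top5_prob, top5_idx = probs[0].topk(5)
for p, i in zip(top5_prob, top5_idx):
    print(f"{categories[i]}: {p.item():.3f}")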

Hopefully this is useful! :slight_smile:


Yes, it is, thanks for sharing it.

Does it really work? Can anybody also help me with this?

Further up in the thread @tom gives a link to the S3 bucket used by Keras that is still active. I used that to retrieve the class names with this code snippet (I have omitted the preprocessing logic):

import json

with open('imagenet_class_index.json', 'r') as f:
    idx_to_class = json.load(f)


for i, (prob, idx) in enumerate(zip(top5_probabilities[0], top5_class_indices[0])):
    class_name = idx_to_class[str(idx.item())][1]  # Convert index to string, get class name
    print(f"Rank {i+1}: {class_name} with probability {prob.item():.2f}%")