Trying to use jit.trace results in "AttributeError: 'FaceAlignment' object has no attribute '__name__'"

Hello everyone,
I’m trying to cache model weights, but it fails on one of my models. For caching I’m simply running this snippet:

fname = 'cached_weights_s3fd.pt'
test_imgs = Detector.gen_fake_img(batch=1)
traced = torch.jit.trace(self.detector, test_imgs, check_trace=False)
traced.save(fname)  # save() writes to disk and returns None
self.detector = torch.jit.load(fname)

and the full stacktrace is as follows:

Traceback (most recent call last):
  File "d:\Codes\Pytorch_Retinaface\detection_core\main_detector.py", line 2074, in <module>
    run_test()
  File "d:\Codes\Pytorch_Retinaface\detection_core\main_detector.py", line 2065, in run_test
    run_capture_sfd()
  File "d:\Codes\Pytorch_Retinaface\detection_core\main_detector.py", line 2002, in run_capture_sfd
    after_detection_fn=None)
  File "Pytorch\detection_core\main_detector.py", line 1523, in __init__
    self._init()
  File "Pytorch\detection_core\main_detector.py", line 1536, in _init
    traced = torch.jit.trace(self.detector, test_imgs, check_trace=False)
  File "C:\Users\User\Anaconda3\lib\site-packages\torch\jit\__init__.py", line 911, in trace
    name = _qualified_name(func)
  File "C:\Users\User\Anaconda3\lib\site-packages\torch\_jit_internal.py", line 683, in _qualified_name
    name = obj.__name__
AttributeError: 'FaceAlignment' object has no attribute '__name__'

What’s wrong?
As far as I know, every class has a __name__ attribute, including the class mentioned in the error (its name is ‘FaceAlignment’, obviously).
So I’m not sure why I’m getting this. Any help is greatly appreciated.
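One thing I noticed while poking around: in plain Python, __name__ lives on the class itself, not on its instances. A quick check (the class here is just a minimal stand-in, not the real face_alignment one):

```python
class FaceAlignment:  # minimal stand-in, not the real face_alignment class
    pass

# The class itself carries __name__:
print(FaceAlignment.__name__)                # prints: FaceAlignment

# A plain instance does not:
print(hasattr(FaceAlignment(), '__name__'))  # prints: False
```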

Upon further investigation, it turns out that in torch\_jit_internal.py, line 684, the object causing the error is

Object is type: <class 'function'> and the object itself is : <function LSTM.forward at 0x00000214C495D048>

The irony is, I do not have any LSTM cell in my model!
Why am I seeing this?

Thanks for doing some investigation! Are you able to share the binary for cached_weights_s3fd.pt so we can try to debug this on our end?

I think what you’re seeing here is some initialization the jit does when you first call torch.jit.script or torch.jit.trace. We have to grab the qualified names for a few modules that have overloaded forward methods (only nn.LSTM and nn.GRU), which is what you’re seeing. You could try setting a breakpoint and skipping past these calls to get to the point where trace is actually invoked with your class.
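More generally, torch.jit.trace expects an nn.Module (or a function); handing it an arbitrary callable object sends it down the function path, where it looks up __name__. A minimal sketch of that failure mode (the Wrapper class is a hypothetical stand-in, not the real FaceAlignment):

```python
import torch
import torch.nn as nn

class Wrapper:  # hypothetical stand-in: callable, but NOT an nn.Module
    def __init__(self):
        self.net = nn.Linear(4, 2)

    def __call__(self, x):
        return self.net(x)

x = torch.randn(1, 4)

# Tracing an actual nn.Module works fine
traced = torch.jit.trace(nn.Linear(4, 2), x)

# Tracing the plain wrapper instance fails: trace treats it as a function
# and asks for its __name__, which instances do not have. Depending on the
# PyTorch version this surfaces as an AttributeError or a RuntimeError.
err = None
try:
    torch.jit.trace(Wrapper(), x)
except (AttributeError, RuntimeError) as e:
    err = e
print(type(err).__name__)
```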

The cached weights are never generated; it fails before that.
You can reproduce the issue by cloning this repo: https://github.com/1adrianb/face-alignment
and running this snippet:

import face_alignment
import torch
import torchvision as tv

def gen_fake_img(samples=1000, batch=10, image_size=(3, 224, 224), num_classes=2):
    fake_dt = tv.datasets.FakeData(samples, image_size, num_classes,
                                   tv.transforms.ToTensor())
    dt_ldr = torch.utils.data.DataLoader(fake_dt, batch, pin_memory=True)
    sample_imgs, _ = next(iter(dt_ldr))
    return sample_imgs

detector = face_alignment.FaceAlignment(face_alignment.LandmarksType._2D, device='cpu')

fname = 'cached_weights_s3fd.pt'
test_imgs = gen_fake_img(batch=1)
traced = torch.jit.trace(detector, test_imgs, check_trace=False)
traced.save(fname)  # save() writes to disk and returns None
detector = torch.jit.load(fname)

detector.get_landmarks('test/assets/aflw-test.jpg')

Found the cause: I was passing the wrong model!
It’s all fixed and fine now!
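For anyone hitting the same traceback later: the trace/save/load round trip itself works as soon as an actual nn.Module is passed in. A self-contained sketch with a stand-in network (the real s3fd detector isn’t needed to show the pattern):

```python
import torch
import torch.nn as nn

# Stand-in network; any nn.Module follows the same caching pattern
model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU(),
                      nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2))
model.eval()

fname = 'cached_weights_demo.pt'
test_imgs = torch.randn(1, 3, 32, 32)

traced = torch.jit.trace(model, test_imgs, check_trace=False)
traced.save(fname)           # save() writes the archive and returns None
loaded = torch.jit.load(fname)

# The reloaded ScriptModule matches the original's output
print(torch.allclose(model(test_imgs), loaded(test_imgs)))
```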
