Trying to use jit.trace results in "AttributeError: 'FaceAlignment' object has no attribute '__name__'"

Hello everyone,
I’m trying to cache my model’s weights, but it fails on one of my models. For caching I’m simply running this snippet:

fname = ''
test_imgs = Detector.gen_fake_img(batch=1)
traced = torch.jit.trace(self.detector, test_imgs, check_trace=False)
self.detector = torch.jit.load(fname)

and the full stacktrace is as follows:

Traceback (most recent call last):
  File "d:\Codes\Pytorch_Retinaface\detection_core\", line 2074, in <module>
  File "d:\Codes\Pytorch_Retinaface\detection_core\", line 2065, in run_test
  File "d:\Codes\Pytorch_Retinaface\detection_core\", line 2002, in run_capture_sfd
  File "Pytorch\detection_core\", line 1523, in __init__
  File "Pytorch\detection_core\", line 1536, in _init
    traced = torch.jit.trace(self.detector, test_imgs, check_trace=False)
  File "C:\Users\User\Anaconda3\lib\site-packages\torch\jit\", line 911, in trace
    name = _qualified_name(func)
  File "C:\Users\User\Anaconda3\lib\site-packages\torch\", line 683, in _qualified_name
    name = obj.__name__
AttributeError: 'FaceAlignment' object has no attribute '__name__'

What’s wrong?
As far as I know, all classes have a __name__ attribute, including the class mentioned in the error (the name is ‘FaceAlignment’, obviously).
So I’m not sure why I’m getting this. Any help is greatly appreciated.
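For context, a quick check with a stand-in class (not the real FaceAlignment) shows the difference between the class object and an instance of it, which seems to be what the error text hinges on:

```python
class FaceAlignment:  # stand-in class for illustration, not the real one
    pass

# The class object itself does have a __name__ ...
print(FaceAlignment.__name__)  # FaceAlignment

# ... but an *instance* of it does not, matching the error message:
obj = FaceAlignment()
print(hasattr(obj, "__name__"))  # False
```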

Upon further investigation, it turns out that in \torch\", line 684, the object causing the error is

Object is type: <class 'function'> and the object itself is : <function LSTM.forward at 0x00000214C495D048>

The irony is, I do not have any LSTM cell in my model!
Why am I seeing this?

Thanks for doing some investigation! Are you able to share the binary so we can try to debug this on our end?

I think what you’re seeing here is some initialization the jit does when you first call torch.jit.script or torch.jit.trace. We have to grab the qualified names for a few modules that have overloaded forward methods (only nn.LSTM and nn.GRU), which is what you’re seeing. You could try setting a breakpoint and skipping past these calls to get to the time it’s actually invoked with your class.
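A minimal sketch of why those warm-up lookups succeed for a function like LSTM.forward but fail for a plain object. This is a simplified mimic of the internal helper named in the traceback; the real one in torch does more, and the names below are for illustration only:

```python
# Simplified mimic of the _qualified_name helper from the traceback.
def qualified_name(obj):
    name = obj.__name__  # raises AttributeError for plain instances
    return "{}.{}".format(obj.__module__, name)

class FaceAlignment:  # stand-in class, not the real one
    def __call__(self, x):
        return x

# Plain functions (like LSTM.forward) carry a __name__, so the
# jit's initialization lookups go through fine:
def forward(x):
    return x

print(qualified_name(forward))  # e.g. __main__.forward

# A plain instance has no __name__, so handing one to the helper
# ends in exactly the AttributeError from the traceback:
try:
    qualified_name(FaceAlignment())
except AttributeError as e:
    print(e)  # ... object has no attribute '__name__'
```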

The cached weights are not generated; it fails before that.
You can replicate the issue by cloning this repo :
and running this snippet:

import face_alignment
import torch
import torchvision as tv

def gen_fake_img(samples=1000, batch=10, image_size=(3, 224, 224), num_classes=2):
    fake_dt = tv.datasets.FakeData(samples, image_size, num_classes,
                                   transform=tv.transforms.ToTensor())
    dt_ldr =, batch_size=batch, pin_memory=True)
    sample_imgs, _ = next(iter(dt_ldr))
    return sample_imgs

detector = face_alignment.FaceAlignment(face_alignment.LandmarksType._2D, device='cpu')

fname = ''
test_imgs = gen_fake_img(batch=1)
traced = torch.jit.trace(detector, test_imgs, check_trace=False)
detector = torch.jit.load(fname)


Found the cause: I was passing the wrong model!
It’s all fixed and fine now!
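In case anyone hits this later, here is a minimal sketch of that kind of fix. The `Wrapper` class and its `.net` attribute are made up for illustration; the idea is to trace the underlying nn.Module held inside the wrapper, not the wrapper object itself:

```python
import torch

# Hypothetical wrapper resembling FaceAlignment: a plain Python
# class that holds an nn.Module but is not one itself.
class Wrapper:
    def __init__(self):
        self.net = torch.nn.Conv2d(3, 8, kernel_size=3)

detector = Wrapper()
imgs = torch.randn(1, 3, 32, 32)

# torch.jit.trace(detector, imgs) fails with the __name__ error,
# because Wrapper is neither a function nor an nn.Module.
# Tracing the inner module works:
traced = torch.jit.trace(, imgs)
print(traced(imgs).shape)  # torch.Size([1, 8, 30, 30])
</imports>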


Hi @Shisho_Sama ,
I am also having the same error.
I used the same GitHub repo.

What changes did you make to the code to get it up and running?

I used the below code.

import torch
import torchvision
import face_alignment
from skimage import io

# Optionally set the detector and some additional detector parameters

face_detector = 'sfd'
face_detector_kwargs = {
    "filter_threshold": 0.8
}

model = face_alignment.FaceAlignment(face_alignment.LandmarksType._3D, device='cpu', flip_input=True, face_detector=face_detector)

example = io.imread('/test/assets/aflw-test.jpg')
traced_script_module = torch.jit.trace(model, example)"")