Compiling misses "parent" vars? A limitation of inspect.getattr_static?

If "parent" vars isn't the right term, then replace it with the correct term in what follows.

M = <ldm.models.diffusion.ddpm.LatentDiffusion model>
print(f"xxx in dir: {'xxx' in dir(M)}")  # True
print(M.xxx)  # no problem

M = torch.compile(M)  # M is now a torch._dynamo.eval_frame.OptimizedModule
print(f"xxx in dir: {'xxx' in dir(M)}")  # False
print(M.xxx)  # still works, even though 'xxx' is no longer in dir(M)

Does torch.compile, which produces an OptimizedModule object, make this a subclass of the original LatentDiffusion model?
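A minimal pure-Python sketch of what may be happening, assuming the compiled object merely wraps the original module and forwards failed attribute lookups through a `__getattr__` hook (the `Wrapper` and `Original` classes below are hypothetical stand-ins, not the real OptimizedModule):

```python
class Original:
    xxx = 42  # attribute defined on the wrapped object


class Wrapper:
    """Hypothetical stand-in for a wrapper like OptimizedModule."""

    def __init__(self, inner):
        self._orig_mod = inner  # hold the wrapped object

    def __getattr__(self, name):
        # Called only when normal lookup fails; forward to the wrapped object.
        return getattr(self._orig_mod, name)


w = Wrapper(Original())
print('xxx' in dir(w))  # False: dir() does not report delegated attributes
print(w.xxx)            # 42: normal access still works via __getattr__
```

This would explain why `'xxx' in dir(M)` flips to False after compiling while `M.xxx` keeps working: `dir()` only sees the wrapper's own attributes, but dynamic lookup falls through to the wrapped model.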

In any case, the following then fails for the compiled model when using a compiled function:

print(f"M.xxx = {M.xxx}")  # still OK

@torch.compile
def _wacko(sdm):
    print(f"sdm.xxx = {sdm.xxx}")

_wacko(M)

This crashes with "AttributeError: xxx" and the following stack:
File "~/a1test/webui.py", line 111, in initialize
    modules.sd_models.load_model()
File "~/a1test/modules/sd_models.py", line 443, in load_model
    wacko(sd_model)
File "~/a1test/venv/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 209, in _fn
    return fn(*args, **kwargs)
File "~/a1test/venv/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 330, in catch_errors
    return callback(frame, cache_size, hooks)
File "~/a1test/venv/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 404, in _convert_frame
    result = inner_convert(frame, cache_size, hooks)
File "~/a1test/venv/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 104, in _fn
    return fn(*args, **kwargs)
File "~/a1test/venv/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 262, in _convert_frame_assert
    return _compile(
File "~/a1test/venv/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 163, in time_wrapper
    r = func(*args, **kwargs)
File "~/a1test/venv/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 394, in _compile
    raise InternalTorchDynamoError() from e
torch._dynamo.exc.InternalTorchDynamoError
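The title's guess about `inspect.getattr_static` fits the same mechanism: `getattr_static` deliberately bypasses `__getattr__` hooks, so on an object that only provides an attribute by delegation it raises AttributeError even though plain attribute access succeeds. A small illustration, again with hypothetical stand-in classes:

```python
import inspect


class Inner:
    xxx = 42  # the attribute lives on the wrapped object


class Delegating:
    """Hypothetical wrapper that only exposes xxx via delegation."""

    def __init__(self):
        self._orig_mod = Inner()

    def __getattr__(self, name):
        return getattr(self._orig_mod, name)


d = Delegating()
print(getattr(d, 'xxx'))              # 42: dynamic lookup triggers __getattr__
try:
    inspect.getattr_static(d, 'xxx')  # static lookup skips __getattr__ entirely
except AttributeError as exc:
    print(f"AttributeError: {exc}")   # mirrors the "AttributeError: xxx" above
```

If Dynamo inspects attributes statically while tracing the compiled function, a delegated attribute that works fine in eager code could surface as exactly this kind of AttributeError.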

Since an internal error is raised, could you please create an issue on GitHub so that we can track and fix it?

OK. I just wanted to make sure there was a better than 50/50 chance I wasn't making a mistake.

Even if your code were broken in some way, I wouldn't expect to see internal errors, so at the very least the error handling should be checked. Also, thanks for forwarding this issue in the first place.

The issue is now created: @torch.compile has problems accessing compiled model vars · Issue #94478 · pytorch/pytorch · GitHub
