torch._dynamo.exc.Unsupported: Tensor.backward

Hi! When I saw the PT 2.0 announcement blog post, I got really interested in Dynamo. I installed the nightlies using pip3 install numpy --pre torch torchvision torchaudio --force-reinstall --extra-index-url https://download.pytorch.org/whl/nightly/cpu and I'm following the tutorial here: https://pytorch.org/tutorials/intermediate/dynamo_tutorial.html

My goal is to export the ATen graph for the function below using Dynamo, but I get an error. Am I doing something wrong?

>>> def train(model, data):
...     pred = model(data[0])
...     loss = nn.CrossEntropyLoss()(pred, data[1])
...     loss.backward()
...     return loss
...
>>> train_exp = dynamo.export(train, model, generate_data(16), aten_graph=True)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/david/projects/ml-compiler-talk/pytorch-examples/pytorch-2.0/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 622, in export
    result_traced = opt_f(*args, **kwargs)
  File "/home/david/projects/ml-compiler-talk/pytorch-examples/pytorch-2.0/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 212, in _fn
    return fn(*args, **kwargs)
  File "/home/david/projects/ml-compiler-talk/pytorch-examples/pytorch-2.0/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 333, in catch_errors
    return callback(frame, cache_size, hooks)
  File "/home/david/projects/ml-compiler-talk/pytorch-examples/pytorch-2.0/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 103, in _fn
    return fn(*args, **kwargs)
  File "/home/david/projects/ml-compiler-talk/pytorch-examples/pytorch-2.0/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 88, in time_wrapper
    r = func(*args, **kwargs)
  File "/home/david/projects/ml-compiler-talk/pytorch-examples/pytorch-2.0/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 339, in _convert_frame_assert
    return _compile(
  File "/home/david/projects/ml-compiler-talk/pytorch-examples/pytorch-2.0/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 400, in _compile
    out_code = transform_code_object(code, transform)
  File "/home/david/projects/ml-compiler-talk/pytorch-examples/pytorch-2.0/lib/python3.10/site-packages/torch/_dynamo/bytecode_transformation.py", line 341, in transform_code_object
    transformations(instructions, code_options)
  File "/home/david/projects/ml-compiler-talk/pytorch-examples/pytorch-2.0/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 387, in transform
    tracer.run()
  File "/home/david/projects/ml-compiler-talk/pytorch-examples/pytorch-2.0/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1684, in run
    super().run()
  File "/home/david/projects/ml-compiler-talk/pytorch-examples/pytorch-2.0/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 538, in run
    and self.step()
  File "/home/david/projects/ml-compiler-talk/pytorch-examples/pytorch-2.0/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 501, in step
    getattr(self, inst.opname)(inst)
  File "/home/david/projects/ml-compiler-talk/pytorch-examples/pytorch-2.0/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 307, in wrapper
    return inner_fn(self, inst)
  File "/home/david/projects/ml-compiler-talk/pytorch-examples/pytorch-2.0/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 966, in CALL_FUNCTION
    self.call_function(fn, args, {})
  File "/home/david/projects/ml-compiler-talk/pytorch-examples/pytorch-2.0/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 435, in call_function
    self.push(fn.call_function(self, args, kwargs))
  File "/home/david/projects/ml-compiler-talk/pytorch-examples/pytorch-2.0/lib/python3.10/site-packages/torch/_dynamo/variables/misc.py", line 654, in call_function
    return self.obj.call_method(tx, self.name, args, kwargs).add_options(self)
  File "/home/david/projects/ml-compiler-talk/pytorch-examples/pytorch-2.0/lib/python3.10/site-packages/torch/_dynamo/variables/tensor.py", line 266, in call_method
    unimplemented(f"Tensor.{name}")
  File "/home/david/projects/ml-compiler-talk/pytorch-examples/pytorch-2.0/lib/python3.10/site-packages/torch/_dynamo/exc.py", line 67, in unimplemented
    raise Unsupported(msg)
torch._dynamo.exc.Unsupported: Tensor.backward

Could you post a minimal, executable code snippet to reproduce the issue, please?

Sure, thanks for looking!

$ python3
import torch
import torch._dynamo as dynamo
from torch import nn

class Simple(nn.Module):
    def __init__(self, H=28, W=28, C=10):
        super(Simple, self).__init__()
        self.linear = nn.Linear(H*W, C)
    def forward(self, x):
        x = torch.flatten(x, start_dim=1)
        x = self.linear(x)
        return nn.functional.relu(x)

def generate_data(b):
    return (torch.randn(b,28,28).to(torch.float32), torch.randint(10, (b,)))

def train(model, data):
    pred = model(data[0])
    loss = nn.CrossEntropyLoss()(pred, data[1])
    loss.backward()
    return loss

model = Simple()
model_exp = dynamo.export(train, model, generate_data(16), aten_graph=True)

Thanks for the code snippet!
This seems to be a known limitation, as seen here. I couldn't find a tracking issue for its support on GitHub, so I don't know its current status.
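
One possible workaround, if the forward/loss graph is enough for your use case (just a sketch reusing your repro, not an official recommendation): keep loss.backward() outside the exported function, since export only trips on that call. forward_only below is simply your train function minus the backward:

def forward_only(model, data):
    # same as train(), but without the backward call
    pred = model(data[0])
    return nn.CrossEntropyLoss()(pred, data[1])

# exports only the forward + loss computation as an ATen graph
graph_exp = dynamo.export(forward_only, model, generate_data(16), aten_graph=True)

# the backward pass then runs eagerly, outside the captured graph
loss = forward_only(model, generate_data(16))
loss.backward()

This of course only captures the forward half, not the training graph.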

Okay, makes sense. Is there any other currently supported way to export the ATen training graph in Dynamo (i.e., when there is a backward pass)?
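
The closest thing I've found so far is AOTAutograd from functorch, which as far as I understand can capture the forward and backward ATen graphs of a module. A sketch based on my reading of the functorch.compile docs (print_graph is just a hypothetical name for the compiler callback; I believe both graphs are traced around the first call):

from functorch.compile import aot_module

def print_graph(gm, example_inputs):
    # gm is an ATen-level torch.fx.GraphModule
    print(gm.graph)
    return gm

compiled_model = aot_module(model, fw_compiler=print_graph, bw_compiler=print_graph)
x, y = generate_data(16)
loss = nn.CrossEntropyLoss()(compiled_model(x), y)
loss.backward()  # grads flow through the compiled backward graph

Note that the loss computation stays eager in this sketch, and I don't know whether this is the recommended path.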

Hey, I get this error as well, but for numpy calls and with Inductor as the backend compiler. Is there a tracking issue or workaround for this by any chance?

Using numpy works for me if I disable the backward call and otherwise reuse the same code:

import numpy as np
import torch
import torch._dynamo as dynamo
from torch import nn

class Simple(nn.Module):
    def __init__(self, H=28, W=28, C=10):
        super(Simple, self).__init__()
        self.linear = nn.Linear(H*W, C)
    def forward(self, x):
        x = torch.flatten(x, start_dim=1)
        x = self.linear(x)
        x = np.exp(x.detach().cpu().numpy())
        return nn.functional.relu(torch.from_numpy(x))

def generate_data(b):
    return (torch.randn(b,28,28).to(torch.float32), torch.randint(10, (b,)))

def train(model, data):
    pred = model(data[0])
    loss = nn.CrossEntropyLoss()(pred, data[1])
    #loss.backward()
    return loss

model = Simple()
model_exp = dynamo.export(train, model, generate_data(16), aten_graph=True)
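
For what it's worth, the error above shows up because dynamo.export needs to capture the whole function as a single graph. If you compile with torch.compile instead, I'd expect Dynamo to insert a graph break at loss.backward() and run that part eagerly rather than raise. Untested sketch:

compiled_train = torch.compile(train)  # inductor is the default backend
loss = compiled_train(model, generate_data(16))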