When exporting an arbitrary model, how can one know the model has been fully decomposed into Core ATen ops?

# for example, model = transformers.MixtralForCausalLM
import torch
from torch.export import export
from transformers import AutoTokenizer

curr_model = model(config=model.config_class())

# generate prompt tokens
tokenizer = AutoTokenizer.from_pretrained(model_config_dict[model][1])
inputs = tokenizer(input_str, return_tensors="pt")

# configure the export inputs
example_args = ()
example_kwargs = {
    'input_ids': inputs['input_ids'],
    'attention_mask': inputs['attention_mask'],
}

# export the model
exported_program: torch.export.ExportedProgram = export(
    curr_model, args=example_args, kwargs=example_kwargs)

I can see core ATen ops in exported_program, but I am not sure whether the model has been fully decomposed. What should I be looking for? Thanks.

It should be fully decomposed when you call to_edge().


Thank you @Martin_Yuan. I'll give it a try. If a given model cannot be fully decomposed when to_edge() is called on it, will an error or a decomposition coverage report be surfaced to the user? I'm asking about the general feedback mechanism of export() and to_edge() regarding their decomposition coverage. Thanks!