[PyTorch 2.7.0] Custom privateuse1 FakeTensor add + float scalar triggers AttributeError: 'float' object has no attribute 'item_memo'

I’m working with PyTorch 2.7.0 and have registered a custom device under the PrivateUse1 dispatch key, implementing both the real add.Tensor kernel and its fake/meta counterpart. I have also implemented real and fake versions of add.Scalar, but in practice add.Scalar is never invoked when a float scalar is added. When I add a Python float (e.g. 1e-6) to a tensor on this device under Dynamo/FX FakeTensor tracing, I hit an AttributeError saying 'float' object has no attribute 'item_memo'. Adding the value as a 0-d tensor on the same device works fine, and cpu + float also behaves as expected.


Environment

  • PyTorch: 2.7.0
  • Python: 3.10
  • OS: Ubuntu 20.04

Implementation & Reproduction Code

import torch

# 1) Registered the custom device 'privateuse1'
#    – Implemented and registered real-kernel add.Tensor
#    – Implemented and registered fake/meta add.Tensor
#    – Implemented and registered real-kernel add.Scalar
#    – Implemented and registered fake/meta add.Scalar
#
#    Despite this, when performing x + float, only add.Tensor fake runs
#    (add.Scalar is not called at all)

x = torch.tensor([1, 2, 3], device='privateuse1')  # torch.tensor(), since the legacy torch.Tensor constructor rejects non-CPU devices

@torch.compile
def f(t):
    return t + 1e-6

y = f(x)  # fails while Dynamo traces the add with FakeTensors
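
For reference, the kernel registrations are done along these lines (a simplified sketch using the torch.library route; the actual backend also registers the device module, allocator, and more ops, and the kernel bodies below are placeholders, not my real code):

import torch

lib = torch.library.Library("aten", "IMPL")

def add_tensor_impl(self, other, alpha=1):
    # real PrivateUse1 kernel for add.Tensor (placeholder body)
    ...

def add_scalar_impl(self, other, alpha=1):
    # real PrivateUse1 kernel for add.Scalar (placeholder body)
    ...

lib.impl("add.Tensor", add_tensor_impl, "PrivateUse1")
lib.impl("add.Scalar", add_scalar_impl, "PrivateUse1")
# fake/meta counterparts are registered analogously for the tracing path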

Error Message

torch._dynamo.exc.TorchRuntimeError: Dynamo failed to run FX node with fake tensors:
  call_function <built-in function add>(
    *(FakeTensor(..., device='privateuse1:0', size=(3,)), 1e-06), **{}
  ): got AttributeError("'float' object has no attribute 'item_memo'")
  • Adding the same value as a 0-d tensor instead of a Python float succeeds without error:

x + torch.tensor(1e-6, device='privateuse1')

  • On the cpu device, x_cpu + 1e-6 also works normally.
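
Putting the cases side by side under torch.compile (same setup as above; illustrative only):

@torch.compile
def add_wrapped(t):
    return t + torch.tensor(1e-6, device=t.device)

@torch.compile
def add_float(t):
    return t + 1e-6

add_wrapped(x)                          # OK on privateuse1
add_float(torch.tensor([1., 2., 3.]))   # OK on cpu
add_float(x)                            # AttributeError: ... 'item_memo'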

Troubleshooting Attempts

  1. Auto-wrapping Python floats into a 0-d tensor inside the fake add.Tensor implementation (see the sketch after this list)
  2. Adding an item_memo attribute to the FakeTensor class so floats would carry a memo entry
  3. Disabling Dynamo/FX (pure Python fall-back) to isolate FakeTensor issues
  4. Comparing with other community reports (e.g., FakeTensor errors when prototyping backends)
  5. Reviewing custom-device extension PRs for PrivateUse1
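
For attempt 1, the fake add.Tensor looked roughly like this (simplified; the names and the metadata computation are illustrative):

def add_tensor_fake(self, other, alpha=1):
    # promote Python scalars to 0-d tensors before computing output metadata,
    # hoping to sidestep the float-specific code path
    if isinstance(other, (bool, int, float)):
        other = torch.scalar_tensor(other, dtype=self.dtype, device=self.device)
    return torch.empty(
        torch.broadcast_shapes(self.shape, other.shape),
        dtype=torch.promote_types(self.dtype, other.dtype),
        device=self.device,
    )

This did not change the error, which suggests the failure occurs before my fake kernel is ever reached.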

Questions

  1. Why is add.Scalar never invoked when adding a Python float scalar under FakeTensor/Dynamo mode?
  2. What specifically triggers the item_memo AttributeError in this scenario?
  3. Which methods or attributes must a FakeTensor (or its mode) implement to support scalar floats?
  4. Are there any official guides, examples, or reference implementations for handling scalar inputs in custom FakeTensor kernels?

Thank you for your insights!