AMP mixed precision in custom module: RuntimeError: hook '<lambda>' has changed the type of value

I have a module that uses backward hooks, buffers, and parameters, and it works properly with either float or half tensors when the dtype is specified explicitly. I am now using it with AMP mixed precision for the first time and it raises the error below, but I don't know what to do to resolve it. I have set torch.autograd.set_detect_anomaly(True), which produces the following trace (a minimal reproduction follows after it):

ENVdavit/lib/python3.12/site-packages/torch/autograd/graph.py:823: UserWarning: 
Error detected in SigmoidBackward0. Traceback of forward call that caused the error:
  File
......
  File "myModule.py", line 878, in forward
    Outs[i] = torch.sigmoid(earlierOuts[i]).to(device)
 (Triggered internally at /pytorch/torch/csrc/autograd/python_anomaly_mode.cpp:122.)
  return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
Traceback (most recent call last):
  File line 827, in train_one_epoch
    loss_scaler(
  File "davit/timm/utils/cuda.py", line 43, in __call__
    self._scaler.scale(loss).backward(create_graph=create_graph)
  File "ENVdavit/lib/python3.12/site-packages/torch/_tensor.py", line 626, in backward
    torch.autograd.backward(
  File "ENVdavit/lib/python3.12/site-packages/torch/autograd/__init__.py", line 347, in backward
    _engine_run_backward(
  File "ENVdavit/lib/python3.12/site-packages/torch/autograd/graph.py", line 823, in _engine_run_backward
    return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: hook '<lambda>' has changed the type of value (was torch.cuda.HalfTensor got torch.cuda.FloatTensor)
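
Here is a minimal sketch that reproduces the same error for me (illustrative names, simplified from my module). Under autocast the intermediate tensor is float16, so the gradient arriving at its hook is float16; a backward hook that returns a float32 tensor changes the gradient's type and trips the engine's check:

```python
import torch

x = torch.randn(4, 4, device="cuda", requires_grad=True)
scaler = torch.amp.GradScaler("cuda")

with torch.autocast("cuda", dtype=torch.float16):
    y = torch.sigmoid(x @ x)                    # y is float16 under autocast
    y.register_hook(lambda grad: grad.float())  # returns Float for a Half grad
    loss = y.sum()

scaler.scale(loss).backward()
# RuntimeError: hook '<lambda>' has changed the type of value
# (was torch.cuda.HalfTensor got torch.cuda.FloatTensor)
```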
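If I read the message correctly, a backward hook must return a gradient with the same dtype it received, and under AMP that dtype can be torch.float16 even though the module's parameters are float32. Is casting the hook's result back to the incoming dtype, as in this sketch (my_transform and out are placeholders for my actual hook logic and tensor), the right fix, or is there a cleaner way to make such hooks AMP-safe?

```python
def amp_safe_hook(grad):
    # Do the hook's actual work in float32 for numerical safety
    # (my_transform is a hypothetical stand-in for that work)...
    new_grad = my_transform(grad.float())
    # ...then hand back a tensor with the dtype the engine gave us,
    # so AMP's Half gradients stay Half.
    return new_grad.to(grad.dtype)

out.register_hook(amp_safe_hook)  # 'out' stands in for my module's tensor
```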