[jit] bad constant exponent (e+38.f) in default_program fused_mul_div_add

Hi - I have been using DJL (GitHub - deepjavalibrary/djl: An Engine-Agnostic Deep Learning Framework in Java) with PyTorch recently, and hit this bug: [nightly][jit] bad constant exponent (e+38.f) in default_program fused_mul_div_add · Issue #107503 · pytorch/pytorch · GitHub.

This seems to be a regression introduced after PyTorch 2.0.1, at least on Linux; on Windows I found it was already an issue in 2.0.1.

The response on the bug report indicates that TorchScript is in "maintenance mode" and that this is not considered a problem. I would argue that a regression should be fixed as a priority even in maintenance mode: this used to work, and it affects downstream projects like DJL.

Is there a way for this issue to be given priority, please? It effectively blocks DJL users from upgrading to more recent versions of PyTorch on Linux, and Windows appears to be broken even on 2.0.1.

The root cause seems to be how floating-point constants are written into the generated fused-kernel source: they are emitted in a format the kernel compiler rejects.
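To illustrate what the `e+38.f` in the issue title likely means (this is my reading of the error, not a confirmed analysis of the PyTorch codegen): in C/CUDA source, a single-precision suffix must follow the exponent directly, so `3.4028235e+38f` is a valid literal while `3.4028235e+38.f`, with a stray `.` between the exponent and the suffix, is not. A minimal sketch using a simplified regex for the C decimal floating-literal grammar (hexadecimal float literals are not covered):

```python
import re

# Approximate grammar for a C/CUDA decimal floating literal:
# digits [ . digits ] [ e|E [+|-] digits ] [ f|F ]
# The suffix, if present, must come directly after the exponent.
C_FLOAT_LITERAL = re.compile(r"^(\d+\.?\d*|\.\d+)([eE][+-]?\d+)?[fF]?$")

def is_valid_c_float(lit: str) -> bool:
    """Return True if `lit` parses as a (simplified) C float literal."""
    return C_FLOAT_LITERAL.match(lit) is not None

# A FLT_MAX-sized constant with the suffix in the right place is fine:
assert is_valid_c_float("3.4028235e+38f")
# ...but an extra '.' between the exponent and the suffix is rejected,
# which matches the "bad constant exponent (e+38.f)" in the issue title:
assert not is_valid_c_float("3.4028235e+38.f")
```

If the fuser emits constants in the second form, any kernel containing them would fail to compile, which would explain the error surfacing in `default_program fused_mul_div_add`.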

Many thanks in advance for any help provided…