param.requires_grad_(b) may also work, but frankly this may not be an anticipated use case. Intuitively, you own tensors created inside functions and can toggle requires_grad on them freely, but parameters are owned by a module, so it is not obvious that toggling them from the inside works cleanly.
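For what it's worth, a minimal sketch of toggling a module's parameters in place (the helper name `set_requires_grad` is mine, not an established API); since `requires_grad_` mutates the existing `Parameter` rather than replacing it, the module keeps ownership throughout:

```python
import torch
import torch.nn as nn

def set_requires_grad(module, flag):
    # Toggle every parameter in place; the module still owns them.
    for p in module.parameters():
        p.requires_grad_(flag)

model = nn.Linear(4, 2)

set_requires_grad(model, False)   # "freeze"
assert not any(p.requires_grad for p in model.parameters())

set_requires_grad(model, True)    # "unfreeze"
assert all(p.requires_grad for p in model.parameters())
```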
As you said above, "trace only tracks tensor operations", which limits the Python code you can pass pretty severely. The "dangerous" part is the associated hardcoding of non-tensor values as constants, though at least this issues warnings.
Contrived example that fails (late) with jit.trace:

```python
k = x.size(0)                   # under trace, k is a plain Python int
ls = [x[i] for i in range(k)]   # the loop is unrolled with k frozen in
```
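To make the failure mode concrete, here is a runnable sketch (the function `sum_rows` is my own contrived wrapper, assuming the snippet above is stacked into a tensor so trace accepts the output): the loop bound from the example input gets baked into the graph, so the traced function silently produces the wrong shape for other batch sizes.

```python
import torch

def sum_rows(x):
    k = x.size(0)                  # becomes a Python int during tracing
    ls = [x[i] for i in range(k)]  # unrolled: exactly k selects are recorded
    return torch.stack(ls)

# Trace with a 3-row example input; k=3 is now a constant in the graph.
traced = torch.jit.trace(sum_rows, torch.randn(3, 2))

out = traced(torch.randn(5, 2))   # 5 rows in, but only 3 rows come out
print(out.shape[0])
```

Note the failure is "late": tracing itself succeeds, and nothing breaks until you call the traced function with a differently sized input and get a silently truncated result.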