Type casting logic?

torch.tensor([-1.0, -2.0]).to(torch.uint8)

result: tensor([255, 254], dtype=torch.uint8)
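The observed values match a modulo-2^8 wraparound of the truncated floats, i.e. 256 - 1 = 255 and 256 - 2 = 254. Here is a minimal sketch of that hypothesis (truncate toward zero, then wrap), to show what I mean; I don't know whether this is what torch actually does internally:

import torch

x = torch.tensor([-1.0, -2.0])
# Hypothetical model: truncate to a wide signed integer, then wrap modulo 2**8.
wrapped = (x.to(torch.int64) % 256).to(torch.uint8)
print(wrapped)            # tensor([255, 254], dtype=torch.uint8)
print(x.to(torch.uint8))  # same output on my machines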

I’m testing the above casting operation on both ARM and Intel machines.
In C/C++, converting a negative float to an unsigned integer type (fp32 → uint8 here) is undefined behavior.
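To illustrate, here is a quick check comparing torch against NumPy on the same inputs. Since the underlying C conversion is undefined for negative floats, I wouldn't expect the two results to be guaranteed to agree, and either one may differ across ARM and x86:

import numpy as np
import torch

vals = [-1.0, -2.0]
print(torch.tensor(vals).to(torch.uint8))                   # tensor([255, 254], dtype=torch.uint8) on my machines
print(np.asarray(vals, dtype=np.float32).astype(np.uint8))  # platform-dependent; may warn or differ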

How is the above cast consistent across all machines in torch? Are the type casting rules documented somewhere?

Does the above cast take a path with a hidden intermediate operation (e.g., casting to a wider signed integer first) before converting to uint8?

Additionally, torch.can_cast(torch.float32, torch.uint8) returns False.
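For completeness, here is the check I ran: can_cast reports False (as I understand it, it describes torch's implicit type promotion rules), yet an explicit .to() still performs the conversion:

import torch

print(torch.can_cast(torch.float32, torch.uint8))  # False under the promotion rules
print(torch.tensor([-1.0]).to(torch.uint8))        # explicit cast still runs: tensor([255], dtype=torch.uint8)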

Bumping this. Anyone?