Stopping integer addition from wrapping when we exceed bounds

Currently, when I add two int16 values such that the sum exceeds the int16 bounds, the result wraps around rather than saturating at the bounds or producing infs. This is inconsistent with the float behaviour, which produces infs.

I assume the behaviour is intentional, but I can't find any documentation of int16 addition wrapping. Perhaps I am searching incorrectly.

Example comparison of the behaviour:

import torch

data = torch.tensor([0])

# fp16
data_fp16 = data.to(torch.float16)
data_fp16 + 66000     # Gets an inf (66000 exceeds the float16 max of 65504)

# int16
data_int16 = data.to(torch.int16)
data_int16 + 66000    # Wraps and returns 464 (66000 mod 2**16 = 464)

Is there a way of modifying this behaviour? Ideally I want to be able to add two int16 tensors and have the result saturate at the int16 bounds, with no wrapping.

At the moment I'm working around this by casting to int32, adding the two tensors together, clamping to the int16 range, then casting back (see the sketch below). However, it's a little inconvenient. Is there a way to change the behaviour?
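
Roughly, the workaround looks like this (a minimal sketch; saturating_add_int16 is just an illustrative name I made up, not an existing torch function):

import torch

def saturating_add_int16(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    # Upcast so the intermediate sum cannot overflow int16,
    # clamp to the int16 range, then cast back down.
    info = torch.iinfo(torch.int16)
    total = a.to(torch.int32) + b.to(torch.int32)
    return total.clamp(info.min, info.max).to(torch.int16)

x = torch.tensor([30000, -30000], dtype=torch.int16)
y = torch.tensor([10000, -10000], dtype=torch.int16)
saturating_add_int16(x, y)    # tensor([ 32767, -32768], dtype=torch.int16)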