# Special treatment of "0/0"

I am implementing an algorithm that, by design, can lead to `0/0`, with the convention that `0/0 -> 1`. The relevant part of the algorithm is as follows:

``````Tensor a = [a1, a2, ..., an]
Tensor b = [b1, b2, ..., bn]
Tensor c = (x - a) / (b - a)
``````

Here, `/` denotes element-wise division and `x` is a scalar. I can ensure that entries of `b - a` are zero if and only if the corresponding entries of `x - a` are zero as well. I also need to extend the above algorithm to `a`, `b`, and `c` being matrices and `x` being a vector of appropriate size.

What is the most efficient way to make the element-wise division operator treat `0/0 -> 1`?

How about using `torch.where`?

``````torch.where(a == b, 1, (x - a) / (b - a))
``````

That also will depend on how you want to treat `x/0`. Assuming you want both `0/0` and `x/0` to be `1`, here is what I would do:

``````import torch

x = 6
a = torch.cat([torch.tensor([1, 2, 3, 4])] * 5)
b = torch.cat([torch.tensor([1, 2, 3, 4, 5])] * 4)

c = torch.where(b - a == 0, 1, (x - a) / (b - a))
``````
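The same `where` expression also extends to the matrix case from the question. A minimal sketch, assuming one entry of `x` per row of `a` and `b` (the values here are made up for illustration); unsqueezing `x` lets it broadcast across the columns:

```python
import torch

x = torch.tensor([1.0, 2.0])                # one scalar per row
a = torch.tensor([[1.0, 0.0], [2.0, 1.0]])
b = torch.tensor([[1.0, 2.0], [2.0, 3.0]])  # b - a is 0 exactly where x - a is 0

xc = x[:, None]                             # shape (2, 1), broadcasts against (2, 2)
c = torch.where(b - a == 0, torch.ones_like(a), (xc - a) / (b - a))
print(c)  # entries where b == a come out as 1.0, the rest as (x - a) / (b - a)
```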

Hi Shivam (and Matthias)!

Two issues to be aware of:

First, if `a` and `b` are floating-point tensors (which I am guessing they
might be), you have the issue that `a == b` is an exact floating-point
comparison that won't work (because of round-off error) for typical
use cases. You would probably need to substitute something like
`close_enough_to_be_considered_equal_for_my_use_case (a, b)`
for the exact equality test.
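If it helps, `torch.isclose` is one built-in way to get such a tolerance-based comparison; the tolerances shown are just its defaults, and you would tune them for your use case:

```python
import torch

a = torch.tensor([1.0, 2.0, 3.0])
b = torch.tensor([1.0 + 1e-9, 2.5, 3.0])

# elementwise |a - b| <= atol + rtol * |b|, instead of exact equality
mask = torch.isclose(a, b, rtol=1e-5, atol=1e-8)
print(mask)  # tensor([ True, False,  True])
```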

Second, `torch.where()` doesn't backpropagate nicely when `nan`s and
`inf`s occur, even in the "branch not followed." (See for example this
torch.where() GitHub issue.) In instances where `a` and `b` are equal, you
will get `nan`s or `inf`s.
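A tiny illustration of that pitfall (made-up values): the `1 / b` branch produces `inf` at `b == 0`, and even though `torch.where` selects the other branch there in the forward pass, the backward pass still multiplies the masked-out `-inf` gradient by zero, giving `nan`:

```python
import torch

b = torch.tensor([0.0, 2.0], requires_grad=True)
# forward pass is fine: the inf from 1 / b at b == 0 is masked out
c = torch.where(b == 0.0, torch.ones_like(b), 1.0 / b)
c.sum().backward()
print(b.grad)  # first entry is nan: 0 * (-inf) from the branch not followed
```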

You could do something like:

``````mask = (almost_equal (x, b)).float()   # 1.0 if b is "equal" to x, otherwise 0.0
c = (x - a) / (b - a + mask) + mask    # denominator becomes 1 (not 0) wherever mask is 1
``````
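A runnable sketch of that mask idea, with made-up values and `torch.isclose` standing in for the `almost_equal` placeholder: because the mask is added to the denominator, autograd never differentiates through a `0/0`, so the gradients stay finite:

```python
import torch

x = torch.tensor(6.0)
a = torch.tensor([1.0, 6.0], requires_grad=True)
b = torch.tensor([3.0, 6.0])

mask = torch.isclose(b, a).float()   # 1.0 where b - a == 0 (and hence x - a == 0)
# denominator is b - a where mask is 0, and 1 where it would have been 0
c = (x - a) / (b - a + mask) + mask
c.sum().backward()
print(c)       # masked entry comes out as 0 / 1 + 1 = 1
print(a.grad)  # finite everywhere, no nans
```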