# Compare each element with each other element

What is the fastest way to compare every element of a matrix with every other element of the matrix? For example:

```python
x = torch.randn(5, 5)
y = x.unfold(0, 3, 1).unfold(1, 3, 1)
```

Now, if I want to compare every element in each of these nine 3x3 blocks with the other 8 elements, without using a for loop, how do I do this?

Depending on the operation you would like to use to compare the values, you might be able to use `pdist` to calculate the p-norm distance between all pairs and apply a threshold.
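A minimal sketch of that idea (assuming the comparison you want is an absolute difference between scalar values): flatten the tensor so each value becomes its own row, then `torch.pdist` returns the distance for every unordered pair:

```python
import torch

x = torch.randn(5, 5)

# treat every scalar as a 1-dimensional "point", one per row
flat = x.reshape(-1, 1)

# p-norm distance between all pairs; for 1-d points this is
# just |x_i - x_j| for every unordered pair (i, j)
d = torch.pdist(flat, p=2)

# n values give n * (n - 1) / 2 pairs
assert d.numel() == 25 * 24 // 2

# e.g. flag pairs whose values differ by less than some threshold
close = d < 0.1
```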

I looked at `pdist`, but it works a bit differently from what I want. I want to compare each element with every other element, but not row-wise, because I also want to update element values; with a row-wise comparison I would have to change the value of every element in a row. Currently I use two for loops to go through the tensor, compare values, and update them.

One more problem: if a value in any one of the nine 3x3 blocks gets updated, I want to store the updated value in my output tensor.

Would flattening the kernel to `[9, 1]` values work for the comparison?
This would treat each value as a row.

If that doesn’t work, could you post some pseudo code that demonstrates your use case?

Hello, if I do something like this:

```python
z = torch.randn(5, 5).unfold(0, 3, 1).unfold(1, 3, 1).reshape(1, 9, 9)
```

and then zero out some elements in this tensor, how do I get a 5x5 tensor back? If a value is zeroed out in any of the nine 3x3 blocks, it should also be zeroed out in my output 5x5 tensor; the rest of the values should stay as they were.

If I use fold, it adds up the overlapping non-zero values from these nine 3x3 blocks, but I do not want addition.

For example, if `z` is:

```
tensor([[[ 0.4094,  1.3269,  2.1112, -1.8682,  0.0420, -0.9150,  1.7852,
           1.2070,  0.6966],
         [ 1.3269,  2.1112, -0.1709,  0.0420, -0.9150,  0.9318,  1.2070,
           0.6966, -0.0834],
         [ 2.1112, -0.1709, -0.7779, -0.9150,  0.9318,  0.3695,  0.6966,
          -0.0834, -0.7832],
         [-1.8682,  0.0420, -0.9150,  1.7852,  1.2070,  0.6966, -0.8919,
          -0.7964,  0.1060],
         [ 0.0420, -0.9150,  0.9318,  1.2070,  0.6966, -0.0834, -0.7964,
           0.1060, -0.4739],
         [-0.9150,  0.9318,  0.3695,  0.6966, -0.0834, -0.7832,  0.1060,
          -0.4739,  0.5941],
         [ 1.7852,  1.2070,  0.6966, -0.8919, -0.7964,  0.1060, -0.2107,
           1.1313,  0.1733],
         [ 1.2070,  0.6966, -0.0834, -0.7964,  0.1060, -0.4739,  1.1313,
           0.1733, -0.9812],
         [ 0.6966, -0.0834, -0.7832,  0.1060, -0.4739,  0.5941,  0.1733,
          -0.9812, -0.3873]]])
```

and then I do

```python
z[0][1][0] = 0
```

then `z` is now:

```
tensor([[[ 0.4094,  **1.3269**,  *2.1112*, -1.8682,  0.0420, -0.9150,  1.7852,
           1.2070,  0.6966],
         [ **0.0000**,  *2.1112*, -0.1709,  0.0420, -0.9150,  0.9318,  1.2070,
           0.6966, -0.0834],
         [ *2.1112*, -0.1709, -0.7779, -0.9150,  0.9318,  0.3695,  0.6966,
          -0.0834, -0.7832],
         [-1.8682,  0.0420, -0.9150,  1.7852,  1.2070,  0.6966, -0.8919,
          -0.7964,  0.1060],
         [ 0.0420, -0.9150,  0.9318,  1.2070,  0.6966, -0.0834, -0.7964,
           0.1060, -0.4739],
         [-0.9150,  0.9318,  0.3695,  0.6966, -0.0834, -0.7832,  0.1060,
          -0.4739,  0.5941],
         [ 1.7852,  1.2070,  0.6966, -0.8919, -0.7964,  0.1060, -0.2107,
           1.1313,  0.1733],
         [ 1.2070,  0.6966, -0.0834, -0.7964,  0.1060, -0.4739,  1.1313,
           0.1733, -0.9812],
         [ 0.6966, -0.0834, -0.7832,  0.1060, -0.4739,  0.5941,  0.1733,
          -0.9812, -0.3873]]])
```

and now, when I apply fold, it gives me:

```python
x = nn.Fold((5, 5), 3)
x(z)
```
```
tensor([[[[ 0.4094,  **1.3269**,  *6.3337*, -0.3418, -0.7779],
          [-3.7364,  0.1680, -5.4902,  3.7274,  0.7390],
          [ 5.3557,  7.2419,  6.2697, -0.5001, -2.3496],
          [-1.7838, -3.1855,  0.6361, -1.8955,  1.1882],
          [-0.2107,  2.2625,  0.5198, -1.9624, -0.3873]]]])
```

For the value of `1.3269` I want zero, because it was zeroed out in one of the nine 3x3 blocks, and for the rest of the values I do not want addition: `2.1112 * 3` gives `6.3337` here, but I want `2.1112` only.

As you’ve explained, `nn.Fold` accumulates overlapping patches. If you want a custom operation, such as masking with zeros, you could create a mask from the patches, fold the tensors back, and zero out the masked spots afterwards with a multiplication.
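A sketch of that idea, using the tensors from the example above (the variable names are mine; the key trick is folding a tensor of ones to count how many patches overlap at each pixel):

```python
import torch
import torch.nn as nn

x = torch.randn(5, 5)
# extract 3x3 patches as in the question
z = x.unfold(0, 3, 1).unfold(1, 3, 1).reshape(1, 9, 9)

# zero out one element in one patch
z[0, 1, 0] = 0.0

fold = nn.Fold(output_size=(5, 5), kernel_size=3)

# fold sums overlapping patches, so divide by the overlap count
overlap = fold(torch.ones_like(z))   # how many patches cover each pixel
avg = fold(z) / overlap              # undoes the summation for untouched values

# mask: a pixel survives only if it is non-zero in ALL patches covering it
mask = (fold((z != 0).float()) == overlap).float()
out = avg * mask                     # shape (1, 1, 5, 5)
```

Positions that were never touched come back as their original value (each pixel's sum divided by its overlap count), while the zeroed position is forced to zero by the mask.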

I am able to do the folding operation you mentioned, but I face one small issue. Is there an easy way to solve this: if I have two tensors, something like

`a`:

```
tensor([[[[ 0.0000,  0.0000,  0.0000,  0.0000,  0.0000],
          [ 0.0000,  0.0000,  0.0000,  0.0000,  0.0000],
          [ 0.0000,  0.0000,  0.0000,  0.0000,  0.0000],
          [ 0.0000,  0.0000,  0.0000,  0.0000,  0.0000],
          [ 0.0000,  0.0000, -2.1455, -0.2417,  0.5321]]]])
```

and `b`:

```
tensor([[[[ 0.3462,  0.0000,  0.0000,  0.0000,  0.4700],
          [ 0.0000,  0.0000,  0.0000,  0.0000,  0.0000],
          [ 0.0000,  0.0000,  0.0000,  0.0000,  0.0000],
          [ 0.0000,  0.0000,  0.0000,  0.0000,  0.0000],
          [ 0.9115,  0.0000, -2.1455,  0.0000,  0.0000]]]])
```

now I want a combined tensor that has a value wherever either of these two tensors has a non-zero value, i.e. an OR operation on the two tensors. How do I do this? I cannot do

```python
c = a or b
```

or

```python
c = a | b
```

as both give errors, so instead I do something like:

```python
d = (a != 0).int()
e = (b != 0).int()
f = d + e
final = (a + b) / f
zeros = torch.zeros_like(final)
torch.where(final == final, final, zeros)  # replace nan values with zero
```

and I get,

```
tensor([[[[ 0.3462,  0.0000,  0.0000,  0.0000,  0.4700],
          [ 0.0000,  0.0000,  0.0000,  0.0000,  0.0000],
          [ 0.0000,  0.0000,  0.0000,  0.0000,  0.0000],
          [ 0.0000,  0.0000,  0.0000,  0.0000,  0.0000],
          [ 0.9115,  0.0000, -2.1455, -0.2417,  0.5321]]]])
```

But is there an easier way to do this?

What would the result tensor look like if the two tensors had different values at the same position, or is that use case invalid?

E.g. here both tensors have `-2.1455` at the same position, so the output is the same value, but otherwise would you just pick a random value or calculate the mean?

The two tensors will never have different non-zero values at the same position. I apply the same technique once in the forward order and once in reverse order to obtain the two tensors, so the OR operation is valid; that is, at a particular index the values will

1. either be zero in both tensors -> the output is zero
2. be zero in one tensor and non-zero in the other -> the output is the non-zero value
3. be the same non-zero value in both -> the output is that non-zero value

Thanks for the update.
I came up with another approach, but you will only see a slight performance benefit for larger sizes.

```python
import torch

def fun1(a, b):
    c = torch.cat((a, b), dim=0)
    idx = c.nonzero()
    res = torch.zeros(c.size()[1:])
    res[idx[:, 1:].split(1, dim=1)] = c[idx.split(1, dim=1)]
    return res

def fun2(a, b):
    d = (a != 0).int()
    e = (b != 0).int()
    f = d + e
    final = (a + b) / f
    res = torch.where(final == final, final, torch.tensor(0.))  # remove nan values
    return res

a = torch.zeros(1, 1, 100, 100)
a[:, :, torch.randperm(100)[:10], torch.randperm(100)[:10]] = torch.randn(1)
b = torch.zeros(1, 1, 100, 100)
b[:, :, torch.randperm(100)[:10], torch.randperm(100)[:10]] = torch.randn(1)

res1 = fun1(a, b)
res2 = fun2(a, b)
print((res1 == res2).all())
> tensor(True)

%timeit fun1(a, b)
%timeit fun2(a, b)
```

Thanks for your reply. The technique you mention looks better, because with my technique, when I call

```python
loss.backward()
```

it gives an error because of the nan values.
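That error likely comes from the `0/0` division itself: the nan is created in the forward pass, and replacing it afterwards with `torch.where` does not stop the nan gradient flowing back through the division. A sketch of a variant that avoids producing the nan in the first place, by clamping the denominator (safe here because the numerator is already zero wherever the count is zero):

```python
import torch

a = torch.tensor([0.0, 1.5, 0.0, -2.0], requires_grad=True)
b = torch.tensor([0.7, 0.0, 0.0, -2.0])

d = (a != 0).int()
e = (b != 0).int()
# clamp the count to at least 1 so we never divide 0 by 0
f = (d + e).clamp(min=1)
final = (a + b) / f

final.sum().backward()   # no nan in the forward pass, no nan gradients
```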