Given a tensor, an element is interior if it has both an upper and a lower neighbor in every direction. For instance, for the tensor
t = np.arange(12).reshape(3,4).astype(float)
t
> array([[ 0.,  1.,  2.,  3.],
         [ 4.,  5.,  6.,  7.],
         [ 8.,  9., 10., 11.]])
the elements 5 and 6 are interior. The neighbors of 5 are [1, 4, 9, 6].
By averaging a tensor, I mean that each interior element is replaced by the average of its squared neighbors, while boundary elements are left unchanged. In the example above, 5 and 6 get new values; for instance, 5 <--- np.mean([1**2, 4**2, 9**2, 6**2]).
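To make the rule concrete, the new value for 5 (at index (1, 1)) can be checked directly by indexing its four neighbors:

```python
import numpy as np

t = np.arange(12).reshape(3, 4).astype(float)
# neighbors of 5 at (1, 1): 1 and 9 along axis 0, 4 and 6 along axis 1
new_val = np.mean([t[0, 1]**2, t[2, 1]**2, t[1, 0]**2, t[1, 2]**2])
print(new_val)  # 33.5
```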
For testing purposes, here is a 3-d tensor whose interior elements are 16 and 19:
t1 = np.arange(36).reshape(3,4,3).astype(float)
t1
> array([[[ 0.,  1.,  2.],
         [ 3.,  4.,  5.],
         [ 6.,  7.,  8.],
         [ 9., 10., 11.]],

        [[12., 13., 14.],
         [15., 16., 17.],
         [18., 19., 20.],
         [21., 22., 23.]],

        [[24., 25., 26.],
         [27., 28., 29.],
         [30., 31., 32.],
         [33., 34., 35.]]])
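In three dimensions each interior element has six neighbors; the new value for 16 (at index (1, 1, 1)) can be checked by hand the same way:

```python
import numpy as np

t1 = np.arange(36).reshape(3, 4, 3).astype(float)
# neighbors of 16 at (1, 1, 1): 4 and 28 (axis 0), 13 and 19 (axis 1), 15 and 17 (axis 2)
new_val = np.mean([t1[0, 1, 1]**2, t1[2, 1, 1]**2,
                   t1[1, 0, 1]**2, t1[1, 2, 1]**2,
                   t1[1, 1, 0]**2, t1[1, 1, 2]**2])
print(new_val)  # 307.3333...
```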
Here is a NumPy version of the averaging function that works for tensors of any dimension:
import numpy as np

def averaging(t):
    t_shape = np.array(t.shape)
    t_dim = t.ndim
    t_new = t.astype(float).copy()
    it = np.nditer(t, flags=['multi_index'])
    while not it.finished:
        ind = it.multi_index
        ind_np = np.array(ind)
        # skip boundary elements: every index must be at least 1 from both edges
        if np.min(ind_np) > 0 and np.min(t_shape - ind_np) > 1:
            tmp = 0.
            for i in range(t_dim):  # i-th direction
                ind_np_up = ind_np.copy()
                ind_np_up[i] += 1   # upper neighbor
                ind_np_dn = ind_np.copy()
                ind_np_dn[i] -= 1   # lower neighbor
                tmp += t[tuple(ind_np_up)]**2 + t[tuple(ind_np_dn)]**2  # add squared neighbors
            t_new[ind] = tmp / t_dim / 2.  # average and fill new value
        it.iternext()
    return t_new
Here are two test results:
> averaging(t)
> array([[ 0. ,  1. ,  2. ,  3. ],
         [ 4. , 33.5, 44.5,  7. ],
         [ 8. ,  9. , 10. , 11. ]])
> averaging(t1)
> array([[[  0.        ,   1.        ,   2.        ],
         [  3.        ,   4.        ,   5.        ],
         [  6.        ,   7.        ,   8.        ],
         [  9.        ,  10.        ,  11.        ]],

        [[ 12.        ,  13.        ,  14.        ],
         [ 15.        , 307.33333333,  17.        ],
         [ 18.        , 412.33333333,  20.        ],
         [ 21.        ,  22.        ,  23.        ]],

        [[ 24.        ,  25.        ,  26.        ],
         [ 27.        ,  28.        ,  29.        ],
         [ 30.        ,  31.        ,  32.        ],
         [ 33.        ,  34.        ,  35.        ]]])
How can I implement the same function for torch tensors? Essentially, I want to know whether there is a replacement for np.nditer in torch.
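For context, the only workaround I have found is to drop np.nditer and iterate over the interior multi-indices with itertools.product, which works on torch tensors as well (a plain-Python sketch, not a vectorized solution; averaging_torch is just my name for it):

```python
import itertools

import torch

def averaging_torch(t):
    """Same averaging rule, looping over interior indices with itertools.product."""
    t_new = t.clone().double()
    dim = t.dim()
    # interior indices run from 1 to size-2 along every axis
    ranges = [range(1, s - 1) for s in t.shape]
    for ind in itertools.product(*ranges):
        tmp = 0.0
        for i in range(dim):              # i-th direction
            up = list(ind); up[i] += 1    # upper neighbor
            dn = list(ind); dn[i] -= 1    # lower neighbor
            tmp += t[tuple(up)].item() ** 2 + t[tuple(dn)].item() ** 2
        t_new[ind] = tmp / dim / 2.0      # average of squared neighbors
    return t_new
```

This reproduces the NumPy results on the examples above, but the Python-level loop is slow on large tensors, so I am still hoping for a torch-native iterator or a vectorized formulation.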