Hi everyone,

I need to count the number of times a tensor's values change from one step to the next, so that I can include this count as a term in my loss function.

Example:

```
>>> import numpy as np
>>> a = [1, 1, 1, 0, 0, 1, 0]
>>> diffs = np.diff(a)
>>> diffs
array([ 0, 0, -1, 0, 1, -1])
>>> (diffs != 0).sum()
3
```

In PyTorch I did it in the following way:

```
def count_diffs(x):
    diff_x = x[1:] - x[:-1]
    return (diff_x != 0).sum()

def loss_diffs(x, batch_size, num_classes):
    # x.shape == (batch_size, steps, num_classes)
    loss = 0.0
    for idx_batch in range(batch_size):
        for idx_class in range(num_classes):
            loss += count_diffs(x[idx_batch, :, idx_class])
    loss /= (batch_size * num_classes)
    return loss
```
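
As a side note, the same count can be computed without the Python loops by differencing along the steps dimension directly; this is just a vectorized sketch of the function above (the name `loss_diffs_vectorized` is mine), and it has the same differentiability problem:

```python
import torch

def loss_diffs_vectorized(x):
    # x.shape == (batch_size, steps, num_classes)
    # differences between consecutive steps -> shape (batch, steps - 1, classes)
    diffs = x[:, 1:, :] - x[:, :-1, :]
    # count nonzero differences, averaged over batch and classes
    return (diffs != 0).sum().float() / (x.shape[0] * x.shape[2])

# one sample, seven steps, one class: the sequence from the numpy example
x = torch.tensor([1., 1., 1., 0., 0., 1., 0.]).reshape(1, 7, 1)
print(loss_diffs_vectorized(x).item())  # 3.0, matching the numpy result
```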

In other words, for a given sample and class I get a sequence along the steps dimension, and I would like the loss to explicitly prefer this sequence

`[1, 1, 1, 0, 0, 0, 0]`

over this one `[1, 1, 1, 0, 1, 0, 0]`

Any suggestions on how to do this? My PyTorch implementation does not work; I suspect this is because the count is not differentiable.
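
For what it's worth, here is a minimal check of the differentiability issue: the `!=` comparison produces a boolean tensor with no gradient, so the resulting count is detached from the autograd graph:

```python
import torch

x = torch.tensor([1., 1., 1., 0., 0., 1., 0.], requires_grad=True)
diff_x = x[1:] - x[:-1]
count = (diff_x != 0).sum()   # comparison yields a bool tensor, then a long scalar
# no gradient can flow through the comparison, so the result is detached
print(count.requires_grad)   # False: calling backward() from here would fail
```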

Thank you