I am trying to define an information entropy loss. The input is a tensor of shape (1, n) whose elements are all integers in [0, 4]. The EntroyLoss should compute the information entropy of the input.

For example, if the input is

[0,1,0,2,4,1,2,3]

then

p(0) = 2 / 8 = 0.25

p(1) = 2 / 8 = 0.25

p(2) = 2 / 8 = 0.25

p(3) = 1 / 8 = 0.125

p(4) = 1 / 8 = 0.125

so the information entropy loss is

Loss = -( p(0)*log2(p(0)) + p(1)*log2(p(1)) + p(2)*log2(p(2)) + p(3)*log2(p(3)) + p(4)*log2(p(4)) )
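To make the expected value concrete, the hand computation above can be checked with plain Python (no autograd involved):

```python
import math
from collections import Counter

values = [0, 1, 0, 2, 4, 1, 2, 3]

# Empirical probability of each symbol
counts = Counter(values)
probs = [c / len(values) for c in counts.values()]

# Shannon entropy in bits
entropy = -sum(p * math.log2(p) for p in probs)
print(entropy)  # 2.25
```

So for this example the loss should come out to 2.25 bits.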

my code is here:

```python
class EntroyLoss(nn.Module):
    def __init__(self):
        super(EntroyLoss, self).__init__()

    def forward(self, x):
        y = x.view(-1)
        p = torch.zeros([5])
        for i in range(y.shape[0]):
            p[y[i].int()] = p[y[i].int()] + 1
        p = p.float() / y.shape[0]
        entropy = -p.mul(p.log2()).sum()
        return entropy
```

But PyTorch cannot calculate the gradient:

RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
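A minimal reproduction of the failure, using the example input from above (the class body is copied unchanged from the code shown earlier): the counting loop builds `p` by integer indexing, so `p` is never connected to `x` in the autograd graph, and the returned loss has no `grad_fn`.

```python
import torch
import torch.nn as nn

class EntroyLoss(nn.Module):  # the class as defined above
    def __init__(self):
        super(EntroyLoss, self).__init__()

    def forward(self, x):
        y = x.view(-1)
        p = torch.zeros([5])
        for i in range(y.shape[0]):
            # y[i].int() detaches the value from x, so p does not require grad
            p[y[i].int()] = p[y[i].int()] + 1
        p = p.float() / y.shape[0]
        entropy = -p.mul(p.log2()).sum()
        return entropy

x = torch.tensor([0., 1., 0., 2., 4., 1., 2., 3.], requires_grad=True)
loss = EntroyLoss()(x)
print(loss.item())         # 2.25, matching the hand computation
print(loss.requires_grad)  # False: calling loss.backward() raises the RuntimeError
```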