Is there a way we can enforce non-negativity constraints on embeddings?

In Keras, one can do this with `keras.constraints.non_neg`:

```
import keras
from keras.constraints import non_neg

movie_input = keras.layers.Input(shape=[1], name='Item')
movie_embedding = keras.layers.Embedding(
    n_movies + 1, n_latent_factors,
    name='NonNegMovie-Embedding',
    embeddings_constraint=non_neg(),
)(movie_input)
```

I’m trying to rewrite, in PyTorch, a blog post I wrote on non-negative matrix factorization in Keras.

SimonW
(Simon Wang)
January 14, 2018, 9:06pm
#2
How is the Keras one implemented? Is it simply clamping at 0 after each gradient update? If so, you can do the same after each optimizer update in PyTorch.
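A minimal sketch of that idea, clamping the embedding weights right after each optimizer step (the sizes and the SGD setup here are illustrative, not from the original post):

```
import torch
import torch.nn as nn

# Hypothetical sizes, mirroring the Keras snippet above
n_movies, n_latent_factors = 10, 4

embedding = nn.Embedding(n_movies + 1, n_latent_factors)
optimizer = torch.optim.SGD(embedding.parameters(), lr=0.1)

idx = torch.tensor([1, 2, 3])
loss = embedding(idx).sum()

optimizer.zero_grad()
loss.backward()
optimizer.step()

# Enforce non-negativity after the update
with torch.no_grad():
    embedding.weight.clamp_(min=0)
```

After the in-place `clamp_`, every entry of `embedding.weight` is at least 0.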

In https://github.com/keras-team/keras/blob/master/keras/constraints.py we have:

```
class NonNeg(Constraint):
    """Constrains the weights to be non-negative."""

    def __call__(self, w):
        w *= K.cast(K.greater_equal(w, 0.), K.floatx())
        return w
```

I guess this is just clamping it to a minimum of 0.
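For reference, the same masking trick in plain PyTorch, zeroing out the negative entries by multiplying with a 0/1 mask (which is equivalent to clamping at 0; the tensor values are made up):

```
import torch

w = torch.tensor([-1.0, 0.5, -0.2, 2.0])

# Mirror Keras's NonNeg: multiply by a mask of (w >= 0),
# which zeroes negative entries -- same result as w.clamp(min=0)
w = w * (w >= 0).float()
```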

@Nipun_Batra
Clamping all embedding vectors every iteration is time-consuming.
How can I clamp only the updated embedding vectors with a CUDA variable?

SimonW
(Simon Wang)
February 5, 2018, 3:19pm
#5
After `.backward()`, look at the embedding’s `.weight.grad` attribute to get the updated indices. After `optim.step()`, clamp the weights at those indices.
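One way to sketch this is with `sparse=True` on the embedding, so that `.weight.grad` is a sparse tensor whose indices are exactly the rows touched in the batch (the sizes here are illustrative):

```
import torch
import torch.nn as nn

n_items, n_factors = 100, 8

# sparse=True makes .weight.grad a sparse tensor listing only touched rows
embedding = nn.Embedding(n_items, n_factors, sparse=True)
optimizer = torch.optim.SGD(embedding.parameters(), lr=0.1)

idx = torch.tensor([3, 7, 7, 42])
loss = embedding(idx).sum()

optimizer.zero_grad()
loss.backward()

# The updated row indices are the indices of the sparse gradient
updated = embedding.weight.grad.coalesce().indices().unique()

optimizer.step()

# Clamp only the rows that were actually updated
with torch.no_grad():
    embedding.weight[updated] = embedding.weight[updated].clamp(min=0)
```

Note that optimizers with running statistics (e.g. plain Adam) don’t accept sparse gradients; SGD does, and `SparseAdam` exists for this case.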

jroberayalas
(Jose Roberto Ayala Solares)
February 6, 2019, 1:37pm
#6
Would the same effect be achieved if we add a ReLU layer after the embedding layer?
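Not quite the same: a ReLU makes the *output* of the embedding layer non-negative, but the stored weights can still go negative, whereas the Keras constraint keeps the weights themselves non-negative. A small sketch of the difference (the weight values are made up):

```
import torch
import torch.nn as nn

embedding = nn.Embedding(5, 3)

# Plant a row with negative entries to illustrate the difference
with torch.no_grad():
    embedding.weight[0] = torch.tensor([-1.0, 2.0, -0.5])

# ReLU applied to the embedding output
out = torch.relu(embedding(torch.tensor([0])))
```

Here `out` is non-negative, but `embedding.weight[0]` still contains negative entries.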

Richard_S
(Richard Jones)
November 13, 2020, 12:58pm
#7

> SimonW: clamping

How did you manage to implement this in PyTorch? Would you mind sharing your code?