Is there a way to enforce non-negativity constraints on embeddings?
In Keras, one can do it with the non_neg constraint:

from keras.constraints import non_neg
movie_input = keras.layers.Input(shape=[1], name='Item')
movie_embedding = keras.layers.Embedding(n_movies + 1, n_latent_factors, name='NonNegMovie-Embedding', embeddings_constraint=non_neg())(movie_input)
I’m trying to rewrite, in PyTorch, a blog post I wrote on non-negative matrix factorization in Keras.
How is the Keras one implemented? Is it simply clamping at 0 from below after each gradient update? If so, you can do the same after each optimizer step in PyTorch.
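A minimal sketch of that idea, clamping the whole embedding matrix after each step (sizes here are placeholders standing in for n_movies and n_latent_factors):

```python
import torch

n_items, n_factors = 100, 8  # hypothetical sizes
emb = torch.nn.Embedding(n_items, n_factors)
opt = torch.optim.SGD(emb.parameters(), lr=0.1)

items = torch.tensor([1, 5, 5, 9])
loss = emb(items).sum()
opt.zero_grad()
loss.backward()
opt.step()

# enforce non-negativity after the update, outside autograd
with torch.no_grad():
    emb.weight.clamp_(min=0)
```

After this, every entry of emb.weight is >= 0, mirroring what the Keras constraint enforces per update.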
In https://github.com/keras-team/keras/blob/master/keras/constraints.py the NonNeg constraint is:

"""Constrains the weights to be non-negative."""
def __call__(self, w):
    w *= K.cast(K.greater_equal(w, 0.), K.floatx())
    return w

I guess this just clamps it to a minimum of 0.
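That masking trick and a plain clamp give the same result, which a quick check in PyTorch confirms:

```python
import torch

w = torch.tensor([-1.5, 0.0, 2.0])
masked = w * (w >= 0).float()  # what the Keras constraint computes
clamped = w.clamp(min=0)       # a straightforward clamp at 0
assert torch.allclose(masked, clamped)
```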
Clamping all embedding vectors at every iteration is time-consuming.
How can I clamp only the updated embedding vectors when the embedding is a CUDA variable?
After .backward(), look at the embedding's .weight.grad attribute to get the updated indices. After optim.step(), clamp the weights at those indices.
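One way to sketch this (sizes are hypothetical): with sparse=True the embedding's gradient is a sparse tensor whose indices are exactly the rows touched this step, so only those rows need clamping.

```python
import torch

n_items, n_factors = 50, 4  # hypothetical sizes
# sparse=True makes .weight.grad a sparse tensor holding only the touched rows
emb = torch.nn.Embedding(n_items, n_factors, sparse=True)
opt = torch.optim.SGD(emb.parameters(), lr=0.1)

items = torch.tensor([3, 7, 7, 42])
loss = emb(items).sum()
opt.zero_grad()
loss.backward()

# indices of the rows that actually received a gradient
updated = emb.weight.grad.coalesce().indices().squeeze(0)
opt.step()

# clamp only those rows instead of the whole matrix
with torch.no_grad():
    emb.weight[updated] = emb.weight[updated].clamp(min=0)
```

Rows that were never looked up keep their (possibly negative) initial values, so this only maintains the constraint for embeddings that are actually trained.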
Would the same effect be achieved by adding a ReLU layer after the embedding layer?
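One difference worth noting: a ReLU makes the layer's outputs non-negative, but the stored weight matrix can still contain negative entries, so it is not quite the same as a weight constraint. A quick sketch:

```python
import torch
import torch.nn.functional as F

emb = torch.nn.Embedding(10, 4)  # weights initialized with negative entries too
out = F.relu(emb(torch.tensor([0, 1, 2])))
# out is non-negative, but emb.weight itself is untouched
```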
How did you manage to implement this in PyTorch? Would you mind sharing your code?