How to restrict the neural network gradient value w.r.t. some input

The torch.autograd.grad API is very useful, but I am not sure whether I can use it in my scripts. Anyway, thanks for your suggestions.
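For anyone landing here, a minimal sketch of how torch.autograd.grad can be used to penalize the network's gradient w.r.t. its input (the toy model, the bound of 1.0, and the penalty weight are all illustrative assumptions, not from the thread):

```python
import torch
import torch.nn as nn

# Hypothetical small network; any differentiable model works here.
net = nn.Sequential(nn.Linear(3, 16), nn.Tanh(), nn.Linear(16, 1))

x = torch.randn(8, 3, requires_grad=True)  # inputs must require grad
y = net(x)

# Gradient of the summed output w.r.t. the input batch.
# create_graph=True keeps the graph so the penalty is itself differentiable.
(grad_x,) = torch.autograd.grad(y.sum(), x, create_graph=True)

# Penalize input-gradient magnitudes that exceed a chosen bound (here 1.0).
penalty = torch.relu(grad_x.abs() - 1.0).pow(2).mean()

loss = y.pow(2).mean() + 10.0 * penalty  # task loss + gradient penalty
loss.backward()  # penalty gradients flow back into net's parameters
```

Because create_graph=True is set, the penalty term is differentiable w.r.t. the network weights, so backward() trains the model to keep its input gradients within the bound.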

Hi,

have you found a proper way to compute the Jacobians without iterating over the output components and building many graphs?
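One option, in case it helps: torch.autograd.functional.jacobian computes the full Jacobian in a single call, and vectorize=True avoids the per-output-component loop (the toy network and shapes below are just for illustration):

```python
import torch
from torch.autograd.functional import jacobian

# Hypothetical network mapping a 3-dim input to a 4-dim output.
net = torch.nn.Sequential(torch.nn.Linear(3, 4), torch.nn.Tanh())

x = torch.randn(3)

# Full Jacobian of the 4-dim output w.r.t. the 3-dim input in one call,
# vectorized instead of looping backward passes over each output.
J = jacobian(net, x, vectorize=True)  # shape: (4, 3)
```

Note that vectorize=True is flagged as experimental in the PyTorch docs; dropping it falls back to one backward pass per output element.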

Very helpful - thanks!

This method enforces monotonicity only at the sampled data points, not for the entire network, so the model could still be non-monotonic in regions not covered by the dataset. Any ideas on how to enforce monotonicity globally? Constraining the weights to be non-negative would work, but it seems restrictive. Sampling extra points for the monotonicity regularization should work too, but it seems expensive. Any other ideas?
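For reference, a minimal sketch of the non-negative-weights idea mentioned above: reparameterizing each weight matrix through softplus makes the effective weights positive, and combined with monotone activations this guarantees the network is non-decreasing in every input everywhere, not just on the data (the layer name MonotoneLinear and the architecture are illustrative):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MonotoneLinear(nn.Module):
    """Linear layer whose effective weights are forced non-negative
    by passing the raw parameters through softplus."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.raw_weight = nn.Parameter(torch.randn(d_out, d_in))
        self.bias = nn.Parameter(torch.zeros(d_out))

    def forward(self, x):
        # softplus(w) > 0, so each layer is non-decreasing in its inputs
        return F.linear(x, F.softplus(self.raw_weight), self.bias)

# Non-negative weights + monotone activations (tanh) compose into a
# network that is globally non-decreasing in its input.
net = nn.Sequential(MonotoneLinear(1, 8), nn.Tanh(), MonotoneLinear(8, 1))

x = torch.linspace(-3.0, 3.0, 100).unsqueeze(1)  # sorted inputs
y = net(x).squeeze(1)  # outputs are non-decreasing along x
```

As noted, this is restrictive (it rules out functions that need sign changes in intermediate weights); the gradient-penalty approach on sampled points trades that restriction for only local guarantees.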