Hello! I’ve been searching quite a bit, but I’m having trouble finding the proper way to implement a custom regularization loss on the weights. (Say I wanted to implement L3Loss, but only on a particular layer. Or a Laplacian (2nd-derivative) loss on a subset of weight tensors, along certain dimensions?) I’m interested in losses that are easily implemented using only torch operations on Variables (i.e., no custom CUDA code and no custom autograd functions).
In TensorFlow, you have access to the weight tensors, and as far as the framework is concerned there’s really no difference between a loss on the network output and a regularization loss on some of the weights. What I really want to be able to do is something like the following:
first_layer = nn.Conv2d(...)
my_regularization = my_custom_loss_function(first_layer.weight)
model = nn.Sequential(first_layer, nn.Conv2d(...), nn.Conv2d(...), ...)
loss = my_regularization + some_other_loss(model.forward())
model.backward()
optimizer.step()
However, the above code doesn’t work at all. Do I need to define my loss as a Module? If so, how do I get at the weights for only a certain layer? Thanks!
That code looks very TensorFlow. In PyTorch, initialization happens once up front, and anything that depends on the current weight values runs inside your training loop. So lines 2, 4, 5, and 6 of your snippet belong in the training loop, while lines 1 and 3 belong in your initialization.
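Concretely, something like the sketch below. Everything in it is a placeholder rather than your actual setup: the layer sizes, the 1e-4 strength, and the one-batch dummy loader are made up, and the L3 penalty is just one possible my_custom_loss_function. Two other fixes: call the model directly instead of model.forward(), and call backward() on the loss, not on the model.

import torch
import torch.nn as nn

# ---- initialization (runs once) ----
first_layer = nn.Conv2d(3, 16, 3)                     # the layer whose weights get regularized
model = nn.Sequential(first_layer, nn.Conv2d(16, 16, 3))
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
criterion = nn.MSELoss()                              # stand-in for some_other_loss

def my_custom_loss_function(weight, strength=1e-4):
    # hypothetical L3 penalty: strength * sum(|w|^3)
    return strength * weight.abs().pow(3).sum()

loader = [(torch.randn(2, 3, 8, 8), torch.randn(2, 16, 4, 4))]  # dummy one-batch loader

# ---- training loop (runs every step) ----
for inputs, targets in loader:
    optimizer.zero_grad()
    my_regularization = my_custom_loss_function(first_layer.weight)
    loss = my_regularization + criterion(model(inputs), targets)
    loss.backward()   # backward on the loss, not the model
    optimizer.step()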
Yeah, I’ve figured it out. Thank you, and you’re right, I wasn’t really grasping all of the framework differences yet (and probably still am not). It would be wonderful if there were a “PyTorch for TensorFlow users” tutorial XD.
Have you solved this? I have the same problem.
How do I get at the weights of only a certain layer, and how do I implement custom regularization losses on the weights?
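For anyone finding this later, a minimal sketch of both pieces (the model here is made up, and the parameter name "0.weight" just comes from the layer's position in the nn.Sequential): you can reach a layer’s weights by indexing into the container or by looking the parameter up by name, and any differentiable torch expression of those weights can simply be added to the training loss.

import torch.nn as nn

model = nn.Sequential(nn.Conv2d(3, 16, 3), nn.ReLU(), nn.Conv2d(16, 16, 3))

# index into the nn.Sequential (or use the attribute for a named submodule)
w = model[0].weight

# or look the parameter up by name
w = dict(model.named_parameters())["0.weight"]

# any differentiable torch expression of w can be added to the loss,
# e.g. a Laplacian (second-difference) penalty along the kernel-height dimension:
lap = w[:, :, 2:, :] - 2 * w[:, :, 1:-1, :] + w[:, :, :-2, :]
laplacian_penalty = lap.pow(2).sum()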