Creating a Clipped Loss Function


(Diego Gomez) #1

Hi!

According to the DeepMind DQN paper, the error term is clipped between -1 and 1. I am using clamp for that, but doing so doesn’t let me use a default loss function like MSELoss.

How can I proceed to do this?

Code may be found in this repo.


(Simon Wang) #2

I don’t understand. Why can’t you use error fns with clamp?


(Diego Gomez) #3

I don’t know how to do it correctly. If I use the default losses, they take two arguments (target, prediction) but return a single already-reduced value, so I can’t clamp the loss for each pair. On the other hand, if I use clamp(target - prediction, min, max), I end up with only one tensor of differences, and then I can’t feed it to the default losses.
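For concreteness, a rough sketch of the two dead ends I mean (q_phi and y are just dummy stand-ins for my predicted and target Q-values):

import torch

q_phi = torch.randn(4, requires_grad=True)   # dummy predicted Q-values
y = torch.randn(4)                           # dummy target Q-values

# 1) Default loss: returns a single, already-reduced value,
#    so there is nothing left to clamp per pair.
loss = torch.nn.MSELoss()(q_phi, y)          # scalar

# 2) Clamp the raw differences: now I only have one tensor,
#    and the default losses want two arguments, not a precomputed error.
clipped_diff = torch.clamp(y - q_phi, min=-1, max=1)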

Does it make sense?


(Simon Wang) #4

Oh, I see. You can try the reduce=False kwarg on the loss functions so they give you a per-element tensor. Then you can do the clamp and the reduction yourself 🙂
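Something like this (untested sketch, with made-up q_phi / y tensors; newer PyTorch versions spell reduce=False as reduction='none'):

import torch

q_phi = torch.randn(32, requires_grad=True)      # dummy predictions
y = torch.randn(32)                              # dummy targets

loss_fn = torch.nn.MSELoss(reduce=False)         # per-element losses, no reduction
per_pair = loss_fn(q_phi, y)                     # one squared error per (prediction, target) pair
per_pair = torch.clamp(per_pair, min=-1, max=1)  # clamp each element yourself
loss = per_pair.sum()                            # reduce yourself
loss.backward()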


(Diego Gomez) #5

Hi @SimonW, thanks for your help! I’ve just updated the optimizer:

loss_func = torch.nn.MSELoss(size_average=False, reduce=False)

And also coded the backward pass accordingly:

# Run backward pass
error = loss_func(q_phi, y)                   # per-pair squared errors (reduce=False)
error = torch.clamp(error, min=-1, max=1)**2  # clip each one to [-1, 1], then square
error = error.sum()                           # reduce to a scalar
error.backward()

And no errors are raised, which suggests the ‘backward’ operation is running correctly!

Will test it out in the Atari environment and let you know how it goes. The code is here in case anyone wants to check it out in the meantime.

Thanks again!


(Hailin Chen) #6

With the code snippet below:

import torch
from torch.autograd import Variable
w = Variable(torch.Tensor([1.]), requires_grad=True)
l = w**2
l_clip = l.clamp(max=0.8)
torch.autograd.grad(l, w, retain_graph=True)      # returns (tensor([ 2.]),)
torch.autograd.grad(l_clip, w, retain_graph=True) # returns (tensor([ 0.]),) -- zero because l was clipped

It seems that the clamp operation stops gradient flow wherever the value is clipped, so I think your code above might not work as you intended.
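The same thing happens in your batched setup: any pair whose loss got clipped contributes zero gradient. A rough check with made-up numbers:

import torch

q_phi = torch.tensor([0.5, 3.0], requires_grad=True)  # made-up predictions
y = torch.tensor([0.0, 0.0])                          # made-up targets

per_pair = torch.nn.MSELoss(reduce=False)(q_phi, y)   # tensor([0.25, 9.00])
clipped = torch.clamp(per_pair, min=-1, max=1)        # second element clipped to 1
clipped.sum().backward()

print(q_phi.grad)  # tensor([1., 0.]) -- zero gradient where the loss was clipped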