I have defined a custom autograd function called `LiocFunction`. It takes proposals refined by the regression values output from `self.RCNN_bbox_pred` (an `nn.Linear` layer) and calculates some scores from them. Its custom `backward()` returns gradients for the proposals, labels, and mask. It looks like this:
```python
bbox_pred = self.RCNN_bbox_pred(pooled_feat)
proposals = bbox_transform.apply(boxes, bbox_pred)
scores = LiocFunction.apply(proposals, labels, mask)
```
```python
class LiocFunction(torch.autograd.Function):
    @staticmethod
    def forward(ctx, proposals, labels, mask):
        # proposals: [1, num_boxes, 5]
        ctx.save_for_backward(proposals, labels, mask)
        back = False
        output = calculate_Loic(proposals, labels, mask, back)  # output: [1, num_boxes, 1]
        return output

    @staticmethod
    def backward(ctx, grad_pro):
        proposals, labels, mask = ctx.saved_tensors  # saved_variables is deprecated
        grad_labels = grad_mask = None  # labels and mask need no gradient
        back = True
        grad_proposals = calculate_Loic(proposals, labels, mask, back)
        return grad_proposals, grad_labels, grad_mask
```
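(Side note: to sanity-check a custom `backward()` in isolation, `torch.autograd.gradcheck` can be used. A minimal sketch with made-up shapes and dtypes, assuming `calculate_Loic` accepts double-precision tensors:)

```python
import torch
from torch.autograd import gradcheck

# Made-up shapes for illustration; gradcheck requires double precision.
proposals = torch.randn(1, 8, 5, dtype=torch.double, requires_grad=True)
labels = torch.zeros(1, 8, dtype=torch.double)
mask = torch.ones(1, 8, dtype=torch.double)

# Compares LiocFunction.backward against numerical gradients of forward.
print(gradcheck(LiocFunction.apply, (proposals, labels, mask), eps=1e-6, atol=1e-4))
```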
I also implemented `bbox_transform` the same way; its `backward()` returns gradients for boxes and `bbox_pred`.
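It follows the same pattern; a simplified sketch (`decode_boxes` and `bbox_grad` here are placeholders standing in for my actual box-decoding math):

```python
class bbox_transform(torch.autograd.Function):
    @staticmethod
    def forward(ctx, boxes, bbox_pred):
        ctx.save_for_backward(boxes, bbox_pred)
        # Apply the predicted regression deltas to the input boxes.
        return decode_boxes(boxes, bbox_pred)

    @staticmethod
    def backward(ctx, grad_proposals):
        boxes, bbox_pred = ctx.saved_tensors
        grad_boxes = None  # boxes come from the RPN; no gradient needed
        grad_bbox_pred = bbox_grad(boxes, bbox_pred, grad_proposals)
        return grad_boxes, grad_bbox_pred
```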
I have found that the `backward()` methods of both custom functions are called during training, but backpropagation stops at `bbox_transform` and does not go further back to the `nn.Linear` layer. What could have gone wrong?
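For reference, here is a minimal version of how I observe this (the hook names and the dummy loss are just for illustration, not my exact training code):

```python
def log_grad(name):
    def hook(grad):
        print(f"{name} grad norm: {grad.norm().item()}")
    return hook

bbox_pred = self.RCNN_bbox_pred(pooled_feat)
bbox_pred.register_hook(log_grad("bbox_pred"))  # fires only if grad flows past bbox_transform

proposals = bbox_transform.apply(boxes, bbox_pred)
proposals.register_hook(log_grad("proposals"))  # fires when LiocFunction.backward runs

scores = LiocFunction.apply(proposals, labels, mask)
loss = scores.sum()
loss.backward()
print(self.RCNN_bbox_pred.weight.grad)  # stays None if backprop stops earlier
```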