Solve for classification threshold in batch

Hi, I'm trying to do the following. I have a net with pre-trained weights. For each mini-batch I have gold-standard labels that I want to compare against the scores produced by the net followed by a sigmoid. Within the mini-batch I want to run a loop that optimizes a params tensor of per-class thresholds to achieve the best F1 metric. Currently I'm trying to compute gradients with respect to f1_loss for a single step, but it fails at loss.backward().

net.eval()
data = next(iter(trainLoader))
inputs, labels = data['image'], data['labels']
outputs = net(inputs)

# one threshold per class (28 classes), all initialized to 0.5
params = torch.full((28,), 0.5, requires_grad=True)

scores = torch.sigmoid(outputs)
preds = scores.data.gt(params)  # elementwise "score > threshold" per class
loss = f1_loss(preds, labels)   # tensor(0.4878)

loss.backward()

throws an error:


RuntimeError                              Traceback (most recent call last)
<ipython-input-...> in <module>
----> 1 loss.backward()

~/anaconda3/envs/k-protein/lib/python3.7/site-packages/torch/tensor.py in backward(self, gradient, retain_graph, create_graph)
     91                 products. Defaults to False.
     92         """
---> 93         torch.autograd.backward(self, gradient, retain_graph, create_graph)
     94
     95     def register_hook(self, hook):

~/anaconda3/envs/k-protein/lib/python3.7/site-packages/torch/autograd/__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables)
     88     Variable._execution_engine.run_backward(
     89         tensors, grad_tensors, retain_graph, create_graph,
---> 90         allow_unreachable=True)  # allow_unreachable flag
     91
     92

RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn

params is the correct size, i.e. a 28-element tensor that broadcasts along the row dimension of the preds tensor, which has shape (batch_size, 28).
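As a quick sanity check on that broadcasting (a minimal, self-contained sketch; the batch size of 4 and the random scores are stand-ins for the real batch):

import torch

scores = torch.rand(4, 28)       # stand-in for the sigmoid outputs, shape (batch_size, 28)
params = torch.full((28,), 0.5)  # one threshold per class

preds = scores.gt(params)        # the (28,) tensor broadcasts across the batch dimension
print(preds.shape)               # torch.Size([4, 28])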

Types:

preds.type()
'torch.ByteTensor'

loss.type()
'torch.FloatTensor'
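I suspect the ByteTensor is the symptom: scores.data detaches from the graph, and gt() is a hard, non-differentiable comparison, so preds ends up with no grad_fn for backward to follow. Would swapping the hard threshold for a soft one keep gradients flowing into params? A minimal sketch of what I have in mind (untested; the steepness constant k is my own choice, and f1_loss would also have to accept float "soft" predictions rather than bytes):

scores = scores.detach()  # keep the net out of the graph; only params should receive grads
k = 50.0                  # assumed steepness; larger values approximate a hard step more closely
soft_preds = torch.sigmoid(k * (scores - params))  # differentiable stand-in for scores > params

loss = f1_loss(soft_preds, labels)  # assumes f1_loss handles float predictions
loss.backward()                     # params.grad should now be populated, shape (28,)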