import torch
import torch.nn as nn

# ConcordanceCorCoeff is assumed to be defined as in the posts above
criterion = ConcordanceCorCoeff()
model = nn.Linear(10, 10)
data = torch.randn(10, 10)
target = torch.randn(10, 10)
optimizer = torch.optim.SGD(model.parameters(), lr=10.)

for epoch in range(10):
    optimizer.zero_grad()
    output = model(data)
    loss = criterion(output, target)
    loss.mean().backward()
    optimizer.step()
    print('Epoch {}, loss {}'.format(epoch, loss.mean().item()))
> Epoch 0, loss 0.9816429018974304
Epoch 1, loss 0.4328025281429291
Epoch 2, loss 0.43783488869667053
Epoch 3, loss 0.49120745062828064
Epoch 4, loss 0.5858393907546997
Epoch 5, loss 0.4273451864719391
Epoch 6, loss 0.11961951106786728
Epoch 7, loss 0.10366281121969223
Epoch 8, loss 0.6455065011978149
Epoch 9, loss -0.01740434765815735
I assume the error from the title has already been solved, as I cannot find any inplace operations in your code?
Also, I'm not familiar with this loss function, but it will return a NaN value if you pass a single sample (although that might be an invalid use case anyway).
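To illustrate the single-sample NaN: here is a minimal sketch of how a concordance correlation coefficient loss is often written (your actual ConcordanceCorCoeff implementation may differ). With a single sample both standard deviations are zero, so the Pearson term becomes 0/0 and the result is NaN:

```python
import torch

def ccc_loss(output, target):
    # Hypothetical CCC loss sketch; not necessarily the implementation used above.
    x, y = output.flatten(), target.flatten()
    x_m, y_m = x.mean(), y.mean()
    sd_x = x.std(unbiased=False)  # zero for a single sample
    sd_y = y.std(unbiased=False)
    # Pearson correlation: evaluates to 0/0 = NaN when either std is zero
    rho = ((x - x_m) * (y - y_m)).mean() / (sd_x * sd_y)
    ccc = 2 * rho * sd_x * sd_y / (sd_x ** 2 + sd_y ** 2 + (x_m - y_m) ** 2)
    return 1.0 - ccc

print(ccc_loss(torch.randn(1), torch.randn(1)))  # NaN for a single sample
```

So you might want to guard against batches of size 1 (or against zero variance in general) before calling the criterion.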