Partial derivative of custom loss

Hello @hyuntae

I had a follow-up question:

  • Do you just want a custom loss function between the model output and the target? If so, something like the following works (a sanity check against the built-in MSELoss follows the training loop):
import numpy as np
import torch

def lossCalc(x, y):
    # Sum of squared differences between prediction x and target y.
    # (Note: the original torch.sum(torch.add(x, -y)).pow(2) squared the
    # *sum*, so errors of opposite sign cancel; the pow(2) belongs inside.)
    return torch.sum((x - y).pow(2))
.....
# Input data: one feature vector of size inputDim per word
X = np.random.rand(numRows * numSentences * numWords, inputDim)

# Reshape into batches (this only works if totalBatches == numRows)
X = X.reshape(totalBatches, numSentences, numWords, inputDim)

# Binary targets, one label per row
Y = np.random.randint(2, size=(numRows, 1))

for epoch in range(epochRange):
    lossVal = 0
    for curBatch in range(totalBatches):
        model.zero_grad()
        # torch.autograd.Variable is deprecated; plain tensors track gradients
        dataInput = torch.tensor(X[curBatch], dtype=torch.float32)
        dataOutput = model(dataInput)
        loss = lossCalc(dataOutput, torch.tensor(Y[curBatch], dtype=torch.float32))
        loss.backward()
        # .item() detaches the scalar so the graph can be freed each iteration
        lossVal += loss.item()
        optimizer.step()
    if epoch % 1 == 0:  # prints every epoch; raise the modulus to print less often
        print("For epoch {}, the loss is {}".format(epoch, lossVal))
print("Model Training completed")
  • Or do you want to apply some operation to the gradients themselves?
    If that is the case, how will you know the expected value to compare against? (A sketch of how to access and modify gradients is shown below.)
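
For the second case, here is a minimal sketch of operating on gradients; the model, optimizer, and the clipping operation are hypothetical stand-ins, just for illustration. The key point is that after backward(), each parameter's gradient is available in param.grad and can be read or modified in place before optimizer.step() consumes it:

import torch
import torch.nn as nn

# Hypothetical stand-ins, just for illustration
model = nn.Linear(10, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
x = torch.randn(8, 10)
y = torch.randn(8, 1)

optimizer.zero_grad()
loss = torch.sum((model(x) - y).pow(2))
loss.backward()

# Inspect or modify gradients between backward() and step()
for name, param in model.named_parameters():
    print(name, param.grad.norm())
    param.grad.clamp_(-1.0, 1.0)  # example operation: clip each gradient value

optimizer.step()

Alternatively, Tensor.register_hook lets you transform a gradient while backward() is still running, and torch.nn.utils.clip_grad_value_ performs the clipping above in a single call.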