I am trying to create a custom function/loss, but I am not able to use the standard backward() call to compute and backpropagate the gradient. Can you please help? Thank you.
import torch.nn as nn
import torch
import numpy as np
from torch.autograd import Variable, Function
x = Variable(torch.from_numpy(np.random.normal(0, 1, (10, 10))), requires_grad=False) # original signal
# D: dictionary; first dimension is the number of atoms
D = Variable(torch.from_numpy(np.random.normal(0, 1, (500, 10, 10))), requires_grad=True)
# ht: sparse representation
ht = Variable(torch.from_numpy(np.random.normal(0, 1, (500, 1, 1))), requires_grad=True)
x_e = (D * ht).sum(dim=0) # torch tensor; this is an intermediate reconstruction
loss_ht = 0.5 * torch.norm((x.data - x_e.data), p=2) ** 2
loss_ht.backward()
I got the error below. I do not know what is wrong here.
AttributeError                            Traceback (most recent call last)
<ipython-input> in <module>()
      1 #loss_ht = ht_loss()
----> 2 loss_ht.backward()

AttributeError: 'float' object has no attribute 'backward'
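A likely cause, sketched below under the assumption of the old (pre-0.4) Variable API: indexing into `.data` detaches the values from the autograd graph, and calling `torch.norm` on a raw tensor rather than a Variable returns a plain Python float, which has no `backward()` method. Keeping the whole computation on autograd-tracked tensors avoids both problems; in modern PyTorch, `Variable` is no longer needed at all. This is a hedged sketch of a fix, not necessarily the only one:

```python
import torch

# Same shapes as in the question; requires_grad marks the leaves
# we want gradients for (the dictionary D and the sparse code ht).
x = torch.randn(10, 10)                           # original signal, no grad needed
D = torch.randn(500, 10, 10, requires_grad=True)  # dictionary of 500 atoms
ht = torch.randn(500, 1, 1, requires_grad=True)   # sparse representation

# Broadcasting: (500, 10, 10) * (500, 1, 1) -> (500, 10, 10),
# then summing over the atom dimension gives a (10, 10) reconstruction.
x_e = (D * ht).sum(dim=0)

# No .data here, so the computation stays in the autograd graph
# and loss_ht is a scalar tensor with a backward() method.
loss_ht = 0.5 * torch.norm(x - x_e, p=2) ** 2
loss_ht.backward()

print(D.grad.shape, ht.grad.shape)  # gradients now exist for both leaves
```

With `.data` removed, `loss_ht` is a 0-dimensional tensor, so `backward()` works and populates `D.grad` and `ht.grad`.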