Not able to use backward() to calculate and backprop gradient

I am trying to create a custom function/loss, but I am not able to use the standard backward() to calculate and backprop the gradient. Can you please help? Thank you.

import torch.nn as nn
import torch
import numpy as np
from torch.autograd import Variable, Function

x = Variable(torch.from_numpy(np.random.normal(0, 1, (10, 10))), requires_grad=False)  # original

# D: dictionary; the first dimension depends on the size of the dictionary (number of atoms)
D = Variable(torch.from_numpy(np.random.normal(0, 1, (500, 10, 10))), requires_grad=True)

# ht: sparse representation
ht = Variable(torch.from_numpy(np.random.normal(0, 1, (500, 1, 1))), requires_grad=True)

x_e = (D * ht).sum(dim=0)  # torch tensor; this is an intermediate calculation
loss_ht = 0.5 * torch.norm((x.data - x_e.data), p=2) ** 2
loss_ht.backward()

I got the error below. I do not know what is wrong here.

AttributeError Traceback (most recent call last)
in ()
1 #loss_ht = ht_loss()
----> 2 loss_ht.backward()

AttributeError: 'float' object has no attribute 'backward'

In the future, please use the code formatting tool.

This works.

loss_ht = 0.5*torch.norm((x-x_e),p=2)**2
loss_ht.backward()

Don't use x.data if you want to be able to backprop, unless you really know what you are doing.
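
For reference, here is a minimal sketch of the full corrected pattern, assuming the same shapes and the Variable API used above (in newer PyTorch you can simply create tensors with requires_grad=True). After backward(), the gradients show up in ht.grad and D.grad:

import torch
import numpy as np
from torch.autograd import Variable

x = Variable(torch.from_numpy(np.random.normal(0, 1, (10, 10))), requires_grad=False)
D = Variable(torch.from_numpy(np.random.normal(0, 1, (500, 10, 10))), requires_grad=True)
ht = Variable(torch.from_numpy(np.random.normal(0, 1, (500, 1, 1))), requires_grad=True)

# Keep everything as Variables (no .data) so autograd can build the graph.
x_e = (D * ht).sum(dim=0)                      # reconstruction; ht broadcasts over each atom
loss_ht = 0.5 * torch.norm(x - x_e, p=2) ** 2  # scalar with a grad_fn, not a plain Python float
loss_ht.backward()

print(ht.grad.size())  # (500, 1, 1), populated by backward()
print(D.grad.size())   # (500, 10, 10)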


Thanks mate! Yes, it works. I will use formatting in the future. New here. :wink:
