Hi all,
I am trying to implement a forward function in which I need to construct a matrix from a Tensor. The Tensor is an intermediate output of my network. The code is shown below:
```python
def viewAda(self, x):
    x_VA = torch.Tensor(x.size())
    # translation adaptation
    trans = self.t_para.repeat(1, 1, NUM_JOINT)
    x = x - trans
    # rotation adaptation
    for b_idx in xrange(x.size(0)):
        for t_idx in xrange(x.size(1)):
            thX = torch.squeeze(self.r_para[b_idx, t_idx, 0]) * PI
            thY = torch.squeeze(self.r_para[b_idx, t_idx, 1]) * PI
            thZ = torch.squeeze(self.r_para[b_idx, t_idx, 2]) * PI
            thX_cos = torch.cos(thX)
            thX_sin = torch.sin(thX)
            thY_cos = torch.cos(thY)
            thY_sin = torch.sin(thY)
            thZ_cos = torch.cos(thZ)
            thZ_sin = torch.sin(thZ)
            Rx = Variable(torch.FloatTensor([[1, 0, 0],
                                             [0, thX_cos, thX_sin],
                                             [0, -thX_sin, thX_cos]]),
                          requires_grad=True).cuda()
            Ry = Variable(torch.FloatTensor([[thY_cos, 0, -thY_sin],
                                             [0, 1, 0],
                                             [thY_sin, 0, thY_cos]]),
                          requires_grad=True).cuda()
            Rz = Variable(torch.FloatTensor([[thZ_cos, thZ_sin, 0],
                                             [-thZ_sin, thZ_cos, 0],
                                             [0, 0, 1]]),
                          requires_grad=True).cuda()
            R = torch.mm(Rx, Ry)
            R = torch.mm(R, Rz)
            feat = x[b_idx, t_idx, :].view(-1, 3)
            feat_VA = torch.mm(feat, R)
            x_VA[b_idx, t_idx, :] = torch.squeeze(feat_VA.view(-1, NUM_JOINT * 3))
    return x_VA
```
`x` is the batched input samples, and `viewAda()` is called from my forward function. The following error is reported:

```
File "model.py", line 95, in viewAda
    [0, -thX_sin, thX_cos]]),
RuntimeError: tried to construct a tensor from a nested float sequence, but found an item of type torch.cuda.FloatTensor at index (1, 1)
```
I know where the problem is, but I don't know how to handle it. How can I convert `thX` to a "non-Tensor" value?
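To make the issue concrete, here is a minimal standalone illustration of what ends up inside the nested list (a made-up angle, on CPU with a current PyTorch rather than my actual CUDA setup):

```python
import torch

# Made-up angle; stand-in for torch.squeeze(self.r_para[b_idx, t_idx, 0]) * PI
thX = torch.tensor(0.3)
thX_cos = torch.cos(thX)  # still a Tensor, not a Python float
thX_sin = torch.sin(thX)

# This mirrors the nested list passed to torch.FloatTensor when building Rx
rows = [[1, 0, 0],
        [0, thX_cos, thX_sin],
        [0, -thX_sin, thX_cos]]

# The element at index (1, 1) is a Tensor, which is exactly what the
# RuntimeError complains about when constructing a tensor from this list
print(type(rows[1][1]))
```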
BTW, this implementation is quite slow because of the two for loops in the forward function. Is there a better way to do it?
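In case it clarifies what I'm after, here is the kind of loop-free shape I was hoping for: a rough sketch with made-up sizes, relying on the broadcasting behaviour of `torch.matmul` (I have not checked whether gradients would flow the way I need):

```python
import torch

B, T, NUM_JOINT = 2, 4, 25           # made-up batch, time and joint sizes
x = torch.randn(B, T, NUM_JOINT, 3)  # joints already reshaped to (..., 3)
R = torch.randn(B, T, 3, 3)          # stand-in for the per-(b, t) rotation matrices

# torch.matmul broadcasts over the leading (B, T) dimensions,
# so a single call replaces both Python loops
x_VA = torch.matmul(x, R)            # shape (B, T, NUM_JOINT, 3)
x_VA = x_VA.view(B, T, NUM_JOINT * 3)
print(x_VA.shape)
```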