# Create a matrix from Tensors

Hi all,

I am trying to implement a forward function in which I need to construct a matrix from Tensors. The Tensors are intermediate outputs of my network. The code is shown below:

```python
def viewAda(self, x):
    x_VA = torch.Tensor(x.size())

    trans = self.t_para.repeat(1, 1, NUM_JOINT)
    x = x - trans

    for b_idx in xrange(x.size(0)):
        for t_idx in xrange(x.size(1)):
            thX = torch.squeeze(self.r_para[b_idx, t_idx, 0]) * PI
            thY = torch.squeeze(self.r_para[b_idx, t_idx, 1]) * PI
            thZ = torch.squeeze(self.r_para[b_idx, t_idx, 2]) * PI

            thX_cos = torch.cos(thX)
            thX_sin = torch.sin(thX)
            thY_cos = torch.cos(thY)
            thY_sin = torch.sin(thY)
            thZ_cos = torch.cos(thZ)
            thZ_sin = torch.sin(thZ)

            Rx = Variable(torch.FloatTensor([[1,        0,        0],
                                             [0,  thX_cos,  thX_sin],
                                             [0, -thX_sin,  thX_cos]]))

            Ry = Variable(torch.FloatTensor([[thY_cos,  0, -thY_sin],
                                             [0,        1,        0],
                                             [thY_sin,  0,  thY_cos]]))

            Rz = Variable(torch.FloatTensor([[ thZ_cos, thZ_sin,  0],
                                             [-thZ_sin, thZ_cos,  0],
                                             [ 0,       0,        1]]))

            R = torch.mm(Rx, Ry)
            R = torch.mm(R, Rz)

            feat = x[b_idx, t_idx, :].view(-1, 3)
            feat_VA = torch.mm(feat, R)

            x_VA[b_idx, t_idx, :] = torch.squeeze(feat_VA.view(-1, NUM_JOINT * 3))

    return x_VA
```

x is the batched input sample, and the function viewAda() is called in my forward function. The error is reported as:

```
File "model.py", line 95, in viewAda
    [0, -thX_sin,  thX_cos]]),
RuntimeError: tried to construct a tensor from a nested float sequence, but found an item of type torch.cuda.FloatTensor at index (1, 1)
```

I know where my problem is; however, I don't know how to handle it. How can I convert `thX` to a "non-Tensor" variable?

BTW, this implementation is quite slow since it involves two for loops in the forward function. Is there a better way to do it?

If `thX` is a one-dimensional tensor containing one element you can just do `thX[0]`.
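For example (a minimal sketch; the tensor value is made up, and `.item()` is the modern equivalent of plain `thX[0]` indexing in old PyTorch):

```python
import torch

thX = torch.tensor([0.5])  # a one-element, one-dimensional tensor
val = thX[0].item()        # extract a plain Python float
# Caveat: a plain Python float is no longer tracked by autograd, so
# gradients will not flow back through values extracted this way.
```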

As for your second question, I'm not sure what you're trying to do with your code. If you could explain a little more what the two for loops are doing, we might be able to find a better way.

Thank you so much for the reply. I am trying to re-implement a paper (http://openaccess.thecvf.com/content_ICCV_2017/papers/Zhang_View_Adaptive_Recurrent_ICCV_2017_paper.pdf). The paper rotates a 3D point sequence and then uses an LSTM to do the classification. To rotate the 3D sequence, it uses two LSTM networks to predict the rotation and translation parameters respectively, so the coordinate transformation is performed inside the network.

I have already obtained the transformation parameters for rotation (r_para) and translation (t_para). In each iteration, I have batch_size 3D sequences to rotate. Each 3D sequence has time_step frames, each frame includes 25 points, and each point is represented by (x, y, z). Each frame of each sequence sample has its own transformation parameters. The two-for-loop part applies the rotation and translation to the input samples, looping over each frame of each 3D sequence sample in the batch.

So x inside the code is the input sample. Its size is (batch_size, time_step, 25 * 3). The sizes of the two parameters, t_para and r_para, are both (batch_size, time_step, 3).
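Since each frame's rotation only mixes the three coordinates of each point, both loops can in principle be replaced by one batched matrix multiply. A rough sketch under the shapes described above (the sizes B, T, J are made-up placeholders; building the rotation matrices with `torch.stack` instead of `torch.FloatTensor([...])` also keeps them connected to r_para in the autograd graph, avoiding the error from the first post):

```python
import math
import torch

# Placeholder sizes matching the shapes in the post: x is (B, T, J*3),
# r_para and t_para are (B, T, 3), with J = 25 joints.
B, T, J = 2, 4, 25
x = torch.randn(B, T, J * 3)
r_para = torch.rand(B, T, 3)
t_para = torch.rand(B, T, 3)

th = r_para * math.pi
c, s = torch.cos(th), torch.sin(th)
cx, cy, cz = c[..., 0], c[..., 1], c[..., 2]
sx, sy, sz = s[..., 0], s[..., 1], s[..., 2]
zero = torch.zeros_like(cx)
one = torch.ones_like(cx)

# Per-frame rotation matrices, shape (B, T, 3, 3), built with stack so
# they remain differentiable with respect to r_para.
Rx = torch.stack([one, zero, zero,
                  zero, cx, sx,
                  zero, -sx, cx], dim=-1).view(B, T, 3, 3)
Ry = torch.stack([cy, zero, -sy,
                  zero, one, zero,
                  sy, zero, cy], dim=-1).view(B, T, 3, 3)
Rz = torch.stack([cz, sz, zero,
                  -sz, cz, zero,
                  zero, zero, one], dim=-1).view(B, T, 3, 3)
R = Rx @ Ry @ Rz                    # (B, T, 3, 3)

trans = t_para.repeat(1, 1, J)      # same broadcasting of t_para as in the post
pts = (x - trans).view(B, T, J, 3)  # (B, T, 25, 3)
x_VA = (pts @ R).view(B, T, J * 3)  # batched matmul replaces both for loops
```

`pts @ R` broadcasts the matrix multiply over the leading (B, T) dimensions, which is exactly what the inner `torch.mm(feat, R)` computed one frame at a time.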

I am not sure whether I have made my problem clear. Thanks again for the help.