Converting a layer's output into a different form using a transformation

I am trying to convert the output of a layer in my network, whose shape is [batch, 3], using a function I wrote:

  def W300_EulerAngles2Vectors(self, x):
    '''
    x: [batch, 3] Euler angles in degrees
    rx: pitch
    ry: yaw
    rz: roll
    '''
    b, _ = x.shape
    rx = x[:, 0] * (3.14 / 180.0)   # degrees -> radians
    ry = x[:, 1] * (3.14 / 180.0)
    rz = x[:, 2] * (3.14 / 180.0)
    ry = ry * (-1)
    # first attempt: building the rotation matrices with torch.tensor (commented out)
    '''
    R_x = torch.tensor([[1.0, 0.0, 0.0],
                        [0.0, rx.cos(), -rx.sin()],
                        [0.0, rx.sin(), rx.cos()]], requires_grad=True)

    R_y = torch.tensor([[np.cos(ry), 0.0, np.sin(ry)],
                        [0.0, 1.0, 0.0],
                        [-np.sin(ry), 0.0, np.cos(ry)]], requires_grad=True)

    R_z = torch.tensor([[np.cos(rz), -np.sin(rz), 0.0],
                        [np.sin(rz), np.cos(rz), 0.0],
                        [0.0, 0.0, 1.0]], requires_grad=True)
    '''
    # second attempt: building the rotation matrices with torch.stack
    tensor_0 = torch.zeros(b)
    tensor_1 = torch.ones(b)
    print('rx', x[1][0].size(), rx.size())   # debug print
    R_x = torch.stack([
            torch.stack([tensor_1, tensor_0, tensor_0]),
            torch.stack([tensor_0, torch.cos(rx), -torch.sin(rx)]),
            torch.stack([tensor_0, torch.sin(rx), torch.cos(rx)])]).reshape(b, 3, 3)
    R_y = torch.stack([
            torch.stack([torch.cos(ry), tensor_0, torch.sin(ry)]),
            torch.stack([tensor_0, tensor_1, tensor_0]),
            torch.stack([-torch.sin(ry), tensor_0, torch.cos(ry)])]).reshape(b, 3, 3)
    R_z = torch.stack([
            torch.stack([torch.cos(rz), -torch.sin(rz), tensor_0]),
            torch.stack([torch.sin(rz), torch.cos(rz), tensor_0]),
            torch.stack([tensor_0, tensor_0, tensor_1])]).reshape(b, 3, 3)
    # combined rotation, then the three rotated basis vectors
    R = torch.matmul(R_x, R_y)
    R = torch.matmul(R, R_z)
    l_vec = torch.matmul(R, torch.t(torch.tensor([1, 0, 0])))
    b_vec = torch.matmul(R, torch.t(torch.tensor([0, 1, 0])))
    f_vec = torch.matmul(R, torch.t(torch.tensor([0, 0, 1])))   # R @ np.array([0, 0, 1]).T
    return [l_vec, b_vec, f_vec]

Unfortunately, if I use the commented-out definitions of R_x, R_y, R_z (the torch.tensor version), I get the error:
"Only one element tensors can be converted to Python scalars"
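I think this happens because torch.tensor expects plain Python numbers (or one-element tensors) inside the nested list, and rx.cos() here is a whole batch of values; I also suspect that, even if it worked, torch.tensor would create a new leaf tensor and cut the graph back to x. A minimal standalone reproduction of what I believe is going on, with random values instead of the real network output:

  import torch

  rx = torch.randn(4)                        # a batch of angles, not one scalar
  try:
      # torch.tensor tries to turn every list entry into a number,
      # but rx.cos() has 4 elements, so the conversion fails
      R_x = torch.tensor([[1.0, 0.0, 0.0],
                          [0.0, rx.cos(), -rx.sin()],
                          [0.0, rx.sin(), rx.cos()]], requires_grad=True)
  except Exception as e:
      print(e)                               # only one element tensors can be converted ...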
If I instead build R_x, R_y, R_z with torch.stack (the version currently active above), I get:
"RuntimeError: All input tensors must be on the same device. Received cpu and cuda:0"
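My guess for this second error is that tensor_0 = torch.zeros(b) and tensor_1 = torch.ones(b) (and the constant [1, 0, 0]-style vectors at the end) are created on the CPU, while x comes from the network on cuda:0. Would something along these lines be the right way to keep everything on one device? This is just a small standalone sketch with random input as a stand-in for the layer output:

  import torch

  device = 'cuda:0' if torch.cuda.is_available() else 'cpu'
  x = torch.randn(4, 3, device=device, requires_grad=True)   # stand-in for the layer output
  rx = x[:, 0] * (3.14 / 180.0)

  # option 1: derive the constants from rx so they inherit its device and dtype
  tensor_0 = torch.zeros_like(rx)
  tensor_1 = torch.ones_like(rx)

  # option 2: pass device= explicitly when creating constant tensors
  e_z = torch.tensor([0.0, 0.0, 1.0], device=x.device)

  row = torch.stack([tensor_0, torch.cos(rx), -torch.sin(rx)])   # no device mismatch here
  print(row.device, e_z.device)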

I want to use the output of this function in my loss function, so I need to be able to backpropagate through it.
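As a sanity check for the backpropagation part, I assume a tiny standalone test like the one below (random input instead of the real network) should show whether the stack / cos / sin / matmul ops keep the autograd graph; if x.grad comes back non-None here, I hope the full function is also differentiable once the device problem is solved:

  import torch

  x = torch.randn(4, 3, requires_grad=True)              # stand-in for the layer output
  rx = x[:, 0] * (3.14 / 180.0)
  tensor_0 = torch.zeros_like(rx)

  # the same kind of ops as inside the function above
  row = torch.stack([tensor_0, torch.cos(rx), -torch.sin(rx)])   # shape [3, batch]
  out = torch.matmul(row.t(), torch.tensor([0.0, 0.0, 1.0]))     # shape [batch]
  out.sum().backward()
  print(x.grad)                                           # not None -> gradients flow back to x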

Please help. I would be very thankful for it :slight_smile: @ptrblck