I have the following function:
It works fine when the input is a plain Tensor, e.g.
input = torch.FloatTensor([[1, 2], [5, 6]])

But when I have input as Variable:
x = Variable(input, requires_grad=True)

it gives me the following error:

---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-177-e1cc7f22bef8> in <module>()
----> 4 print(Test(x))
<ipython-input-176-405f14576c56> in Test(InputMatrix)
7 DCT_Matrix = (torch.sin(Grid_R+0.05)*(Grid_R /D))
8 print(DCT_Matrix)
----> 9 DCT_Output = torch.mm(DCT_Matrix,torch.t(torch.FloatTensor(InputMatrix)))
10 return DCT_Output
RuntimeError: already counted a million dimensions in a given sequence. Most likely your items are also sequences and there's no way to infer how many dimension should the tensor have

Can anyone tell me what I need to do to fix this issue, please?
P.S. I need to do backprop on this function later.

Q2: Can I use a function from numpy, e.g. numpy.sin(x), with a Variable as input? Or should I write the function in torch to get backpropagation?

Thanks

def Test(InputMatrix):
    D = len(InputMatrix)
    Grid_R = torch.zeros(D, D)
    for i in range(len(Grid_R)):
        Grid_R[i][:] = i
    DCT_Matrix = torch.sin(Grid_R + 0.05) * (Grid_R / D)
    print(DCT_Matrix)
    DCT_Output = torch.mm(DCT_Matrix, torch.t(torch.FloatTensor(InputMatrix)))
    return DCT_Output

Q1: Why are you trying to create a Tensor from a Variable?
Q2: No.

You should keep in mind, for both of your problems, that for the autograd to work you need to work ONLY with Variables. Any function that does not use Variables (like the numpy ones) will not work. Also, you should not convert your Variable back to a Tensor, otherwise you will not be able to backpropagate properly.
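As a small sketch of this rule (the input values are just the ones from the question above): an operation done with torch stays in the autograd graph, while going through numpy would force you to extract the raw Tensor and lose the graph.

```python
import torch
from torch.autograd import Variable

x = Variable(torch.FloatTensor([[1, 2], [5, 6]]), requires_grad=True)

# torch.sin is an autograd-aware operation, so the graph is kept...
y = torch.sin(x).sum()
y.backward()
# ...and gradients flow back to x.
print(x.grad)

# By contrast, numpy.sin would need the raw data, and the graph is lost:
# z = numpy.sin(x.data.numpy())  # no backward possible from z
```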

Well, I guess I don't know what I am doing here. I am confused.
I wanted to define my function using the torch library because I want the output to be a FloatTensor. Now I need to do backprop, so what should I do? How should I define this function?
Can you help me define this function in a way that autograd works for it?

You should just take your original function and instead of giving it a Tensor as input, give it a Variable containing the Tensor.
This way, all operations will be made on the Variables and you will be able to backpropagate from the output.

def Test(InputMatrix):
    D = len(InputMatrix)
    Grid_R = torch.zeros(D, D)
    for i in range(len(Grid_R)):
        Grid_R[i][:] = i
    DCT_Matrix = torch.sin(Grid_R + 0.05) * (Grid_R / D)
    print(DCT_Matrix)
    # Always wrap temporary elements in a Variable when they are going to be
    # used with other elements for which you require gradients.
    DCT_Matrix_var = Variable(DCT_Matrix)
    DCT_Output = torch.mm(DCT_Matrix_var, torch.t(InputMatrix))
    return DCT_Output
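To make the whole flow concrete, here is a sketch of calling this corrected function with a Variable input and backpropagating from the output (the `print` inside the original function is dropped here to keep the example quiet):

```python
import torch
from torch.autograd import Variable

def Test(InputMatrix):
    D = len(InputMatrix)
    Grid_R = torch.zeros(D, D)
    for i in range(len(Grid_R)):
        Grid_R[i][:] = i
    DCT_Matrix = torch.sin(Grid_R + 0.05) * (Grid_R / D)
    # The constant matrix is wrapped in a Variable before being used
    # in an operation with the Variable input.
    DCT_Matrix_var = Variable(DCT_Matrix)
    return torch.mm(DCT_Matrix_var, torch.t(InputMatrix))

x = Variable(torch.FloatTensor([[1, 2], [5, 6]]), requires_grad=True)
out = Test(x)          # out is a Variable, part of the graph
out.sum().backward()   # backprop works: x.grad is now populated
print(x.grad)
```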

Thank you!!!
So let me restate what you mentioned to make sure I got it correct.
Inside a function, all the constant and temporary elements should be wrapped in a Variable.
However, we should be very careful not to wrap the elements that are going to be changed.
E.g. in my function I should not write DCT_Output = torch.mm(DCT_Matrix_var, Variable(torch.t(InputMatrix))). But I can write D = Variable(len(InputMatrix)) or DCT_Matrix = torch.sin(Grid_R + Variable(0.05)) * (Grid_R / D).
Because using a Variable for an element that is changing inside the function would mess up the gradient.

If you want to get gradients w.r.t. the input, your input should already be a Variable.
For the autograd to work properly, you have to work only with Variables (note that Python numbers can be used as-is, without wrapping them in a Variable).
This means that your function will take a Variable as input, should return a Variable, and should only perform operations on Variables inside.
If you need to create a constant matrix inside your function, you can create it as a Tensor, but before using it in an operation with your input (which is a Variable), this Tensor should be wrapped in a Variable. Keep in mind that Variable(DCT_Matrix) is the same as Variable(DCT_Matrix, requires_grad=False): you create a new constant in your graph that does not require gradients, and thus we do not backpropagate to it.
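A tiny sketch of that last point: a Variable created without requires_grad=True acts as a constant, so after backward the input gets a gradient but the constant does not.

```python
import torch
from torch.autograd import Variable

const = Variable(torch.ones(2, 2))                    # requires_grad=False by default
x = Variable(torch.ones(2, 2), requires_grad=True)    # we want gradients wrt x

out = (const * x).sum()
out.backward()

print(x.grad)      # gradients flow back to the input...
print(const.grad)  # ...but not to the constant (stays None)
```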