Will the outcome of autograd.Variable converge?

I use the function autograd.Variable to process data, but the outcome of differentiation changes each time. I wonder whether this is normal. Does anyone know what the problem is? Any help you can provide is appreciated.

Variables are deprecated since PyTorch 0.4, so you should use tensors directly now.
However, using it might not explain your issue, so could you add more information about your use case and, if possible, a code snippet showing this issue?
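For reference, a minimal sketch of the modern replacement for Variable, assuming a plain float tensor as input:

```python
import torch

# Since PyTorch 0.4, tensors carry autograd state themselves;
# wrapping them in Variable is a no-op kept only for backward compatibility.
x = torch.ones(3, requires_grad=True)  # instead of Variable(torch.ones(3))
y = (x * x).sum()
y.backward()
print(x.grad)  # d(sum(x^2))/dx = 2x -> tensor([2., 2., 2.])
```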

I’m studying action recognition and am currently learning from the GitHub project pytorch-coviar, but I find that the result changes even for the same trained model. After checking the code, I think the line input_var = torch.autograd.Variable(data, volatile=True) (pytorch-coviar/test.py at master · chaoyuaw/pytorch-coviar · GitHub, line 94) is the reason, because it produces different numbers each time and so makes the final result different. However, I don’t know whether this is because the function autograd.Variable can’t converge or because of some other unknown reason. Could you please tell me something about it?

I don’t know whether you have received my message or not. Maybe the information I provided is not enough? If there is something else I should provide, please tell me. Thank you.

If you are concerned about the behavior of Variable (which is reasonable, as it’s deprecated), remove it and rerun the code.
I couldn’t follow the entire explanation, but would also recommend checking whether calling model.eval() solves the issue.
If not, feel free to post an executable, minimal code snippet to reproduce the issue.
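As an example, a sketch of how the old volatile=True inference pattern maps to current PyTorch; the model here is a placeholder, not the network from the repository:

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 2)   # placeholder model, not the coviar network
model.eval()              # switch dropout/batchnorm to eval-mode behavior

data = torch.randn(1, 4)
with torch.no_grad():     # replaces Variable(data, volatile=True)
    out1 = model(data)
    out2 = model(data)

# With eval() and identical inputs, repeated forward passes match
print(torch.equal(out1, out2))
```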

Thank you for your help. After thinking about it, I found that my previous question was not accurate. What I want to find out is whether the PyTorch autograd function is deterministic when it differentiates some data (such as a picture). In other words, when autograd calculates the derivative of the same data (the matrix of an image, or similar), will it get the same result if it runs many times, or will there be some differences (errors) each time?

Assuming you are using the same input as well as the same parameters in the model and also disable any randomness, the answer comes down to the determinism of the operations used.
Take a look at the Reproducibility docs to see how deterministic ops can be selected (if available).
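As a minimal sketch of the knobs from those docs (seeding plus opting into deterministic algorithms), after which repeated gradient computations on the same input should match exactly:

```python
import torch

torch.manual_seed(0)                       # fix the RNG state
torch.use_deterministic_algorithms(True)   # raise an error on nondeterministic ops

def grad_of_square_sum(x):
    # Differentiate sum(x^2) with respect to x and return the gradient
    x = x.clone().requires_grad_(True)
    (x * x).sum().backward()
    return x.grad

img = torch.rand(3, 8, 8)                  # stand-in for an image tensor
g1 = grad_of_square_sum(img)
g2 = grad_of_square_sum(img)
print(torch.equal(g1, g2))                 # deterministic ops -> identical gradients
```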