Are PyTorch tensors on a stack when used in Python functions?

In Python, when a variable is initialized within a function, the variable is placed on a stack such that the memory it uses will be freed at the end of the function. Is this the case for PyTorch tensors, or do tensors have to be manually freed within a function?

Python objects have their own reference counting, so as a user you don't need to free anything manually. Strictly speaking, Python objects (tensors included) live on the heap, not a stack: when a function returns, its local names go away, each object's reference count drops, and once a tensor's count reaches zero its memory is released. One caveat for CUDA tensors: the freed memory goes back to PyTorch's caching allocator rather than to the driver, so tools like `nvidia-smi` may still report it as allocated; `torch.cuda.empty_cache()` returns the cached blocks to the driver.
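A minimal CPython sketch of this, using a plain object as a stand-in for a tensor (the name `Payload` is just for illustration; the refcounting behavior is the same for any Python object):

```python
import weakref

class Payload:
    """Stand-in for a torch.Tensor; any Python object is refcounted the same way."""
    pass

def compute():
    t = Payload()          # heap-allocated; the local name `t` holds one reference
    return weakref.ref(t)  # a weak reference does not keep the object alive

ref = compute()
# When compute() returned, the local name `t` was dropped, the refcount
# hit zero, and the object was freed immediately (CPython refcounting
# is deterministic, unlike tracing garbage collectors).
print(ref() is None)  # True
```

Note this determinism is a CPython implementation detail; other interpreters may free the object later. Either way, you never call a free function yourself.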