In Python, when a variable is initialized within a function, the name goes out of scope when the function returns, so the memory it uses can be freed at that point. Is this the case for PyTorch tensors, or do tensors have to be manually freed within a function?
KennyD (Kenny Duran) #1
SimonW (Simon Wang) #2
Python objects have their own refcounting, so as a user in Python you don't need to worry about freeing tensors manually: when the last reference to a tensor goes away, its memory is released.
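To illustrate the refcounting behavior, here is a minimal sketch using a plain Python object as a stand-in for `torch.Tensor` (a tensor behaves the same way, since it is an ordinary Python object; the `Tensor` class and `make_tensor` function below are just illustrative names). A `weakref` lets us observe the object without keeping it alive:

```python
import weakref

class Tensor:
    """Stand-in for torch.Tensor; any Python object is refcounted the same way."""
    pass

holder = []

def make_tensor():
    t = Tensor()                    # local "tensor", refcount = 1
    holder.append(weakref.ref(t))   # weak reference does not keep it alive
    # t is not returned, so no reference escapes the function

make_tensor()
# After the function returns, the local's refcount drops to zero and
# CPython frees the object immediately; the weak reference is now dead.
print(holder[0]() is None)  # → True
```

One caveat worth knowing: for CUDA tensors, freed GPU memory is returned to PyTorch's caching allocator rather than directly to the driver, so tools like `nvidia-smi` may still show it as in use even though PyTorch can reuse it.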