Implementation of torch.autograd.Variable

I know the Variable class is deprecated and the Tensor class should be used instead.

However, I did a small study to find out how this migration was done.

I noticed you have created a base class, torch._C._LegacyVariableBase, with a new metaclass. I searched for the implementation of _LegacyVariableBase and noticed you have created the C++ class as below. I have some background in C++, and I haven't come across this kind of implementation before:

PyTypeObject THPLegacyVariableType = {
  PyVarObject_HEAD_INIT(nullptr, 0)
  "torch._C._LegacyVariableBase",        /* tp_name */
  0,                                     /* tp_basicsize */
  0,                                     /* tp_itemsize */
  0,                                     /* tp_dealloc */
  0,                                     /* tp_print */
  0,                                     /* tp_getattr */
  0,                                     /* tp_setattr */
  0,                                     /* tp_reserved */
  0,                                     /* tp_repr */
  0,                                     /* tp_as_number */
  0,                                     /* tp_as_sequence */
  0,                                     /* tp_as_mapping */
  0,                                     /* tp_hash  */
  0,                                     /* tp_call */
  0,                                     /* tp_str */
  0,                                     /* tp_getattro */
  0,                                     /* tp_setattro */
  0,                                     /* tp_as_buffer */
  Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE, /* tp_flags */
  nullptr,                               /* tp_doc */
  0,                                     /* tp_traverse */
  0,                                     /* tp_clear */
  0,                                     /* tp_richcompare */
  0,                                     /* tp_weaklistoffset */
  0,                                     /* tp_iter */
  0,                                     /* tp_iternext */
  0,                                     /* tp_methods */
  0,                                     /* tp_members */
  0,                                     /* tp_getset */
  0,                                     /* tp_base */
  0,                                     /* tp_dict */
  0,                                     /* tp_descr_get */
  0,                                     /* tp_descr_set */
  0,                                     /* tp_dictoffset */
  0,                                     /* tp_init */
  0,                                     /* tp_alloc */
  THPVariable_pynew                      /* tp_new */
};

I would like two things from the community:

  1. What is the technique PyTorch has used to create the class “torch._C._LegacyVariableBase” above?
  2. Could you please verify whether the observations in my description above are correct?

Hi,

  1. This is the standard way to create a class in C using the Python C API: define a PyTypeObject with all the type methods. Here the only custom slot is tp_new, which is used to create a Variable.
    You can see further down in the same file that the type is then added to the current module (_C) under the name “_LegacyVariableBase”.

  2. Which observations are you talking about? The fact that this did not exist before? It has always existed (under a different name). You will find similar declarations in all the python_** files in the same folder, for all the Python objects we use to wrap autograd objects.
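To make the C-level pattern concrete, here is a rough pure-Python analogue (not PyTorch's actual code; the names LegacyVariableBase, Variable, and _base_new are illustrative only). Building a class dynamically with type() plays the role of declaring and readying a PyTypeObject, a custom __new__ plays the role of the tp_new slot, and subclassability corresponds to the Py_TPFLAGS_BASETYPE flag:

```python
# Pure-Python sketch of what a C-level PyTypeObject achieves.
# In C, the tp_new slot is invoked on instantiation; in Python
# that role is played by __new__.

def _base_new(cls, *args, **kwargs):
    # Custom allocation hook, analogous to a tp_new slot.
    instance = object.__new__(cls)
    instance.created_via = "_base_new"   # illustrative attribute only
    return instance

# type(name, bases, namespace) builds a class at runtime, much as
# PyType_Ready finalizes a statically declared PyTypeObject.
LegacyVariableBase = type(
    "LegacyVariableBase",          # tp_name analogue
    (object,),                     # tp_base analogue
    {"__new__": _base_new},        # tp_new analogue
)

# Py_TPFLAGS_BASETYPE means the type may be subclassed, just as
# Variable subclasses _LegacyVariableBase in PyTorch.
class Variable(LegacyVariableBase):
    pass

v = Variable()
print(type(v).__name__)   # Variable
print(v.created_via)      # _base_new
```

In the real code the base type is exposed to Python by registering it on the _C module (after a PyType_Ready-style initialization), which is why it shows up as torch._C._LegacyVariableBase.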

Hi alban,
Thank you for the response.
I understood your explanation, and it was very useful.
I looked a little further into creating C classes that are accessible from Python. I am sharing the link below, which may help the community understand this further.