What is the equivalent of tf.get_variable() in Pytorch?

There is no direct equivalent.

The main utility of a dynamic computation graph is that it lets you process inputs and outputs of varying structure without having to convert every batch into a fixed-shape tensor up front. TensorFlow (1.x) is define-and-run: you declare all placeholders and variables, including those created with tf.get_variable(), before the graph executes. PyTorch is define-by-run: the graph is created as your code runs, so there is no separate variable-declaration step to mirror.
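As a rough sketch of what this looks like in practice (the model and shapes below are illustrative, not from the question): in PyTorch the closest thing to a TF variable is an nn.Parameter attached to a Module, and ordinary Python control flow shapes the graph on each forward pass.

```python
import torch
import torch.nn as nn

class TinyModel(nn.Module):
    def __init__(self):
        super().__init__()
        # Trainable state is just an attribute; no global variable
        # store or reuse/scoping mechanism like tf.get_variable().
        self.weight = nn.Parameter(torch.ones(3))

    def forward(self, x):
        # "Define by run": the graph is built as this code executes,
        # so the loop length can depend on the input itself.
        for _ in range(x.shape[0]):
            x = x * self.weight[: x.shape[-1]].sum()
        return x

model = TinyModel()
out = model(torch.tensor([1.0, 2.0]))  # no placeholder needed
```

Because the parameter is created with requires_grad=True by default, autograd tracks it through whatever graph each call happens to build.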