I just wanted to try PyTorch after several years of TensorFlow.
- But I wonder: why do I need to define layers in __init__ and then use them in forward?
In TensorFlow, for example, I just build the graph in one place and then use it all over the code, like so:
from tensorflow.keras.layers import Conv2D, ReLU, MaxPooling2D, Flatten, Dense

x = Conv2D(filters=16, kernel_size=(7, 7), strides=(1, 1), padding="valid")(x)
x = ReLU()(x)
x = MaxPooling2D((2, 2))(x)
x = Conv2D(filters=32, kernel_size=(5, 5), strides=(1, 1), padding="valid")(x)
x = ReLU()(x)
x = MaxPooling2D((2, 2))(x)
x = Flatten()(x)
x = Dense(100)(x)
x = ReLU()(x)
x = Dense(self._num_classes)(x)
return x
- It doesn't seem very attractive to me to calculate the input and output dims for each layer by hand.
Moreover, if I want to change the input dataset dims, I have to manually recalculate the inputs and outputs of the layers (at least for the Linear ones).
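To make the complaint concrete, here is roughly what the same model looks like in PyTorch (a sketch; the layer dims assume a 28×28 single-channel input, which is my assumption, not stated above). The Linear input size has to be computed by hand from the conv/pool arithmetic, and it breaks if the input resolution changes:

```python
import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 16, kernel_size=7)   # 28x28 -> 22x22
        self.pool = nn.MaxPool2d(2)                    # halves spatial dims
        self.conv2 = nn.Conv2d(16, 32, kernel_size=5)  # 11x11 -> 7x7
        # after the second pool: 7x7 -> 3x3, so 32 * 3 * 3 = 288 features,
        # a number that must be recomputed whenever the input size changes
        self.fc1 = nn.Linear(32 * 3 * 3, 100)
        self.fc2 = nn.Linear(100, num_classes)

    def forward(self, x):
        x = self.pool(torch.relu(self.conv1(x)))
        x = self.pool(torch.relu(self.conv2(x)))
        x = torch.flatten(x, 1)
        x = torch.relu(self.fc1(x))
        return self.fc2(x)
```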
So, is it possible to define models like in TensorFlow, with automatic recalculation of input and output dims?
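This is a sketch of the kind of API I'm hoping for. I've seen mentions of PyTorch's "lazy" modules (torch.nn.LazyConv2d, torch.nn.LazyLinear), which infer their input channels/features from the first batch passed through them, so something like this might already work (the 10-class output and 28×28 input are just for illustration):

```python
import torch
import torch.nn as nn

# Lazy modules defer weight creation until the first forward pass,
# so no input dims need to be specified by hand.
model = nn.Sequential(
    nn.LazyConv2d(16, kernel_size=7),
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.LazyConv2d(32, kernel_size=5),
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.LazyLinear(100),
    nn.ReLU(),
    nn.LazyLinear(10),  # 10 = number of classes, for illustration
)

out = model(torch.zeros(1, 1, 28, 28))  # first call materializes all shapes
```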
Thanks for your attention.