The other three lines in `__init__` fill the weights and biases with specific values before training begins.
The forward pass then takes any tensor sent into the model and passes it through those two layers. Note that `torch.relu(out)` is valid and will not raise an error; `F.relu(out)` (i.e. `torch.nn.functional.relu`) is the more commonly seen spelling, but both compute the same thing.
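Since the original code isn't shown, here is a hypothetical reconstruction of the kind of model being discussed: two linear layers whose weights and biases are filled in `__init__`, and a forward pass using `torch.relu` (the layer sizes and init choices here are assumptions for illustration).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoLayerNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(4, 8)
        self.fc2 = nn.Linear(8, 2)
        # Fill weights and biases with specific values before training begins
        nn.init.kaiming_uniform_(self.fc1.weight, nonlinearity="relu")
        nn.init.zeros_(self.fc1.bias)
        nn.init.kaiming_uniform_(self.fc2.weight, nonlinearity="relu")
        nn.init.zeros_(self.fc2.bias)

    def forward(self, x):
        out = self.fc1(x)
        out = torch.relu(out)  # works; F.relu(out) is equivalent
        return self.fc2(out)

model = TwoLayerNet()
x = torch.randn(3, 4)
y = model(x)
print(y.shape)  # torch.Size([3, 2])
```

Both `torch.relu` and `F.relu` are part of PyTorch's public API, so either spelling runs fine here.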
If you really want to understand what's going on, you have to set the PyTorch code aside and get familiar with the basics, in this case:
Convolutional Layers
Batch Normalization (vs. Layer Normalization)
Activation functions (here: ReLU)
Initialization of weights (here: Kaiming initialization)
Residual layers (core idea behind ResNet)
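To tie the list above together, here is a minimal sketch (not the reference ResNet implementation) of a residual block that combines all five ideas: convolutional layers, batch normalization, ReLU activations, Kaiming initialization, and a skip connection. The channel counts and kernel sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        # Two 3x3 convolutions that preserve spatial size (padding=1)
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3,
                               padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3,
                               padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        # Kaiming initialization, designed for ReLU activations
        for conv in (self.conv1, self.conv2):
            nn.init.kaiming_normal_(conv.weight, nonlinearity="relu")

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        # Residual connection: add the block's input back before the
        # final activation -- the core idea behind ResNet
        return F.relu(out + x)

block = ResidualBlock(16)
x = torch.randn(2, 16, 8, 8)
print(block(x).shape)  # torch.Size([2, 16, 8, 8])
```

The skip connection only works this simply because the convolutions keep both the channel count and the spatial size unchanged; real ResNets add a projection when shapes differ.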
The whole purpose of PyTorch, TensorFlow, etc. is essentially to abstract away these nitty-gritty details. Of course, it still makes sense to have at least some understanding of them, if only to make sense of the input parameters of the different layers.