Can PyTorch Operations be written as functions instead of Classes?

I am trying to reimplement a paper whose reference implementation was written in TensorFlow. There, certain architectural operations, such as residual blocks, are written as plain functions. Here is an example:

import tensorflow as tf

def nccuc(input_A, input_B, n_filters, padding, training, name):
    with tf.variable_scope("layer{}".format(name)):
        for i, F in enumerate(n_filters):
            if i < 1:
                x0 = input_A
                x1 = tf.layers.conv2d(x0, F, (4, 4), strides=(1, 1), activation=None, padding=padding,
                                      kernel_regularizer=tf.contrib.layers.l2_regularizer(0.1), name="conv_{}".format(i + 1))
                x1 = tf.layers.batch_normalization(
                    x1, training=training, name="bn_{}".format(i + 1))
                x1 = tf.nn.relu(x1, name="relu{}_{}".format(name, i + 1))

            elif i == 1:
                up_conv = tf.layers.conv2d_transpose(x1, filters=F, kernel_size=4, strides=2, padding=padding,
                                                     kernel_regularizer=tf.contrib.layers.l2_regularizer(0.1), name="upsample_{}".format(name))

                up_conv = tf.nn.relu(up_conv, name="relu{}_{}".format(name, i + 1))
                return tf.concat([up_conv, input_B], axis=-1, name="concat_{}".format(name))

            else:
                return x1

I am wondering whether I will run into problems if I do the same in PyTorch. So far I have always created a class (subclassing nn.Module) for each operation and implemented a forward function. Is there any potential problem with not implementing everything in classes?

You can use the functional API to implement your model and training, but you would need to create all parameters and buffers needed for these operations yourself.
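
For example, a single convolution in the functional style with manually created parameters might look like this (a minimal sketch; the shapes and initialization are illustrative, not from your model):

import torch
import torch.nn.functional as F

# Manually created parameters for one conv layer:
# weight layout is (out_channels, in_channels, kH, kW)
weight = torch.empty(8, 3, 3, 3)
torch.nn.init.kaiming_uniform_(weight)
weight.requires_grad_()
bias = torch.zeros(8, requires_grad=True)

x = torch.randn(1, 3, 32, 32)
out = F.conv2d(x, weight, bias, stride=1, padding=1)  # the op nn.Conv2d calls internally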

Does that mean that simply rewriting it as:

import torch
import torch.nn as nn

def nccuc(input_A, input_B, n_filters):
    for i, F in enumerate(n_filters):
        if i < 1:
            x0 = input_A
            x1 = nn.Conv2d(in_channels=x0.shape[1], out_channels=F, kernel_size=(3, 3), stride=(1, 1), padding=1)(x0)
            x1 = nn.BatchNorm2d(x1.shape[1])(x1)
            x1 = nn.ReLU()(x1)

        elif i == 1:
            up_conv = nn.ConvTranspose2d(in_channels=x1.shape[1], out_channels=F, kernel_size=4, stride=2, padding=1)(x1)
            up_conv = nn.ReLU()(up_conv)
            return torch.cat((up_conv, input_B), dim=1)

        else:
            return x1

would not work, since the parameters that need to be tracked are never declared anywhere?

Hi,

No, he meant to use the torch.nn.functional API.
So instead of writing nn.Conv2d and creating a Python object that contains all the parameters, you would write F.conv2d(input, weight, bias, stride=1, padding=0, dilation=1, groups=1), which is just a Python function applying the convolution operation to your input. BUT you have to define your parameters yourself and pass them as inputs to the functional calls (e.g. F.conv2d) inside your nccuc function.
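
Roughly, a functional version of your nccuc could look like the sketch below. The helper make_nccuc_params, the channel arguments, and the kernel sizes are placeholders I made up for illustration; the important point is that the parameters are created once, up front, and only passed through the function:

import torch
import torch.nn.functional as F

def make_nccuc_params(in_ch, mid_ch, out_ch):
    # Weight layouts follow the PyTorch convention:
    # Conv2d:          (out_channels, in_channels, kH, kW)
    # ConvTranspose2d: (in_channels, out_channels, kH, kW)
    conv_w = torch.empty(mid_ch, in_ch, 3, 3)
    up_w = torch.empty(mid_ch, out_ch, 4, 4)
    torch.nn.init.kaiming_uniform_(conv_w)
    torch.nn.init.kaiming_uniform_(up_w)
    return {
        "conv_w": conv_w.requires_grad_(),
        "conv_b": torch.zeros(mid_ch, requires_grad=True),
        "bn_w": torch.ones(mid_ch, requires_grad=True),
        "bn_b": torch.zeros(mid_ch, requires_grad=True),
        "up_w": up_w.requires_grad_(),
        "up_b": torch.zeros(out_ch, requires_grad=True),
        # Running statistics are buffers: updated in-place, never trained
        "bn_mean": torch.zeros(mid_ch),
        "bn_var": torch.ones(mid_ch),
    }

def nccuc(input_A, input_B, p, training):
    x = F.conv2d(input_A, p["conv_w"], p["conv_b"], stride=1, padding=1)
    x = F.batch_norm(x, p["bn_mean"], p["bn_var"], p["bn_w"], p["bn_b"],
                     training=training)
    x = F.relu(x)
    up = F.relu(F.conv_transpose2d(x, p["up_w"], p["up_b"], stride=2, padding=1))
    return torch.cat((up, input_B), dim=1)

# Usage: create the parameters once, then reuse them for every call
params = make_nccuc_params(in_ch=3, mid_ch=64, out_ch=32)
out = nccuc(torch.randn(1, 3, 32, 32), torch.randn(1, 16, 64, 64),
            params, training=True)  # out: (1, 48, 64, 64)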

Edit: What you were doing is defining your layers (as objects) every time you call the nccuc function and running them once on your input. This way your model and its parameters get redefined from scratch every time you call nccuc.
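
For completeness, the usual fix on the PyTorch side is to create the layers once in __init__, so that their parameters are registered with the module and reused on every forward pass. A minimal sketch (the channel handling and kernel sizes are illustrative, not taken from the paper):

import torch
import torch.nn as nn

class NCCUC(nn.Module):
    def __init__(self, in_channels, n_filters):
        super().__init__()
        f0, f1 = n_filters
        # Layers are created once here, so their parameters persist across calls
        self.conv = nn.Conv2d(in_channels, f0, kernel_size=3, stride=1, padding=1)
        self.bn = nn.BatchNorm2d(f0)
        self.up = nn.ConvTranspose2d(f0, f1, kernel_size=4, stride=2, padding=1)

    def forward(self, input_A, input_B):
        x = torch.relu(self.bn(self.conv(input_A)))
        up = torch.relu(self.up(x))
        return torch.cat((up, input_B), dim=1)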