I am trying to reimplement a model from a paper, which was originally written in TensorFlow (1.x). In that code, certain architecture operations, such as residual blocks, are written as plain functions. Here is an example:
import tensorflow as tf  # TF 1.x API (tf.layers / tf.contrib)

def nccuc(input_A, input_B, n_filters, padding, training, name):
    """Conv + BN + ReLU on input_A, then a stride-2 transposed conv,
    concatenated with input_B along the channel axis (skip connection)."""
    with tf.variable_scope("layer{}".format(name)):
        for i, F in enumerate(n_filters):
            if i < 1:
                x0 = input_A
                x1 = tf.layers.conv2d(x0, F, (4, 4), strides=(1, 1), activation=None, padding=padding,
                                      kernel_regularizer=tf.contrib.layers.l2_regularizer(0.1),
                                      name="conv_{}".format(i + 1))
                x1 = tf.layers.batch_normalization(
                    x1, training=training, name="bn_{}".format(i + 1))
                x1 = tf.nn.relu(x1, name="relu{}_{}".format(name, i + 1))
            elif i == 1:
                up_conv = tf.layers.conv2d_transpose(x1, filters=F, kernel_size=4, strides=2, padding=padding,
                                                     kernel_regularizer=tf.contrib.layers.l2_regularizer(0.1),
                                                     name="upsample_{}".format(name))
                up_conv = tf.nn.relu(up_conv, name="relu{}_{}".format(name, i + 1))
                return tf.concat([up_conv, input_B], axis=-1, name="concat_{}".format(name))
            else:
                return x1
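For concreteness, here is roughly the plain-function version I have in mind in PyTorch (just a rough sketch; passing the pre-built layers in as arguments is my own workaround, since PyTorch layers own their parameters, and all names are made up):

import torch

def nccuc(input_A, input_B, conv, bn, up):
    # conv, bn and up are nn.Module layers created once elsewhere and passed in,
    # because PyTorch modules hold their own weights (there is no variable-scope
    # mechanism that creates/reuses variables by name as in TF1)
    x = torch.relu(bn(conv(input_A)))      # conv -> batch norm -> ReLU
    x = torch.relu(up(x))                  # stride-2 transposed conv (upsample)
    return torch.cat([x, input_B], dim=1)  # concat along channels (NCHW)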
I am wondering whether I will run into problems if I do it this way in PyTorch. Usually I have always created a class inheriting from nn.Module for each such operation and implemented a forward function. Is there any potential problem with not implementing everything as classes?
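For reference, my usual nn.Module pattern for the same block would be something like this (again only a sketch; the channel arguments, the padding="same" choice, and the stride-2 transposed-conv settings are my guesses at mirroring the TF code above):

import torch
from torch import nn

class NCCUC(nn.Module):
    def __init__(self, in_ch, n_filters):
        super().__init__()
        f0, f1 = n_filters  # assuming exactly two filter counts, as in the TF loop
        # the TF kernel_regularizer would typically become weight_decay in the optimizer
        self.conv = nn.Conv2d(in_ch, f0, kernel_size=4, stride=1, padding="same")
        self.bn = nn.BatchNorm2d(f0)
        self.up = nn.ConvTranspose2d(f0, f1, kernel_size=4, stride=2, padding=1)  # exact 2x upsampling

    def forward(self, input_A, input_B):
        x = torch.relu(self.bn(self.conv(input_A)))
        x = torch.relu(self.up(x))
        return torch.cat([x, input_B], dim=1)

# usage: block = NCCUC(in_ch=64, n_filters=(128, 64)); out = block(a, b)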