Messy Code = Slow Code?

Does messy code, for example when you are defining your model, cause your code to run slower?

Thank you

It depends on what “messy code” means.
If you are computing unnecessary stuff, then sure, your code will run slower.

Also, using global variables is usually slower than using local ones, but that probably shouldn’t make much of a difference (of course, it again depends on what you are doing).
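As a rough illustration of the global-vs-local point (a minimal, non-PyTorch sketch; the function names are made up for this example): local names inside a function are resolved with a fast slot lookup, while globals go through a dictionary lookup on every access, so a tight loop over a global is measurably slower.

import timeit

N = 1_000_000
counter = 0  # module-level (global) variable

def use_global():
    global counter
    for _ in range(N):
        counter += 1  # global name looked up on every iteration

def use_local():
    local_counter = 0  # local variable, resolved via a fast slot lookup
    for _ in range(N):
        local_counter += 1
    return local_counter

print("global:", timeit.timeit(use_global, number=10))
print("local: ", timeit.timeit(use_local, number=10))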

Could you explain your code a bit and what makes it messy?

For example, when defining a model, especially a huge model like ResNet or something similar.

Method A:

import torch.nn as nn
import torch.nn.functional as F

class MethodA(nn.Module):
    def __init__(self):
        super(MethodA, self).__init__()
        self.conv1 = nn.Conv2d(...)
        self.bn1 = nn.BatchNorm2d(...)
        self.conv2 = nn.Conv2d(...)
        self.bn2 = nn.BatchNorm2d(...)
        ...

    def forward(self, x):
        x = F.relu(self.bn1(self.conv1(x)))
        x = F.relu(self.bn2(self.conv2(x)))
        ...
        return x

Method B:


import torch.nn as nn
import torch.nn.functional as F

def convbn(in_feat, out_feat, kernel_size, stride, pad, dilation):
    return nn.Sequential(
        nn.Conv2d(in_feat, out_feat, kernel_size=kernel_size, stride=stride,
                  padding=pad, dilation=dilation, bias=False),
        nn.BatchNorm2d(out_feat))

class MethodB(nn.Module):
    def __init__(self):
        super(MethodB, self).__init__()
        self.conv1 = convbn(...)
        self.conv2 = convbn(...)
        self.conv3 = convbn(...)

    def forward(self, x):
        x = self.conv1(x)
        x = self.conv2(x)
        x = self.conv3(x)
        return x

The example I provided here may not be clear. Another example is defining a ResNet model: it can be tedious to type it out layer by layer, and I saw someone use a _make_layer function, which lets the ResNet model be defined much more easily.

def _make_layer(self, block, planes, blocks, stride=1, dilation=1, multi_grid=1):
    downsample = None
    if stride != 1 or self.inplanes != planes * block.expansion:
        downsample = nn.Sequential(
            nn.Conv2d(self.inplanes, planes * block.expansion,
                      kernel_size=1, stride=stride, bias=False),
            SynchronizedBatchNorm2d(planes * block.expansion))

    layers = []
    generate_multi_grid = lambda index, grids: grids[index % len(grids)] if isinstance(grids, tuple) else 1
    layers.append(block(self.inplanes, planes, stride, dilation=dilation,
                        downsample=downsample, multi_grid=generate_multi_grid(0, multi_grid)))
    self.inplanes = planes * block.expansion
    for i in range(1, blocks):
        layers.append(block(self.inplanes, planes, dilation=dilation,
                            multi_grid=generate_multi_grid(i, multi_grid)))

    return nn.Sequential(*layers)

So does it affect the speed of my model if I use a helper function like _make_layer instead of defining the model layer by layer?

Given that you have already coded the model, you could, for example, follow https://stackoverflow.com/questions/7370801/measure-time-elapsed-in-python to measure the training time (e.g., for one epoch) of both implementations.
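In case it helps, here is a minimal sketch of what such a comparison could look like (it assumes you already have a DataLoader, a loss criterion, and an optimizer for each implementation; the helper name time_one_epoch is made up for this example). Note that CUDA kernels run asynchronously, so you should call torch.cuda.synchronize() before reading the clock to get meaningful GPU timings.

import time
import torch

def time_one_epoch(model, loader, criterion, optimizer, device):
    # assumes `device` is a torch.device and the model is already on it
    model.train()
    if device.type == "cuda":
        torch.cuda.synchronize()  # finish any pending GPU work first
    start = time.perf_counter()
    for data, target in loader:
        data, target = data.to(device), target.to(device)
        optimizer.zero_grad()
        loss = criterion(model(data), target)
        loss.backward()
        optimizer.step()
    if device.type == "cuda":
        torch.cuda.synchronize()  # wait for the last kernels before stopping the clock
    return time.perf_counter() - start

# e.g. run it on both implementations with the same data and hyperparameters:
# print("Method A:", time_one_epoch(model_a, loader, criterion, optimizer_a, device))
# print("Method B:", time_one_epoch(model_b, loader, criterion, optimizer_b, device))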

Good advice! Never thought of that. Thanks