# Batch normalisation in 1D CNN architecture

I am performing a binary classification task with ECG signals. I didn't normalise at the beginning because I read some papers saying that pre- and post-processing are not required for deep learning models, and that batch normalisation inside the CNN architecture should be sufficient.
I created the architecture and trained the model, but I got a zig-zag curve for the training and validation loss.

If I want to normalise at the beginning, how can I do it? I tried to add normalisation, but it doesn't work. Here is the code for beat extraction from the signal:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors


def extract_beat(signal, win_pos, qrs_positions, win_msec=40, fs=360,
                 start_beat=36, end_beat=108):
    # extract the signal
    signal = np.array(signal)

    # segment the beat around the window position
    start = int(max(win_pos - start_beat, 0))
    stop = start + start_beat + end_beat + 1
    beat = signal[start:stop]

    # do the normalisation
    # beat = transforms.Normalize(beat)

    # compute the nearest neighbour of win_pos among qrs_positions;
    # samples at a distance < tolerance are matched
    tolerance = (fs * win_msec) // 1000
    nbr = NearestNeighbors(n_neighbors=1).fit(qrs_positions)
    distances, indices = nbr.kneighbors(np.array([[win_pos]]).reshape(-1, 1))

    # label: 1 if the window is close enough to a QRS position, 0 otherwise
    label = 1 if distances[0][0] <= tolerance else 0

    return beat, label
```
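For illustration, a hypothetical call would look like this (the signal and QRS positions below are made-up stand-ins):

```python
import numpy as np

signal = np.random.randn(650000)                 # stand-in ECG record at fs=360
qrs_positions = np.array([[370], [670], [980]])  # column vector, as NearestNeighbors.fit expects

beat, label = extract_beat(signal, win_pos=368, qrs_positions=qrs_positions)
print(beat.shape, label)  # (145,) 1
```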

This is my CNN architecture. How can I do the batch normalisation here?

```python
import torch.nn as nn


class ConvNet(nn.Module):
    def __init__(self):
        super(ConvNet, self).__init__()

        self.conv1 = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=2, stride=1),
            nn.Dropout(0.5),
            nn.ReLU())

        self.conv2 = nn.Sequential(
            nn.Conv1d(32, 32, kernel_size=2, stride=1),
            nn.Dropout(0.5),
            nn.ReLU(),
            nn.MaxPool1d(2, stride=3))

        self.conv3 = nn.Sequential(
            nn.Conv1d(32, 32, kernel_size=2, stride=1),
            nn.Dropout(0.5),
            nn.ReLU())

        # fully connected layers
        self.fc1 = nn.Linear(32 * 47, 32)
        self.fc2 = nn.Linear(32, 1)
        self.activation = nn.Softmax()  # defined but never used in forward

    def forward(self, x):
        # expected Conv1d input: minibatch_size x num_channels x width
        batch_size = x.size(0)
        y = self.conv1(x.view(batch_size, 1, -1))
        y = self.conv2(y)
        y = self.conv3(y)

        # flatten the conv features before the fully connected layers
        y = y.flatten(start_dim=1)
        y = self.fc1(y)
        y = self.fc2(y.view(batch_size, 1, -1))

        return y
```
```python
beat = transforms.Normalize(beat)
```

This code won’t work, as `transforms.Normalize` expects the `mean` and `std` input arguments as described in the docs. After creating the transformation object, you could apply it:

```python
transform = transforms.Normalize(mean=..., std=...)
out = transform(input)
```
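Also, `transforms.Normalize` is aimed at image-shaped tensors, so for a 1-D beat it may be simpler to compute the statistics from your training set and apply the same `(x - mean) / std` operation directly. A minimal sketch with made-up stand-in data:

```python
import torch

# stand-ins: 1000 training beats of length 145 (replace with your extracted beats)
train_beats = torch.randn(1000, 145)
beat = train_beats[0]

# statistics computed once over the training set
mean = train_beats.mean()
std = train_beats.std()

# the same operation transforms.Normalize performs
out = (beat - mean) / std
```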

Just add it to the layers via `nn.BatchNorm1d(...)`.
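For example, the first block of the posted model could look like this (a sketch, not the only possible placement; `nn.BatchNorm1d` takes the number of channels produced by the preceding conv, 32 here):

```python
import torch.nn as nn

# sketch: batch norm inserted directly after the convolution
conv1 = nn.Sequential(
    nn.Conv1d(1, 32, kernel_size=2, stride=1),
    nn.BatchNorm1d(32),  # num_features must match the conv's out_channels
    nn.ReLU(),
    nn.Dropout(0.5))
```

The conv -> batch norm -> activation ordering is the usual convention; whether you still want dropout in each block once batch norm is added is worth checking empirically.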

Do I have to normalise at the beginning and also do batch normalisation? Or, in deep learning, can you normalise the data inside the architecture?

The beat is a NumPy array. What's the best way to normalise it?

Just use torch.tensors, i.e. `input = torch.tensor(input)`.
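A minimal sketch, assuming `beat` is the 1-D NumPy array returned by `extract_beat` (the random array below is just a stand-in):

```python
import numpy as np
import torch

beat = np.random.randn(145)  # stand-in for a beat returned by extract_beat

# convert to a float tensor and apply a per-beat z-score normalisation
beat_t = torch.from_numpy(beat).float()
beat_t = (beat_t - beat_t.mean()) / (beat_t.std() + 1e-8)  # eps guards against a flat beat
```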