# Doing a very basic (simple) moving average?

I was trying to do a moving average, but I was worried it would negatively interfere with my backprop or something weird (sorry, new to PyTorch). If I do:

```python
W = Variable(w_init, requires_grad=True)
for i in range(nb_iterations):
    # some GD stuff...
    W_avg = (1 / nb_iter) * W + W_avg
```

would that be ok? Would it compute my average parameters and not interfere with other stuff?

Weird, I got this error:

```
W_avg = Variable(torch.FloatTensor(W).type(dtype), requires_grad=False)
RuntimeError: already counted a million dimensions in a given sequence. Most likely your items are also sequences and there's no way to infer how many dimension should the tensor have
```

The error is because you’re trying to create a `FloatTensor` from a `Variable`:

```python
torch.FloatTensor(W)  # W is of type 'Variable', not a Tensor
```

Here’s a way to do an average (assuming you don’t care about back-propagating through the average):

```python
W = Variable(w_init, requires_grad=True)
W_avg = torch.zeros(W.size()).type(dtype)  # a Tensor, not a Variable, since you don't care about gradients
for i in range(nb_iterations):
    # some GD stuff...
    W_avg += (1 / nb_iter) * W.data
```

Note the `W.data`. This extracts the Tensor from the Variable wrapper, since you don’t care about back-propagating through the average.
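In current PyTorch (0.4+), `Variable` has been merged into `Tensor`, and the same idea is usually written with `torch.no_grad()` instead of `.data`. A minimal sketch (the shapes and iteration count here are just placeholders):

```python
import torch

nb_iterations = 5
W = torch.randn(3, 3, requires_grad=True)  # parameter being optimized
W_avg = torch.zeros_like(W)                # plain tensor; no gradient history

for i in range(nb_iterations):
    # ... some GD stuff on W would go here ...
    with torch.no_grad():                  # keep the averaging out of the autograd graph
        W_avg += W / nb_iterations

print(W_avg.requires_grad)  # False
```

The `no_grad()` block plays the same role as `.data`: the accumulation never becomes part of the graph, so backprop through `W` is unaffected.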


@colesbury

How would I go about using this to write a mean-only batch normalization module?

`out = x - torch.mean(x, dim=0)`


@smth

Admittedly, I'm still not getting it. I tried creating a mean-only BN module:

```python
def __init__(self, num_features, momentum=0.1):
    super(_MeanOnlyBatchNorm, self).__init__()
    self.num_features = num_features
    self.momentum = momentum
    self.register_buffer('running_mean', torch.zeros(num_features))
    self.reset_parameters()

def forward(self, input):
    mu = torch.mean(input, dim=0).data
    self.running_mean = self.momentum * mu + (1 - self.momentum) * self.running_mean
```

but get errors. I’m lost.

EDIT:

I’ve settled on this, but I'm still not sure if it’s correct.

```python
def forward(self, input):
    # mu = Variable(torch.mean(input, dim=0, keepdim=True).data, requires_grad=False)
    if self.training:
        mu = input.mean(dim=0, keepdim=True)
        mu = self.momentum * mu + (1 - self.momentum) * Variable(self.running_mean)
        self.running_mean = mu.data
    else:
        mu = Variable(self.running_mean)
    return input - mu
```

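For what it's worth, the whole mean-only module can be sketched in current PyTorch without `Variable` at all, updating the registered buffer in place (class and variable names here are my own, not from the thread):

```python
import torch
import torch.nn as nn

class MeanOnlyBatchNorm(nn.Module):
    """Subtracts a running estimate of the per-feature mean (no variance scaling)."""
    def __init__(self, num_features, momentum=0.1):
        super().__init__()
        self.momentum = momentum
        self.register_buffer('running_mean', torch.zeros(num_features))

    def forward(self, x):
        if self.training:
            mu = x.mean(dim=0, keepdim=True)
            with torch.no_grad():  # buffer update stays out of the graph
                self.running_mean.mul_(1 - self.momentum).add_(self.momentum * mu.squeeze(0))
            return x - mu
        return x - self.running_mean

bn = MeanOnlyBatchNorm(4)
x = torch.randn(8, 4)
out = bn(x)
print(out.shape)  # torch.Size([8, 4])
```

In training mode the batch mean is subtracted (so gradients flow through it), while the buffer tracks an exponential moving average used at eval time.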
The fastest and most efficient way is to use the `AvgPool1d` module. Then you get the advantage of parallelization and can also run the calculation on a GPU.

You can specify the period of the simple moving average via the kernel size.

https://pytorch.org/docs/stable/generated/torch.nn.AvgPool1d.html

```python
import torch
import torch.nn as nn

x = torch.arange(0, 10, 1, dtype=torch.float).view(1, 1, -1)  # (batch, channels, length); avoid shadowing the built-in 'list'
kernel = 3
sma = nn.AvgPool1d(kernel_size=kernel, stride=1)
out = sma(x)  # length shrinks to 10 - kernel + 1 = 8
```

Of course, you’ll have to decide how you want the ends handled. You could repeat the first and last values and `torch.cat` them so that your output is the same size as the input.
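That edge handling might look like the following sketch, assuming an odd kernel size, with the first and last values replicated before pooling:

```python
import torch
import torch.nn as nn

x = torch.arange(0, 10, dtype=torch.float).view(1, 1, -1)
kernel = 3
pad = kernel // 2  # assumes an odd kernel size

# Replicate the edge values so the averaged output keeps the input length
x_padded = torch.cat([x[..., :1].expand(-1, -1, pad),
                      x,
                      x[..., -1:].expand(-1, -1, pad)], dim=-1)

sma = nn.AvgPool1d(kernel_size=kernel, stride=1)
out = sma(x_padded)
print(out.shape)  # torch.Size([1, 1, 10])
```

Note the edge values are no longer a true 3-point average (e.g. the first output is `mean(0, 0, 1)`), which is the usual trade-off with replication padding.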
