Hello Ahmed!
As written, this doesn’t actually have a decay factor. (Each value of y only has at most a single power of w.)
I think you mean to write something like
y[i] = w * y[i - 1] + x[i]
You can get what (I think) you want by pre-multiplying x with powers of the weight, and then using cumsum(), all using pytorch tensor operations, and no explicit loops.
Here is a (pytorch version 0.3.0) demonstration script:
import torch
torch.__version__

# weighted cumulative sum with tensor operations
def wtCumSum (vector, wt):
    wts = wt**((len (vector) - 1.0) - torch.FloatTensor (range (len (vector))))
    return torch.cumsum (wts * vector, dim = 0) / wts

torch.manual_seed (2020)

vector = torch.randn (10,)
wt = 0.5

wcs = wtCumSum (vector, wt)

vector
wt
wcs

# check against non-tensor iterative calculation
wcs2 = []
wcs2.append (vector[0])
for i in range (len(vector) - 1):
    wcs2.append (wt * wcs2[i] + vector[i+1])

wcs2

wcs - torch.FloatTensor (wcs2)
And here is the output:
>>> import torch
>>> torch.__version__
'0.3.0b0+591e73e'
>>>
>>> # weighted cumulative sum with tensor operations
... def wtCumSum (vector, wt):
...     wts = wt**((len (vector) - 1.0) - torch.FloatTensor (range (len (vector))))
...     return torch.cumsum (wts * vector, dim = 0) / wts
...
>>> torch.manual_seed (2020)
<torch._C.Generator object at 0x000001BEE6816630>
>>>
>>> vector = torch.randn (10,)
>>> wt = 0.5
>>>
>>> wcs = wtCumSum (vector, wt)
>>>
>>> vector
1.2372
-0.9604
1.5415
-0.4079
0.8806
0.0529
0.0751
0.4777
-0.6759
-2.1489
[torch.FloatTensor of size 10]
>>> wt
0.5
>>> wcs
1.2372
-0.3418
1.3706
0.2775
1.0193
0.5626
0.3564
0.6559
-0.3480
-2.3229
[torch.FloatTensor of size 10]
>>>
>>> # check against non-tensor iterative calculation
... wcs2 = []
>>> wcs2.append (vector[0])
>>> for i in range (len(vector) - 1):
...     wcs2.append (wt * wcs2[i] + vector[i+1])
...
>>> wcs2
[1.2372283935546875, -0.34178459644317627, 1.370636761188507, 0.2774530053138733, 1.0193056166172028, 0.5625850260257721, 0.3564097583293915, 0.6558640152215958, -0.34795021265745163, -2.3229051791131496]
>>>
>>> wcs - torch.FloatTensor (wcs2)
0
0
0
0
0
0
0
0
0
0
[torch.FloatTensor of size 10]
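(For what it’s worth, the script above is written for pytorch 0.3.0. Here is a sketch of the same trick for a recent version of pytorch — the function name wt_cumsum and the use of torch.arange() are my choices, not part of the 0.3.0 script:)

```python
import torch

def wt_cumsum (vector, wt):
    # powers of wt: wt**(n-1), wt**(n-2), ..., wt**0
    n = len (vector)
    wts = wt ** ((n - 1.0) - torch.arange (n, dtype = vector.dtype))
    # pre-multiply, cumsum, then divide the powers back out
    return torch.cumsum (wts * vector, dim = 0) / wts

torch.manual_seed (2020)
v = torch.randn (10)
wcs = wt_cumsum (v, 0.5)

# check against the iterative recurrence y[i] = wt * y[i-1] + x[i]
y = [v[0]]
for i in range (1, len (v)):
    y.append (0.5 * y[-1] + v[i])
check = torch.stack (y)
```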
Please note that this illustrates the idea, but is written only for a
one-dimensional vector.
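(If you want a multi-dimensional version, one possible generalization — my own sketch for a recent pytorch, applying the weighted cumsum along a chosen dim by reshaping the powers so that they broadcast — would be:)

```python
import torch

def wt_cumsum_nd (x, wt, dim = 0):
    # weighted cumulative sum along `dim` via broadcastable powers of wt
    n = x.size (dim)
    wts = wt ** ((n - 1.0) - torch.arange (n, dtype = x.dtype))
    shape = [1] * x.dim()
    shape[dim] = n                 # e.g. (1, n) for a 2-d x with dim = 1
    wts = wts.view (shape)         # broadcasts against x along `dim`
    return torch.cumsum (wts * x, dim = dim) / wts

torch.manual_seed (0)
x = torch.randn (4, 10)
out = wt_cumsum_nd (x, 0.5, dim = 1)
```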
Also note that if wt differs much from one and vector is rather long, then the powers of wt will underflow to zero, and you will get 0 / 0 nans in the result of this version.
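(A quick demonstration of that failure mode, written for a recent pytorch — with wt = 0.5 and a vector of length 200, the largest power is 0.5**199, far below float32’s smallest subnormal, so it underflows to zero and the division produces nans:)

```python
import torch

n = 200
wt = 0.5
torch.manual_seed (2020)
v = torch.randn (n)

wts = wt ** ((n - 1.0) - torch.arange (n, dtype = torch.float32))
# wts[0] = 0.5**199 has underflowed to exactly zero in float32
wcs = torch.cumsum (wts * v, dim = 0) / wts
# the leading entries of wcs are 0 / 0, i.e., nan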
So for real work, you should write out the loop with the conventional iterative formula, losing the performance benefit of tensor operations (or implement a tensor version of weighted cumsum()).
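(A hypothetical sketch of that loop version — names and details are mine — which is numerically safe because it never forms large negative powers of wt:)

```python
import torch

def wt_cumsum_loop (vector, wt):
    # conventional recurrence y[i] = wt * y[i - 1] + x[i]
    # (a python loop rather than one tensor operation, so slower)
    out = torch.empty_like (vector)
    out[0] = vector[0]
    for i in range (1, len (vector)):
        out[i] = wt * out[i - 1] + vector[i]
    return out
```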
Best.
K. Frank