Efficiently computing max drawdown

I’m working with financial data representing a portfolio, and one of the things I want to optimize for is max drawdown. The drawdown at any point is the fractional drop from the running maximum portfolio value so far to the current value (i.e. 1 − current / running max).

Unoptimized Python code to compute this might look like this:

def max_drawdown(vec):
    drawdown = 0.0
    max_seen = vec[0]
    for val in vec[1:]:
        max_seen = max(max_seen, val)                 # running maximum so far
        drawdown = max(drawdown, 1 - val / max_seen)  # largest fractional drop so far
    return drawdown
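To make the expected behavior concrete, here is a quick check on a tiny made-up series (the numbers are just for illustration):

```python
def max_drawdown(vec):
    drawdown = 0.0
    max_seen = vec[0]
    for val in vec[1:]:
        max_seen = max(max_seen, val)                 # running maximum so far
        drawdown = max(drawdown, 1 - val / max_seen)  # largest fractional drop so far
    return drawdown

# Portfolio peaks at 120, then dips to 90: drawdown = 1 - 90/120 = 0.25
print(max_drawdown([100.0, 120.0, 90.0, 110.0]))  # 0.25
```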

This can be improved using numpy.maximum.accumulate:

import numpy as np

def max_drawdown(vec):
    maximums = np.maximum.accumulate(vec)  # running maximum at each index
    drawdowns = 1 - vec / maximums         # fractional drop at each index
    return np.max(drawdowns)
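As a sanity check, the vectorized version gives the same answer on the same made-up series as the loop:

```python
import numpy as np

def max_drawdown(vec):
    maximums = np.maximum.accumulate(vec)  # running maximum at each index
    drawdowns = 1 - vec / maximums         # fractional drop at each index
    return np.max(drawdowns)

vec = np.array([100.0, 120.0, 90.0, 110.0])
print(max_drawdown(vec))  # 0.25
```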

Translating the naive for-loop implementation to PyTorch is straightforward, but it makes my model incredibly slow during autograd. I want to optimize the code, ideally with something that looks similar to the np.maximum.accumulate version above.

Is it possible to do this with PyTorch? If so, how?
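For reference, the closest analogue I’ve come across is torch.cummax, which I believe plays the role of np.maximum.accumulate. A sketch of what I’m imagining (same made-up series as above; I haven’t verified the autograd performance):

```python
import torch

def max_drawdown(vec):
    # torch.cummax returns (values, indices); values is the running maximum
    maximums, _ = torch.cummax(vec, dim=0)
    drawdowns = 1 - vec / maximums  # fractional drop at each index
    return drawdowns.max()

vec = torch.tensor([100.0, 120.0, 90.0, 110.0])
print(max_drawdown(vec))  # tensor(0.2500)
```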
