Hi PyTorch,
Without getting too bogged down in details: I have a function `f` that I'd like to apply to each row along the first dimension of a tensor. Is there a method like `apply` that doesn't break autograd but still parallelizes well? Or is my current approach the best way to do this?
```python
for i in range(y.size(0)):
    f(x, y[i])
```
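For concreteness, here is a minimal self-contained sketch of that loop pattern, with a made-up `f` and example shapes (both are assumptions, not from my actual code) showing that the Python loop preserves autograd even though it runs serially:

```python
import torch

# Hypothetical row-wise function; any differentiable op on (x, row) works here.
def f(x, row):
    return (x * row).sum()

x = torch.randn(4, requires_grad=True)
y = torch.randn(3, 4)

# Apply f to each row of y along dim 0 and stack the per-row results.
# The Python loop keeps the autograd graph intact but executes serially.
out = torch.stack([f(x, y[i]) for i in range(y.size(0))])
out.sum().backward()  # gradients flow back to x through every row
```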