tf.map_fn or theano.scan equivalent in PyTorch

Hello. I’m new to PyTorch. I am coming from Keras, Theano, and TensorFlow, and I love the simplicity and performance of PyTorch so far.

I have some custom loss functions that use either Theano’s scan (more flexibility than I need) or TensorFlow’s tf.map_fn, and I would like to port them over to PyTorch. I found torch.map(), but that seems to apply a simple function to all elements in the tensor.

Is there a map_fn that works on the GPU that I have not yet found?

For equivalents of theano.scan, use Python for and while loops.
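For example, a minimal map_fn-style sketch (map_fn here is a hypothetical helper, not a PyTorch API): apply a function to each slice along the first dimension with a plain loop and stack the results back into a tensor.

import torch

def map_fn(fn, xs):
    # iterate over the first dimension, like tf.map_fn, then restack
    return torch.stack([fn(x) for x in xs])

xs = torch.randn(4, 3)
print(map_fn(lambda row: row * 2, xs))  # shape (4, 3)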

Very helpful. Will the resulting loss function be differentiable with autograd? I was under the impression that I needed to use only torch functions for my loss function.

The result will be differentiable with autograd – you can use anything in Python as long as you don’t manually modify a Variable’s .data attribute; autograd will throw an error if you do something non-differentiable.
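To illustrate, here is a minimal sketch (not from the thread) of a loss built with an ordinary Python while loop that still backpropagates:

import torch
from torch.autograd import Variable

x = Variable(torch.ones(3), requires_grad=True)
loss = 0
i = 0
while i < 4:  # plain Python control flow
    loss = loss + (x * i).sum()
    i += 1
loss.backward()  # autograd traced every op executed inside the loop
print(x.grad)    # each element's grad is 0 + 1 + 2 + 3 = 6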

I have to say it is incredible to me that this works. So much simpler than the TF or Theano approach to the problem. Working example here:

import torch
from torch.autograd import Variable
import numpy as np

np.random.seed(20170525)
x = np.random.rand(100)
y = np.ones(100) + np.random.rand(100)

def np_mse(a, b):
    return np.mean((a - b) ** 2)

# reference computation in NumPy: MSE of each of the 5 columns, then summed
mse_parts = [np_mse(x.reshape(20, 5)[:, i], y.copy().reshape(20, 5)[:, i]) for i in range(5)]
print(sum(mse_parts))

# same computation on Variables, built with a plain Python list comprehension
x = Variable(torch.from_numpy(x.astype('float32')), requires_grad=True)
y = Variable(torch.from_numpy(y.astype('float32')), requires_grad=True)
out = sum([torch.mean((x.view(20, 5)[:, i] - y.view(20, 5)[:, i]) ** 2) for i in range(5)])
print(out)
out.backward()  # works!!

Interestingly, np.mean() works but np.std() does NOT work. Any insight into why?
Looks like the variable that comes back has a strange format:

out = np.std([torch.mean((x.view(20,5)[:,i]-y.view(20,5)[:,i])**2) for i in range(5)])

0.149130464538
[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[Variable containing:
 0.1491
[torch.FloatTensor of size 1]
]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]
Traceback (most recent call last):
  File "pytorch_customloss.py", line 17, in <module>
    out.backward() # works!!
AttributeError: 'numpy.ndarray' object has no attribute 'backward'

You should not use NumPy functions on torch Variables; np.std converts its input to a NumPy array, so you get back an ndarray instead of a Variable and the autograd graph is lost.
You can use torch.std instead.

Will torch.std work over a list of torch Variables? That is what I get back after my for loop.

I get this error with my example. I feel like I must be missing something fundamental here in my approach. Please advise.

out = torch.std(results)
*** TypeError: torch.std received an invalid combination of arguments - got (list), but expected one of:
 * (torch.FloatTensor source)
      didn't match because some of the arguments have invalid types: (list)
 * (torch.FloatTensor source, int dim)

No, torch.std requires a tensor or a Variable. But if your tensors all have the same size, you can stack them along a new dimension (via torch.stack, for example) and apply torch.std over that dimension.
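For instance, a minimal sketch along the lines of the earlier example (the shapes and the per-column loop are assumptions carried over from it):

import torch
from torch.autograd import Variable

x = Variable(torch.randn(20, 5), requires_grad=True)
y = Variable(torch.randn(20, 5), requires_grad=True)

# one scalar loss per column, collected in a Python list
results = [torch.mean((x[:, i] - y[:, i]) ** 2) for i in range(5)]

# stack the scalars into a single 1-D tensor; torch.std stays differentiable
out = torch.std(torch.stack(results))
out.backward()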

That works. Thanks for the pointer. Very helpful.