Does scatter_ support autograd?


(No Name) #1

Given a categorical feature map, for example mat (batch x nclasses x H x W), I’d like to encode it to one-hot format. I know one way is to use scatter_. My question is: does this kind of operation support autograd?

Thanks.

result1 = torch.unsqueeze(results, 1)  # add a class dimension: batch x 1 x H x W
results_one_hot = Variable(torch.cuda.FloatTensor(inputSZ).zero_())
results_one_hot.scatter_(1, result1, 1)  # write a 1 at each class index along dim 1
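
For context, here is a minimal self-contained sketch of the same one-hot pattern (the sizes and the names labels, n_classes, one_hot are illustrative, not from the code above):

import torch

# Illustrative sizes
batch, n_classes, H, W = 2, 4, 3, 3

# Integer class labels, shape: batch x H x W
labels = torch.LongTensor(batch, H, W).random_(0, n_classes)

# Add a class dimension so the index matches the target: batch x 1 x H x W
index = labels.unsqueeze(1)

# Zero-filled one-hot target: batch x n_classes x H x W
one_hot = torch.zeros(batch, n_classes, H, W)

# Write a 1 at each pixel's class index along dim 1
one_hot.scatter_(1, index, 1)

print(one_hot[0, :, 0, 0])  # exactly one entry per pixel is 1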

(No Name) #2

Personally, I don’t think this operation supports autograd.


(Alban D) #3

Hi,
It does support autograd, but it can compute gradients only with respect to the input tensor, not the indices (gradients with respect to the indices do not exist).
This code snippet should make it clear:

import torch
from torch.autograd import Variable


inp = Variable(torch.zeros(10), requires_grad=True)
# You need requires_grad=False because scatter does not give gradients wrt the indices
indices = Variable(torch.Tensor([2, 5]).long(), requires_grad=False)

# We need the clone; otherwise we would modify a leaf Variable in place
inp_clone = inp.clone()
inp_clone.scatter_(0, indices, 1)

inp_clone.sum().backward()
# Values not modified by the scatter get a gradient of 1
# Values changed by the scatter get a gradient of 0, since they were overwritten
print(inp.grad)
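
With the indices above, this prints a gradient of 1 everywhere except at positions 2 and 5, which are 0 because those entries were overwritten by the scatter.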

(No Name) #4

Thanks. @albanD

I’m wondering how I can tell whether an operation supports autograd or not. For example, torch.dot doesn’t support autograd, but torch.mm does. I’m not sure what rule decides whether an operation supports autograd.


(Alban D) #5

Hi,

Any function that works when you feed it Variables supports autograd.
torch.dot actually supports autograd:

import torch
from torch.autograd import Variable

a = Variable(torch.rand(10))
out = torch.dot(a, a)
assert isinstance(out, Variable)  # passes

The output is actually a Variable with a Tensor containing one element.
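
To confirm that gradients actually flow through torch.dot, here is a quick sketch (the requires_grad flag and the gradient check are mine, not from the snippet above):

import torch
from torch.autograd import Variable

a = Variable(torch.rand(10), requires_grad=True)
out = torch.dot(a, a)  # one-element Variable
out.backward()

# d(a . a) / da = 2a, so the difference should be ~0
print((a.grad - 2 * a).abs().max())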


(No Name) #6

Thanks.

But I’m quite confused: on the official website, torch.dot is documented as returning a float. How can the result be a Variable without packing it myself?

torch.dot(tensor1, tensor2) → float


(Alban D) #7

Unfortunately, right now (this will change in the near future), a Variable can only contain a Tensor, not directly a number. To get around this, the function returns a Variable containing a one-element Tensor instead of a Variable containing just a number. See below:

import torch
from torch.autograd import Variable

a = torch.rand(10)

print("Operating on Tensor")
print(torch.dot(a, a))  # plain Python float

v_a = Variable(a)

print("Operating on Variable")
print(torch.dot(v_a, v_a))  # Variable containing a one-element Tensor
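
If you need the plain Python number out of the one-element result, a common pattern with this (pre-0.4) API is to index into .data (a sketch, reusing v_a from above):

value = torch.dot(v_a, v_a).data[0]  # plain Python float
print(value)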

(No Name) #8

I see. @albanD

If we operate on a plain Tensor, torch.dot returns a float.
If we operate on a Variable, torch.dot returns a Variable containing a one-element Tensor.

Then I think the official documentation should make this clearer. I like PyTorch a lot, but some parts of the official documentation are not quite clear.


(Alban D) #9

It is currently a work in progress to make Variable able to contain either a Tensor or a Python number.
When that is released, this will work as you expect.


(No Name) #10

Thanks a lot. Looking forward to it.