Sparse Embedding Error

Hi,

the code I'm running uses the SGD optimizer without momentum and the embedding function from torch.nn.functional.
If I run the code below with sparse=True, I get the following error: "AttributeError: 'torch.sparse.FloatTensor' object has no attribute 'ge'". If I pass sparse=False, everything works fine.

Code:

    embedded = embedding(inputs, self.weight,
                         max_norm=self.max_norm,
                         norm_type=self.norm_type,
                         scale_grad_by_freq=self.scale_grad_by_freq,
                         sparse=self.sparse)
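
For reference, here is a minimal standalone sketch that reproduces the problem outside AllenNLP (written against a recent PyTorch, where the exact error message differs slightly but clamp on a sparse gradient still fails); the hook mimics the trainer's gradient clipping:

    import torch
    import torch.nn.functional as F

    weight = torch.randn(10, 3, requires_grad=True)
    # Same kind of per-parameter clipping hook the trainer registers.
    weight.register_hook(lambda grad: grad.clamp(-5.0, 5.0))

    inputs = torch.tensor([1, 2, 4])
    # sparse=True makes the gradient w.r.t. weight a sparse tensor.
    embedded = F.embedding(inputs, weight, sparse=True)
    embedded.sum().backward()  # clamp on the sparse gradient fails here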

Thanks,
Maksym

Here is the traceback:

    Traceback (most recent call last):
      File "run.py", line 17, in <module>
        main(prog="python run.py")
      File "/home/del/Desktop/RESEARCH-PROJECTS/umt/sp/allennlp/commands/__init__.py", line 77, in main
        args.func(args)
      File "/home/del/Desktop/RESEARCH-PROJECTS/umt/sp/allennlp/commands/train.py", line 73, in train_model_from_args
        train_model_from_file(args.param_path, args.serialization_dir)
      File "/home/del/Desktop/RESEARCH-PROJECTS/umt/sp/allennlp/commands/train.py", line 89, in train_model_from_file
        return train_model(params, serialization_dir)
      File "/home/del/Desktop/RESEARCH-PROJECTS/umt/sp/allennlp/commands/train.py", line 178, in train_model
        trainer.train()
      File "/home/del/Desktop/RESEARCH-PROJECTS/umt/sp/allennlp/training/trainer.py", line 374, in train
        train_metrics = self._train_epoch(epoch)
      File "/home/del/Desktop/RESEARCH-PROJECTS/umt/sp/allennlp/training/trainer.py", line 224, in _train_epoch
        loss.backward()
      File "/home/del/anaconda2/envs/umt/lib/python3.6/site-packages/torch/autograd/variable.py", line 167, in backward
        torch.autograd.backward(self, gradient, retain_graph, create_graph, retain_variables)
      File "/home/del/anaconda2/envs/umt/lib/python3.6/site-packages/torch/autograd/__init__.py", line 99, in backward
        variables, grad_variables, retain_graph)
      File "/home/del/Desktop/RESEARCH-PROJECTS/umt/sp/allennlp/training/trainer.py", line 159, in <lambda>
        clip_function = lambda grad: grad.clamp(-self._grad_clipping, self._grad_clipping)
      File "/home/del/anaconda2/envs/umt/lib/python3.6/site-packages/torch/autograd/variable.py", line 336, in clamp
        return Clamp.apply(self, min, max)
      File "/home/del/anaconda2/envs/umt/lib/python3.6/site-packages/torch/autograd/_functions/pointwise.py", line 99, in forward
        ctx._mask = (i.ge(min_val) * i.le(max_val))
    AttributeError: 'torch.sparse.FloatTensor' object has no attribute 'ge'

I'm not sure the SGD optimizer supports sparse gradients, but judging from the traceback, the failure actually happens in the trainer's gradient-clipping hook rather than in the optimizer itself: clip_function calls grad.clamp(...), and clamp is implemented in terms of the ge (>=) and le (<=) ops, which aren't defined for sparse tensors. Perhaps try an optimizer built for sparse gradients, such as http://pytorch.org/docs/master/optim.html#torch.optim.SparseAdam, and disable grad_clipping, since the hook would fail regardless of the optimizer.
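
For illustration, here is a minimal sketch of the two-optimizer pattern SparseAdam typically requires (the model, parameter split, and hyperparameters are my assumptions, not AllenNLP's API, and it targets a recent PyTorch). SparseAdam rejects dense gradients, so the dense parameters go to a regular optimizer:

    import torch
    import torch.nn as nn

    # Hypothetical model: a sparse embedding feeding a dense linear layer.
    class TinyModel(nn.Module):
        def __init__(self):
            super().__init__()
            self.embed = nn.Embedding(1000, 16, sparse=True)
            self.linear = nn.Linear(16, 2)

        def forward(self, x):
            return self.linear(self.embed(x).mean(dim=1))

    model = TinyModel()
    # SparseAdam only accepts parameters that receive sparse gradients,
    # so split the parameters between two optimizers.
    opt_sparse = torch.optim.SparseAdam(model.embed.parameters(), lr=1e-3)
    opt_dense = torch.optim.Adam(model.linear.parameters(), lr=1e-3)

    x = torch.randint(0, 1000, (4, 7))
    model(x).sum().backward()
    opt_sparse.step()
    opt_dense.step()

And if you still need gradient clipping, one possible workaround (my own sketch, not something AllenNLP provides) is a hook that clamps only the stored values of a sparse gradient:

    def clip_grad(grad, clip=5.0):
        # Sparse gradients: clamp only the non-zero values, keep the indices.
        if grad.is_sparse:
            grad = grad.coalesce()
            return torch.sparse_coo_tensor(grad.indices(),
                                           grad.values().clamp(-clip, clip),
                                           grad.size())
        return grad.clamp(-clip, clip)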