[solved] Strange result of log_softmax

I’m implementing a bag-of-words model. This is my model:

import torch
import torch.nn as nn
import torch.nn.functional as F

class BOWModel(nn.Module):
    def __init__(self, nwords, ntags):
        super(BOWModel, self).__init__()
        self.ntags = ntags
        self.W = nn.Parameter(torch.rand(ntags, nwords))
        self.b = nn.Parameter(torch.rand(ntags, 1))

    def forward(self, vec):
        # vec holds word indices; sum the corresponding columns of W to get one score per tag
        res = torch.sum(self.W[:, vec.data], dim=1).view(self.ntags, 1)
        res.add_(self.b)
        res = F.log_softmax(res)
        return res

But it always returns a zero vector.
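The same thing happens outside the model. A minimal snippet (a sketch, assuming the same Variable-era API used above):

import torch
import torch.nn.functional as F
from torch.autograd import Variable

x = Variable(torch.rand(5, 1))
print(F.log_softmax(x))  # prints a 5x1 Variable of zeros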
Before res = F.log_softmax(res), this is what res looks like in the debugger:

res = {Variable} Variable containing:\n 7.1540\n 8.9048\n 7.4846\n 8.1397\n 8.3597\n[torch.FloatTensor of size 5x1]\n
 _backward_hooks = {NoneType} None
 _execution_engine = {_EngineBase} <torch._C._EngineBase object at 0x101ad15b0>
 _fallthrough_methods = {set} {'dim', 'is_set_to', 'ndimension', 'is_signed', 'nelement', 'numel', 'element_size', 'is_contiguous', 'is_cuda', 'size', 'stride', 'get_device'}
 _grad = {NoneType} None
 _grad_fn = {AddBackward} <torch.autograd.function.AddBackward object at 0x114f2ab88>
 _torch = {type} <class 'torch.autograd.variable.Variable._torch'>
 _version = {int} 1
 data = {FloatTensor} \n 7.1540\n 8.9048\n 7.4846\n 8.1397\n 8.3597\n[torch.FloatTensor of size 5x1]\n
 grad = {NoneType} None
 grad_fn = {AddBackward} <torch.autograd.function.AddBackward object at 0x114f2ab88>
 is_leaf = {bool} False
 output_nr = {int} 0
 requires_grad = {bool} True
 volatile = {bool} False

After res = F.log_softmax(res), it looks like this:

res = {Variable} Variable containing:\n 0\n 0\n 0\n 0\n 0\n[torch.FloatTensor of size 5x1]\n
 _backward_hooks = {NoneType} None
 _execution_engine = {_EngineBase} <torch._C._EngineBase object at 0x101ad15b0>
 _fallthrough_methods = {set} {'dim', 'is_set_to', 'ndimension', 'is_signed', 'nelement', 'numel', 'element_size', 'is_contiguous', 'is_cuda', 'size', 'stride', 'get_device'}
 _grad = {NoneType} None
 _grad_fn = {LogSoftmaxBackward} <torch.autograd.function.LogSoftmaxBackward object at 0x114f2ad68>
 _torch = {type} <class 'torch.autograd.variable.Variable._torch'>
 _version = {int} 0
 data = {FloatTensor} \n 0\n 0\n 0\n 0\n 0\n[torch.FloatTensor of size 5x1]\n
 grad = {NoneType} None
 grad_fn = {LogSoftmaxBackward} <torch.autograd.function.LogSoftmaxBackward object at 0x114f2ad68>
 is_leaf = {bool} False
 output_nr = {int} 0
 requires_grad = {bool} True
 volatile = {bool} False

I stepped into the definition of log_softmax but still have no idea what is going wrong…

I solved it: I had to pass dim explicitly. res has shape (ntags, 1), and without an explicit dim, log_softmax here normalizes over the size-1 column dimension, so every entry becomes log(1) = 0. Passing dim=0 normalizes over the tags and gives the expected result.
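For reference, a sketch of the corrected forward with the dimension spelled out:

    def forward(self, vec):
        res = torch.sum(self.W[:, vec.data], dim=1).view(self.ntags, 1)
        res.add_(self.b)
        # normalize over the ntags dimension (dim=0), not the size-1 column dimension
        return F.log_softmax(res, dim=0)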