This works:

```
import torch
import torch.nn as nn
from torch import autograd

m = nn.Conv2d(16, 32, (3, 3)).float()
loss = nn.NLLLoss2d()
input = autograd.Variable(torch.randn(3, 16, 10, 10))
target = autograd.Variable(torch.LongTensor(3, 8, 8).random_(0, 4))
input = m(input)
output = loss(input, target)
```

But this one does not work:

```
import numpy as np
import torch
import torch.nn as nn
from torch.autograd import Variable

m = nn.Conv2d(16, 32, (3, 3)).float()
loss = nn.NLLLoss2d()
input = Variable(torch.randn(3, 16, 10, 10))
target = np.arange(3 * 32 * 8 * 8).reshape(3, 32, 8, 8).astype('int64')
target = Variable(torch.from_numpy(target))
input = m(input)
output = loss(input, target)
```

As far as I can tell, the input and target of both snippets have the same shape and type, but the latter one (converted from a numpy ndarray) always throws a runtime error:

**RuntimeError: only batches of spatial targets supported (3D tensors) but got targets of dimension: 4 at /b/wheel/pytorch-src/torch/lib/THNN/generic/SpatialClassNLLCriterion.c:39**
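For reference, here is a quick check I can run on just the two target constructions, rebuilt with numpy only (no torch required; `target_a` mirrors the first snippet's `torch.LongTensor(3, 8, 8).random_(0, 4)`):

```python
import numpy as np

# Mirrors the first (working) snippet's target:
# torch.LongTensor(3, 8, 8).random_(0, 4) holds class indices in [0, 4).
target_a = np.random.randint(0, 4, size=(3, 8, 8)).astype('int64')

# The exact target construction from the second (failing) snippet.
target_b = np.arange(3 * 32 * 8 * 8).reshape(3, 32, 8, 8).astype('int64')

print(target_a.shape)  # (3, 8, 8)
print(target_b.shape)  # (3, 32, 8, 8)
```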

Why does this happen? What is the correct way to convert a numpy array to a torch tensor? Thanks.