It looks like Conv1d only accepts FloatTensor input, and it errors out when fed a DoubleTensor. Here is a minimal example to reproduce the issue:
import torch
from torch.autograd import Variable
import torch.nn as nn

x_stub = Variable(torch.DoubleTensor(100, 15, 12).normal_(0, 1))  # double-precision input
conv_1 = nn.Conv1d(15, 15, 3)  # layer parameters are FloatTensors by default
y = conv_1(x_stub)  # raises RuntimeError
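A quick check, assuming it is purely a type mismatch, is to print the types of the input and the layer's weight directly (expected output shown in the comments):

print(x_stub.data.type())        # torch.DoubleTensor
print(conv_1.weight.data.type()) # torch.FloatTensor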
To show what's going on, I added a print line to the source of conv1d in torch/nn/functional.py:
def conv1d(input, weight, bias=None, stride=1, padding=0, dilation=1,
           groups=1):
    f = ConvNd(_single(stride), _single(padding), _single(dilation), False,
               _single(0), groups, torch.backends.cudnn.benchmark,
               torch.backends.cudnn.enabled)
    print(input, weight, bias)  # <= this is the line I added
    return f(input, weight, bias)
When running the code, it prints the arguments (note the input is a DoubleTensor while the weight and bias are FloatTensors) and then gives me the following error message:
Variable containing:
( 0 ,.,.) =
1 1 1 ... 0 0 0
0 0 0 ... 0 0 0
0 0 0 ... 0 0 0
... ⋱ ...
0 0 0 ... 0 0 0
0 0 0 ... 0 0 0
0 0 0 ... 0 0 1
...
(127,.,.) =
1 1 1 ... 0 0 0
0 0 0 ... 0 0 0
0 0 0 ... 0 0 0
... ⋱ ...
0 0 0 ... 0 0 0
0 0 0 ... 0 1 0
0 0 0 ... 0 0 1
[torch.DoubleTensor of size 128x12x15]
Parameter containing:
(0 ,.,.) =
0.1286 -0.1301
-0.0871 0.0397
0.0317 0.0072
-0.0406 0.0803
-0.1885 0.1544
0.1090 -0.1772
-0.1818 0.0865
-0.1696 -0.0973
-0.1179 -0.0781
-0.0745 -0.1268
0.1303 -0.0950
0.0804 -0.1008
...
(11,.,.) =
-0.1984 -0.1655
-0.0531 -0.0365
-0.1009 0.2038
-0.0382 0.1492
-0.1048 -0.1378
0.0774 0.0515
-0.0548 -0.1791
-0.1805 0.0558
-0.1805 -0.0603
-0.1938 0.0465
-0.1470 -0.0298
-0.1597 -0.1718
[torch.FloatTensor of size 12x12x2]
Parameter containing:
0.2023
0.0939
-0.2037
0.1501
-0.0270
0.0494
0.0637
-0.1420
0.1512
-0.1538
-0.1828
0.0366
[torch.FloatTensor of size 12]
Traceback (most recent call last):
File "/Users/usr/projects/deep_learning_notes/pytorch_playground/grammar_variational_autoencoder/grammar_vae.py", line 69, in <module>
losses += sess.train(train_loader, epoch)
File "/Users/usr/projects/deep_learning_notes/pytorch_playground/grammar_variational_autoencoder/grammar_vae.py", line 26, in train
recon_batch, mu, log_var = self.model(data)
File "/Users/usr/anaconda/envs/deep-learning/lib/python3.6/site-packausrs/torch/nn/modules/module.py", line 206, in __call__
result = self.forward(*input, **kwargs)
File "/Users/usr/projects/deep_learning_notes/pytorch_playground/grammar_variational_autoencoder/model.py", line 90, in forward
mu, log_var = self.encoder(x)
File "/Users/usr/anaconda/envs/deep-learning/lib/python3.6/site-packausrs/torch/nn/modules/module.py", line 206, in __call__
result = self.forward(*input, **kwargs)
File "/Users/usr/projects/deep_learning_notes/pytorch_playground/grammar_variational_autoencoder/model.py", line 47, in forward
h = self.conv_1(x)
File "/Users/usr/anaconda/envs/deep-learning/lib/python3.6/site-packausrs/torch/nn/modules/module.py", line 206, in __call__
result = self.forward(*input, **kwargs)
File "/Users/usr/anaconda/envs/deep-learning/lib/python3.6/site-packausrs/torch/nn/modules/conv.py", line 143, in forward
self.padding, self.dilation, self.groups)
File "/Users/usr/anaconda/envs/deep-learning/lib/python3.6/site-packausrs/torch/nn/functional.py", line 69, in conv1d
return f(input, weight, bias)
RuntimeError: expected Double tensor (got Float tensor)
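For completeness: the error goes away once the input and the parameters are the same type. A minimal sketch of two possible workarounds, based on the repro above (either cast the input down to float, or the module's parameters up to double):

import torch
from torch.autograd import Variable
import torch.nn as nn

x_stub = Variable(torch.DoubleTensor(100, 15, 12).normal_(0, 1))
conv_1 = nn.Conv1d(15, 15, 3)

# Workaround 1: cast the input to float to match the default FloatTensor parameters.
y = conv_1(x_stub.float())

# Workaround 2: cast the module's parameters to double to match the input.
conv_1.double()
y = conv_1(x_stub)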