I have a 2D input of shape [Observations x Features] that I am trying to expand to 3D using nonlinear transformations, so that I can apply various convolutional layers and architectures to it.
I did some searching and found the “stack” function, which seems to achieve this. In my “forward” function I have:
x = tc.stack([tc.atan(x), tc.exp(x), x], dim=1)
This results in an output that looks like [Observations x Channels x Features].
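For reference, here is a quick shape check on dummy data (the 32 observations and 441 features are made-up numbers, not my real data):

import torch as tc

x = tc.randn(32, 441)                              # [Observations x Features]
x3 = tc.stack([tc.atan(x), tc.exp(x), x], dim=1)   # new axis inserted at dim 1
print(x3.size())  # torch.Size([32, 3, 441]) -> [Obs x Channels x Features]
# other dim values just move where the new axis lands:
# dim=0 -> [3, 32, 441], dim=2 -> [32, 441, 3]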
However, when I try to run my code (pasted below), I get an “expected 3D tensor” error. I tried different “dim” positions and still get the error.
What is the correct way to expand a 2D input into a 3D input for use inside the model?
# Setting up the net
import numpy as np
import torch as tc
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.elu = nn.ELU(alpha=1.0, inplace=False)
        self.conv1 = nn.Conv2d(in_channels=3,
                               out_channels=8,
                               kernel_size=1,
                               padding=0)
        self.conv2 = nn.Conv2d(in_channels=8,
                               out_channels=12,
                               kernel_size=3,
                               padding=0)
        self.fc1 = nn.Linear(19*19*12, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 2)

    def forward(self, x):
        # expand [Obs x Features] to [Obs x 3 x Features]
        x = tc.stack([tc.atan(x), tc.exp(x), x], dim=1)
        x = self.elu(self.conv1(x))
        x = self.elu(self.conv2(x))
        x = x.view(-1, 19*19*12)
        x = self.elu(self.fc1(x))
        x = self.elu(self.fc2(x))
        x = self.fc3(x)
        return x
net = Net()
# Loss function and optimizer
import torch.optim as optim

criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(net.parameters(),
                       lr=0.001,
                       betas=(0.9, 0.99),
                       weight_decay=1e-4)
# Training
for epoch in range(2):  # loop over the dataset multiple times
    running_loss = 0.0
    for i in range(10000):
        # mini-batch indices, sampled without replacement
        inx = np.random.choice(a=list(range(len(train_out))),
                               size=32,
                               replace=False)
        # get the inputs
        inputs, labels = train_in[tc.LongTensor(inx)], train_out[tc.LongTensor(inx)]
        # wrap them in Variable
        inputs, labels = Variable(inputs), Variable(labels)
        # zero the parameter gradients
        optimizer.zero_grad()
        # forward + backward + optimize
        outputs = net(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        # print statistics
        running_loss += loss.data[0]
        if i % 2000 == 1999:  # print every 2000 mini-batches
            print('[%d, %5d] loss: %.3f' %
                  (epoch + 1, i + 1, running_loss / 2000))
            running_loss = 0.0

print('Finished Training')
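To isolate the problem from the rest of the training code, here is a minimal repro outside the model (again with made-up sizes, not my real data):

import torch as tc
import torch.nn as nn
from torch.autograd import Variable

x = Variable(tc.randn(32, 441))                   # 2D: [Observations x Features]
x = tc.stack([tc.atan(x), tc.exp(x), x], dim=1)   # 3D: [32, 3, 441]
conv = nn.Conv2d(in_channels=3, out_channels=8, kernel_size=1)
out = conv(x)  # this is the line that raises the dimension error for me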
Side note: I am using Spyder, and no PyTorch variables seem to appear in the “Variable explorer”. Is there a quick guide on how to figure out the dimensionality, type, memory size, etc. of torch tensors?
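For instance, these are the sorts of attributes I'd like to see at a glance, sketched here by querying a tensor from the net above in the console:

t = net.fc1.weight.data
print(t.size())                       # dimensionality, e.g. torch.Size([120, 4332])
print(t.type())                       # type, e.g. 'torch.FloatTensor'
print(t.numel() * t.element_size())   # memory footprint in bytes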