Index the tuple to extract the tensor

I am working on a denoising autoencoder. I use the ConcatDataset below to concatenate noisy and original images. I am running into a problem: ConcatDataset(train_dataset_noisy, train_dataset_original) produces a tuple, which shows up in my training loop as a list. How can I index the tuple to extract the tensor?

class ConcatDataset(torch.utils.data.Dataset):
    def __init__(self, *datasets):
        self.datasets = datasets

    def __getitem__(self, i):
        return tuple(d[i] for d in self.datasets)

    def __len__(self):
        return min(len(d) for d in self.datasets)
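
For reference, indexing one item of this dataset gives the nesting below (concat_dataset is just a placeholder name; each underlying dataset returns an (image, label) pair):

concat_dataset = ConcatDataset(train_dataset_noisy, train_dataset_original)
sample = concat_dataset[0]
# sample == ((noisy_image, noisy_label), (original_image, original_label))
noisy_image = sample[0][0]       # tensor from the noisy dataset
original_image = sample[1][0]    # tensor from the original dataset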

Error:

AttributeError Traceback (most recent call last)
in ()
6
7 for epoch in range(1, max_epoch+1):
----> 8 train(epoch, device=device)
9 test(epoch, device=device)

in train(epoch, device)
6
7 optimizer.zero_grad()
----> 8 images = images.to(device)
9 output = AE(images)
10 loss = loss_fn(output, images) # Here is a typical loss function (Mean square error)

AttributeError: 'list' object has no attribute 'to'

Does your model accept multiple inputs?
If images are separate inputs, do:

inputs = [x.to(device) for x in images]
outputs = AE(*inputs)

If images is a single input, i.e. a batch, use collate_fn to process the list of samples and form a batch.
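
Something like this (just a sketch; my_collate and the dataset/loader names are placeholders):

import torch
from torch.utils.data import DataLoader

def my_collate(batch):
    # batch is a list of samples, exactly as the Dataset's __getitem__ returns them
    images = torch.stack([sample[0] for sample in batch])   # assumes sample[0] is an image tensor
    labels = torch.tensor([sample[1] for sample in batch])  # assumes sample[1] is an int label
    return images, labels

train_loader = DataLoader(train_dataset, batch_size=batch_size_train,
                          shuffle=True, collate_fn=my_collate)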

Roy

Sorry for my lack of understanding. My model accepts a single input. Since my ConcatDataset returns a tuple that contains the tensor and label, I am still confused whether I need to make changes to the ConcatDataset() or change my model to form a batch. I looked for the collate_fn function but I can't find it (torch.utils.data).

class our_AE(nn.Module):
    def __init__(self):
        super(our_AE, self).__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 64, 7)
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 7),
            nn.ReLU(),
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1, output_padding=1),
            nn.Sigmoid()
        )

    def forward(self, x):
        x = self.encoder(x)
        x = self.decoder(x)
        return x

AE = our_AE().to(device)
optimizer = optim.Adam(AE.parameters(), lr=1e-4)
loss_fn = nn.MSELoss(reduction='sum')

def train(epoch, device):
    AE.train()
    for batch_idx, (images, _) in enumerate(train_loader):
        optimizer.zero_grad()
        images = images.to(device)
        output = AE(images)
        loss = loss_fn(output, images)  # Here is a typical loss function (mean squared error)
        loss.backward()
        optimizer.step()
        if batch_idx % 10 == 0:  # We record our output every 10 batches
            train_losses.append(loss.item()/batch_size_train)  # item() is to get the value of the tensor directly
            train_counter.append(
                (batch_idx*64) + ((epoch-1)*len(train_loader.dataset)))
        if batch_idx % 100 == 0:  # We visualize our output every 100 batches
            print(f'Epoch {epoch}: [{batch_idx*len(images)}/{len(train_loader.dataset)}] Loss: {loss.item()/batch_size_train}')

def test(epoch, device):
    AE.eval()
    test_loss = 0
    correct = 0
    with torch.no_grad():
        for images, _ in test_loader:
            images = images.to(device)
            output = AE(images)
            test_loss += loss_fn(output, images).item()

    test_loss /= len(test_loader.dataset)
    test_losses.append(test_loss)
    test_counter.append(len(train_loader.dataset)*epoch)
    print(f'Test result on epoch {epoch}: Avg loss is {test_loss}')

train_losses = []
train_counter = []
test_losses = []
test_counter = []
max_epoch = 3
for epoch in range(1, max_epoch+1):
    train(epoch, device=device)
    test(epoch, device=device)

Sorry, I’ll try to explain my previous response:

You mentioned your error is

AttributeError: 'list' object has no attribute 'to'

From what I see, the only place you have (possibly problematic) “to” is in

images = images.to(device)

Can you describe “images”? It appears to be a list, so that gives 2 possible options:

  • Does this list represent a batch, i.e. a list of tensor samples? If so, you need to turn this list into a tensor before calling images.to(device)

  • Does this list represent multiple inputs to the forward function?
    I.e. your forward function looks like forward(self, x1, x2, x3); then you need to do:

inputs = [x.to(device) for x in images]
outputs = AE(*inputs)

From the new code you posted, it seems like your model’s forward is forward(self, x), so there are no multiple inputs. Then I guess “images” is a list of samples (a mini-batch), and all you need to do is turn it into a tensor before calling to(device).
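
For example (a sketch, assuming all samples in the list have the same shape):

images = torch.stack(images)   # list of (C, H, W) tensors -> one (N, C, H, W) tensor
images = images.to(device)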

Side note 1:
The collate_fn is an argument to the DataLoader class; the doc says:

collate_fn (callable, optional) – merges a list of samples to form a mini-batch of Tensor(s). Used when using batched loading from a map-style dataset.

It is a function you provide to handle the list of samples and turn it into a Tensor batch that your model can take as input.

Side note 2:
The default collate_fn expects all the images in a batch to have the same size, because it uses torch.stack() to pack the images. If the images provided by the Dataset have variable size, you have to provide your own custom collate_fn.
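
For illustration, such a custom collate_fn could pad the images to a common size before stacking. This is only a sketch of the idea, not code from this thread:

import torch
import torch.nn.functional as F

def pad_collate(batch):
    # batch: list of (image, label) pairs, images shaped (C, H, W) with varying H and W
    max_h = max(img.shape[1] for img, _ in batch)
    max_w = max(img.shape[2] for img, _ in batch)
    padded = []
    for img, _ in batch:
        pad_h = max_h - img.shape[1]
        pad_w = max_w - img.shape[2]
        # F.pad pads the last dimensions first: (left, right, top, bottom)
        padded.append(F.pad(img, (0, pad_w, 0, pad_h)))
    images = torch.stack(padded)
    labels = torch.tensor([label for _, label in batch])
    return images, labels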