How to iterate in a DataLoader? "TypeError: only integer tensors of a single element can be converted to an index"

I’m currently learning about multi-task optimization.
However, I can’t train my neural network because of “TypeError: only integer tensors of a single element can be converted to an index”.

How do I iterate over ‘task’?

The code is:

class torchdata(utils.data.Dataset):

    def __init__(self, x1, x2, target, task):
        self.x1 = torch.FloatTensor(x1)
        self.x2 = torch.FloatTensor(x2)
        self.target = torch.FloatTensor(target)
        self.task = torch.FloatTensor(task)

    def __len__(self):
        return self.x1.shape[0]

    def __getitem__(self, i):
        return self.x1[i, :], self.x2[i, :], self.target[i], self.task[i]

class Net_new(nn.Module):
    def __init__(self, np_features, np_desc, p, m, mode=0):
        super().__init__()
        input_dim = 30
        hidden_dim = 128
        if mode == 1:
            self.embed = Encoder1(np_features, np_desc, p)
        elif mode == 2:
            self.embed = Encoder2(np_features, np_desc, p)
        else:
            self.embed = Encoder(np_features, np_desc, p)
            
        self.share_block = nn.Sequential(
            nn.BatchNorm1d(input_dim),
            nn.Dropout(p=0.1),
            nn.utils.weight_norm(nn.Linear(input_dim, hidden_dim)),
            nn.SELU(),
            nn.BatchNorm1d(hidden_dim),
            nn.Dropout(p=0.1),
            nn.utils.weight_norm(nn.Linear(hidden_dim, hidden_dim)),
            nn.SELU(),
            nn.BatchNorm1d(hidden_dim),
            nn.Dropout(p=0.1),
            nn.utils.weight_norm(nn.Linear(hidden_dim, hidden_dim)),
            nn.SELU(),
            nn.BatchNorm1d(hidden_dim),
            nn.Dropout(p=0.1)
        )
        self.head_list = nn.ModuleList(
            [nn.Linear(hidden_dim, 1) for i in range(2)]
            )        

    def forward(self, torch_x1, torch_x2, task):
        torch_x1 = torch_x1.to(device)
        torch_x2 = torch_x2.to(device)
        x = self.embed(torch_x1, torch_x2)
        x = self.share_block(x)
        y_pred = self.head_list[task](x)     # <--- error occurred here
        
        return y_pred
    

You can use

task = int(task.item())
y_pred = self.head_list[task](x)

Does it work?

Thank you for the reply.
It doesn’t work. I want to select the prediction head using the ‘task’ number.
If I use a DataLoader, can’t I use ‘[]’ to index it?

Yes, you can use indexing inside the loader.
Have you printed ‘task’ after converting it to an integer?

I printed ‘task’ before ‘y_pred = self.head_list[task](x)’; the output is like this.
tensor([1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,
1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,
1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,
1., 1., 1., 1., 1., 1., 1., 1., 1., 1.])

We should consider the batch dimension… After the DataLoader collates a batch, ‘task’ is a tensor with one id per sample, so task.item() (which expects a single element) raises exactly that TypeError.

Using a for loop is the only option you have.
Don’t forget to index x and task together.
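For example, something like this inside forward (a rough sketch; it assumes x is the output of share_block and each entry of task is the integer task id, 0 or 1):

outputs = []
for i in range(x.size(0)):
    head = self.head_list[int(task[i].item())]  # pick the head for this sample's task id
    outputs.append(head(x[i:i + 1]))            # slicing keeps the batch dimension
y_pred = torch.cat(outputs, dim=0)              # back to shape (batch_size, 1)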

If I were you, I would separate the models according to tasks.


@Takumi3 An interesting problem actually :grinning: but as @thecho7 mentioned, you cannot run a single command to do an index-to-index mapping between a data row and a layer object.

This is the closest you can get for parallelization:

module_list = nn.ModuleList([nn.Linear(10, 1) for i in range(10)])
input_data = torch.FloatTensor(3, 10)
indices = torch.LongTensor([4, 7, 1])
# route each sample to its own module, then stack the per-sample outputs back into a batch
torch.stack([module_list[module_index](input_data[data_index]) for data_index, module_index in enumerate(indices)])
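In your forward you could do the same thing: cast each entry of task to a Python int, index self.head_list with it, apply that head to the matching row of x, and stack the per-sample outputs back into a batch.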

Thank you for your help. I wanted to share common parameters between the two tasks, so I coded it like that. I will use a for loop.

Thank you for your help. I will implement it using your sample code.