IndexError: Target 1 is out of bounds

I am trying to replicate the code below and I get the error “IndexError: Target 1 is out of bounds.” What could be the problem?

class TexNet(nn.Module):
    def __init__(self):
        super(TexNet, self).__init__()

        self.fc = nn.Linear(256, 1)
        self.feature1 = nn.Sequential(
                STNet(),
                Stim(),
                Text()
                )
        self.inp = 0

    def forward(self, x):
        x11 = self.feature1(x)
        x1 = x11.view(-1, 256)
        x1 = self.fc(x1)
        self.inp = x1
        return x1, x11

model1 = TexNet().to(device)
model2.train()
model3.train()
criterion1 = nn.CrossEntropyLoss()
criterion2 = nn.MSELoss()
criterion3 = nn.CrossEntropyLoss()

train_dataset = torchvision.datasets.ImageFolder(root='./images/',
                                                 transform=Transform)
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=1, shuffle=True)

for j in range(10):
    model1.train()

    for i, data in enumerate(train_loader):
        inputs, target = data[0].to(device), data[1].to(device)
        target_f = torch.Tensor.float(target)

        optimizer1.zero_grad()
        optimizer2a.zero_grad()
        optimizer2b.zero_grad()

        output1 = model1(inputs)[0]
        output2a = model2(inputs)
        output2b = model3(inputs)[0]

        loss = (criterion1(F.softmax(output1), target) + criterion2(output2a, target_f) +
                criterion3(F.softmax(output2b), target))

Hello Esta!

You don’t say what kind of problem you are trying to solve, you don’t
tell us what `model2` and `model3` are, and you don’t indicate what your
data look like, so we will have to explore …

Print out the shapes of the tensors output1, output2a, output2b,
target, and target_f just before you call your loss line.
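
For example, you could add something like this (a minimal sketch,
assuming the variable names from your training loop) right before the
loss line:

# debug: inspect the shapes going into the loss functions
print(output1.shape, output2a.shape, output2b.shape)
print(target.shape, target_f.shape)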

(As an aside, if you are using CrossEntropyLoss you will most likely
want to leave out the calls to softmax() as CrossEntropyLoss has,
in effect, softmax() built in.)
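
(For instance, this self-contained snippet, with made-up shapes, shows
the equivalence: CrossEntropyLoss applied to raw logits matches
log_softmax() followed by NLLLoss.)

import torch
import torch.nn as nn
import torch.nn.functional as F

logits = torch.randn(4, 3)            # a batch of 4 samples, 3 classes
target = torch.tensor([0, 2, 1, 0])   # class indices

ce = nn.CrossEntropyLoss()(logits, target)
nll = nn.NLLLoss()(F.log_softmax(logits, dim=1), target)
print(torch.allclose(ce, nll))        # True: softmax is, in effect, built in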

Best.

K. Frank

Sorry about that. I am new to PyTorch. I am trying to replicate code for the
following problem: the code extracts a fixed-length representation from a
fingerprint image. Model 1 extracts texture features while models 2 and 3
extract minutiae features. It could be some customization of the Inception v4 code.

The code is found on the following GitHub link: