What does the 5 signify here in `ResNet50(5)`?

If you look at the class definition, there is one required positional argument specifying the number of classes (5 was just an arbitrary value to create an instance).
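For illustration, a minimal sketch of what such a signature could look like (the attribute names and feature dimension here are my assumptions, not the exact class from this thread):

```python
import torch
import torch.nn as nn

# Minimal sketch (assumed names): the first positional argument is the
# number of classes, so ResNet50(5) builds a head with 5 outputs.
class ResNet50(nn.Module):
    def __init__(self, num_classes, feat_dim=2048):
        super().__init__()
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, f):
        return self.classifier(f)

model = ResNet50(5)                 # 5 = number of classes
out = model(torch.randn(4, 2048))
print(out.shape)                    # torch.Size([4, 5])
```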

Thanks a lot! How do I manage y in this case (from my_code)?

I don’t know what y is supposed to be. There is no definition of `self.classifier` in your code.

If you defined it somewhere else, you could simply return it together with the dictionary.

`self.classifier = nn.Linear(2048, num_classes)`. I missed it.

```
def forward(self, x):
    x = self.base(x)
    x = F.avg_pool2d(x, x.size()[2:])
    f = x.view(x.size(0), -1)
    clf_outputs = {}
    for i in range(self.num_fcs):
        clf_outputs["fc%d" % i] = getattr(self, "fc%d" % i)(f)
    clf_outputs["y"] = self.classifier(f)
    return clf_outputs
```

The thing is, I’ll freeze one fc layer and train the model with the second fc layer. In the second phase, I’ll train the model with both fc layers unfrozen on a different dataset, so how will I compute the loss for the two fc layers, unlike in the code I provided?

You could simply calculate separate losses for each of the outputs, add them up (or calculate the mean), and then call backward on the total loss.

I have to remove `self.classifier` because now I have multiple fc layers.

When I jointly train (fc0 and fc1 turned on), how will I manage the loss then?

Calculate a mean/sum of both separate losses and call `backward()` on it. Autograd should handle the rest for you, since only the parameters which are involved in the forward pass will be updated for each part of the loss.
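To illustrate the point about autograd, here is a small self-contained sketch (the layer names and sizes are made up): two heads share a trunk, the total loss is the mean of the separate head losses, and a single `backward()` populates gradients only for the parameters that took part in the forward pass.

```python
import torch
import torch.nn as nn

# Two heads share a trunk; a third head is unused (e.g. frozen out of
# the forward pass) and therefore receives no gradient.
trunk = nn.Linear(8, 8)
fc0 = nn.Linear(8, 3)
fc1 = nn.Linear(8, 3)
fc2 = nn.Linear(8, 3)  # not part of this forward pass

x = torch.randn(2, 8)
f = trunk(x)
loss0 = fc0(f).sum()
loss1 = fc1(f).sum()
total = (loss0 + loss1) / 2  # mean of the separate losses
total.backward()

print(fc0.weight.grad is not None)  # True
print(fc1.weight.grad is not None)  # True
print(fc2.weight.grad is None)      # True: unused head gets no gradient
```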

You mean this ?

```
def forward(self, x):
    x = self.base(x)
    x = F.avg_pool2d(x, x.size()[2:])
    f = x.view(x.size(0), -1)
    clf_outputs = {}
    for i in range(self.num_fcs):
        clf_outputs["fc%d" % i] = getattr(self, "fc%d" % i)(f)
    clf_outputs["y1"] = self.fc0(f)
    clf_outputs["y2"] = self.fc1(f)
    l = (clf_outputs["y1"] + clf_outputs["y2"]) / 2
    if self.loss == {'xent'}:
        return l
    elif self.loss == {'xent', 'htri'}:
        return l, f
    elif self.loss == {'cent'}:
        return l, f
    else:
        raise KeyError("Unsupported loss: {}".format(self.loss))
    l.backward()
    return clf_outputs
```

First of all: you don’t need the extra `clf_outputs["y1"] = self.fc0(f)` and `clf_outputs["y2"] = self.fc1(f)` lines, as the loop already stores the same results in `clf_outputs["fc0"]` and `clf_outputs["fc1"]` instead of `clf_outputs["y1"]` and `clf_outputs["y2"]`.

Second: that’s not what I meant. You should simply use your forward function like

```
def forward(self, x):
    x = self.base(x)
    x = F.avg_pool2d(x, x.size()[2:])
    f = x.view(x.size(0), -1)
    clf_outputs = {}
    for i in range(self.num_fcs):
        clf_outputs["fc%d" % i] = getattr(self, "fc%d" % i)(f)
    if self.loss == {'xent'}:
        return clf_outputs
    elif self.loss == {'xent', 'htri'}:
        return clf_outputs, f
    elif self.loss == {'cent'}:
        return clf_outputs, f
    else:
        raise KeyError("Unsupported loss: {}".format(self.loss))
```

After you created a model instance with `model = ResNet50(10)`, you can do some predictions with `clf_outputs, f = model(data_tensor)` and later on calculate a loss for each of the values in `clf_outputs`:

In the following snippet I assume you have a list of targets (one per fc layer)!

```
model = ResNet(10, num_fcs=2)
optim = SGD(model.parameters(), lr=0.01)
clf_outputs, f = model(data_tensor)
# for simplicity I'm using MSE-Loss, but you could use any other loss function as well
loss_fn = MSELoss()
loss_value = 0
for k, v in clf_outputs.items():
    _curr_loss = loss_fn(v, target_list[int(k.replace("fc", ""))])
    loss_value = loss_value + _curr_loss
optim.zero_grad()
loss_value.backward()
optim.step()
```

When I freeze FC layer 2 with

```
resnet50.fc1.train(False)
```

it gives me this:

```
AttributeError: 'ResNet' object has no attribute 'fc1'
```

How did you create your model?

I am using parsed arguments:

```
parser.add_argument('-a', '--arch', type=str, default='resnet50', choices=models.get_names())
```

Then this is the relevant part of the train function:

```
def train(epoch, model, criterion_xent, criterion_htri, optimizer, trainloader, use_gpu):
    losses = AverageMeter()
    batch_time = AverageMeter()
    data_time = AverageMeter()
    model.train()
    for batch_idx, (imgs, pids, _) in enumerate(trainloader):
        if use_gpu:
            imgs, pids = imgs.cuda(), pids.cuda()
        outputs, features = model(imgs)
        if args.htri_only:
            if isinstance(features, tuple):
                loss = DeepSupervision(criterion_htri, features, pids)
            else:
                loss = criterion_htri(features, pids)
        else:
            if isinstance(outputs, tuple):
                xent_loss = DeepSupervision(criterion_xent, outputs, pids)
            else:
                xent_loss = criterion_xent(outputs, pids)
            loss = xent_loss + htri_loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

Then I am using this:

```
model = models.init_model(name=args.arch, num_classes=dataset.num_train_pids, loss={'xent', 'htri'})
```

And what does your `init_model` function look like?

```
def init_model(name, *args, **kwargs):
    if name not in __factory.keys():
        raise KeyError("Unknown model: {}".format(name))
    return __factory[name](*args, **kwargs)
```
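For context, this is a simple registry/factory pattern: a dict maps architecture names to constructors, and `init_model` dispatches on the name. A self-contained sketch (the stand-in constructor below is made up for illustration):

```python
# Stand-in constructor, made up for illustration; in the real code this
# would be the ResNet50 class itself.
def make_resnet50(num_classes, **kwargs):
    return "ResNet50({})".format(num_classes)

__factory = {"resnet50": make_resnet50}

def init_model(name, *args, **kwargs):
    if name not in __factory:
        raise KeyError("Unknown model: {}".format(name))
    return __factory[name](*args, **kwargs)

print(init_model("resnet50", 10))   # ResNet50(10)
```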

Have you modified the class definition I wrote above? Otherwise the code should work…

To make the post readable for everyone: