How can my net produce negative outputs when I use ReLU?

Do you get any error message?
I’m not sure if my current code snippet is suitable for multi-class predictions.
I’ll have to check it and get back to you.

As input I use the output of my model, which I reshape to BxHxW and wrap in a Variable. As target I use my labels, also of shape BxHxW and wrapped in a Variable. The matrices contain only the value 0 for the background or 1 for the object.
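In code, that setup looks roughly like this (a sketch with made-up shapes; B, H, W stand in for the real sizes):

import torch
from torch.autograd import Variable

B, H, W = 4, 24, 24

# model output reshaped to BxHxW and wrapped in a Variable
pred = Variable(torch.zeros(B, H, W))

# labels of shape BxHxW, containing only 0 (background) or 1 (object)
target = Variable(torch.zeros(B, H, W).random_(2))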

With that approach I get the error:

AttributeError: 'float' object has no attribute 'backward'

When I wrap the float loss in a Variable I get the following error:

RuntimeError: element 0 of variables does not require grad and does not have a grad_fn

When I put the loss in a variable with require_grad = True I get this error:

TypeError: 'require_grad' is an invalid keyword argument for this function

Here is the code for the last approach:

loss = torch.zeros(1, 1)
optimizer.zero_grad()

inputs, labels = Variable(batch['X']), batch['l']

outputs = model(inputs)

pre_img = outputs.data.max(1, keepdim=True)[1].numpy().transpose(1, 0, 2, 3)
pre_img = torch.LongTensor(pre_img).squeeze_(0)

loss_cal = criterion(pre_img, labels)
loss[0, 0] = loss_cal
loss_cal = Variable(loss, require_grad=True)

loss_cal.backward()
optimizer.step()

exit()
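For context on why each attempt fails: outputs.data.max(1, keepdim=True)[1] takes an argmax through .data and NumPy, which detaches the predictions from the autograd graph, so the loss comes out as a plain float without a grad_fn. Wrapping it in a fresh Variable (the keyword would be requires_grad, not require_grad) creates a new leaf with no history, so there is still nothing to backpropagate through. A minimal sketch of a differentiable version, assuming criterion accepts the raw NxCxHxW outputs and NxHxW labels:

optimizer.zero_grad()

inputs, labels = Variable(batch['X']), Variable(batch['l'])

outputs = model(inputs)

# keep the graph intact: no argmax, no .data, no NumPy round trip
loss = criterion(outputs, labels)

loss.backward()
optimizer.step()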

Could you try to scatter your target into a one-hot encoding and run it again?

import torch
import torch.nn as nn
import torch.nn.functional as F

batch_size = 1
num_classes = 16
h, w = 24, 24

x = torch.randn(batch_size, num_classes, h, w)
y = torch.empty(batch_size, h, w, dtype=torch.long).random_(num_classes)

model = nn.Conv2d(num_classes, num_classes, 3, 1, 1)
output = model(x)

# scatter the class indices into a one-hot target of shape NxCxHxW
target = torch.zeros(batch_size, num_classes, h, w).scatter_(1, y.unsqueeze(1), 1.)
loss = dice_loss(F.softmax(output, dim=1), target)
loss.backward()

print(model.weight.grad)
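(dice_loss is the helper from earlier in the thread and isn't shown in this excerpt. A common soft-dice formulation looks roughly like this, assuming probabilities and a one-hot target of shape NxCxHxW:)

def dice_loss(probs, target, eps=1e-7):
    # soft dice: 1 - mean over classes of 2*intersection / cardinality
    intersection = torch.sum(probs * target, (0, 2, 3))
    cardinality = torch.sum(probs + target, (0, 2, 3))
    return 1. - (2. * intersection / (cardinality + eps)).mean()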

Could you please convert that to PyTorch 0.3? I think it's easier for you to convert because you know the corresponding functions. I already converted a few lines but still got some errors, and I'm not sure the result does the same thing. It would be great if you could do that.

With one-hot encoding, you mean that I convert my labels from NxWxH to NxCxWxH, right?

This should work for 0.3:

x = Variable(torch.randn(batch_size, num_classes, h, w))
y = Variable(torch.zeros(batch_size, h, w).long().random_(num_classes))

Yes, that’s what I mean by one-hot encoding.
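For instance, with N=1, C=2, and a 2x2 label map (made-up values), the conversion looks like this:

y = torch.LongTensor([[[0, 1],
                       [1, 0]]])    # NxHxW class indices
one_hot = torch.zeros(1, 2, 2, 2).scatter_(1, y.unsqueeze(1), 1.)
# one_hot[:, 0] is 1 where y == 0; one_hot[:, 1] is 1 where y == 1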

TypeError: scatter_ received an invalid combination of arguments - got (int, Variable, float), but expected one of:
 * (int dim, torch.LongTensor index, float value)
      didn't match because some of the arguments have invalid types: (int, !Variable!, float)
 * (int dim, torch.LongTensor index, torch.FloatTensor src)
      didn't match because some of the arguments have invalid types: (int, !Variable!, !float!)

I got the labels one-hot encoded now.

This should work:

target = Variable(torch.zeros(batch_size, num_classes, h, w).scatter_(1, y.unsqueeze(1).data, 1.))

Okay, that worked, I think. I got a Variable as output that contains a torch.FloatTensor of size 2x2x3x3. I changed the attribute num_classes to two.

I'll try to adapt that to my training process now.
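Roughly, the adapted training step might look like this (a sketch only; it assumes dice_loss and the same batch['X'] / batch['l'] layout as before, with batch['l'] a LongTensor of shape NxHxW):

optimizer.zero_grad()

inputs = Variable(batch['X'])
y = batch['l']                      # NxHxW LongTensor of class indices

output = model(inputs)              # NxCxHxW

# one-hot encode the labels to NxCxHxW (0.3 style: scatter on the tensor, then wrap)
target = Variable(torch.zeros(output.size()).scatter_(1, y.unsqueeze(1), 1.))

loss = dice_loss(F.softmax(output, dim=1), target)
loss.backward()
optimizer.step()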

loss = dice_loss(F.softmax(output, dim=1), target)

Don’t you want to have only zeros or ones in this `F.softmax(output, dim=1)` matrix?
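For what it's worth, F.softmax(output, dim=1) yields probabilities strictly between 0 and 1 that sum to 1 over the class dimension, not hard zeros and ones; a soft dice loss is usually computed on exactly these probabilities so it stays differentiable. A quick check (made-up numbers):

import torch
import torch.nn.functional as F

logits = torch.randn(1, 2, 3, 3)    # NxCxHxW
probs = F.softmax(logits, dim=1)
print(probs.min(), probs.max())     # strictly between 0 and 1
print(probs.sum(dim=1))             # all ones over the class dimension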

Okay, the first run with lr = 0.001, momentum = 0.99, batch size = 4, and 50 epochs just finished, and the model learned absolutely nothing. All validation images are black.

That's the model:

model = UNet(in_channels=3,
             out_channels=4,
             n_class=2,
             kernel_size=3,
             padding=1,
             stride=1)

Here is the loss on the training data:

epoch0, iter0, loss: 0.531706690788269
epoch0, iter10, loss: 0.5252878665924072
epoch0, iter20, loss: 0.506787121295929
epoch0, iter30, loss: 0.478976309299469
epoch1, iter0, loss: 0.4607107639312744
epoch1, iter10, loss: 0.41716182231903076
epoch1, iter20, loss: 0.37118929624557495
epoch1, iter30, loss: 0.3244401216506958
epoch2, iter0, loss: 0.3220394253730774
epoch2, iter10, loss: 0.26507627964019775
epoch2, iter20, loss: 0.1888013482093811
epoch2, iter30, loss: 0.112052321434021
epoch3, iter0, loss: 0.09292703866958618
epoch3, iter10, loss: 0.09756958484649658
epoch3, iter20, loss: 0.1650514006614685
epoch3, iter30, loss: 0.06728309392929077
epoch4, iter0, loss: 0.07964813709259033
epoch4, iter10, loss: 0.019459247589111328
epoch4, iter20, loss: 0.035044312477111816
epoch4, iter30, loss: 0.0942237377166748
epoch5, iter0, loss: 0.015083253383636475
epoch5, iter10, loss: 0.08860135078430176
epoch5, iter20, loss: 0.04318046569824219
epoch5, iter30, loss: 0.08206164836883545
epoch6, iter0, loss: 0.06812179088592529
epoch6, iter10, loss: 0.1348792314529419
epoch6, iter20, loss: 0.07231652736663818
epoch6, iter30, loss: 0.06797420978546143
epoch7, iter0, loss: 0.02555626630783081
epoch7, iter10, loss: 0.09260457754135132
epoch7, iter20, loss: 0.027100861072540283
epoch7, iter30, loss: 0.061148226261138916
epoch8, iter0, loss: 0.08196008205413818
epoch8, iter10, loss: 0.0842401385307312
epoch8, iter20, loss: 0.05114328861236572
epoch8, iter30, loss: 0.046675682067871094
epoch9, iter0, loss: 0.04830044507980347
epoch9, iter10, loss: 0.05659306049346924
epoch9, iter20, loss: 0.04548454284667969
epoch9, iter30, loss: 0.02356886863708496
epoch10, iter0, loss: 0.059834957122802734
epoch10, iter10, loss: 0.11435037851333618
epoch10, iter20, loss: 0.05681341886520386
epoch10, iter30, loss: 0.07030582427978516
epoch11, iter0, loss: 0.09754645824432373
epoch11, iter10, loss: 0.14845716953277588
epoch11, iter20, loss: 0.040486276149749756
epoch11, iter30, loss: 0.11036354303359985
epoch12, iter0, loss: 0.12390190362930298
epoch12, iter10, loss: 0.036484479904174805
epoch12, iter20, loss: 0.049709975719451904
epoch12, iter30, loss: 0.047188401222229004
epoch13, iter0, loss: 0.06732994318008423
epoch13, iter10, loss: 0.046963274478912354
epoch13, iter20, loss: 0.036335527896881104
epoch13, iter30, loss: 0.05386781692504883
epoch14, iter0, loss: 0.05846834182739258
epoch14, iter10, loss: 0.045666277408599854
epoch14, iter20, loss: 0.08679616451263428
epoch14, iter30, loss: 0.13323616981506348
epoch15, iter0, loss: 0.04021501541137695
epoch15, iter10, loss: 0.07973510026931763
epoch15, iter20, loss: 0.1168215274810791
epoch15, iter30, loss: 0.10620909929275513
epoch16, iter0, loss: 0.08056288957595825
epoch16, iter10, loss: 0.09507018327713013
epoch16, iter20, loss: 0.015556871891021729
epoch16, iter30, loss: 0.0761493444442749
epoch17, iter0, loss: 0.04823726415634155
epoch17, iter10, loss: 0.05434072017669678
epoch17, iter20, loss: 0.0782817006111145
epoch17, iter30, loss: 0.05046498775482178
epoch18, iter0, loss: 0.08287078142166138
epoch18, iter10, loss: 0.12303566932678223
epoch18, iter20, loss: 0.08887892961502075
epoch18, iter30, loss: 0.05346715450286865
epoch19, iter0, loss: 0.04924046993255615
epoch19, iter10, loss: 0.010765552520751953
epoch19, iter20, loss: 0.0532878041267395
epoch19, iter30, loss: 0.07475310564041138
epoch20, iter0, loss: 0.07094603776931763
epoch20, iter10, loss: 0.10239815711975098
epoch20, iter20, loss: 0.026489675045013428
epoch20, iter30, loss: 0.07835793495178223
epoch21, iter0, loss: 0.03335225582122803
epoch21, iter10, loss: 0.10371804237365723
epoch21, iter20, loss: 0.06829100847244263
epoch21, iter30, loss: 0.06634551286697388
epoch22, iter0, loss: 0.0979311466217041
epoch22, iter10, loss: 0.050434410572052
epoch22, iter20, loss: 0.06072264909744263
epoch22, iter30, loss: 0.0352557897567749
epoch23, iter0, loss: 0.05432921648025513
epoch23, iter10, loss: 0.07659560441970825
epoch23, iter20, loss: 0.07973504066467285
epoch23, iter30, loss: 0.076679527759552
epoch24, iter0, loss: 0.0349200963973999
epoch24, iter10, loss: 0.018978536128997803
epoch24, iter20, loss: 0.08947396278381348
epoch24, iter30, loss: 0.07369643449783325
epoch25, iter0, loss: 0.020847737789154053
epoch25, iter10, loss: 0.06610900163650513
epoch25, iter20, loss: 0.0463031530380249
epoch25, iter30, loss: 0.03460729122161865
epoch26, iter0, loss: 0.05167800188064575
epoch26, iter10, loss: 0.05129271745681763
epoch26, iter20, loss: 0.018238484859466553
epoch26, iter30, loss: 0.05236464738845825
epoch27, iter0, loss: 0.1501808762550354
epoch27, iter10, loss: 0.023464620113372803
epoch27, iter20, loss: 0.05756789445877075
epoch27, iter30, loss: 0.025055289268493652
epoch28, iter0, loss: 0.06327086687088013
epoch28, iter10, loss: 0.023083150386810303
epoch28, iter20, loss: 0.04206502437591553
epoch28, iter30, loss: 0.1029360294342041
epoch29, iter0, loss: 0.0843355655670166
epoch29, iter10, loss: 0.0331653356552124
epoch29, iter20, loss: 0.09662652015686035
epoch29, iter30, loss: 0.03496968746185303
epoch30, iter0, loss: 0.06544524431228638
epoch30, iter10, loss: 0.04288136959075928
epoch30, iter20, loss: 0.0919497013092041
epoch30, iter30, loss: 0.0328143835067749
epoch31, iter0, loss: 0.05694228410720825
epoch31, iter10, loss: 0.04676854610443115
epoch31, iter20, loss: 0.0436328649520874
epoch31, iter30, loss: 0.04769927263259888
epoch32, iter0, loss: 0.12614458799362183
epoch32, iter10, loss: 0.014427661895751953
epoch32, iter20, loss: 0.05201369524002075
epoch32, iter30, loss: 0.07009154558181763
epoch33, iter0, loss: 0.05474120378494263
epoch33, iter10, loss: 0.0661967396736145
epoch33, iter20, loss: 0.048420250415802
epoch33, iter30, loss: 0.03857839107513428
epoch34, iter0, loss: 0.068653404712677
epoch34, iter10, loss: 0.0418475866317749
epoch34, iter20, loss: 0.06338530778884888
epoch34, iter30, loss: 0.05677443742752075
epoch35, iter0, loss: 0.0827791690826416
epoch35, iter10, loss: 0.07025176286697388
epoch35, iter20, loss: 0.05845290422439575
epoch35, iter30, loss: 0.07139617204666138
epoch36, iter0, loss: 0.09488320350646973
epoch36, iter10, loss: 0.03512609004974365
epoch36, iter20, loss: 0.0455859899520874
epoch36, iter30, loss: 0.08292412757873535
epoch37, iter0, loss: 0.11776751279830933
epoch37, iter10, loss: 0.006020069122314453
epoch37, iter20, loss: 0.04364430904388428
epoch37, iter30, loss: 0.08667397499084473
epoch38, iter0, loss: 0.05192214250564575
epoch38, iter10, loss: 0.0430988073348999
epoch38, iter20, loss: 0.08344292640686035
epoch38, iter30, loss: 0.03333699703216553
epoch39, iter0, loss: 0.019287467002868652
epoch39, iter10, loss: 0.06876403093338013
epoch39, iter20, loss: 0.04774123430252075
epoch39, iter30, loss: 0.10537362098693848
epoch40, iter0, loss: 0.04325520992279053
epoch40, iter10, loss: 0.018261373043060303
epoch40, iter20, loss: 0.054218590259552
epoch40, iter30, loss: 0.04783660173416138
epoch41, iter0, loss: 0.09441018104553223
epoch41, iter10, loss: 0.13002794981002808
epoch41, iter20, loss: 0.06071120500564575
epoch41, iter30, loss: 0.0660746693611145
epoch42, iter0, loss: 0.018879354000091553
epoch42, iter10, loss: 0.1021730899810791
epoch42, iter20, loss: 0.05047255754470825
epoch42, iter30, loss: 0.0439227819442749
epoch43, iter0, loss: 0.027260184288024902
epoch43, iter10, loss: 0.05442458391189575
epoch43, iter20, loss: 0.05959349870681763
epoch43, iter30, loss: 0.03128087520599365
epoch44, iter0, loss: 0.1366807222366333
epoch44, iter10, loss: 0.06493407487869263
epoch44, iter20, loss: 0.070942223072052
epoch44, iter30, loss: 0.03685414791107178
epoch45, iter0, loss: 0.07615691423416138
epoch45, iter10, loss: 0.08995461463928223
epoch45, iter20, loss: 0.024002432823181152
epoch45, iter30, loss: 0.03360402584075928
epoch46, iter0, loss: 0.0527384877204895
epoch46, iter10, loss: 0.07392150163650513
epoch46, iter20, loss: 0.05764800310134888
epoch46, iter30, loss: 0.0834658145904541
epoch47, iter0, loss: 0.05906707048416138
epoch47, iter10, loss: 0.07792311906814575
epoch47, iter20, loss: 0.015068531036376953
epoch47, iter30, loss: 0.08586525917053223
epoch48, iter0, loss: 0.08368706703186035
epoch48, iter10, loss: 0.06973296403884888
epoch48, iter20, loss: 0.07041198015213013
epoch48, iter30, loss: 0.0461963415145874
epoch49, iter0, loss: 0.070728600025177
epoch49, iter10, loss: 0.010792255401611328
epoch49, iter20, loss: 0.04160726070404053
epoch49, iter30, loss: 0.08121895790100098

The loss on the test images:

0.4549177885055542
0.45712411403656006
0.46555548906326294
0.45447224378585815
0.4542546272277832
0.45376867055892944
0.4557144045829773
0.45556485652923584
0.45611482858657837
0.4564945697784424
0.45476579666137695
0.454031765460968
0.45494747161865234
0.45393508672714233
0.45417046546936035

0.2662029266357422
0.2773743271827698
0.32106560468673706
0.26286453008651733
0.2633289098739624
0.2600044012069702
0.2701232433319092
0.26976215839385986
0.2727097272872925
0.2739625573158264
0.2641099691390991
0.26010632514953613
0.26568669080734253
0.26075708866119385
0.26231205463409424

0.07509863376617432
0.09539300203323364
0.1747632622718811
0.0690343976020813
0.06987780332565308
0.06384396553039551
0.08222037553787231
0.08156448602676392
0.08691918849945068
0.08919495344161987
0.0712968111038208
0.06402808427810669
0.0741615891456604
0.06520581245422363
0.06803035736083984

0.020736217498779297
0.04360848665237427
0.1330609917640686
0.01390165090560913
0.014852166175842285
0.00805974006652832
0.028762638568878174
0.028023362159729004
0.03405827283859253
0.03662312030792236
0.01645141839981079
0.00826960802078247
0.01968008279800415
0.009586751461029053
0.012770116329193115

0.015643537044525146
0.03875255584716797
0.1291307806968689
0.008738279342651367
0.009698629379272461
0.0028383731842041016
0.023753046989440918
0.023006081581115723
0.029103457927703857
0.03169482946395874
0.01131439208984375
0.003051459789276123
0.014576494693756104
0.004378616809844971
0.0075950026512146

0.015196144580841064
0.03832519054412842
0.12878179550170898
0.00828486680984497
0.00924605131149292
0.00238037109375
0.023312687873840332
0.022565066814422607
0.028667747974395752
0.0312613844871521
0.010863244533538818
0.0025939345359802246
0.014128148555755615
0.003921449184417725
0.007140636444091797

0.015138566493988037
0.03827005624771118
0.1287361979484558
0.008226573467254639
0.009187817573547363
0.002321600914001465
0.02325594425201416
0.02250826358795166
0.02861154079437256
0.031205475330352783
0.010805249214172363
0.0025351643562316895
0.014070451259613037
0.003862738609313965
0.007082164287567139

After this epoch the loss over the images stayed nearly constant.

I'll try another run now with more output channels, but something else must be wrong here.

I don't know what's still wrong. I just finished another run with lr = 0.01, momentum = 0.9, batch size = 4, and 50 epochs, but with a bigger model structure.

model = UNet(in_channels=3,
             out_channels=32,
             n_class=2,
             kernel_size=3,
             padding=1,
             stride=1)

After two epochs I got a nearly constant loss on the test data again.

0.01512134075164795
0.03825348615646362
0.1287221908569336
0.008209168910980225
0.009170472621917725
0.0023040771484375
0.023238956928253174
0.022491276264190674
0.0285947322845459
0.0311887264251709
0.010787904262542725
0.0025177001953125
0.01405322551727295
0.00384521484375
0.007064759731292725