Same output scores from Fast R-CNN

During the training stage, the output seems to be correct: different RoIs have different output scores.

tensor([[ 0.4181, -0.6164, -0.5613],
        [ 0.1907, -0.3460, -0.6357]], device='cuda:0') tensor([ 1,  1], device='cuda:0')

However, when I use the trained model for validation, different RoIs produce the same output, even though they represent completely different areas.

tensor([[  57,  319,  360,  539],
        [ 544,   94,  715,  132],
        [  57,   84,  360,  310]], dtype=torch.int32) tensor([[ 0.1655,  0.0858, -0.2437],
        [ 0.1655,  0.0858, -0.2437],
        [ 0.1655,  0.0858, -0.2437]], device='cuda:0')

By the way, the training loss stays flat during the training stage and shows no trend of decreasing. I've tried smaller learning rates (from 0.001 to 0.0005 to 0.0002 to 0.0001), but it didn't help.
If needed, I can post more code.


Now I've found that if I call model.train() before validation, the results are still different, but if I call model.eval(), the model produces the same output. This is strange to me.

It sounds to me like your model has some Dropout layers, which would explain the behavior you're experiencing. At test time, dropout doesn't do anything, but during training it randomly drops some units. Take a look at the documentation for Dropout.
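A minimal sketch of that behavior (a hypothetical two-layer head, not the Fast R-CNN model from this thread): in train mode, Dropout applies a fresh random mask on every forward pass, so repeated calls on the same input differ; in eval mode it becomes the identity, so the output is deterministic.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
head = nn.Sequential(nn.Linear(4, 3), nn.Dropout(p=0.5))  # toy head, not the actual model
x = torch.ones(2, 4)

head.train()                      # dropout active: repeated calls usually differ
out_a, out_b = head(x), head(x)

head.eval()                       # dropout is the identity: calls are deterministic
out_c, out_d = head(x), head(x)
assert torch.equal(out_c, out_d)  # same input -> same output in eval mode
```

Note that Dropout alone cannot make *different* inputs collapse to the *same* output in eval mode, so if identical rows persist after model.eval(), the cause is usually upstream of the classifier head.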

You are right. It turns out that the bug came from my RoI pooling implementation: all the RoIs became identical after RoI pooling.
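For anyone hitting the same symptom, here is a minimal sketch of per-RoI max pooling (a hypothetical roi_max_pool helper, with RoI coordinates assumed to already be in feature-map space). The key point is that each RoI must be cropped with its own coordinates before pooling; a bug that reuses one crop for every RoI yields identical pooled features, and hence identical scores.

```python
import torch

def roi_max_pool(feat, rois, out_size):
    """feat: (C, H, W) feature map; rois: (N, 4) integer boxes [x1, y1, x2, y2]
    in feature-map coordinates; out_size: (h, w) of the pooled output."""
    pooled = []
    for x1, y1, x2, y2 in rois.tolist():
        # Crop THIS RoI's region (inclusive bounds), then pool it to a fixed size.
        crop = feat[:, y1:y2 + 1, x1:x2 + 1]
        pooled.append(torch.nn.functional.adaptive_max_pool2d(crop, out_size))
    return torch.stack(pooled)  # (N, C, h, w): one feature per RoI
```

With distinct, non-overlapping RoIs over a non-constant feature map, the pooled outputs should differ per RoI; if they come out identical, the cropping step is the first place to look. In practice, torchvision.ops.roi_pool provides a tested implementation of this operation.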