An extremely strange problem when using my own autograd function with PyTorch 1.1.0

Hi, I ran into a confusing problem when using a custom autograd function, a Laplacian loss. The original author wrote the function for PyTorch 0.3.1, but my version is 1.1.0; we both use Python 2.7.

==============================================================

When I run the code, no error is raised, but lap_loss slowly diverges. What confuses me is that when I debug the exact same code, it converges!
Specifically:

# face was a 'Variable' in the author's version, but I changed it to a tensor
lap_loss_fn = Laplacian(face)
# vertex here is a tensor
lap_loss = lap_loss_fn(vertex)

When I run the snippet above, lap_loss increases. But if I do the following:

# face was a 'Variable' in the author's version, but I changed it to a tensor
lap_loss_fn = Laplacian(face)
# vertex here is a tensor
lap_loss = lap_loss_fn(vertex)
# suspend here, i.e. set a breakpoint
import ipdb
ipdb.set_trace()

On every iteration, when the program suspends, I type lap_loss at the prompt to show its value, then type c to continue, and lap_loss decreases.
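
To rule out ipdb itself, I also plan to just read the value each iteration without a debugger. As far as I know, .item() copies the scalar back to the CPU and synchronizes with the CUDA stream, which is roughly what inspecting the tensor at the ipdb prompt does. A minimal sketch:

# Read the loss value each iteration without a debugger.
# .item() forces a device-to-host copy, i.e. a CUDA synchronization.
loss_value = lap_loss.item()
print('lap_loss = %.4f' % loss_value)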

==========================================

(epoch: 0, iters: 1) total_loss: 21.418 tri_loss: 0.202 
(epoch: 0, iters: 2) total_loss: 16.419 tri_loss: 0.177 
(epoch: 0, iters: 3) total_loss: 19.674 tri_loss: 0.174 
(epoch: 0, iters: 4) total_loss: 18.766 tri_loss: 0.178 
(epoch: 0, iters: 5) total_loss: 18.156 tri_loss: 0.179 
(epoch: 0, iters: 6) total_loss: 16.597 tri_loss: 0.184 
(epoch: 0, iters: 7) total_loss: 19.272 tri_loss: 0.191 
(epoch: 0, iters: 8) total_loss: 17.648 tri_loss: 0.196 
(epoch: 0, iters: 9) total_loss: 19.819 tri_loss: 0.201 
(epoch: 0, iters: 10) total_loss: 18.640 tri_loss: 0.206 
(epoch: 0, iters: 11) total_loss: 19.514 tri_loss: 0.210 
(epoch: 0, iters: 12) total_loss: 18.207 tri_loss: 0.214 
(epoch: 0, iters: 13) total_loss: 18.937 tri_loss: 0.217 
(epoch: 0, iters: 14) total_loss: 18.690 tri_loss: 0.221 
(epoch: 0, iters: 15) total_loss: 21.743 tri_loss: 0.227 
(epoch: 0, iters: 16) total_loss: 21.820 tri_loss: 0.229 
(epoch: 0, iters: 17) total_loss: 23.228 tri_loss: 0.233 
(epoch: 0, iters: 18) total_loss: 20.410 tri_loss: 0.233 
(epoch: 0, iters: 19) total_loss: 18.740 tri_loss: 0.237 
(epoch: 0, iters: 20) total_loss: 22.037 tri_loss: 0.242 
(epoch: 0, iters: 21) total_loss: 18.121 tri_loss: 0.244 
(epoch: 0, iters: 22) total_loss: 19.713 tri_loss: 0.250 
(epoch: 0, iters: 23) total_loss: 18.816 tri_loss: 0.248 
(epoch: 0, iters: 24) total_loss: 18.561 tri_loss: 0.247 
(epoch: 0, iters: 25) total_loss: 17.663 tri_loss: 0.254 
(epoch: 0, iters: 26) total_loss: 19.538 tri_loss: 0.251 
(epoch: 0, iters: 27) total_loss: 17.314 tri_loss: 0.253 
(epoch: 0, iters: 28) total_loss: 18.981 tri_loss: 0.253 
(epoch: 0, iters: 29) total_loss: 20.969 tri_loss: 0.256 
(epoch: 0, iters: 30) total_loss: 21.679 tri_loss: 0.254 
(epoch: 0, iters: 31) total_loss: 23.760 tri_loss: 0.256 
(epoch: 0, iters: 32) total_loss: 20.138 tri_loss: 0.255 
(epoch: 0, iters: 33) total_loss: 18.573 tri_loss: 0.257 
(epoch: 0, iters: 34) total_loss: 19.274 tri_loss: 0.254 
(epoch: 0, iters: 35) total_loss: 20.137 tri_loss: 0.256 
# Note here
self.triangle_loss
Out[2]: tensor(0.2522, device='cuda:0', grad_fn=<MeanBackward0>)
(epoch: 0, iters: 36) total_loss: 19.709 tri_loss: 0.252 
self.triangle_loss
Out[3]: tensor(0.2305, device='cuda:0', grad_fn=<MeanBackward0>)
(epoch: 0, iters: 37) total_loss: 19.766 tri_loss: 0.230 
self.triangle_loss
Out[4]: tensor(0.2132, device='cuda:0', grad_fn=<MeanBackward0>)
(epoch: 0, iters: 38) total_loss: 20.943 tri_loss: 0.213 
self.triangle_loss
Out[5]: tensor(0.1881, device='cuda:0', grad_fn=<MeanBackward0>)
(epoch: 0, iters: 39) total_loss: 17.842 tri_loss: 0.188 
self.triangle_loss
Out[6]: tensor(0.1584, device='cuda:0', grad_fn=<MeanBackward0>)
(epoch: 0, iters: 40) total_loss: 17.609 tri_loss: 0.158 
self.triangle_loss
Out[7]: tensor(0.1341, device='cuda:0', grad_fn=<MeanBackward0>)
(epoch: 0, iters: 41) total_loss: 18.527 tri_loss: 0.134 
self.triangle_loss
Out[8]: tensor(0.1137, device='cuda:0', grad_fn=<MeanBackward0>)
(epoch: 0, iters: 42) total_loss: 14.190 tri_loss: 0.114 
self.triangle_loss
Out[9]: tensor(0.1057, device='cuda:0', grad_fn=<MeanBackward0>)
(epoch: 0, iters: 43) total_loss: 15.329 tri_loss: 0.106 
self.triangle_loss
Out[10]: tensor(0.1062, device='cuda:0', grad_fn=<MeanBackward0>)
(epoch: 0, iters: 44) total_loss: 13.310 tri_loss: 0.106 
self.triangle_loss
Out[11]: tensor(0.1036, device='cuda:0', grad_fn=<MeanBackward0>)
(epoch: 0, iters: 45) total_loss: 14.644 tri_loss: 0.104 
self.triangle_loss
Out[12]: tensor(0.0994, device='cuda:0', grad_fn=<MeanBackward0>)
(epoch: 0, iters: 46) total_loss: 13.806 tri_loss: 0.099 
self.triangle_loss
Out[13]: tensor(0.0951, device='cuda:0', grad_fn=<MeanBackward0>)
(epoch: 0, iters: 47) total_loss: 17.485 tri_loss: 0.095 
self.triangle_loss
Out[14]: tensor(0.0944, device='cuda:0', grad_fn=<MeanBackward0>)
(epoch: 0, iters: 48) total_loss: 13.895 tri_loss: 0.094 
self.triangle_loss
Out[15]: tensor(0.0849, device='cuda:0', grad_fn=<MeanBackward0>)
(epoch: 0, iters: 49) total_loss: 16.152 tri_loss: 0.085 
self.triangle_loss
Out[16]: tensor(0.0765, device='cuda:0', grad_fn=<MeanBackward0>)
(epoch: 0, iters: 50) total_loss: 15.243 tri_loss: 0.077 
self.triangle_loss
Out[17]: tensor(0.0801, device='cuda:0', grad_fn=<MeanBackward0>)
# Note here
(epoch: 0, iters: 51) total_loss: 12.692 tri_loss: 0.080 
(epoch: 0, iters: 52) total_loss: 13.648 tri_loss: 0.078 
(epoch: 0, iters: 53) total_loss: 13.412 tri_loss: 0.076 
(epoch: 0, iters: 54) total_loss: 12.592 tri_loss: 0.080 
(epoch: 0, iters: 55) total_loss: 12.751 tri_loss: 0.085 
(epoch: 0, iters: 56) total_loss: 15.345 tri_loss: 0.091 
(epoch: 0, iters: 57) total_loss: 15.774 tri_loss: 0.099 
(epoch: 0, iters: 58) total_loss: 15.821 tri_loss: 0.101 
(epoch: 0, iters: 59) total_loss: 15.549 tri_loss: 0.112 
(epoch: 0, iters: 60) total_loss: 18.010 tri_loss: 0.121 
(epoch: 0, iters: 61) total_loss: 19.235 tri_loss: 0.129 
(epoch: 0, iters: 62) total_loss: 17.546 tri_loss: 0.137 
(epoch: 0, iters: 63) total_loss: 18.545 tri_loss: 0.134 
(epoch: 0, iters: 64) total_loss: 14.783 tri_loss: 0.145 
(epoch: 0, iters: 65) total_loss: 15.025 tri_loss: 0.148 
(epoch: 0, iters: 66) total_loss: 20.534 tri_loss: 0.151 
(epoch: 0, iters: 67) total_loss: 16.141 tri_loss: 0.154 
(epoch: 0, iters: 68) total_loss: 15.893 tri_loss: 0.155 
(epoch: 0, iters: 69) total_loss: 18.221 tri_loss: 0.161 
(epoch: 0, iters: 70) total_loss: 15.467 tri_loss: 0.164 
(epoch: 0, iters: 71) total_loss: 20.485 tri_loss: 0.165 
(epoch: 0, iters: 72) total_loss: 19.062 tri_loss: 0.173 
(epoch: 0, iters: 73) total_loss: 17.584 tri_loss: 0.174 
(epoch: 0, iters: 74) total_loss: 15.599 tri_loss: 0.175 
(epoch: 0, iters: 75) total_loss: 18.613 tri_loss: 0.173 
(epoch: 0, iters: 76) total_loss: 15.810 tri_loss: 0.178 
(epoch: 0, iters: 77) total_loss: 17.360 tri_loss: 0.176 
(epoch: 0, iters: 78) total_loss: 20.326 tri_loss: 0.181 
(epoch: 0, iters: 79) total_loss: 15.506 tri_loss: 0.184 
(epoch: 0, iters: 80) total_loss: 19.754 tri_loss: 0.183 
(epoch: 0, iters: 81) total_loss: 17.360 tri_loss: 0.184 
(epoch: 0, iters: 82) total_loss: 16.473 tri_loss: 0.187 
(epoch: 0, iters: 83) total_loss: 19.158 tri_loss: 0.185 
(epoch: 0, iters: 84) total_loss: 17.171 tri_loss: 0.191 
(epoch: 0, iters: 85) total_loss: 16.336 tri_loss: 0.188 
(epoch: 0, iters: 86) total_loss: 15.817 tri_loss: 0.193 
(epoch: 0, iters: 87) total_loss: 16.934 tri_loss: 0.188 
(epoch: 0, iters: 88) total_loss: 17.058 tri_loss: 0.189 
(epoch: 0, iters: 89) total_loss: 19.658 tri_loss: 0.191 
(epoch: 0, iters: 90) total_loss: 15.331 tri_loss: 0.194 
(epoch: 0, iters: 91) total_loss: 16.565 tri_loss: 0.195 
(epoch: 0, iters: 92) total_loss: 19.189 tri_loss: 0.193 
# Note here
self.triangle_loss
Out[18]: tensor(0.1929, device='cuda:0', grad_fn=<MeanBackward0>)
(epoch: 0, iters: 93) total_loss: 16.503 tri_loss: 0.193 
self.triangle_loss
Out[19]: tensor(0.1944, device='cuda:0', grad_fn=<MeanBackward0>)
(epoch: 0, iters: 94) total_loss: 18.117 tri_loss: 0.194 
self.triangle_loss
Out[20]: tensor(0.1792, device='cuda:0', grad_fn=<MeanBackward0>)
(epoch: 0, iters: 95) total_loss: 16.758 tri_loss: 0.179 
self.triangle_loss
Out[21]: tensor(0.1568, device='cuda:0', grad_fn=<MeanBackward0>)
(epoch: 0, iters: 96) total_loss: 14.389 tri_loss: 0.157 
self.triangle_loss
Out[22]: tensor(0.1344, device='cuda:0', grad_fn=<MeanBackward0>)
(epoch: 0, iters: 97) total_loss: 16.805 tri_loss: 0.134 
self.triangle_loss
Out[23]: tensor(0.1160, device='cuda:0', grad_fn=<MeanBackward0>)
(epoch: 0, iters: 98) total_loss: 16.854 tri_loss: 0.116 
self.triangle_loss
Out[24]: tensor(0.1002, device='cuda:0', grad_fn=<MeanBackward0>)
(epoch: 0, iters: 99) total_loss: 13.216 tri_loss: 0.100 
self.triangle_loss
Out[25]: tensor(0.0924, device='cuda:0', grad_fn=<MeanBackward0>)
time/itr 0.025
(epoch: 0, iters: 100) total_loss: 14.771 tri_loss: 0.092 
self.triangle_loss
Out[26]: tensor(0.0855, device='cuda:0', grad_fn=<MeanBackward0>)
(epoch: 0, iters: 101) total_loss: 14.695 tri_loss: 0.086 
self.triangle_loss
Out[27]: tensor(0.0885, device='cuda:0', grad_fn=<MeanBackward0>)
(epoch: 0, iters: 102) total_loss: 15.880 tri_loss: 0.088 
self.triangle_loss
Out[28]: tensor(0.0919, device='cuda:0', grad_fn=<MeanBackward0>)
(epoch: 0, iters: 103) total_loss: 15.344 tri_loss: 0.092 
self.triangle_loss
Out[29]: tensor(0.0897, device='cuda:0', grad_fn=<MeanBackward0>)
(epoch: 0, iters: 104) total_loss: 15.012 tri_loss: 0.090 
self.triangle_loss
Out[30]: tensor(0.0860, device='cuda:0', grad_fn=<MeanBackward0>)
(epoch: 0, iters: 105) total_loss: 13.736 tri_loss: 0.086 
self.triangle_loss
Out[31]: tensor(0.0838, device='cuda:0', grad_fn=<MeanBackward0>)
(epoch: 0, iters: 106) total_loss: 13.436 tri_loss: 0.084 


==========================================

You can see it in the data above: while I was typing self.triangle_loss at the prompt, tri_loss decreased; otherwise tri_loss increased. What should I do to get the correct result?
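
One thing I plan to try is checking the custom backward against numerical gradients with torch.autograd.gradcheck. A sketch, reusing Laplacian, face, and vertex from the snippets above, and assuming Laplacian can run on double-precision inputs (gradcheck wants double tensors with requires_grad set):

import torch

# Compare the analytical gradient from the custom backward against a
# finite-difference estimate; returns True if they match within tolerance.
vertex_d = vertex.detach().double().requires_grad_(True)
ok = torch.autograd.gradcheck(Laplacian(face), (vertex_d,),
                              eps=1e-6, atol=1e-4)
print('gradcheck passed: %s' % ok)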
In other words, for the same code, the loss decreases under PyTorch 0.3.1 but increases under PyTorch 1.1.0. Is there some difference between PyTorch 0.3.1 and PyTorch 1.1.0 that could cause this? If you need any other information or have any advice, please let me know. Thanks in advance :)
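
For what it's worth, one API difference I am aware of between those versions: in PyTorch 0.3.x an autograd Function could be written in the legacy instance-call style (construct it with arguments, store state on self, then call the instance), which is how Laplacian is used above. In 1.x the documented pattern is static forward/backward methods invoked through .apply, with per-call state passed via ctx. Could the legacy style be the culprit here? A minimal sketch of the new style follows; the math is just a placeholder, not the author's actual Laplacian loss:

import torch

class LaplacianFn(torch.autograd.Function):
    # New-style autograd function: forward/backward are static methods and
    # per-call state goes through ctx instead of being stored on self.
    @staticmethod
    def forward(ctx, vertex, lap_matrix):
        ctx.save_for_backward(vertex, lap_matrix)
        lap = lap_matrix.mm(vertex)   # placeholder: Laplacian coordinates
        return (lap ** 2).mean()

    @staticmethod
    def backward(ctx, grad_output):
        vertex, lap_matrix = ctx.saved_tensors
        # d/dv mean((L v)^2) = 2 * L^T (L v) / numel(L v)
        grad_vertex = grad_output * 2.0 * \
            lap_matrix.t().mm(lap_matrix.mm(vertex)) / vertex.numel()
        return grad_vertex, None      # no gradient w.r.t. the matrix

# Called through .apply, not by instantiating the class:
# lap_loss = LaplacianFn.apply(vertex, lap_matrix)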