```python
out = self.conv1(x)
out = self.norm1(out)
out = self.relu(out)
out = self.conv2(out)
out = self.norm2(out)
out = self.relu(out)
out = self.conv3(out)
out = self.norm3(out)
```
- `training` of the conv and norm layers is False
- `track_running_stats` of the norm layers is True
When I step through this code in PyCharm's debugger, executing the statements from conv1 through conv3 does not increase GPU memory, but executing norm3 does.
If training
of conv and norm is True, I test that I exec every sentence, the gpu memory increase.
Since every statement creates a new `out` tensor, why doesn't GPU memory increase for the conv statements? And why does it increase when I execute norm3?
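For reference, here is a minimal, self-contained sketch of the setup I am describing, with per-statement memory measurement. The channel sizes and input shape are made up for illustration; the layer types are assumed to be `nn.Conv2d` and `nn.BatchNorm2d`, and `eval()` is what sets `training=False` while leaving `track_running_stats=True` (the BatchNorm default):

```python
import torch
import torch.nn as nn

# Assumed channel sizes / input shape, chosen only for illustration.
layers = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.BatchNorm2d(16), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.BatchNorm2d(16), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.BatchNorm2d(16),
)

# eval() sets training=False on every submodule; each BatchNorm2d keeps
# track_running_stats=True (its default), matching the flags above.
layers.eval()

device = "cuda" if torch.cuda.is_available() else "cpu"
layers.to(device)
out = torch.randn(1, 3, 32, 32, device=device)

def allocated():
    # Currently allocated GPU memory in bytes; 0 when running on CPU.
    return torch.cuda.memory_allocated() if device == "cuda" else 0

# Execute one statement at a time, like stepping in the debugger,
# and print how much allocated memory changed for each layer.
for layer in layers:
    before = allocated()
    out = layer(out)
    after = allocated()
    print(f"{layer.__class__.__name__:12s} delta = {after - before} bytes")
```

This is just a measurement harness, not an answer: stepping through it should show the same per-layer deltas I see in PyCharm.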