Thanks! It seems to work with a try/except block around it (some objects, such as shared-library wrappers, throw an exception when you call hasattr on them).
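For reference, here is a minimal sketch of the guarded enumeration I mean. The function name and the duck-typing check are illustrative, not from any library; the point is only that every probe is wrapped in try/except, since hasattr only swallows AttributeError and some objects raise other exceptions on attribute access:

```python
import gc

def live_tensors():
    """Enumerate objects that look like torch tensors, guarding every probe.

    hasattr() only catches AttributeError; some objects (e.g. ctypes
    shared-library wrappers) raise other exceptions on attribute access,
    so the whole check sits inside try/except.
    """
    found = []
    for obj in gc.get_objects():
        try:
            if type(obj).__module__.startswith("torch") and hasattr(obj, "size"):
                found.append((type(obj).__name__, tuple(obj.size())))
        except Exception:
            # objects like shared libraries can throw here; skip them
            continue
    return found
```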
I extended my memory-tracking code to also record where allocations appeared, by comparing the set of live tensors before and after each operation. The results are somewhat surprising. I stopped execution after the first batch (it fails on a GPU memory allocation on the second batch), and memory consumption was higher in the case where fewer tensors were allocated O_o
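The before/after comparison is just a set diff over snapshots, roughly like this (a sketch; `tensor_snapshot` is my own name, and the real version also records the caller's module, function, and line via the traceback):

```python
import gc

def tensor_snapshot():
    """Set of (id, type, shape) for every torch-tensor-like object alive."""
    snap = set()
    for obj in gc.get_objects():
        try:
            if type(obj).__module__.startswith("torch") and hasattr(obj, "size"):
                snap.add((id(obj), type(obj).__name__, tuple(obj.size())))
        except Exception:
            continue
    return snap

before = tensor_snapshot()
# ... run the operation under investigation ...
after = tensor_snapshot()
new_tensors = after - before   # allocated during the operation
freed = before - after         # released during the operation
```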
Here is the diff between the two sorted logs below, i.e. the run with more allocations ended up consuming less memory. One possible explanation: in the model that requested more memory, a single model is applied twice and a gradient is computed through it, whereas in the "less consuming" run, the trained model is applied once and then a fixed model is applied once.
< + start freeze_params:91 (128, 128, 3, 3) <class 'torch.cuda.FloatTensor'>
< + start freeze_params:91 (128, 64, 3, 3) <class 'torch.cuda.FloatTensor'>
< + start freeze_params:91 (128,) <class 'torch.cuda.FloatTensor'>
< + start freeze_params:91 (21, 21, 32, 32) <class 'torch.cuda.FloatTensor'>
< + start freeze_params:91 (21, 21, 4, 4) <class 'torch.cuda.FloatTensor'>
< + start freeze_params:91 (21, 4096, 1, 1) <class 'torch.cuda.FloatTensor'>
< + start freeze_params:91 (21, 512, 1, 1) <class 'torch.cuda.FloatTensor'>
< + start freeze_params:91 (21,) <class 'torch.cuda.FloatTensor'>
< + start freeze_params:91 (256, 128, 3, 3) <class 'torch.cuda.FloatTensor'>
< + start freeze_params:91 (256, 256, 3, 3) <class 'torch.cuda.FloatTensor'>
< + start freeze_params:91 (256,) <class 'torch.cuda.FloatTensor'>
< + start freeze_params:91 (4096, 4096, 1, 1) <class 'torch.cuda.FloatTensor'>
< + start freeze_params:91 (4096, 512, 7, 7) <class 'torch.cuda.FloatTensor'>
< + start freeze_params:91 (4096,) <class 'torch.cuda.FloatTensor'>
< + start freeze_params:91 (512, 256, 3, 3) <class 'torch.cuda.FloatTensor'>
< + start freeze_params:91 (512, 512, 3, 3) <class 'torch.cuda.FloatTensor'>
< + start freeze_params:91 (512,) <class 'torch.cuda.FloatTensor'>
< + start freeze_params:91 (64, 3, 3, 3) <class 'torch.cuda.FloatTensor'>
< + start freeze_params:91 (64, 64, 3, 3) <class 'torch.cuda.FloatTensor'>
< + start freeze_params:91 (64,) <class 'torch.cuda.FloatTensor'>
Here are the original logs. First, the run with more tracked tensors but lower memory consumption:
:7938.9 Mb
+ __main__ match_source_target:174 (1, 3, 1052, 1914) <class 'torch.cuda.FloatTensor'>
+ __main__ match_source_target:178 (1, 3, 1052, 1914) <class 'torch.cuda.FloatTensor'>
+ __main__ match_source_target:190 (128, 128, 3, 3) <class 'torch.cuda.FloatTensor'>
+ __main__ match_source_target:190 (128, 64, 3, 3) <class 'torch.cuda.FloatTensor'>
+ __main__ match_source_target:190 (128,) <class 'torch.cuda.FloatTensor'>
+ __main__ match_source_target:190 (21, 4096, 1, 1) <class 'torch.cuda.FloatTensor'>
+ __main__ match_source_target:190 (21,) <class 'torch.cuda.FloatTensor'>
+ __main__ match_source_target:190 (256, 128, 3, 3) <class 'torch.cuda.FloatTensor'>
+ __main__ match_source_target:190 (256, 256, 3, 3) <class 'torch.cuda.FloatTensor'>
+ __main__ match_source_target:190 (256,) <class 'torch.cuda.FloatTensor'>
+ __main__ match_source_target:190 (4096, 4096, 1, 1) <class 'torch.cuda.FloatTensor'>
+ __main__ match_source_target:190 (4096, 512, 7, 7) <class 'torch.cuda.FloatTensor'>
+ __main__ match_source_target:190 (4096,) <class 'torch.cuda.FloatTensor'>
+ __main__ match_source_target:190 (512, 256, 3, 3) <class 'torch.cuda.FloatTensor'>
+ __main__ match_source_target:190 (512, 512, 3, 3) <class 'torch.cuda.FloatTensor'>
+ __main__ match_source_target:190 (512,) <class 'torch.cuda.FloatTensor'>
+ __main__ match_source_target:190 (64, 3, 3, 3) <class 'torch.cuda.FloatTensor'>
+ __main__ match_source_target:190 (64, 64, 3, 3) <class 'torch.cuda.FloatTensor'>
+ __main__ match_source_target:190 (64,) <class 'torch.cuda.FloatTensor'>
+ __main__ run_adaptation:355 (128, 128, 3, 3) <class 'torch.cuda.FloatTensor'>
+ __main__ run_adaptation:355 (128, 64, 3, 3) <class 'torch.cuda.FloatTensor'>
+ __main__ run_adaptation:355 (128,) <class 'torch.cuda.FloatTensor'>
+ __main__ run_adaptation:355 (21, 21, 32, 32) <class 'torch.cuda.FloatTensor'>
+ __main__ run_adaptation:355 (21, 21, 4, 4) <class 'torch.cuda.FloatTensor'>
+ __main__ run_adaptation:355 (21, 4096, 1, 1) <class 'torch.cuda.FloatTensor'>
+ __main__ run_adaptation:355 (21, 512, 1, 1) <class 'torch.cuda.FloatTensor'>
+ __main__ run_adaptation:355 (21,) <class 'torch.cuda.FloatTensor'>
+ __main__ run_adaptation:355 (256, 128, 3, 3) <class 'torch.cuda.FloatTensor'>
+ __main__ run_adaptation:355 (256, 256, 3, 3) <class 'torch.cuda.FloatTensor'>
+ __main__ run_adaptation:355 (256,) <class 'torch.cuda.FloatTensor'>
+ __main__ run_adaptation:355 (4096, 4096, 1, 1) <class 'torch.cuda.FloatTensor'>
+ __main__ run_adaptation:355 (4096, 512, 7, 7) <class 'torch.cuda.FloatTensor'>
+ __main__ run_adaptation:355 (4096,) <class 'torch.cuda.FloatTensor'>
+ __main__ run_adaptation:355 (512, 256, 3, 3) <class 'torch.cuda.FloatTensor'>
+ __main__ run_adaptation:355 (512, 512, 3, 3) <class 'torch.cuda.FloatTensor'>
+ __main__ run_adaptation:355 (512,) <class 'torch.cuda.FloatTensor'>
+ __main__ run_adaptation:355 (64, 3, 3, 3) <class 'torch.cuda.FloatTensor'>
+ __main__ run_adaptation:355 (64, 64, 3, 3) <class 'torch.cuda.FloatTensor'>
+ __main__ run_adaptation:355 (64,) <class 'torch.cuda.FloatTensor'>
+ distances.mlp _init_net:55 (1, 500) <class 'torch.cuda.FloatTensor'>
+ distances.mlp _init_net:55 (1,) <class 'torch.cuda.FloatTensor'>
+ distances.mlp _init_net:55 (500, 21) <class 'torch.cuda.FloatTensor'>
+ distances.mlp _init_net:55 (500, 500) <class 'torch.cuda.FloatTensor'>
+ distances.mlp _init_net:55 (500,) <class 'torch.cuda.FloatTensor'>
+ distances.mlp objective:26 (2,) <class 'torch.cuda.FloatTensor'>
+ distances.mlp objective:33 (1,) <class 'torch.cuda.FloatTensor'>
+ distances.mlp_base attempt_update_d:77 (1, 500) <class 'torch.cuda.FloatTensor'>
+ distances.mlp_base attempt_update_d:77 (1,) <class 'torch.cuda.FloatTensor'>
+ distances.mlp_base attempt_update_d:77 (500, 21) <class 'torch.cuda.FloatTensor'>
+ distances.mlp_base attempt_update_d:77 (500, 500) <class 'torch.cuda.FloatTensor'>
+ distances.mlp_base attempt_update_d:77 (500,) <class 'torch.cuda.FloatTensor'>
+ model.fcn16 __init__:13 (21,) <class 'torch.cuda.FloatTensor'>
+ model.fcn16 features_at:45 (1, 4096, 34, 60) <class 'torch.cuda.FloatTensor'>
+ model.fcn16 features_at:48 (1, 4096, 34, 60) <class 'torch.cuda.FloatTensor'>
+ start freeze_params:91 (128, 128, 3, 3) <class 'torch.cuda.FloatTensor'>
+ start freeze_params:91 (128, 64, 3, 3) <class 'torch.cuda.FloatTensor'>
+ start freeze_params:91 (128,) <class 'torch.cuda.FloatTensor'>
+ start freeze_params:91 (21, 21, 32, 32) <class 'torch.cuda.FloatTensor'>
+ start freeze_params:91 (21, 21, 4, 4) <class 'torch.cuda.FloatTensor'>
+ start freeze_params:91 (21, 4096, 1, 1) <class 'torch.cuda.FloatTensor'>
+ start freeze_params:91 (21, 512, 1, 1) <class 'torch.cuda.FloatTensor'>
+ start freeze_params:91 (21,) <class 'torch.cuda.FloatTensor'>
+ start freeze_params:91 (256, 128, 3, 3) <class 'torch.cuda.FloatTensor'>
+ start freeze_params:91 (256, 256, 3, 3) <class 'torch.cuda.FloatTensor'>
+ start freeze_params:91 (256,) <class 'torch.cuda.FloatTensor'>
+ start freeze_params:91 (4096, 4096, 1, 1) <class 'torch.cuda.FloatTensor'>
+ start freeze_params:91 (4096, 512, 7, 7) <class 'torch.cuda.FloatTensor'>
+ start freeze_params:91 (4096,) <class 'torch.cuda.FloatTensor'>
+ start freeze_params:91 (512, 256, 3, 3) <class 'torch.cuda.FloatTensor'>
+ start freeze_params:91 (512, 512, 3, 3) <class 'torch.cuda.FloatTensor'>
+ start freeze_params:91 (512,) <class 'torch.cuda.FloatTensor'>
+ start freeze_params:91 (64, 3, 3, 3) <class 'torch.cuda.FloatTensor'>
+ start freeze_params:91 (64, 64, 3, 3) <class 'torch.cuda.FloatTensor'>
+ start freeze_params:91 (64,) <class 'torch.cuda.FloatTensor'>
And in the second case it has fewer tensors (i.e. identical to the above except roughly 10 tensors fewer), but memory consumption is higher and it breaks on the second epoch. Any ideas?
:11820.9 Mb
+ __main__ match_source_target:174 (1, 3, 1052, 1914) <class 'torch.cuda.FloatTensor'>
+ __main__ match_source_target:178 (1, 3, 1052, 1914) <class 'torch.cuda.FloatTensor'>
+ __main__ match_source_target:190 (128, 128, 3, 3) <class 'torch.cuda.FloatTensor'>
+ __main__ match_source_target:190 (128, 64, 3, 3) <class 'torch.cuda.FloatTensor'>
+ __main__ match_source_target:190 (128,) <class 'torch.cuda.FloatTensor'>
+ __main__ match_source_target:190 (21, 4096, 1, 1) <class 'torch.cuda.FloatTensor'>
+ __main__ match_source_target:190 (21,) <class 'torch.cuda.FloatTensor'>
+ __main__ match_source_target:190 (256, 128, 3, 3) <class 'torch.cuda.FloatTensor'>
+ __main__ match_source_target:190 (256, 256, 3, 3) <class 'torch.cuda.FloatTensor'>
+ __main__ match_source_target:190 (256,) <class 'torch.cuda.FloatTensor'>
+ __main__ match_source_target:190 (4096, 4096, 1, 1) <class 'torch.cuda.FloatTensor'>
+ __main__ match_source_target:190 (4096, 512, 7, 7) <class 'torch.cuda.FloatTensor'>
+ __main__ match_source_target:190 (4096,) <class 'torch.cuda.FloatTensor'>
+ __main__ match_source_target:190 (512, 256, 3, 3) <class 'torch.cuda.FloatTensor'>
+ __main__ match_source_target:190 (512, 512, 3, 3) <class 'torch.cuda.FloatTensor'>
+ __main__ match_source_target:190 (512,) <class 'torch.cuda.FloatTensor'>
+ __main__ match_source_target:190 (64, 3, 3, 3) <class 'torch.cuda.FloatTensor'>
+ __main__ match_source_target:190 (64, 64, 3, 3) <class 'torch.cuda.FloatTensor'>
+ __main__ match_source_target:190 (64,) <class 'torch.cuda.FloatTensor'>
+ __main__ run_adaptation:355 (128, 128, 3, 3) <class 'torch.cuda.FloatTensor'>
+ __main__ run_adaptation:355 (128, 64, 3, 3) <class 'torch.cuda.FloatTensor'>
+ __main__ run_adaptation:355 (128,) <class 'torch.cuda.FloatTensor'>
+ __main__ run_adaptation:355 (21, 21, 32, 32) <class 'torch.cuda.FloatTensor'>
+ __main__ run_adaptation:355 (21, 21, 4, 4) <class 'torch.cuda.FloatTensor'>
+ __main__ run_adaptation:355 (21, 4096, 1, 1) <class 'torch.cuda.FloatTensor'>
+ __main__ run_adaptation:355 (21, 512, 1, 1) <class 'torch.cuda.FloatTensor'>
+ __main__ run_adaptation:355 (21,) <class 'torch.cuda.FloatTensor'>
+ __main__ run_adaptation:355 (256, 128, 3, 3) <class 'torch.cuda.FloatTensor'>
+ __main__ run_adaptation:355 (256, 256, 3, 3) <class 'torch.cuda.FloatTensor'>
+ __main__ run_adaptation:355 (256,) <class 'torch.cuda.FloatTensor'>
+ __main__ run_adaptation:355 (4096, 4096, 1, 1) <class 'torch.cuda.FloatTensor'>
+ __main__ run_adaptation:355 (4096, 512, 7, 7) <class 'torch.cuda.FloatTensor'>
+ __main__ run_adaptation:355 (4096,) <class 'torch.cuda.FloatTensor'>
+ __main__ run_adaptation:355 (512, 256, 3, 3) <class 'torch.cuda.FloatTensor'>
+ __main__ run_adaptation:355 (512, 512, 3, 3) <class 'torch.cuda.FloatTensor'>
+ __main__ run_adaptation:355 (512,) <class 'torch.cuda.FloatTensor'>
+ __main__ run_adaptation:355 (64, 3, 3, 3) <class 'torch.cuda.FloatTensor'>
+ __main__ run_adaptation:355 (64, 64, 3, 3) <class 'torch.cuda.FloatTensor'>
+ __main__ run_adaptation:355 (64,) <class 'torch.cuda.FloatTensor'>
+ distances.mlp _init_net:55 (1, 500) <class 'torch.cuda.FloatTensor'>
+ distances.mlp _init_net:55 (1,) <class 'torch.cuda.FloatTensor'>
+ distances.mlp _init_net:55 (500, 21) <class 'torch.cuda.FloatTensor'>
+ distances.mlp _init_net:55 (500, 500) <class 'torch.cuda.FloatTensor'>
+ distances.mlp _init_net:55 (500,) <class 'torch.cuda.FloatTensor'>
+ distances.mlp objective:26 (2,) <class 'torch.cuda.FloatTensor'>
+ distances.mlp objective:33 (1,) <class 'torch.cuda.FloatTensor'>
+ distances.mlp_base attempt_update_d:77 (1, 500) <class 'torch.cuda.FloatTensor'>
+ distances.mlp_base attempt_update_d:77 (1,) <class 'torch.cuda.FloatTensor'>
+ distances.mlp_base attempt_update_d:77 (500, 21) <class 'torch.cuda.FloatTensor'>
+ distances.mlp_base attempt_update_d:77 (500, 500) <class 'torch.cuda.FloatTensor'>
+ distances.mlp_base attempt_update_d:77 (500,) <class 'torch.cuda.FloatTensor'>
+ model.fcn16 __init__:13 (21,) <class 'torch.cuda.FloatTensor'>
+ model.fcn16 features_at:45 (1, 4096, 34, 60) <class 'torch.cuda.FloatTensor'>
+ model.fcn16 features_at:48 (1, 4096, 34, 60) <class 'torch.cuda.FloatTensor'>