I am trying to generate one univariate normal sample for every parameter in a network, so the resulting vector should be about the same size as the network itself. However, whenever I run the code below I run out of memory, which doesn't make sense because I can load the same network three times without any problems, so something else must be wrong. Code:
data_path = './data'
trainset, trainloader, testset, testloader, classes = data_class.get_cifer_data_processors(data_path, 256, 256, 0, 0, standardize=True)
net = utils.restore_entire_mdl(path).cuda()
net2 = utils.restore_entire_mdl(path).cuda()
#net3 = utils.restore_entire_mdl(path).cuda()
nb_params = nn_mdls.count_nb_params(net)
print(f'nb_params {nb_params}')
v = torch.normal(torch.zeros(nb_params), torch.eye(nb_params)).cuda()  # OOM happens on this line
print('end no issue')
and the parameter-counting helper:
def count_nb_params(net):
    count = 0
    for p in net.parameters():
        count += p.data.nelement()
    return count
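As a sanity check on the sizes involved: if I understand `torch.normal(mean, std)` correctly, the `std` argument is applied element-wise, it is not a covariance matrix. So `torch.eye(nb_params)` would try to materialize an `nb_params × nb_params` tensor before sampling, which dwarfs the network itself. A quick sketch of the arithmetic (the parameter count below is made up for illustration):

```python
import torch

nb_params = 1_000_000  # hypothetical parameter count, for illustration only

# torch.normal(mean, std) samples element-wise: std is NOT a covariance matrix.
# Passing torch.eye(nb_params) therefore builds an nb_params x nb_params
# float32 tensor before sampling:
eye_bytes = nb_params ** 2 * 4   # identity matrix: ~4 TB for a million params
vec_bytes = nb_params * 4        # 1-D tensor of the same length: ~4 MB
print(f'identity matrix: {eye_bytes / 1e12:.1f} TB, 1-D vector: {vec_bytes / 1e6:.1f} MB')

# One standard-normal sample per parameter only needs a 1-D tensor:
v = torch.randn(nb_params)
assert v.shape == (nb_params,)
```

With a 1-D tensor of standard deviations (or just `torch.randn`), the allocation scales linearly in `nb_params` rather than quadratically.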