```python
# Initialize an m x n 2-D array; each entry is a neural network
phi = [[None] * n for _ in range(m)]
for i in range(m):
    for j in range(n):
        phi[i][j] = NeuralNetwork()

# Let k, i be arbitrary indices
p1 = torch.nn.utils.parameters_to_vector(phi[k][i - 1].parameters())
p2 = torch.nn.utils.parameters_to_vector(mean of phi[:][i - 1])  # pseudocode
```
I want to compute the mean squared error between the parameters of `phi[k][i - 1]` and the average of the parameters over the entire column, i.e. `((p1 - p2)**2).sum()`. I tried it the following way:
```python
tmp = [x.parameters() for x in self.phi[:][i - 1]]
mean_params = torch.mean(torch.stack(tmp), dim=0)
p2 = torch.nn.utils.parameters_to_vector(mean_params)
```
But this doesn't work, because `tmp` is a list of generator objects, not tensors. More specifically, my problem is computing the mean over those parameter generators.
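One way around this (a sketch, assuming every network in the column has the same architecture, and using a small `nn.Linear` as a stand-in for the `NeuralNetwork` class from the question): flatten each network's parameters into a 1-D vector with `parameters_to_vector` *first*, then stack and average the vectors. Note also that `phi[:][i - 1]` does not select a column: `phi[:]` is a shallow copy of the outer list, so it is the same as `phi[i - 1]` (a row); a column is `[phi[r][i - 1] for r in range(m)]`.

```python
import torch
import torch.nn as nn

m, n = 3, 4

# Stand-in for the NeuralNetwork class in the question; any nn.Module works
# as long as all networks in a column share the same architecture.
def NeuralNetwork():
    return nn.Linear(5, 2)

phi = [[NeuralNetwork() for _ in range(n)] for _ in range(m)]
k, i = 1, 2

# Column i-1 of the grid (phi[:][i-1] would instead give row i-1).
column = [phi[r][i - 1] for r in range(m)]

# Flatten each network's parameters to a 1-D vector, then average the vectors.
vecs = [torch.nn.utils.parameters_to_vector(net.parameters()) for net in column]
p2 = torch.stack(vecs).mean(dim=0)

p1 = torch.nn.utils.parameters_to_vector(phi[k][i - 1].parameters())
mse = ((p1 - p2) ** 2).sum()
```

Averaging the flat vectors is equivalent to averaging each parameter tensor elementwise, but it sidesteps the generator problem entirely, since `parameters_to_vector` consumes the generator for you.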