Hi all,
I am implementing part of the code for Federated Matched Averaging (full code found here: https://github.com/IBM/FedMA/blob/master/language_modeling/language_fedma.py)
An error occurs when the following line runs:
batch_weights_norm = [w * s for w, s in zip(weights_bias, sigma_inv_layer)]
The full traceback is:
RuntimeError Traceback (most recent call last)
c:\Users\65967\Desktop\Federated Learning\Python\PyT_FedMA.py in <module>
345 it=it,
346 n_layers=NUM_LAYERS,
---> 347 matching_shapes=matching_shapes)
348 matching_shapes.append(next_layer_shape)
349 assignments_list.append(assignments)
c:\Users\65967\Desktop\Papers\battery\Federated Learning\Python\language_fedma.py in layerwise_fedma(batch_weights, layer_index, sigma_layers, sigma0_layers, gamma_layers, it, n_layers, matching_shapes)
283 ########################################
284 assignment_c, global_weights_c, global_sigmas_c, popularity_counts = match_layer(weights_bias, sigma_inv_layer, mean_prior,
--> 285 sigma_inv_prior, gamma, it)
286
287 ########################################
c:\Users\65967\Desktop\Federated Learning\Python\language_fedma.py in match_layer(weights_bias, sigma_inv_layer, mean_prior, sigma_inv_prior, gamma, it)
146 #AA: On how to use built-in sorted() function with key attribute https://www.programiz.com/python-programming/methods/built-in/sorted
147 group_order = sorted(range(J), key=lambda x: -weights_bias[x].shape[0]) #AA: This sorts index of J from one with largest negative number (hence smallest value) to lowest negative number (hence largest value)?? Does it??
--> 148 batch_weights_norm = [w * s for w, s in zip(weights_bias, sigma_inv_layer)] #AA: RuntimeError: Can't call numpy() on Tensor that requires grad. Use tensor.detach().numpy() instead.
149 #batch_weights_norm = [w.detach().numpy() * s.detach().numpy() for w, s in zip(weights_bias, sigma_inv_layer)]
150 #batch_weights_norm = [torch.Tensor.cpu(w) * torch.Tensor.cpu(s) for w, s in zip(weights_bias, sigma_inv_layer)]
c:\Users\65967\Desktop\Federated Learning\Python\language_fedma.py in <listcomp>(.0)
146 #AA: On how to use built-in sorted() function with key attribute https://www.programiz.com/python-programming/methods/built-in/sorted
147 group_order = sorted(range(J), key=lambda x: -weights_bias[x].shape[0]) #AA: This sorts index of J from one with largest negative number (hence smallest value) to lowest negative number (hence largest value)?? Does it??
--> 148 batch_weights_norm = [w * s for w, s in zip(weights_bias, sigma_inv_layer)] #AA: RuntimeError: Can't call numpy() on Tensor that requires grad. Use tensor.detach().numpy() instead.
149 #batch_weights_norm = [w.detach().numpy() * s.detach().numpy() for w, s in zip(weights_bias, sigma_inv_layer)]
150 #batch_weights_norm = [torch.Tensor.cpu(w) * torch.Tensor.cpu(s) for w, s in zip(weights_bias, sigma_inv_layer)]
~\anaconda3\envs\torch17cuda11\lib\site-packages\torch\_tensor.py in __array__(self, dtype)
676 return handle_torch_function(Tensor.__array__, (self,), self, dtype=dtype)
677 if dtype is None:
--> 678 return self.numpy()
679 else:
680 return self.numpy().astype(dtype, copy=False)
RuntimeError: Can't call numpy() on Tensor that requires grad. Use tensor.detach().numpy() instead.
I have tried replacing the operands with w.detach().numpy() and with torch.Tensor.cpu(w) (the commented-out lines 149 and 150 above), but both attempts failed with a message saying that w is already a numpy array.
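To narrow things down, I tried to reproduce the error outside of FedMA. My understanding (which may be wrong) is that when numpy needs to coerce a torch tensor to an array, it calls Tensor.__array__, which in turn calls .numpy() and fails if the tensor still requires grad. The variable names below are my own, not from the repo:

```python
import numpy as np
import torch

# A tensor that is still attached to the autograd graph,
# like the sigma_inv_layer entries seem to be in my run.
s = torch.ones(3, requires_grad=True)

try:
    np.asarray(s)  # numpy's implicit conversion calls Tensor.__array__
    raised = False
except RuntimeError as e:
    raised = True
    print(e)  # Can't call numpy() on Tensor that requires grad. ...

# Detaching (and moving to CPU, if the tensor lives on GPU) first works,
# because the result is a plain numpy array with no grad history.
s_np = s.detach().cpu().numpy()
w = np.ones((2, 3))          # standing in for a weights_bias entry
out = w * s_np               # pure numpy multiply, no autograd involved
print(out.shape)
```

So in my case it looks like only one side of the w * s product is a numpy array, while the other side is still a grad-requiring tensor that numpy tries to convert, which would explain why detaching w alone did not help.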
Could anyone please enlighten me on what is triggering this error?
Thank you very much.