What’s up everyone,
I currently have a distributed reinforcement learning framework built with PyTorch. After profiling my code, a major bottleneck in my throughput (network updates per unit time) turns out to be converting my networks' parameters (state_dicts) from the OrderedDict torch uses into JSON so I can send them over the network with gRPC.
A code example of what I’m currently doing:
```python
# convert torch state dict to json
for entry in actor_params:
    actor_params[entry] = actor_params[entry].cpu().data.numpy().tolist()
actor_params = json.dumps(actor_params)
```
where actor_params is just a model state_dict.
To summarize, I just need a fast way to get from a torch CPU state_dict to JSON (i.e., speed up the block above). I do this sequentially for six networks, which is where the slowdown comes from. Any help or ideas are greatly appreciated.
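For context, here is a self-contained reproduction of the conversion being profiled, with numpy arrays standing in for torch CPU tensors (the layer names and shapes are made up for illustration):

```python
import json
import time

import numpy as np


def make_fake_state_dict():
    # stand-in for a model state_dict: parameter name -> array
    # (hypothetical shapes, just for illustration)
    return {
        "fc1.weight": np.random.randn(256, 128),
        "fc1.bias": np.random.randn(256),
        "fc2.weight": np.random.randn(64, 256),
        "fc2.bias": np.random.randn(64),
    }


def to_json(state_dict):
    # same conversion as in the post: array -> nested Python list -> JSON string
    return json.dumps({k: v.tolist() for k, v in state_dict.items()})


start = time.perf_counter()
payloads = [to_json(make_fake_state_dict()) for _ in range(6)]  # six networks
elapsed = time.perf_counter() - start
print(f"serialized {len(payloads)} state_dicts in {elapsed:.4f}s")
```

Most of the time goes into `tolist()` (building millions of Python objects) and `json.dumps` walking them, so that's the part I'd like to avoid or speed up.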