I’ve implemented P2P distributed learning: training is spread across peers that periodically average their model parameters in a peer-to-peer fashion. The method is described in “How to scale distributed deep learning?” and “Gossip training for deep learning”. A PyTorch adapter makes it easy to integrate with existing PyTorch projects.
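The core idea can be sketched in a few lines: when two peers gossip, each replaces its parameters with the element-wise mean of both parameter sets. This is a minimal illustrative sketch, not the repo’s actual API; the function name `gossip_average_` and the toy models are assumptions for the example.

```python
import torch
import torch.nn as nn

def gossip_average_(model_a: nn.Module, model_b: nn.Module) -> None:
    """One gossip round: both peers adopt the mean of their parameters.

    Hypothetical helper for illustration; a real implementation would
    exchange parameters over the network instead of sharing memory.
    """
    with torch.no_grad():
        for pa, pb in zip(model_a.parameters(), model_b.parameters()):
            mean = (pa + pb) / 2
            pa.copy_(mean)
            pb.copy_(mean)

# Two peers training independently, modeled here as two small networks.
torch.manual_seed(0)
peer_a = nn.Linear(4, 2)
peer_b = nn.Linear(4, 2)

gossip_average_(peer_a, peer_b)
# After one round the two peers hold identical parameters; repeated
# asynchronous rounds drive all peers toward a common model.
```

In the full algorithm, each peer keeps training locally between gossip rounds and picks a random partner to average with, so no central parameter server is needed.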
This implementation can serve as a starting point for experimenting with asynchronous P2P training algorithms. It includes a distributed CIFAR training example based on pytorch-cifar.