Getting Device of model.children()

I am using model parallelism, following the Single-Machine Model Parallel Best Practices tutorial (PyTorch Tutorials 1.8.1+cu102).

In the model, I assign the layers (the modules from model.children()) to separate CUDA devices via layer.to('cuda:0'), layer.to('cuda:1'), and so on.
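For context, a minimal sketch of the kind of setup I mean, roughly like the ToyModel from the linked tutorial (the model itself and the layer split are just placeholders, not my actual code):

import torch
import torch.nn as nn

class ToyModel(nn.Module):
    # Hypothetical two-layer model split across two GPUs.
    def __init__(self):
        super().__init__()
        self.net1 = nn.Linear(10, 10).to('cuda:0')  # first layer on GPU 0
        self.net2 = nn.Linear(10, 5).to('cuda:1')   # second layer on GPU 1

    def forward(self, x):
        x = torch.relu(self.net1(x.to('cuda:0')))
        return self.net2(x.to('cuda:1'))            # move activations over to GPU 1

model = ToyModel()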

How can I later look up which CUDA device a given layer is assigned to? With torch.cuda I can get the name of a GPU via torch.cuda.get_device_name().
https://pytorch.org/docs/stable/cuda.html

However, that does not tell me which CUDA device the layer is assigned to. I can even get the GPU properties of the device the layer is assigned to, but I cannot get the 'cuda:0' device string itself. Any suggestions?
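For reference, this is the kind of information those torch.cuda calls give me (the device index 0 and the printed values are just examples):

import torch

# These helpers take a device index or torch.device, not a layer,
# and they describe the hardware rather than where a module lives.
print(torch.cuda.get_device_name(0))        # e.g. 'Tesla V100-SXM2-16GB'
print(torch.cuda.get_device_properties(0))  # e.g. _CudaDeviceProperties(name=..., total_memory=..., ...)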

TIA

Found a hackish workaround:

for layer in model.children():
    try:
        # The device of the layer's first parameter tells us where it lives.
        print(next(layer.parameters()).device)
    except StopIteration:
        # Layer has no parameters (e.g. nn.ReLU), so skip it.
        continue

Any better ideas are appreciated.
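A slightly cleaner variant I am considering is building a name-to-device mapping instead of printing inside the loop. This is only a sketch (layer_devices is a made-up helper name), and it assumes a layer's location can be read off its first parameter or buffer; parameter-less layers simply don't appear in the result:

# Map each named child module to the device of its first parameter or buffer.
def layer_devices(model):
    devices = {}
    for name, layer in model.named_children():
        tensors = list(layer.parameters()) + list(layer.buffers())
        if tensors:
            devices[name] = tensors[0].device
    return devices

print(layer_devices(model))  # e.g. {'net1': device(type='cuda', index=0), 'net2': device(type='cuda', index=1)}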