The type of input of DataParallel

Hi, I have a question. For the torch.nn.DataParallel, I use the following code:

model = DataParallel(MyModule1()).to('cuda:0')
y = model(x)

Does x have to be a torch.Tensor? Can I use other types such as list, tuple, …?
Thanks a lot

You can use whatever your forward function deals with.

Just to clarify: native nn.Modules such as convolutions, fully connected layers, etc. do require a torch.Tensor.
If you create your own nn.Module, you will have to write its forward function yourself.
Inside that function you can do “whatever” you want. If you code it assuming your input is a list of tensors, then it’s fine. Just remember that Python lists are slow to process in loops, so try not to loop over them (for example, stack them into a single tensor, as sketched below).
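Here is a minimal sketch of that idea (the module and its names are made up for illustration): a custom forward that accepts a list of tensors and stacks it into one batch tensor up front, so the built-in layers still see a torch.Tensor and no Python loop is needed.

```python
import torch
import torch.nn as nn

class MyListModule(nn.Module):
    """Toy module whose forward takes a list of tensors instead of a single tensor."""
    def __init__(self, in_features=8, out_features=4):
        super().__init__()
        self.fc = nn.Linear(in_features, out_features)

    def forward(self, xs):
        # xs is assumed to be a list of 1-D tensors of equal length.
        # torch.stack turns it into a (len(xs), in_features) batch tensor,
        # so we avoid looping over the list in Python.
        batch = torch.stack(xs, dim=0)
        return self.fc(batch)

model = MyListModule()
x = [torch.randn(8) for _ in range(16)]  # list input instead of a tensor
y = model(x)
print(y.shape)  # torch.Size([16, 4])
```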

How does the data sharding work with a dict input?

A good example of using only a dict as input might be the base detector of detectron2: detectron2/dense_detector.py at main · facebookresearch/detectron2 · GitHub

According to the docs, dicts will be shallow copied. How does model replica X know which subset of the data to operate on?
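One way to see for yourself what each replica actually receives is a tiny probe module that just prints the device and shape of every tensor value in the dict it gets. This is a made-up toy example, not detectron2 code, and it needs at least two visible GPUs to show any split at all; on a single GPU it will simply print the full batch.

```python
import torch
import torch.nn as nn

class DictProbe(nn.Module):
    """Toy module: report what this replica received when the input is a dict."""
    def forward(self, batch):
        for key, value in batch.items():
            print(f"device={value.device}, {key}: {tuple(value.shape)}")
        # Return a small tensor so DataParallel can gather the outputs.
        return torch.zeros(1, device=next(iter(batch.values())).device)

model = nn.DataParallel(DictProbe()).to('cuda:0')
inputs = {
    'images': torch.randn(8, 3, 32, 32),
    'labels': torch.randint(0, 10, (8,)),
}
model(inputs)  # each replica prints the slice of the dict it was given
```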