About transferring a network model to CUDA

I am a PyTorch beginner, and recently, while trying to move a network model I defined myself to the GPU,
I ran into a problem.

The problem can be described simply like this: first I define a class

class cliqueBlock(nn.Module):
    def __init__(self, out_channels=36):
        super(cliqueBlock, self).__init__()
        self.w0 = w0
        # here w0 is a plain Python list of nn.Module instances,
        # e.g. [nn.ReLU(), nn.Conv2d(), nn.BatchNorm2d()]

    def forward(self, x):
        out = self.w0[0](x)
        return out

Then I transfer the model to CUDA: model = cliqueBlock().cuda()

The problem is that when I use this model to train my network, I get:

RuntimeError: get_device is not implemented for type torch.FloatTensor

I think this is because my image input is on CUDA while the parameters of my network are still on the CPU.
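To make this concrete, here is roughly what I am seeing (the Conv2d/BatchNorm2d arguments and the input size below are just placeholders for my real layers):

import torch
import torch.nn as nn

# the list of layers that cliqueBlock stores as self.w0
w0 = [nn.ReLU(), nn.Conv2d(3, 36, kernel_size=3, padding=1), nn.BatchNorm2d(36)]

model = cliqueBlock().cuda()
x = torch.randn(1, 3, 32, 32).cuda()    # image batch already on the GPU

print(x.device)                   # cuda:0
print(model.w0[1].weight.device)  # cpu -- the conv weights did not move with the model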
But I am just confused about the reason: should I not define the class member in __init__ as a list?
And is there any solution to this problem?
Thank you very much.

Try using nn.ModuleList instead of a plain Python list.
This will make sure your submodules are properly registered and pushed to your device when necessary.
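Here is a minimal sketch of what that looks like for your block (the Conv2d/BatchNorm2d arguments are just placeholders for your real layers):

import torch.nn as nn

class cliqueBlock(nn.Module):
    def __init__(self, out_channels=36):
        super(cliqueBlock, self).__init__()
        # nn.ModuleList registers every submodule, so model.parameters()
        # sees their weights and .cuda()/.to() moves them with the model
        self.w0 = nn.ModuleList([
            nn.ReLU(),
            nn.Conv2d(3, out_channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_channels),
        ])

    def forward(self, x):
        out = self.w0[0](x)
        return out

model = cliqueBlock().cuda()
print(next(model.parameters()).device)   # cuda:0 -- the conv/bn parameters moved too

The same applies if you keep layers in a dict: use nn.ModuleDict instead of a plain Python dict.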

I tried nn.ModuleList and your advice really works well. Thanks a lot!