model.cuda() will push the parameters to the default device.
If you print a parameter’s device, you should see which GPU is used:
print(model.fc.weight.device)
> device(type='cuda', index=0)
Also, have a look at nvidia-smi to see which GPUs are used.
If you would like to use data parallelism (i.e. replicate the model onto all GPUs and send a chunk of the batch to each GPU), have a look at this tutorial.
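A minimal sketch of the data-parallel setup (the model architecture and tensor sizes are placeholders; the script falls back to the CPU if no GPU is visible):

```python
import torch
import torch.nn as nn

# Placeholder model; replace with your own.
model = nn.Sequential(nn.Linear(10, 2))

# nn.DataParallel replicates the model onto all visible GPUs and splits
# the input batch along dim 0, sending one chunk to each replica.
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model.to(device)

x = torch.randn(8, 10, device=device)  # batch of 8 samples
out = model(x)                         # outputs are gathered back onto device 0
print(out.shape)
```

Note that the batch size should be at least as large as the number of GPUs, since each replica receives one chunk of the batch.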