- How can I check which specific CUDA device a model is on, rather than just whether `is_cuda` is true? Is there a way to do this?
- If the input is on CUDA but the model hasn't been explicitly moved to CUDA, what happens when I execute `model(input)`? If the model and the input are not on the same device, will it fail?
- About the details of `DataParallel`: what is the difference between the data on the host CUDA device and the data on the other devices? How are the model and the inputs allocated? What are the inner mechanics of `DataParallel`? It seems the host device uses more memory.
Overall, I feel a little confused about the details of data allocation… can someone help me? Thanks!!
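For the first question, here is a minimal sketch of what I mean. As far as I understand, a module has no single `.device` attribute, but its parameters do, so checking one parameter should reveal the device (assuming all parameters live on the same device, which is the common case):

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 2)

# nn.Module has no .device attribute; its parameters do.
# Inspecting the first parameter tells you where the model lives,
# assuming all parameters share one device (the usual case).
device = next(model.parameters()).device
print(device)  # cpu

if torch.cuda.is_available():
    model.to('cuda:0')
    # Now the device carries the index, not just "is it CUDA".
    print(next(model.parameters()).device)  # cuda:0
```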
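For the second question, my understanding is that PyTorch does not move tensors across devices for you, so mixing a CPU model with a CUDA input should raise a `RuntimeError`. A small sketch (only meaningful on a machine with a GPU):

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 2)  # parameters stay on the CPU

if torch.cuda.is_available():
    x = torch.randn(3, 4, device='cuda:0')  # input on the GPU
    try:
        model(x)  # CPU weights meet a CUDA input
    except RuntimeError as e:
        # PyTorch refuses to mix devices inside a kernel
        # instead of silently copying tensors for you.
        print('failed as expected:', e)
else:
    print('no CUDA device available to demonstrate the mismatch')
```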
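And for the `DataParallel` question, this sketch shows what I think happens per forward pass (scatter the batch, replicate the module, run in parallel, gather onto the first device), which would explain why `cuda:0` uses more memory: it holds the master parameters plus the gathered outputs. This assumes at least two visible GPUs:

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 2)

if torch.cuda.device_count() >= 2:
    model = model.cuda()         # master copy of the weights on cuda:0
    dp = nn.DataParallel(model)  # defaults: all visible GPUs, output_device = device_ids[0]
    x = torch.randn(8, 4, device='cuda:0')

    # Each forward pass, DataParallel:
    #   1. scatters x along dim 0 into one chunk per GPU
    #   2. replicates the module's parameters to every GPU
    #   3. runs the forward pass on all replicas in parallel
    #   4. gathers the per-GPU outputs back onto cuda:0
    out = dp(x)
    print(out.device)  # cuda:0 -- gathered outputs land on the host device
else:
    print('need at least 2 GPUs to demonstrate DataParallel')
```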