LSTM example with multiple GPU error: module 'torch' has no attribute 'long'

I understand what you are saying and it makes sense :slight_smile:

However, I got rid of my first error by updating PyTorch, and now my error is (I'm still using a batch size of 1):

TypeError: Broadcast function not implemented for CPU tensors

I understand this error happens because in multi-GPU mode I have to make sure all inputs are CUDA tensors, as in this example and this example. Neither example mentions anything about batch size, so I'm wondering if the error is caused by some input that I haven't converted to a CUDA tensor? This is the check that raises it:

if not all(input.is_cuda for input in inputs):
    raise TypeError('Broadcast function not implemented for CPU tensors')
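For reference, here's a minimal sketch of what I believe the fix amounts to: moving the input to the GPU before the forward pass through the DataParallel-wrapped LSTM. The sizes and variable names below are made up by me, not taken from the examples:

import torch
import torch.nn as nn

# batch_first=True so DataParallel scatters along dim 0 = the batch dimension
model = nn.DataParallel(
    nn.LSTM(input_size=10, hidden_size=20, batch_first=True)
).cuda()

x = torch.randn(1, 5, 10).cuda()  # (batch=1, seq_len=5, features) -- must be a CUDA tensor
output, (h_n, c_n) = model(x)     # hidden/cell states default to zeros on the right device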

I wish those 2 examples included more details :confused: