Found param roberta.embeddings.word_embeddings.weight with type torch.FloatTensor, expected torch.cuda.FloatTensor

Hi, I am trying out apex for the first time and I'm running into the following error when training CamemBERT using fast-bert:

Found param roberta.embeddings.word_embeddings.weight with type torch.FloatTensor, expected torch.cuda.FloatTensor.

I also get this hint, but I still don't know how to deal with it:

When using amp.initialize, you need to provide a model with parameters
located on a CUDA device before passing it no matter what optimization level
you chose. Use model.to('cuda') to use the default device.

PS: I don't have an NVIDIA GPU, so I didn't install CUDA.

Unfortunately, you won't be able to use mixed-precision training without a GPU.
The error message indicates that a CUDA tensor was expected, but a CPU tensor was found.
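A minimal sketch of what that message is checking (plain PyTorch; the apex calls are shown as comments since they require a GPU, and `optimizer` / the `O1` opt level are just illustrative):

```python
import torch

# On a CPU-only machine, model parameters are plain torch.FloatTensor,
# which is exactly the type the apex error message complains about.
model = torch.nn.Linear(4, 2)
print(next(model.parameters()).type())  # torch.FloatTensor

# The fix needs an NVIDIA GPU: move the model first, then call amp.initialize.
# model.to('cuda')   # parameters become torch.cuda.FloatTensor
# model, optimizer = amp.initialize(model, optimizer, opt_level='O1')
```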

That being said, once you get access to a GPU, we recommend trying out the native mixed-precision support from the current master branch or the nightly binaries, as described here. :slight_smile:
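For reference, the native path can be sketched roughly like this (a generic example, not fast-bert-specific; the model and data are dummies, and it falls back to plain FP32 when no GPU is present):

```python
import torch

use_cuda = torch.cuda.is_available()
device = torch.device('cuda' if use_cuda else 'cpu')

# The model must be on the device before training, mirroring the apex requirement.
model = torch.nn.Linear(8, 2).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scaler = torch.cuda.amp.GradScaler(enabled=use_cuda)  # no-op scaling on CPU

x = torch.randn(4, 8, device=device)
target = torch.randint(0, 2, (4,), device=device)

optimizer.zero_grad()
with torch.cuda.amp.autocast(enabled=use_cuda):  # FP16 ops only when enabled
    loss = torch.nn.functional.cross_entropy(model(x), target)
scaler.scale(loss).backward()
scaler.step(optimizer)
scaler.update()
```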


This is my first time doing this. How do I get access to a GPU? :sweat_smile:

Do you think installing CUDA on a machine without an NVIDIA GPU would fix it?
Thank you

No, you would need an NVIDIA GPU installed in your machine in order to run CUDA code.

If you currently don't have a GPU device, you could use Google Colab or any other cloud service that provides GPUs.
If I'm not mistaken, you can use a GPU in Colab for a limited time frame (12h?) until the runtime disconnects.
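Once the runtime is up (in Colab: Runtime -> Change runtime type -> GPU), a quick check confirms whether PyTorch can actually see a GPU:

```python
import torch

# True only when a CUDA-capable GPU is visible to PyTorch.
print(torch.cuda.is_available())
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))
```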
