I am trying to replicate this.
Now, this magical (to me) line of code - torch.set_default_device(device) - creates all the tensors on that device; for me it is CUDA.
I want to know how I would implement this same line in C++ and what things to keep in mind.
Currently I have tried assigning some tensors (probably all of them), the model, and the optimizer to CUDA. Code link.
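To be concrete, the pattern I have in mind is roughly this (a minimal sketch, not the exact code from the link; the real model is bigger):

#include <torch/torch.h>

int main() {
    // Pick CUDA when available, otherwise fall back to the CPU.
    torch::Device device(torch::cuda::is_available() ? torch::kCUDA : torch::kCPU);

    // Move the model before building the optimizer, so the optimizer
    // already references the parameters living on the device.
    torch::nn::Linear model(3, 1);
    model->to(device);
    torch::optim::SGD optimizer(model->parameters(), /*lr=*/0.01);

    // Every freshly created tensor still has to name the device explicitly.
    auto x = torch::randn({8, 3}, torch::TensorOptions().device(device));
    auto y = model(x);
}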
Thank you!
@Mohit_Kumar, the link returns a 404.
My bad, the code was private - now it is public.
Your code initializes the device to CPU, checks for CUDA, and switches to CUDA if a card is found:
torch::Device device = torch::kCPU;
//-----------------------------------------------//
void initialize_device() {
    if (torch::cuda::is_available()) {
        device = torch::kCUDA;
        std::cout << "Using CUDA device\n";
    } else {
        std::cout << "Using CPU device\n";
    }
}
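With that in place, the rest of the code has to pass the global explicitly; a minimal sketch of a caller (not taken from your repo, and assuming #include <torch/torch.h> plus the definitions above):

int main() {
    initialize_device();
    // The global `device` only matters where it is actually passed;
    // factory calls without options still default to CPU.
    auto a = torch::ones({2, 2}, torch::TensorOptions().device(device));
    auto b = torch::ones({2, 2});   // still on CPU
    std::cout << a.device() << " / " << b.device() << "\n";
}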
If you just want CPU, change initialize_device to:
void initialize_device() {
}
Now the device will stay on the CPU.
@Dirk10001 thanks for reading the code. In Python we have torch.set_default_device(my_device) - after that, the tensors created throughout the script, and probably my model and optimizer as well, are initialized on my_device. But we don't have something similar in C++.
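The closest workaround I can think of is funnelling everything through one helper so the device is only spelled out once - a sketch only, where default_options is my own hypothetical helper, not a LibTorch API:

#include <torch/torch.h>
#include <iostream>

// Hypothetical helper: one place that knows the device, used by every factory call.
torch::Device g_device = torch::cuda::is_available() ? torch::kCUDA : torch::kCPU;

torch::TensorOptions default_options() {
    return torch::TensorOptions().device(g_device);
}

int main() {
    auto w = torch::randn({4, 4}, default_options());   // lands on g_device
    torch::nn::Linear layer(4, 4);
    layer->to(g_device);                                 // modules still need an explicit to()
    std::cout << w.device() << "\n";
}

It is of course not the same thing as set_default_device, because a plain torch::zeros(...) call without options still defaults to CPU.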