Different ways of creating a tensor on the GPU from a vector

Hi, what is the difference among the ways tensors a, b, c, and d are created from the vector v below, in terms of memory allocation on the CPU and GPU?

#include <torch/torch.h>
#include <vector>

std::vector<float> v(5, 10.0f);

// a: from_blob wrapping v's host buffer, with a CUDA device in the options
auto options_a = torch::TensorOptions().dtype(torch::kFloat32).device(torch::kCUDA, 1);
torch::Tensor a = torch::from_blob(v.data(), {5}, options_a);

// b: from_blob on the CPU, followed by an explicit copy to the GPU
auto options_b = torch::TensorOptions().dtype(torch::kFloat32);
torch::Tensor b = torch::from_blob(v.data(), {5}, options_b).to(torch::kCUDA);

// c: copy v into a CPU tensor, then copy that tensor to the GPU
torch::Tensor c = torch::tensor(v).to(torch::kCUDA);

// d: pass the CUDA device directly to torch::tensor
torch::Tensor d = torch::tensor(v, torch::device(torch::kCUDA));

I know that for c, a tensor is first created on the CPU and then copied to the GPU, and that for d, the tensor is created directly on the GPU. But then in the case of d, is the vector v itself transferred to the GPU?
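
For reference, here is a minimal sketch I used to probe the b and c variants (assuming a CUDA build of libtorch and at least one visible GPU; the names b_cpu, b, and c are mine and mirror the snippet above). It checks whether each tensor aliases v's host buffer and where its storage ends up:

#include <torch/torch.h>
#include <iostream>
#include <vector>

int main() {
  std::vector<float> v(5, 10.0f);

  // from_blob wraps v's existing host buffer (no copy, CPU storage);
  // .to(torch::kCUDA) then allocates GPU memory and copies into it.
  torch::Tensor b_cpu = torch::from_blob(v.data(), {5}, torch::kFloat32);
  torch::Tensor b = b_cpu.to(torch::kCUDA);

  // torch::tensor copies v into a fresh CPU tensor before the GPU copy.
  torch::Tensor c = torch::tensor(v).to(torch::kCUDA);

  // b_cpu aliases v (same pointer); b and c own separate CUDA buffers.
  std::cout << (b_cpu.data_ptr<float>() == v.data()) << '\n';  // prints 1
  std::cout << b.device() << ' ' << c.device() << '\n';        // cuda:0 cuda:0

  // A change to v shows through b_cpu, but not through b or c, whose
  // storage was copied at .to() / torch::tensor() time.
  v[0] = 42.0f;
  std::cout << b_cpu[0].item<float>() << ' '
            << b[0].item<float>() << '\n';                     // 42 10
}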