dashesy
(dashesy)
#1
What is the difference between `.options()` and `.type()`?

Let's say

```
std::vector<at::Tensor> forward(at::Tensor xy, int height, int width) {
  auto tmp = at::empty({10, 4, height, width}, xy.type());
  return {tmp};
}
```

vs.

```
std::vector<at::Tensor> forward(at::Tensor xy, int height, int width) {
  auto tmp = at::empty({10, 4, height, width}, xy.options());
  return {tmp};
}
```

Would the latter include not only CUDA/CPU but also the device that xy is on? In that case, which device will be used in the first case?

colesbury
(Sam Gross)
#2
> Would the latter include not only CUDA/CPU but also the device that xy is on?

Yes.

> In that case, which device will be used in the first case?

The current CUDA device (if xy is a CUDA tensor). You can set the device with `DeviceGuard` in C++ or `torch.cuda.set_device` in Python.
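A minimal sketch of the distinction in Python, mirroring the answer above: passing the source tensor's full device (the Python analogue of C++ `xy.options()`) pins the new tensor to exactly that device, while a bare `device="cuda"` falls back to the *current* CUDA device, which `torch.cuda.set_device` changes. The multi-GPU branch is an assumption and only runs when two CUDA devices are present.

```python
import torch

xy = torch.zeros(3)  # a CPU tensor for illustration

# Analogue of xy.options(): copy dtype *and* exact device from xy.
same = torch.empty(2, 2, dtype=xy.dtype, device=xy.device)
assert same.device == xy.device

# Analogue of the .type() case: a bare "cuda" device uses whatever the
# current CUDA device is, which torch.cuda.set_device controls.
if torch.cuda.is_available() and torch.cuda.device_count() >= 2:
    torch.cuda.set_device(1)
    t = torch.empty(2, 2, device="cuda")  # lands on the current device, cuda:1
    assert t.device == torch.device("cuda", 1)
```

In C++, the same temporary switch is done with an RAII `DeviceGuard`, which restores the previous device when it goes out of scope.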


dashesy
(dashesy)
#3
Thanks!

One more question:

If I want to create a tensor with the same options but with an int dtype, is there something like:

```
std::vector<at::Tensor> forward(at::Tensor xy, int height, int width) {
  auto tmp = at::empty({10, 4, height, width}, xy.options().as_int());
  return {tmp};
}
```

dashesy
(dashesy)
#4
This is the solution:

```
std::vector<at::Tensor> forward(at::Tensor xy, int height, int width) {
  auto tmp = at::empty({10, 4, height, width}, xy.options().dtype(at::kInt));
  return {tmp};
}
```