I have a large 2D NumPy array (30884x30884) that takes about 7 GB of memory. I want to do a few matrix manipulations, such as computing the inverse, using PyTorch. I have two NVIDIA 1080 Ti GPUs (each with 11 GB of memory). So I thought the following would work:

```python
G_tensor = torch.from_numpy(G).cuda(device=0)
G_inverse = torch.DoubleTensor().cuda(device=1)
torch.inverse(G_tensor,out=G_inverse)
```

~~But I get the following error:~~

```
TypeError: torch.inverse received an invalid combination of arguments - got (torch.cuda.DoubleTensor, torch.cuda.DoubleTensor), but expected (torch.cuda.DoubleTensor source)
```

How can I utilize both GPUs to avoid running out of memory?
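For scale, a quick back-of-the-envelope check (my own arithmetic, using the sizes stated above) shows why one card is not enough: the input matrix alone is about 7 GiB in float64, so the input plus its inverse already exceed a single 11 GB GPU.

```python
# Back-of-the-envelope memory check for the sizes in the question.
n = 30884                 # matrix dimension from the question
bytes_per_double = 8      # float64
gib = n * n * bytes_per_double / 2**30

print(f"one matrix: {gib:.2f} GiB")             # ≈ 7.11 GiB
print(f"input + inverse: {2 * gib:.2f} GiB")    # ≈ 14.21 GiB, > 11 GiB per GPU
```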

UPDATE:

I needed to pass the named argument `out`. Still, it does not work the way I assumed.
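For reference, a minimal sketch of the `out=` call on a small CPU matrix. My assumption, based on PyTorch's usual `out=` semantics, is that the output tensor must live on the same device as the input, which would explain why placing the input on GPU 0 and the output on GPU 1 does not actually split the memory between the two cards:

```python
import torch

# Small-scale sketch of the same call (CPU here for illustration).
# Assumption: `out=` must be on the same device as the input, so the
# two-GPU layout in the question would not be honored by torch.inverse.
G = torch.rand(4, 4, dtype=torch.float64) + 4 * torch.eye(4, dtype=torch.float64)
G_inverse = torch.empty_like(G)   # preallocated output on the SAME device
torch.inverse(G, out=G_inverse)

# Sanity check: G @ G_inverse should be numerically the identity.
print(torch.allclose(G @ G_inverse, torch.eye(4, dtype=torch.float64)))
```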