Using data from DataLoader without copying in PyTorch 1.0

I’ve got a DataLoader loading a custom dataset.

loader = torch.utils.data.DataLoader(dataset, batch_size=batch_size, sampler=train_sampler,
                                     pin_memory=torch.cuda.is_available(), num_workers=0)
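For reference, a minimal self-contained version of that setup (the dataset and sampler here are hypothetical stand-ins, not the custom dataset from the post):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset, SubsetRandomSampler

# Hypothetical stand-ins for the custom dataset and train_sampler
dataset = TensorDataset(torch.randn(100, 3), torch.randn(100, 1))
train_sampler = SubsetRandomSampler(range(80))

loader = DataLoader(dataset, batch_size=16, sampler=train_sampler,
                    pin_memory=torch.cuda.is_available(), num_workers=0)

inputs, targets = next(iter(loader))
print(inputs.shape)  # torch.Size([16, 3])
```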

I want to get rid of the Variable and use torch.tensor properly and without extra copying. An example of my training epoch looks like:

for batch_idx, (input, target) in enumerate(loader):

    # Create variables
    if torch.cuda.is_available():
        input_var = torch.autograd.Variable(input.cuda(async=True))
        target_var = torch.autograd.Variable(target.cuda(async=True))
    else:
        input_var = torch.autograd.Variable(input)
        target_var = torch.autograd.Variable(target)

    # compute output
    output = model(input_var)
    loss = torch.nn.functional.mse_loss(output, target_var)

With data from a DataLoader, do I change torch.autograd.Variable(input.cuda(async=True)) to torch.from_numpy(input).cuda(async=True) to prevent copying?
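As background on the copying question: torch.from_numpy shares memory with the NumPy array (no copy), but any .cuda() call is necessarily a host-to-device copy, so there is no zero-copy way to move a batch onto the GPU. A quick CPU-side sketch:

```python
import numpy as np
import torch

arr = np.zeros(3, dtype=np.float32)
t = torch.from_numpy(arr)  # shares memory with arr, no copy made
arr[0] = 7.0
print(t[0].item())  # 7.0 -- the tensor sees the in-place change

# Moving to the GPU always copies across the PCIe bus:
if torch.cuda.is_available():
    t_gpu = t.cuda()  # host-to-device copy, unavoidable
```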

Also, when I test an epoch I have

input_var = torch.autograd.Variable(input.cuda(async=True), volatile=True)
target_var = torch.autograd.Variable(target.cuda(async=True), volatile=True)

What do I replace volatile with?


Using the new methods you would write:

device = 'cuda' if torch.cuda.is_available() else 'cpu'
for batch_idx, (input, target) in enumerate(loader):
    input = input.to(device)
    target = target.to(device)
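Putting it together, a complete training step in the 0.4+/1.0 style might look like this (the model and loader below are placeholders, not from the original post):

```python
import torch

device = 'cuda' if torch.cuda.is_available() else 'cpu'
model = torch.nn.Linear(3, 1).to(device)  # placeholder model

# Placeholder loader: any iterable of (input, target) batches works
loader = [(torch.randn(16, 3), torch.randn(16, 1)) for _ in range(4)]

for batch_idx, (input, target) in enumerate(loader):
    # non_blocking=True replaces the old async=True; it only has an
    # effect for pinned host memory on CUDA transfers
    input = input.to(device, non_blocking=True)
    target = target.to(device, non_blocking=True)

    # Tensors replace Variables directly -- no wrapping needed
    output = model(input)
    loss = torch.nn.functional.mse_loss(output, target)
```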

The volatile argument was replaced by the torch.no_grad() context manager.
You should just wrap your validation code in this statement to disable gradients:

with torch.no_grad():
    for batch_idx, (data, target) in enumerate(val_loader):
        data, target = data.to(device), target.to(device)
        output = model(data)
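To see that gradients really are disabled, note that anything computed under torch.no_grad() comes out with requires_grad=False (the model here is a placeholder):

```python
import torch

model = torch.nn.Linear(3, 1)  # placeholder model
x = torch.randn(5, 3)

with torch.no_grad():
    y = model(x)

print(y.requires_grad)  # False -- no autograd graph is recorded
```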