Why does applying DataParallel to an Embedding sometimes lead to "incomplete" batch parts?

:bug: Bug

I only have a functional description of the bug so far; I'm still trying to put together an MWE.

Sometimes when using an Embedding in a model with DataParallel I hit errors like so:

x = ... # shape: [<batch part>, <something>]
y = self.embed(x) # shape: [<WRONG batch part>, <something>, <emb dim>]
#^ the return value of the Embedding is "incomplete" on some GPUs.
# I noticed it most on cuda:1, but that shouldn't matter.
# E.g. <batch part> = 2000, <WRONG batch part> = 11
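
For concreteness, here's a toy sketch of the kind of check I mean (hypothetical module and sizes, not my actual model):

import torch
import torch.nn as nn

# Hypothetical toy setup (not my real model): just an Embedding wrapped in
# DataParallel so the per-replica shapes can be printed from forward().
class EmbedOnly(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(5000, 32)

    def forward(self, x):
        y = self.embed(x)
        # Each replica prints its own slice; y.shape[0] should match x.shape[0].
        print(x.device, "in:", tuple(x.shape), "out:", tuple(y.shape))
        return y

model = nn.DataParallel(EmbedOnly().to("cuda:0"))
x = torch.randint(0, 5000, (2000, 7), device="cuda:0")
y = model(x)
# After gathering, the batch dim should be whole again; with the bug it isn't.
assert y.shape[0] == x.shape[0]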

To Reproduce

Steps to reproduce the behavior:

  1. Have a model with Embedding.
  2. Use DataParallel on the model such that you’re close to saturating your system.
  3. If the bug reproduces, the forward pass returns outputs with incomplete batch parts.
  4. You don’t need anything fancy: just some pretrained model in eval mode, passed some input (see the sketch after this list).
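
Something along these lines, where load_my_pretrained_model is just a placeholder for whatever pretrained model you have lying around:

import torch

# `load_my_pretrained_model` is a stand-in, not a real API: any pretrained
# model containing an nn.Embedding should do.
model = load_my_pretrained_model().to("cuda:0").eval()
model = torch.nn.DataParallel(model)

with torch.no_grad():
    x = torch.randint(0, 1000, (2000, 50), device="cuda:0")  # [batch, seq]
    y = model(x)
print(y.shape)  # the batch dim comes back smaller than 2000 when the bug hits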

Expected behavior

DataParallel models should behave functionally identically to normal models, questions of convergence and gradient descent notwithstanding.
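
In other words, I'd expect something like the following to hold (stand-in model; any module should do):

import torch
import torch.nn as nn

# Sketch of the expected invariant, using a stand-in model: the DataParallel
# forward should match the single-device forward, up to float noise.
model = nn.Sequential(nn.Embedding(100, 8), nn.Flatten()).to("cuda:0").eval()
parallel = nn.DataParallel(model)

x = torch.randint(0, 100, (64, 4), device="cuda:0")
with torch.no_grad():
    assert torch.allclose(model(x), parallel(x), atol=1e-6)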

Environment

  • PyTorch Version (e.g., 1.0): 1.5 ~ 1.6.x-dev
  • OS (e.g., Linux): Ubuntu 16.04 LTS
  • How you installed PyTorch (conda, pip, source): conda
  • Build command you used (if compiling from source): N/A
  • Python version: 3.7.7
  • CUDA/cuDNN version: 10.2
  • GPU models and configuration: GeForce GTX Titan X × 2
  • Any other relevant information: N/A

Hey @Enamex, I cannot reproduce the error with the following code (mostly borrowed from this tutorial).
Could you please share a min repro of this error? Thanks!

import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim

torch.manual_seed(1)


CONTEXT_SIZE = 2
EMBEDDING_DIM = 10
# We will use Shakespeare Sonnet 2
test_sentence = """When forty winters shall besiege thy brow,
And dig deep trenches in thy beauty's field,
Thy youth's proud livery so gazed on now,
Will be a totter'd weed of small worth held:
Then being asked, where all thy beauty lies,
Where all the treasure of thy lusty days;
To say, within thine own deep sunken eyes,
Were an all-eating shame, and thriftless praise.
How much more praise deserv'd thy beauty's use,
If thou couldst answer 'This fair child of mine
Shall sum my count, and make my old excuse,'
Proving his beauty by succession thine!
This were to be new made when thou art old,
And see thy blood warm when thou feel'st it cold.""".split()
# we should tokenize the input, but we will ignore that for now
# build a list of tuples.  Each tuple is ([ word_i-2, word_i-1 ], target word)
trigrams = [([test_sentence[i], test_sentence[i + 1]], test_sentence[i + 2])
            for i in range(len(test_sentence) - 2)]
# print the first 3, just so you can see what they look like
print(trigrams[:3])

vocab = set(test_sentence)
word_to_ix = {word: i for i, word in enumerate(vocab)}


class NGramLanguageModeler(nn.Module):

    def __init__(self, vocab_size, embedding_dim, context_size):
        super(NGramLanguageModeler, self).__init__()
        self.embeddings = nn.Embedding(vocab_size, embedding_dim)
        self.linear1 = nn.Linear(context_size * embedding_dim, 128)
        self.linear2 = nn.Linear(128, vocab_size)

    def forward(self, inputs):
        # Note: view((1, -1)) assumes each replica sees a batch of exactly
        # one example (as in the original tutorial); with a batch of 2
        # scattered across 2 GPUs, that holds here.
        embeds = self.embeddings(inputs).view((1, -1))
        out = F.relu(self.linear1(embeds))
        out = self.linear2(out)
        log_probs = F.log_softmax(out, dim=1)
        return log_probs


losses = []
loss_function = nn.NLLLoss()
model = NGramLanguageModeler(len(vocab), EMBEDDING_DIM, CONTEXT_SIZE)
optimizer = optim.SGD(model.parameters(), lr=0.001)

model = torch.nn.DataParallel(model.to("cuda:0"))

for epoch in range(10):
    total_loss = 0
    for context, target in trigrams:

        # Step 1. Prepare the inputs to be passed to the model (i.e, turn the words
        # into integer indices and wrap them in tensors)
        context_idxs = torch.tensor([word_to_ix[w] for w in context], dtype=torch.long)
        # Duplicate the example to get a batch of 2, giving DataParallel
        # something to scatter across the two GPUs.
        context_idxs = torch.stack([context_idxs, context_idxs])

        # Step 2. Recall that torch *accumulates* gradients. Before passing in a
        # new instance, you need to zero out the gradients from the old
        # instance
        model.zero_grad()

        # Step 3. Run the forward pass, getting log probabilities over next
        # words
        log_probs = model(context_idxs.to("cuda:0")).cpu()

        # Step 4. Compute your loss function. (Disabled here: the target
        # would also need to be duplicated to match the stacked batch.)
        #loss = loss_function(log_probs, torch.tensor([word_to_ix[target]], dtype=torch.long))

        # Step 5. Do the backward pass and update the parameters
        # (backprop through a dummy sum(), since the loss above is disabled)
        log_probs.sum().backward()
        optimizer.step()

        # Get the Python number from a 1-element Tensor by calling tensor.item()
        #total_loss += loss.item()
    #losses.append(total_loss)

I’ll try to get an example written today!
If it helps, the models I was trying to evaluate are fairseq-based (I had to dive a bit to get the actual nn.Module out from underneath the tasks…) and are loaded using Model.from_pretrained(...).
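
Roughly what I did, modulo placeholders; the exact attribute layout of the hub interface may differ between fairseq versions, so treat this as an approximation:

from fairseq.models.transformer import TransformerModel

# Paths are placeholders. from_pretrained returns a hub interface that
# wraps the task; the raw nn.Module(s) hang off it. Attribute names may
# differ across fairseq versions.
hub = TransformerModel.from_pretrained("/path/to/checkpoints",
                                       checkpoint_file="model.pt")
net = hub.models[0]  # the underlying nn.Module I actually wanted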

On further investigation, it seems to be a problem with LSTM/RNN?

They’re getting split on the sequence dimension instead of the batch dimension when in batch_first=False mode. I don’t own the module I’m trying to run in parallel, and this error comes from its guts, so I’m not sure where to go from there.
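
To illustrate what I'm seeing: DataParallel scatters inputs along dim 0 by default, which is the sequence dim when batch_first=False. A sketch of the dim=1 workaround (only viable if every input and output really carries the batch on dim 1, which I can't guarantee for a module I don't own):

import torch
import torch.nn as nn

# With batch_first=False, LSTM inputs are [seq, batch, feat], but
# DataParallel scatters along dim 0 by default, i.e. the sequence dim.
lstm = nn.LSTM(input_size=16, hidden_size=32).to("cuda:0")
x = torch.randn(100, 8, 16, device="cuda:0")  # [seq=100, batch=8, feat=16]

# dim=1 makes DataParallel scatter (and re-gather) along the batch dim
# instead; this only helps if all inputs/outputs have batch on dim 1.
dp = nn.DataParallel(lstm, dim=1)
out, (h, c) = dp(x)
print(out.shape)  # torch.Size([100, 8, 32]), batch dim intact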

cc @ngimel for LSTM + DataParallel questions.
