DataLoader: bug when num_workers > 0


(Carsten Ditzel) #21

Thank you, your answers are very detailed and your contributions to this community are invaluable =)

I wonder how this generalizes to the more common case in which there are multiple datasets within the HDF5 file that need to be accessed by index. Is duplicating the code and loading the respective sets into torch tensors the way to go here?


(Piotr Januszewski) #22

Thank you for the kind words :smile:

Here is an example of a non-trivial Dataset that I use to preprocess data for training the World Models memory module:

import h5py
import numpy as np
import torch
from torch.distributions import Normal
from torch.utils.data import Dataset


class MemoryDataset(Dataset):
    """Dataset of sequential data to train memory.
    Args:
        dataset_path (string): Path to HDF5 dataset file.
        sequence_len (int): Desired output sequence len.
        terminal_prob (float): Probability of sampling sequence that finishes with
            terminal state.
        dataset_fraction (float): Fraction of dataset to use during training, value range: (0, 1]
            (dataset forepart is taken).
        is_deterministic (bool): If return sampled latent states or mean latent states.
    Note:
        Arrays should have the same size of the first dimension and their type should be the
        same as desired Tensor type.
    """

    def __init__(self, dataset_path, sequence_len, terminal_prob, dataset_fraction, is_deterministic):
        assert 0 < terminal_prob and terminal_prob <= 1.0, "0 < terminal_prob <= 1.0"
        assert 0 < dataset_fraction and dataset_fraction <= 1.0, "0 < dataset_fraction <= 1.0"

        self.dataset = None
        self.dataset_path = dataset_path
        self.sequence_len = sequence_len
        self.terminal_prob = terminal_prob
        self.dataset_fraction = dataset_fraction
        self.is_deterministic = is_deterministic

        # https://stackoverflow.com/questions/46045512/h5py-hdf5-database-randomly-returning-nans-and-near-very-small-data-with-multi
        with h5py.File(self.dataset_path, "r") as dataset:
            self.latent_dim = dataset.attrs["LATENT_DIM"]
            self.action_dim = dataset.attrs["ACTION_DIM"]
            self.n_games = dataset.attrs["N_GAMES"]

    def __getitem__(self, idx):
        """Get sequence at random starting position of given sequence length from episode `idx`."""

        offset = 1

        if self.dataset is None:
            self.dataset = h5py.File(self.dataset_path, "r")

        t_start, t_end = self.dataset['episodes'][idx:idx + 2]
        episode_length = t_end - t_start
        if self.sequence_len <= episode_length - offset:
            sequence_len = self.sequence_len
        else:
            sequence_len = episode_length - offset
            # log.info(
            #     "Episode %d is too short to form full sequence, data will be zero-padded.", idx)

        # Sample where to start sequence of length `self.sequence_len` in episode `idx`
        # '- offset' because "next states" are offset by 'offset'
        if np.random.rand() < self.terminal_prob:
            # Take sequence ending with terminal state
            start = t_start + episode_length - sequence_len - offset
        else:
            # NOTE: np.random.randint takes EXCLUSIVE upper bound of range to sample from
            start = t_start + np.random.randint(max(1, episode_length - sequence_len - offset))

        states_ = torch.from_numpy(self.dataset['states'][start:start + sequence_len + offset])
        actions_ = torch.from_numpy(self.dataset['actions'][start:start + sequence_len])

        states = torch.zeros(self.sequence_len, self.latent_dim, dtype=states_.dtype)
        next_states = torch.zeros(self.sequence_len, self.latent_dim, dtype=states_.dtype)
        actions = torch.zeros(self.sequence_len, self.action_dim, dtype=actions_.dtype)

        # Sample latent states (this is done to prevent overfitting of memory to a specific 'z'.)
        if self.is_deterministic:
            z_samples = states_[:, 0]
        else:
            mu = states_[:, 0]
            sigma = torch.exp(states_[:, 1] / 2)
            latent = Normal(loc=mu, scale=sigma)
            z_samples = latent.sample()

        states[:sequence_len] = z_samples[:-offset]
        next_states[:sequence_len] = z_samples[offset:]
        actions[:sequence_len] = actions_

        return [states, actions], [next_states]

    def __len__(self):
        return int(self.n_games * self.dataset_fraction)
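
For completeness, a minimal sketch of how such a dataset could be fed to a DataLoader with multiple workers (the file name and hyperparameter values below are placeholders, not my actual setup):

from torch.utils.data import DataLoader

# Placeholder path and hyperparameters; adjust them to your own data.
dataset = MemoryDataset(
    dataset_path="memory_dataset.hdf5",
    sequence_len=32,
    terminal_prob=0.5,
    dataset_fraction=1.0,
    is_deterministic=False,
)

# The HDF5 file is opened lazily inside __getitem__, so every worker
# process ends up with its own handle and num_workers > 0 is safe.
loader = DataLoader(dataset, batch_size=16, shuffle=True, num_workers=4)

for (states, actions), (next_states,) in loader:
    pass  # feed the memory module here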

(Carsten Ditzel) #23

I am not sure if this solves the concurrency issue, since all you do is defer the HDF5 file access from the constructor to the __getitem__ method, while ensuring with singleton-style logic that the file is opened only the first time __getitem__ is called (for efficiency). Is that correct? However, the file handle that is used to access the data in each call persists…

I am puzzled

import h5py
import torch
from torch.utils.data import Dataset


class MyDataset(Dataset):

    def __init__(self, hdf5file):
        self.hdf5file = hdf5file
        self.dataset = None

        with h5py.File(self.hdf5file, "r") as dataset:
            self.NrFrms = dataset.attrs['NrFrms']
            self.NrChn = dataset.attrs['NrChn']

    def __len__(self):
        return self.NrFrms * self.NrChn

    def __getitem__(self, idx):

        if self.dataset is None:
            self.dataset = h5py.File(self.hdf5file, "r")

        access_idx = idx % self.NrFrms 
        access_chn = idx // self.NrFrms + 1 

        target = torch.tensor(1, dtype=torch.long) 

        data = torch.tensor(self.dataset['Chn'+str(access_chn)][access_idx], dtype=torch.float32)
        image = torch.tensor(self.dataset['Image'][access_idx], dtype=torch.float32)

        return image, data, target

still yields

RuntimeError: DataLoader worker (pid 17402) exited unexpectedly with exit code 1. Details are lost due to multiprocessing. Rerunning with num_workers=0 may give better error trace.

if more than one worker is used for the dataloading…


(Piotr Januszewski) #24

That’s correct, I only defer the HDF5 file opening to __getitem__ so that it is opened by each worker (the handle is not serialised and sent to them). Where do you think it might cause concurrency issues? I only read from the file, I do not make any writes, and I assume the file isn’t changed by any other process (but even that could be handled, see: http://docs.h5py.org/en/stable/swmr.html).
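
If you want to convince yourself that each worker really gets its own handle, here is a tiny self-contained sketch of the same lazy-open pattern that prints the process id the first time a file is opened (the file name some_file.hdf5 and the dataset name "data" are made up for this example):

import os

import h5py
import torch
from torch.utils.data import DataLoader, Dataset


class LazyH5Dataset(Dataset):
    """Minimal illustration of the lazy-open pattern: one handle per worker."""

    def __init__(self, path):
        self.path = path
        self.file = None
        # Open only to read metadata, then close; no handle is pickled to workers.
        with h5py.File(self.path, "r") as f:
            self.length = len(f["data"])

    def __getitem__(self, idx):
        if self.file is None:
            self.file = h5py.File(self.path, "r")
            # Each DataLoader worker is a separate process, so this prints
            # once per worker (or once in total if num_workers=0).
            print("opened", self.path, "in pid", os.getpid())
        # Assumes "data" is an N x D float dataset.
        return torch.from_numpy(self.file["data"][idx])

    def __len__(self):
        return self.length


if __name__ == "__main__":
    loader = DataLoader(LazyH5Dataset("some_file.hdf5"), batch_size=4, num_workers=2)
    for batch in loader:
        pass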

Did you try rerunning with num_workers=0? Maybe it will indeed give you a better error. You should also look at what was printed above the message “RuntimeError: DataLoader worker (pid 17402) exited unexpectedly with exit code 1. Details are lost due to multiprocessing. […]”; there should be a call stack for each process, so you can see what happened. Please copy-paste the full error and I’ll give it a look.


(Carsten Ditzel) #25

I am happy to

Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/home/ditzel/anaconda3/envs/py37/lib/python3.6/multiprocessing/spawn.py", line 105, in spawn_main
    exitcode = _main(fd)
  File "/home/ditzel/anaconda3/envs/py37/lib/python3.6/multiprocessing/spawn.py", line 114, in _main
    prepare(preparation_data)
  File "/home/ditzel/anaconda3/envs/py37/lib/python3.6/multiprocessing/spawn.py", line 225, in prepare
    _fixup_main_from_path(data['init_main_from_path'])
  File "/home/ditzel/anaconda3/envs/py37/lib/python3.6/multiprocessing/spawn.py", line 277, in _fixup_main_from_path
    run_name="__mp_main__")
  File "/home/ditzel/anaconda3/envs/py37/lib/python3.6/runpy.py", line 263, in run_path
    pkg_name=pkg_name, script_name=fname)
  File "/home/ditzel/anaconda3/envs/py37/lib/python3.6/runpy.py", line 96, in _run_module_code
    mod_name, mod_spec, pkg_name, script_name)
  File "/home/ditzel/anaconda3/envs/py37/lib/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/ditzel/…/racam/main_racam.py", line 3, in <module>
    torch.multiprocessing.set_start_method('spawn')
  File "/home/ditzel/anaconda3/envs/py37/lib/python3.6/multiprocessing/context.py", line 242, in set_start_method
    raise RuntimeError('context has already been set')
RuntimeError: context has already been set
Traceback (most recent call last):
  File "/home/ditzel//main_racam.py", line 136, in <module>
    train(epoch)
  File "/home/ditzel/main_racam.py", line 48, in train
    for batch_idx, (img, rdm, t) in enumerate(trainloader):
  File "/home/ditzel/anaconda3/envs/py37/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 631, in __next__
    idx, batch = self._get_batch()
  File "/home/ditzel/anaconda3/envs/py37/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 610, in _get_batch
    return self.data_queue.get()
  File "/home/ditzel/anaconda3/envs/py37/lib/python3.6/multiprocessing/queues.py", line 94, in get
    res = self._recv_bytes()
  File "/home/ditzel/anaconda3/envs/py37/lib/python3.6/multiprocessing/connection.py", line 216, in recv_bytes
    buf = self._recv_bytes(maxlength)
  File "/home/ditzel/anaconda3/envs/py37/lib/python3.6/multiprocessing/connection.py", line 407, in _recv_bytes
    buf = self._recv(4)
  File "/home/ditzel/anaconda3/envs/py37/lib/python3.6/multiprocessing/connection.py", line 379, in _recv
    chunk = read(handle, remaining)
  File "/home/ditzel/anaconda3/envs/py37/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 274, in handler
    _error_if_any_worker_fails()
RuntimeError: DataLoader worker (pid 10890) exited unexpectedly with exit code 1. Details are lost due to multiprocessing. Rerunning with num_workers=0 may give better error trace.

when I increase the number of workers to 2. If it is set to 0, there is unfortunately no error and the program works as expected.


(Piotr Januszewski) #26

Sorry for the delay. You need to protect torch.multiprocessing.set_start_method('spawn') in /home/ditzel/radar/radarcamerafusion/racam/main_racam.py:3 with an if __name__ == "__main__": guard. This function may only be called once, but currently it is called again by every spawned worker process, because each worker re-imports your main script. See this issue for more details: https://github.com/pytorch/pytorch/issues/3492#issuecomment-341965218.
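
Roughly, the top level of your script could look like this (just a sketch: MyDataset is your class from above, and the path, batch size and epoch count are placeholders):

import torch
from torch.utils.data import DataLoader

# MyDataset as defined in your post; "data.hdf5" and the numbers are placeholders.

if __name__ == "__main__":
    # set_start_method may only be called once, in the main process. Without
    # this guard every spawned worker re-imports this module and calls it
    # again, which raises "RuntimeError: context has already been set".
    torch.multiprocessing.set_start_method("spawn")

    trainloader = DataLoader(MyDataset("data.hdf5"), batch_size=32, num_workers=2)

    for epoch in range(10):
        for batch_idx, (img, rdm, t) in enumerate(trainloader):
            pass  # your training step here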

Also, you can try to delete the set_start_method('spawn') line entirely if you have HDF5 1.10; it shouldn't be needed then.
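
If you are not sure which HDF5 version your h5py is built against, you can check it quickly:

import h5py

# The HDF5 library version h5py was built against, e.g. "1.10.4".
print(h5py.version.hdf5_version)

# Full summary of h5py / HDF5 / numpy versions.
print(h5py.version.info)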