CycleGAN error: RuntimeError: DataLoader worker (pid(s) 30412, 26524, 28704, 10456, 30900) exited unexpectedly

Hello, I am implementing CycleGAN with my own dataset. I have written my own dataset loader, and in line 120 I have changed it as:

```python
# Training data loader
train_dataloader = DataLoader(
    MyDataset(train_A_dataset, train_B_dataset),
```

and in line 161:

```python
for epoch in range(opt.epoch, opt.n_epochs):
    for i, batch in enumerate(train_dataloader):
```

I am running the code on the CPU and get this error:

```
An attempt has been made to start a new process before the
current process has finished its bootstrapping phase.

    This probably means that you are not using fork to start your
    child processes and you have forgotten to use the proper idiom
    in the main module:

        if __name__ == '__main__':

    The "freeze_support()" line can be omitted if the program
    is not going to be frozen to produce an executable.
```

```
Traceback (most recent call last):
  File "C:\Users\Anaconda3\lib\site-packages\torch\utils\data\", line 761, in _try_get_data
    data = self._data_queue.get(timeout=timeout)
  File "C:\Users\Anaconda3\lib\multiprocessing\", line 105, in get
    raise Empty

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "E:/example/", line 161, in
    for i, batch in enumerate(train_dataloader):
  File "C:\Users\Anaconda3\lib\site-packages\torch\utils\data\", line 345, in __next__
    data = self._next_data()
  File "C:\Users\Anaconda3\lib\site-packages\torch\utils\data\", line 841, in _next_data
    idx, data = self._get_data()
  File "C:\Users\Anaconda3\lib\site-packages\torch\utils\data\", line 808, in _get_data
    success, data = self._try_get_data()
  File "C:\Users\Surbhi\Anaconda3\lib\site-packages\torch\utils\data\", line 774, in _try_get_data
    raise RuntimeError('DataLoader worker (pid(s) {}) exited unexpectedly'.format(pids_str))
RuntimeError: DataLoader worker (pid(s) 30412, 26524, 28704, 10456, 30900) exited unexpectedly
```

I do not understand the cause of this error. Also, I do not have a directory of input data in the folder; I created the patches, saved them in memory, and wrote the dataloader.

I tried reducing the batch size to 1 and the number of workers to 0, after which I get this error in line 164:

```
File "E:/example/", line 164, in
  real_A = Variable(batch["A"].type(Tensor))
TypeError: list indices must be integers or slices, not str
```

The second error points towards invalid indexing with a string:

```python
real_A = Variable(batch["A"].type(Tensor))
```

so apparently batch is not a dict, which could be indexed in this way.
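For `batch["A"]` to work, the `Dataset.__getitem__` would need to return a dict. A minimal sketch of such a dataset (the names and random tensors here are assumptions, not your actual code):

```python
import torch
from torch.utils.data import Dataset, DataLoader

class MyDataset(Dataset):
    """Hypothetical dataset returning a dict per sample."""
    def __init__(self, data_a, data_b):
        self.data_a = data_a
        self.data_b = data_b

    def __len__(self):
        return min(len(self.data_a), len(self.data_b))

    def __getitem__(self, idx):
        # returning a dict makes the default collate_fn build a
        # dict of batched tensors, so batch["A"] is valid
        return {"A": self.data_a[idx], "B": self.data_b[idx]}

loader = DataLoader(
    MyDataset(torch.randn(10, 3, 64, 64), torch.randn(10, 3, 64, 64)),
    batch_size=2)
batch = next(iter(loader))
print(batch["A"].shape)  # torch.Size([2, 3, 64, 64])
```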

The first error points towards missing if-clause protection, as explained in the error message and here.
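On Windows the DataLoader workers are started via spawn, which re-imports the main module in each worker, so the training code must sit behind the guard. A minimal sketch (`TensorDataset` stands in for your `MyDataset`):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

def main():
    # TensorDataset stands in for the custom dataset here
    dataset = TensorDataset(torch.randn(8, 3, 64, 64))
    # num_workers > 0 starts worker processes; everything that
    # kicks off training must be protected by the guard below,
    # otherwise each spawned worker tries to start training again
    loader = DataLoader(dataset, batch_size=2, num_workers=2)
    for (batch,) in loader:
        print(batch.shape)

if __name__ == '__main__':
    main()
```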

Unrelated to this issue, but Variables are deprecated since PyTorch 0.4, so you can use tensors now.

PS: You can post code snippets by wrapping them into three backticks ```, which makes debugging a bit easier. :wink:

Thank you.
But now I face this error for the above implementation:

```
RuntimeError: Argument #4: Padding size should be less than the corresponding input dimension, but got: padding (112, 112) at dimension 3 of input 4
```

I guess some of these layers might use a too large padding for the current input activation.
Could you try to isolate the layer which raises the error and print the shape of its input?

I was able to solve that error. But now it's showing:

```
RuntimeError: Given groups=1, weight of size [64, 3, 7, 7], expected input[5, 64, 70, 70] to have 3 channels, but got 64 channels instead.
```

I get the above-mentioned error. I assume this is because the number of input channels for RGB is 3.
The size of my patches is torch.Size([112, 64, 64, 64]).
How can I modify it for my own dataset? My images are DICOM images.

If you are dealing with DICOM, is the input shape defined as [batch_size, depth, height, width], [batch_size, channels, height, width], or another format?
Usually DICOM images use a single channel, so I assume the channel dimension is missing?
If that's the case, you would need to create a new conv layer with in_channels=1 or, alternatively, repeat the input channel 3 times.
Before digging into alternatives, let's first clarify how you've created the input and what each dimension means. :slight_smile:
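Both options could look roughly like this (the spatial shapes are taken from the error above; this is a sketch, not your actual model):

```python
import torch
import torch.nn as nn

x = torch.randn(5, 1, 70, 70)  # assumed single-channel DICOM batch

# option 1: a first conv layer that accepts one channel
conv = nn.Conv2d(in_channels=1, out_channels=64, kernel_size=7, padding=3)
out = conv(x)
print(out.shape)    # torch.Size([5, 64, 70, 70])

# option 2: repeat the single channel to mimic RGB and keep in_channels=3
x_rgb = x.repeat(1, 3, 1, 1)
print(x_rgb.shape)  # torch.Size([5, 3, 70, 70])
```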

After preprocessing my dataset (I did resampling and converted to SUV), the image size was [284, 143, 143].
Then I extracted patches (applied padding and used the unfold function) to get patches of size torch.Size([7, 4, 4, 64, 64, 64]), i.e. a total of 112 patches of 64 × 64 × 64, for both patch_A and patch_B.
A and B refer to two different modalities.

Then I created a custom dataloader and split my images for training.
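The padding + unfold step described above can be sketched roughly like this (the pad sizes here are an assumption that rounds each dimension up to a multiple of 64, which gives a different patch grid than your 7 × 4 × 4):

```python
import torch
import torch.nn.functional as F

vol = torch.randn(284, 143, 143)  # preprocessed volume (resampled, SUV)

# pad each dimension up to a multiple of the patch size
# (assumed padding scheme; your own padding produced 7 x 4 x 4 patches)
ps = 64
pads = [(ps - s % ps) % ps for s in vol.shape]
# F.pad expects pads in reverse dimension order: (W_l, W_r, H_l, H_r, D_l, D_r)
vol = F.pad(vol, (0, pads[2], 0, pads[1], 0, pads[0]))

# non-overlapping 64^3 patches via unfold along each spatial dim
patches = vol.unfold(0, ps, ps).unfold(1, ps, ps).unfold(2, ps, ps)
print(patches.shape)  # torch.Size([5, 3, 3, 64, 64, 64]) with this padding
patches = patches.reshape(-1, ps, ps, ps)  # flatten to [n_patches, 64, 64, 64]
```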

In that case your channel dimension is missing and could be added via:

```python
x = x.unsqueeze(1)
```

Also, since you are dealing with volumetric data, you would have to use the nn.*3d modules, such as nn.Conv3d.
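Putting both suggestions together, a minimal sketch (with a smaller assumed patch count, channel count, and kernel size than your real model, to keep it light):

```python
import torch
import torch.nn as nn

x = torch.randn(4, 64, 64, 64)  # a few patches: [nb_patches, depth, height, width]
x = x.unsqueeze(1)              # insert channel dim -> [4, 1, 64, 64, 64]

# volumetric conv expects [N, C, D, H, W]
conv = nn.Conv3d(in_channels=1, out_channels=8, kernel_size=3, padding=1)
out = conv(x)
print(out.shape)  # torch.Size([4, 8, 64, 64, 64])
```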

Sorry for asking too many questions.

But I want to use my 3D images with this 2D model (so I don't want to use the nn.*3d modules).
I was wondering if I can send my patches as 64 × 64, i.e. 64 slices of 64 × 64.

Also, can you give an example of how I can repeat the input channel 3 times?

Now I am getting this error:

```
UserWarning: Using a target size (torch.Size([5, 1, 6, 6])) that is different to the input size (torch.Size([3, 1, 6, 6])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size.
  return F.mse_loss(input, target, reduction=self.reduction)

RuntimeError: The size of tensor a (3) must match the size of tensor b (5) at non-singleton dimension 0
```
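For reference, this kind of mismatch typically appears when the target labels are created with a fixed batch size while the last batch is smaller; building the target from the prediction's shape avoids it (a sketch with the shapes from the error above; `valid` is an assumed label-tensor name):

```python
import torch
import torch.nn as nn

criterion = nn.MSELoss()
pred = torch.randn(3, 1, 6, 6)  # discriminator output for a smaller last batch

# a fixed label tensor sized for the full batch no longer matches:
# valid = torch.ones(5, 1, 6, 6)  -> the shape mismatch shown above

# sizing the target from the prediction keeps the shapes in sync
valid = torch.ones_like(pred)
loss = criterion(pred, valid)
print(loss)  # a scalar tensor
```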

Please correct me if I am wrong.

  1. So is this a channel error of the image, because my image is in a non-standard RGB format, and I need to repeat the input channel 3 times?

If yes, then at which point should I perform this step? Should I do this when I extract the image patches, before saving them as .pt files?
I am slightly confused now. :sweat_smile:

That would be possible.
If your current input shape is [112, 64, 64, 64], which corresponds to [nb_patches=batch_size, depth, height, width], you could use:

```python
x = x.view(x.size(0), 1, 64, 64*64)
```

to create a tensor of [112, 1, 64, 4096], which would be accepted by an nn.Conv2d layer.

To expand/repeat the channel dimension, you could use:

```python
x = x.expand(-1, 3, -1, -1)
```
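Note that `expand` requires the dimension being expanded to have size 1 and returns a view without copying memory. A quick check with the [112, 1, 64, 4096] shape from above:

```python
import torch

x = torch.randn(112, 1, 64, 4096)  # channel dim must be 1 for expand
y = x.expand(-1, 3, -1, -1)        # view with the channel repeated, no copy
print(y.shape)  # torch.Size([112, 3, 64, 4096])

# use .repeat(1, 3, 1, 1) instead if a real copy is needed,
# e.g. when the tensor will be modified in place afterwards
```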