PyTorch CPU AssertionError

Hello everyone, I am trying to follow the object recognition tutorial from PyTorch. I use Windows 10 with CPU only, and I have tried both the pip and the conda installs of the torch and torchvision packages (CPU version); however, I keep getting the same error shown below. Any help is greatly appreciated. Cheers.

AssertionError Traceback (most recent call last)
in ()
4 for epoch in range(num_epochs):
5 # train for one epoch, printing every 10 iterations
----> 6 train_one_epoch(model, optimizer, data_loader, device, epoch, print_freq=10)
7 # update the learning rate
8 lr_scheduler.step()

D:\Resources\Pytorch Learn\ in train_one_epoch(model, optimizer, data_loader, device, epoch, print_freq)
24 lr_scheduler = utils.warmup_lr_scheduler(optimizer, warmup_iters, warmup_factor)
---> 26 for images, targets in metric_logger.log_every(data_loader, print_freq, header):
27 images = list(image.to(device) for image in images)
28 targets = [{k: v.to(device) for k, v in t.items()} for t in targets]

D:\Resources\Pytorch Learn\ in log_every(self, iterable, print_freq, header)
210 meters=str(self),
211 time=str(iter_time), data=str(data_time),
--> 212 memory=torch.cuda.max_memory_allocated() / MB))
213 i += 1
214 end = time.time()

C:\Users\Marinos\Anaconda3\lib\site-packages\torch\cuda\ in max_memory_allocated(device)
298 management.
299 """
--> 300 return memory_stats(device=device)["allocated_bytes.all.peak"]

C:\Users\Marinos\Anaconda3\lib\site-packages\torch\cuda\ in memory_stats(device)
157 result.append((prefix, obj))
--> 159 stats = memory_stats_as_nested_dict(device=device)
160 _recurse_add_to_result("", stats)
161 result.sort()

C:\Users\Marinos\Anaconda3\lib\site-packages\torch\cuda\ in memory_stats_as_nested_dict(device)
166 def memory_stats_as_nested_dict(device=None):
167 r"""Returns the result of :func:`~torch.cuda.memory_stats` as a nested dictionary."""
--> 168 device = _get_device_index(device, optional=True)
169 return torch._C._cuda_memoryStats(device)

C:\Users\Marinos\Anaconda3\lib\site-packages\torch\ in _get_device_index(device, optional)
29 if optional:
30 # default cuda device index
---> 31 return torch.cuda.current_device()
32 else:
33 raise ValueError('Expected a cuda device with a specified index '

C:\Users\Marinos\Anaconda3\lib\site-packages\torch\ in current_device()
328 def current_device():
329 r"""Returns the index of a currently selected device."""
--> 330 _lazy_init()
331 return torch._C._cuda_getDevice()

C:\Users\Marinos\Anaconda3\lib\site-packages\torch\ in _lazy_init()
147 raise RuntimeError(
148 "Cannot re-initialize CUDA in forked subprocess. " + msg)
--> 149 _check_driver()
150 if _cudart is None:
151 raise AssertionError(

C:\Users\Marinos\Anaconda3\lib\site-packages\torch\ in _check_driver()
45 def _check_driver():
46 if not hasattr(torch._C, '_cuda_isDriverSufficient'):
---> 47 raise AssertionError("Torch not compiled with CUDA enabled")
48 if not torch._C._cuda_isDriverSufficient():
49 if torch._C._cuda_getDriverVersion() == 0:

AssertionError: Torch not compiled with CUDA enabled

Why do you call torch.cuda.max_memory_allocated() here?

Thanks peterjc123, indeed this was not required. The versions of the utils and engine scripts were not 100% compatible with CPU-only execution; I had to make some small fixes, but now it works.
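For anyone hitting the same error on a CPU-only build, the fix amounts to guarding the CUDA memory query in log_every. A minimal sketch of the guard (the helper name peak_mem_mb is just for illustration, not part of the tutorial scripts):

```python
import torch

def peak_mem_mb():
    # torch.cuda.max_memory_allocated() raises AssertionError on a
    # CPU-only build ("Torch not compiled with CUDA enabled"), so only
    # query it when CUDA is actually usable.
    if torch.cuda.is_available():
        return torch.cuda.max_memory_allocated() / (1024 ** 2)
    return 0.0

print(peak_mem_mb())
```

With this check in place, the logging line works unchanged on GPU machines and simply reports 0.0 on CPU-only installs.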
Unfortunately, the problems did not stop. I now get a silent error that restarts the kernel at the first of the two lines below. Running the script in PowerShell gives no error message either; execution simply stops.

res = {target[“image_id”].item(): output for target, output in zip(targets, outputs)}
evaluator_time = time.time()

UPDATE: Actually, it is the following line that fails silently when run for the 2nd time in the loop, so I guess it is a memory problem.

coco_dt = loadRes(self.coco_gt, results) if results else COCO()

Could you please elaborate a little bit? Where do the implementations of loadRes and COCO come from?