RuntimeError: contiguous is not implemented for type UndefinedType

How about under lib/nms/_ext/nms? I just pushed them to master. This __init__.py file should contain:


# This file is the boilerplate that the torch.utils.ffi build generates:
# it wraps every symbol exposed by the compiled _nms library.
from torch.utils.ffi import _wrap_function
from ._nms import lib as _lib, ffi as _ffi

__all__ = []
def _import_symbols(locals):
    for symbol in dir(_lib):
        fn = getattr(_lib, symbol)
        if callable(fn):
            # C functions get a Python wrapper around the FFI handle
            locals[symbol] = _wrap_function(fn, _ffi)
        else:
            locals[symbol] = fn
        __all__.append(symbol)

_import_symbols(locals())
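
After a successful build, a quick sanity check will list the symbols that got wrapped. This is a hypothetical snippet, not part of the repo: run it from inside lib/nms, and it assumes the build also created lib/nms/_ext/__init__.py:

from _ext import nms
# anything that isn't a private name came from the compiled _nms library
print([s for s in dir(nms) if not s.startswith('_')])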

Pulled from master; my lib/nms/_ext/nms/__init__.py matches the file above.

Still getting the ModuleNotFoundError: No module named '_ext' error, though.

Hmm… I just checked out master in a clean directory, built the modules, and ran the script. Are you running run_resnet.sh from the top-level directory, pytorch-faster-rcnn?

thomasbalestri@linux02:~/pytorch-faster-rcnn$ ./experiments/scripts/run_resnet.sh

Also, just for your information, I’m using a Titan X, so when building the nms and roi_pooling modules I use the flag -arch=sm_52.
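
For anyone else matching the flag to their card: PyTorch can report the compute capability directly, so you don't have to guess (assumes CUDA is available):

import torch

# prints a (major, minor) tuple for GPU 0:
# (5, 2) -> -arch=sm_52 (Titan X), (3, 5) -> -arch=sm_35
print(torch.cuda.get_device_capability(0))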

Yes, I checked out master again in a clean directory, built the modules, and ran the script:

I’m also compiling with -arch=sm_35 to match my GPUs. I could be doing something really wrong, so I’m going to go read up on how Python imports modules for a little bit…


Are you running on Python 2 or 3? I’d recommend Python 2.7 for this project. See https://github.com/ruotianluo/pytorch-faster-rcnn/issues/8
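
If I'm reading that issue right, the version matters because Python 3 dropped implicit relative imports. So an in-package import like the commented-out line below (a made-up example, not a quote from the repo) resolves on Python 2 but raises ModuleNotFoundError on Python 3:

# inside a module of the nms package:
# from _ext import nms    # implicit relative import; Python 2 only
from ._ext import nms      # explicit relative import; works on 2.7 and 3.x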

Thanks for pointing that out, that is very promising. I’m on python 3 but it looks like it’s time to try 2.7.

I have a feeling the None return in lib/layer_utils/roi_pooling/roi_pool.py is messing with autograd (in the screenshot below), but I’m still looking into it.

If you replace that with

[screenshot of the replacement code]

the model runs, after commenting out all instances of self.delete_intermediate_states(). I’m still trying to figure out why that causes the runtime error.

Great! I also got it running by returning torch.zeros_like(self.rois). However, I didn’t comment out self.delete_intermediate_states().
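
In case it helps anyone later, here is a minimal sketch of the workaround, written against the old-style autograd Function API of that PyTorch release (0.2/0.3 era). The toy forward/backward below are placeholders standing in for the real RoI pooling kernels, not the repo's actual code:

import torch
from torch.autograd import Function, Variable

class ToyRoIPool(Function):
    # toy stand-in for RoIPoolFunction, just to show the backward fix
    def forward(self, features, rois):
        self.save_for_backward(features, rois)
        return features.clone()

    def backward(self, grad_output):
        features, rois = self.saved_tensors
        grad_input = grad_output.clone()
        # returning (grad_input, None) is what triggered
        # "contiguous is not implemented for type UndefinedType";
        # a defined all-zero gradient for rois avoids it
        return grad_input, torch.zeros_like(rois)

features = Variable(torch.randn(2, 3), requires_grad=True)
rois = Variable(torch.rand(2, 5))
ToyRoIPool()(features, rois).sum().backward()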

If you’re curious, I was able to isolate the bug and I’ve opened a github issue for it: https://github.com/pytorch/pytorch/issues/4198

Thanks for your help!

No, thank you! This was very helpful. I’ll watch the github issue.