How about under lib/nms/_ext/nms? I just pushed them to master. This __init__.py file should contain:
```python
from torch.utils.ffi import _wrap_function
from ._nms import lib as _lib, ffi as _ffi

__all__ = []


def _import_symbols(locals):
    for symbol in dir(_lib):
        fn = getattr(_lib, symbol)
        if callable(fn):
            locals[symbol] = _wrap_function(fn, _ffi)
        else:
            locals[symbol] = fn
        __all__.append(symbol)


_import_symbols(locals())
```
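For reference, that __init__.py is what torch.utils.ffi writes out at build time; a build script roughly along these lines (the paths, header names, and options here are assumptions based on this thread, not necessarily the repo's actual build.py) is what produces the ./_nms module it imports:

```python
# Hypothetical build script sketch; the real build.py may differ.
# Assumes the old torch.utils.ffi workflow (PyTorch <= 0.4) and C/CUDA sources
# under lib/nms/src/.
import os
import torch
from torch.utils.ffi import create_extension

sources = ['src/nms.c']
headers = ['src/nms.h']
extra_objects = []
with_cuda = False

if torch.cuda.is_available():
    sources += ['src/nms_cuda.c']
    headers += ['src/nms_cuda.h']
    # nms_kernel.cu.o would come from a prior nvcc step (this is where the
    # -arch=sm_XX flag mentioned below would matter).
    extra_objects = [os.path.abspath('src/cuda/nms_kernel.cu.o')]
    with_cuda = True

ffi = create_extension(
    '_ext.nms',              # generates _ext/nms/__init__.py plus the _nms shared library
    headers=headers,
    sources=sources,
    extra_objects=extra_objects,
    relative_to=__file__,
    with_cuda=with_cuda,
)

if __name__ == '__main__':
    ffi.build()
```

If the build runs cleanly, rerunning it should regenerate that __init__.py alongside the compiled _nms library, so the file normally doesn't need to be edited by hand.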
Hmm… I just checked out master in a clean directory, built the modules, and ran the script. Are you running run_resnet.sh from the top-level directory, pytorch-faster-rcnn?
I’m also compiling with -arch=sm_35 to match my GPUs. I could be doing something really wrong, so I’m going to go read up on how Python imports modules for a little bit…
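One quick sanity check for the import side (a sketch that assumes the script is launched from the pytorch-faster-rcnn root, that lib/ gets prepended to sys.path the way the repo's tools scripts appear to do, and that the intermediate packages have __init__.py files):

```python
# Hypothetical debugging snippet; paths are assumptions based on this thread.
import importlib
import os
import sys

repo_root = os.getcwd()                          # assumes we launched from pytorch-faster-rcnn/
sys.path.insert(0, os.path.join(repo_root, 'lib'))

# If the FFI build worked, this resolves to lib/nms/_ext/nms/__init__.py and the
# wrapped C symbols show up when listing the module's attributes.
mod = importlib.import_module('nms._ext.nms')
print(mod.__file__)
print([name for name in dir(mod) if not name.startswith('_')])
```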
I have a feeling the None return in lib/layer_utils/roi_pooling/roi_pool.py is messing with autograd (in the screenshot below), but I’m still looking into it.
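For context, here is a rough sketch of the old-style autograd Function pattern that file uses (names and shapes are made up; this is not the repo's actual roi_pool.py). A None returned for the feature gradient in backward would silently cut the gradient path to the backbone, whereas a None for the rois slot is expected:

```python
import torch
from torch.autograd import Function


class RoIPoolSketch(Function):
    """Hedged stand-in for the Function in roi_pool.py; shapes and names are assumptions."""

    def __init__(self, pooled_h, pooled_w, spatial_scale):
        self.pooled_h = pooled_h
        self.pooled_w = pooled_w
        self.spatial_scale = spatial_scale
        self.feature_size = None

    def forward(self, features, rois):
        self.feature_size = features.size()
        # ... the real code calls the C/CUDA roi_pooling forward kernel here ...
        return features.new(rois.size(0), features.size(1),
                            self.pooled_h, self.pooled_w).zero_()

    def backward(self, grad_output):
        # Returning None in the first slot would tell autograd there is no gradient
        # w.r.t. the feature map, which is the kind of thing that could "mess with
        # autograd" for everything upstream of RoI pooling.
        grad_features = grad_output.new(*self.feature_size).zero_()
        # ... the real code fills grad_features via the backward kernel ...
        return grad_features, None  # None for rois is fine: they need no gradient
```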
The model runs after commenting out all instances of self.delete_intermediate_states(). I’m still trying to figure out why that causes the runtime error.