RuntimeError: the derivative for 'unique_dim' is not implemented.

I added a small piece of my own processing on top of someone else's code, and this error occurred (it appears to be raised during loss.backward()). Which part should I modify?
The complete error is as follows:

Traceback (most recent call last):
  File "D:\PyCharm 2020.1.1\plugins\python\helpers\pydev\pydevd.py", line 1438, in _exec
    pydev_imports.execfile(file, globals, locals)  # execute the script
  File "D:\PyCharm 2020.1.1\plugins\python\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile
    exec(compile(contents+"\n", file, 'exec'), glob, loc)
  File "F:/杂七杂八的代码/EntityMatcher-master/EntityMatcher-master/run.py", line 74, in <module>
    run_experiment(model_name, dataset_dir, embedding_dir)
  File "F:/杂七杂八的代码/EntityMatcher-master/EntityMatcher-master/run.py", line 32, in run_experiment
    model.run_train(train,
  File "D:\anaconda3\envs\dm\lib\site-packages\deepmatcher\models\core.py", line 183, in run_train
    return Runner.train(self, *args, **kwargs)
  File "D:\anaconda3\envs\dm\lib\site-packages\deepmatcher\runner.py", line 338, in train
    Runner._run(
  File "D:\anaconda3\envs\dm\lib\site-packages\deepmatcher\runner.py", line 249, in _run
    loss.backward()
  File "D:\anaconda3\envs\dm\lib\site-packages\torch\_tensor.py", line 255, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
  File "D:\anaconda3\envs\dm\lib\site-packages\torch\autograd\__init__.py", line 147, in backward
    Variable._execution_engine.run_backward(
RuntimeError: the derivative for 'unique_dim' is not implemented.

Based on the error message, it seems you are calling torch.unique(input, dim=...) somewhere in your code and expecting to backpropagate through it. Since this operation is not differentiable, the backward pass will fail:

import torch

x = torch.randn(10, 10, requires_grad=True)
out = torch.unique(x, dim=1)  # unique along a dim dispatches to the 'unique_dim' kernel
out.mean().backward()
# RuntimeError: the derivative for 'unique_dim' is not implemented.
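
One possible workaround, sketched here as an illustration rather than a fix specific to deepmatcher: compute the unique structure under torch.no_grad() so that torch.unique never enters the autograd graph, then re-select one representative column per unique column from the original tensor via plain indexing, which is differentiable. This assumes it is acceptable for gradients to reach only the selected representatives; the variable names (x, rep_idx) follow the toy example above.

import torch

x = torch.randn(10, 10, requires_grad=True)

with torch.no_grad():
    # unique columns and, for every column of x, the id of its unique column
    uniq, inverse = torch.unique(x, dim=1, return_inverse=True)
    # pick one representative original column index per unique column
    perm = torch.arange(inverse.size(0), device=inverse.device)
    rep_idx = torch.empty(uniq.size(1), dtype=torch.long, device=x.device)
    rep_idx.scatter_(0, inverse, perm)

out = x[:, rep_idx]    # same values as uniq, but built by differentiable indexing
out.mean().backward()  # works; x.grad is populated for the selected columns

Note that when duplicate columns exist, which duplicate ends up as the representative is not guaranteed, so use this only if any representative is acceptable for your loss.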

Thanks for the answer, my problem was solved!