Why is this test skipped?

I’m trying to run a single test from `pytorch/test/` with pytest.
I type `py.test test_vmap.py -k test_trace` in the shell to execute the test `test_trace()` from `test_vmap.py`.

It says ‘1 skipped’, so I assume the behavior of `trace` is not properly tested because of the skipped test. How can I fix this, i.e., what do I need to change so that the test is not skipped?

I assume the main issue is this: `Could not run 'aten::trace' with arguments from the 'Meta' backend.`

What’s meant by ‘backend’ here? I’m running on a machine with a GPU and can run PyTorch programs on it just fine.

The detailed pytest output is this:

```
platform linux -- Python 3.7.10, pytest-6.2.3, py-1.10.0, pluggy-0.13.1
rootdir: /home/myname/pytorch, configfile: pytest.ini
plugins: hypothesis-6.10.1
collected 185 items / 181 deselected / 4 selected

test_vmap.py .s..                                                                                                                                                                                            [100%]

============================================================================================= short test summary info ==============================================================================================
SKIPPED [1] test_vmap.py:2409: not implemented: Could not run 'aten::trace' with arguments from the 'Meta' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::trace' is only available for these backends: [CPU, CUDA, BackendSelect, Named, InplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, UNKNOWN_TENSOR_TYPE_ID, AutogradMLC, AutogradNestedTensor, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, Tracer, Autocast, Batched, VmapMode].

CPU: registered at aten/src/ATen/RegisterCPU.cpp:11836 [kernel]
CUDA: registered at aten/src/ATen/RegisterCUDA.cpp:13919 [kernel]
BackendSelect: fallthrough registered at ../aten/src/ATen/core/BackendSelectFallbackKernel.cpp:3 [backend fallback]
Named: registered at ../aten/src/ATen/core/NamedRegistrations.cpp:7 [backend fallback]
InplaceOrView: fallthrough registered at ../aten/src/ATen/core/VariableFallbackKernel.cpp:60 [backend fallback]
AutogradOther: registered at ../torch/csrc/autograd/generated/VariableType_2.cpp:9854 [autograd kernel]
AutogradCPU: registered at ../torch/csrc/autograd/generated/VariableType_2.cpp:9854 [autograd kernel]
AutogradCUDA: registered at ../torch/csrc/autograd/generated/VariableType_2.cpp:9854 [autograd kernel]
AutogradXLA: registered at ../torch/csrc/autograd/generated/VariableType_2.cpp:9854 [autograd kernel]
UNKNOWN_TENSOR_TYPE_ID: registered at ../torch/csrc/autograd/generated/VariableType_2.cpp:9854 [autograd kernel]
AutogradMLC: registered at ../torch/csrc/autograd/generated/VariableType_2.cpp:9854 [autograd kernel]
AutogradNestedTensor: registered at ../torch/csrc/autograd/generated/VariableType_2.cpp:9854 [autograd kernel]
AutogradPrivateUse1: registered at ../torch/csrc/autograd/generated/VariableType_2.cpp:9854 [autograd kernel]
AutogradPrivateUse2: registered at ../torch/csrc/autograd/generated/VariableType_2.cpp:9854 [autograd kernel]
AutogradPrivateUse3: registered at ../torch/csrc/autograd/generated/VariableType_2.cpp:9854 [autograd kernel]
Tracer: registered at ../torch/csrc/autograd/generated/TraceType_2.cpp:9566 [kernel]
Autocast: fallthrough registered at ../aten/src/ATen/autocast_mode.cpp:250 [backend fallback]
Batched: registered at ../aten/src/ATen/BatchingRegistrations.cpp:1020 [kernel]
VmapMode: fallthrough registered at ../aten/src/ATen/VmapModeRegistrations.cpp:33 [backend fallback]
=================================================================================== 3 passed, 1 skipped, 181 deselected in 2.05s ===================================================================================
```

Hi,

Usually, when a test is skipped, it is because it should not run: either you don’t have the hardware to run it, or it is a special kind of test (like slow tests that only run if you ask for them, as they are… slow), or it exercises something that we know is not supported, which is fine.
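
For illustration, here is roughly what the first two categories look like with pytest markers. This is a minimal sketch, not actual PyTorch test code, and the `RUN_SLOW` environment variable is made up just to show the pattern:

```python
import os

import pytest
import torch

# Hardware-dependent test: skipped on machines without a GPU.
@pytest.mark.skipif(not torch.cuda.is_available(), reason="requires CUDA")
def test_cuda_only():
    assert torch.ones(1, device="cuda").item() == 1.0

# Opt-in slow test: skipped unless explicitly requested.
# (RUN_SLOW is a hypothetical env var, only to illustrate the idea.)
@pytest.mark.skipif(not os.environ.get("RUN_SLOW"), reason="slow; set RUN_SLOW=1 to run")
def test_slow_exhaustive_sweep():
    pass
```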

This case falls into the third category: the Meta backend is a new, special backend that we’re adding, and it is perfectly fine for it not to be implemented for most ops yet.
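
To make it concrete: a meta tensor carries only metadata (shape, dtype) and no actual data, so an op needs a dedicated Meta kernel before it can run on that backend. Something like this should reproduce the error from your skipped test (a minimal sketch; the exact exception type and message wording can vary between versions):

```python
import torch

# Meta tensors carry only metadata (shape, dtype) -- no storage at all.
t = torch.empty(3, 3, device="meta")
print(t.shape, t.dtype)  # torch.Size([3, 3]) torch.float32

# On a backend with a registered kernel (CPU), trace works fine:
print(torch.trace(torch.eye(3)))  # tensor(3.)

# On the Meta backend there is no kernel for aten::trace, so the
# dispatcher raises the "Could not run 'aten::trace' ..." error.
try:
    torch.trace(t)
except (NotImplementedError, RuntimeError) as e:
    print(str(e).splitlines()[0])
```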
So you can safely ignore this skip :slight_smile:
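
If you’re curious, the runtime-skip pattern itself looks roughly like this: catch the missing-kernel error and turn it into a skip instead of a failure. The helper below is hypothetical, not the actual `test_vmap.py` code:

```python
import pytest
import torch

# Hypothetical helper (not the actual test_vmap.py code): run an op and
# skip the test when the backend has no kernel for it, instead of failing.
def run_or_skip_if_not_implemented(op, *args, **kwargs):
    try:
        return op(*args, **kwargs)
    except (NotImplementedError, RuntimeError) as e:
        if "Could not run" in str(e):
            pytest.skip(f"not implemented: {e}")
        raise

def test_trace_on_meta():
    t = torch.empty(3, 3, device="meta")
    run_or_skip_if_not_implemented(torch.trace, t)
```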
