Failing tests after successful build. How to contribute?

I built PyTorch from source because I want to contribute to the library. Unfortunately, I can’t run

python test/run_test.py

without getting errors. I rebuilt three times inside different Anaconda environments with Python versions 3.7, 3.8 and 3.9, but there are always some errors being thrown, not necessarily the same between the builds.

My questions are these:

  1. Should I expect all the tests to succeed? If not, how do I go about fixing them? The builds succeed without errors.
  2. If I ran the tests, and some failed, does this still mean that ALL tests were executed, or does the test suite stop at the first test that fails?
  3. If the answer to 2. is affirmative, i.e. all tests are run independently of whether one of them failed, am I correct in my assumption that I could run the tests once, make note of which ones failed, then implement my changes to the codebase (e.g. to fix some issue), and re-run the tests to see if anything ELSE fails? In case only the same tests fail as before, should I then make a pull request for my changes?

I really would like to contribute, but this is the major roadblock.

  1. Ideally they should all pass, but some tests can be flaky or fail on unknown/new configurations.

  2. You could pass the --continue-through-error argument to python test/run_test.py so that all tests are executed even if one fails.

  3. See 2 for the first part. Are you interested in contributing to the tests directly or are you working on another feature/fix?

I’m working on an issue from the issue tracker. It’s a specific functionality, and there are specific tests for it (they are named after the function I’m working on). So I’m wondering whether it’s enough for these specific tests to pass, given that some of the tests in run_test.py fail and I’m not sure how to find out why they do. I googled to no avail.

Yes, you should take a look at the related tests, and you can run just those via the -k flag.
E.g. if you are working on a torch.nn fix, the test would likely be located in test_nn.py, and you could select it via:

python test_nn.py -v -k my_test_name

You could use grep -r my_test_name to find the related file and then execute this test in isolation.
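Under the hood, the -k flag selects tests whose names match a pattern; the same filtering is available programmatically through unittest's TestLoader.testNamePatterns. A minimal sketch with invented test names:

```python
import unittest

class ConvTests(unittest.TestCase):
    # Invented test names, standing in for real PyTorch tests.
    def test_conv2d_forward(self):
        pass

    def test_linear_forward(self):
        pass

loader = unittest.TestLoader()
# Roughly equivalent to `python test_file.py -k conv2d`:
# fnmatch-style patterns matched against test names.
loader.testNamePatterns = ["*conv2d*"]
suite = loader.loadTestsFromTestCase(ConvTests)

print(suite.countTestCases())  # 1 -- only test_conv2d_forward matched
```

Running the filtered suite executes only the matching test, which is why the -k approach lets you iterate quickly on the tests for the specific function you are fixing.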
I would also highly recommend @tom’s video on how to fix your first PyTorch bug, which walks you through the necessary steps and shows some tips along the way.


Ok, I will do that. Also thank you for the recommendation.