When we’re building new models we often write a lot of tests iteratively, but I rarely see this formalized.
My current tests are built by inheriting from PyTorch’s test/common.py, which I make importable by installing a separate Python package into my environment that contains only what is in common.py. (I made a feature request, issue #5045, that didn’t receive any love yet.)

setup.py:
from setuptools import setup

setup(
    name="torchtestcommon",  # what you want to call the archive/egg
    version="0.4.0a0",
    packages=["torchtest"],  # top-level packages you can import, e.g. torchtest.common
    install_requires=[],
    package_data={},
    author="PyTorch contributors",
    author_email="",
    description="Copy-paste of pytorch/test/common.py into ./torchtest/",
)
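For reference, this is the directory layout I’m assuming (the __init__.py can be empty, it just makes torchtest importable as a package):

torchtestcommon/
    setup.py
    torchtest/
        __init__.py   # empty
        common.py     # copied from pytorch/test/common.py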
After cd torchtestcommon; pip install -e .
I can write stuff like
import torch
from torchtest.common import TestCase

testing = TestCase()
testing.assertAlmostEqual(torch.ones(1), torch.ones(1))
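(Instantiating TestCase directly like this lets me use its assert helpers interactively, outside a test runner.)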
Or
import torch
from thisfancyneuralnetworkname import MyModel
from torchtest.common import TestCase

input_ex = torch.ones([100, 100])
output_ex = torch.ones([100, 100])

class TestMyModel(TestCase):
    def test_forward(self):
        model = MyModel()
        y = model(input_ex)
        self.assertEqual(y.size(), output_ex.size())
        self.assertAlmostEqual(y, output_ex, places=3)
.....
And then run pytest .
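While iterating, pytest’s -k filter is handy for running a single test by name, e.g.

pytest . -k test_forward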
How do you do it? Any links to best practices or example repositories?