Building a simple testing framework in Python
Every good product needs good test coverage in order to ensure that it works both in “happy scenarios” and in “bad scenarios”, including disruptions, limits, and so on.
Besides unit tests, every module should be tested “end-to-end” as well. In this blog post I’ll demonstrate how to build a simple testing framework in Python that will allow you to write tests, run them, get statistics and more. The important thing is making the test runner scalable, so we can add tests without changing existing code.
Let’s start by defining a base class that all tests will inherit from. The base class has some methods that can be overridden by each test implementation:
```python
class TestBase(object):
    @classmethod
    def setup(cls):
        pass

    @classmethod
    def run_test(cls):
        raise Exception("Not Implemented")

    @classmethod
    def tear_down(cls):
        pass
```
As you can see, the base class has three methods: one for preparing the environment before the actual test runs, one for the actual test implementation, and one for cleaning up the environment once the test is over.
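For example, a concrete test might look like the sketch below. The class name and the values it checks are purely illustrative; any class that inherits from TestBase and overrides these methods will work. (TestBase is repeated here so the example stands on its own.)

```python
class TestBase(object):  # repeated from above so this example is self-contained
    @classmethod
    def setup(cls):
        pass

    @classmethod
    def run_test(cls):
        raise Exception("Not Implemented")

    @classmethod
    def tear_down(cls):
        pass


class MyShortTest(TestBase):
    @classmethod
    def setup(cls):
        # Prepare whatever the test needs, e.g. some input data
        cls.numbers = [1, 2, 3]

    @classmethod
    def run_test(cls):
        # Any raised exception (including a failed assert) marks the test as failed
        assert sum(cls.numbers) == 6, "unexpected sum"

    @classmethod
    def tear_down(cls):
        # Release whatever setup created
        cls.numbers = None
```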
Now, let’s say all the tests live in a directory called “tests”, and each test is defined as a class that inherits from “TestBase”, probably in its own file. What we need to do is write a function that enumerates all the Python files in this directory and finds the classes that inherit from TestBase. This gives us the full list of available tests and allows us to dynamically create each test object and call its methods:
```python
def get_available_tests():
    test_base = getattr(sys.modules["tests_common"], "TestBase")

    logger.info("Looking for available tests")
    files = os.listdir(os.path.join(os.getcwd(), "tests"))
    for file in files:
        if file.endswith(".py") and file != "__init__.py":
            logger.debug("+ Found python file %s" % file)
            name = ".".join(file.split(".")[:-1])
            import_module("tests.%s" % name)

    tests = test_base.__subclasses__()
    logger.info("Loaded %d tests:" % len(tests))
    for test in tests:
        logger.info("+ %s" % test.__name__)
    return tests
```
Note that in order to use the “import_module” function, we need to make sure “tests” is a package. This is done simply by creating an empty “__init__.py” file in the “tests” directory. For more information, you can read about modules and packages in the following link.
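The discovery above relies on `__subclasses__()`, which Python maintains automatically for every class: importing a module is enough to register any TestBase subclass it defines. The mechanism can be seen in isolation with a quick sketch (class names here are arbitrary):

```python
class Base(object):
    pass

class A(Base):
    pass

class B(Base):
    pass

# Every class inheriting from Base is tracked automatically, which is what
# lets the runner find tests just by importing their modules.
print([cls.__name__ for cls in Base.__subclasses__()])  # ['A', 'B']
```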
Running a single test is done easily by calling the base class methods for each test:
```python
def run_single_test(test):
    ok = True
    try:
        logger.info("Starting test: %s" % test.__name__)
        test.setup()
        test.run_test()
        logger.info("Test execution completed successfully!")
    except Exception as e:
        # Note: use the exception itself, not e.message (removed in Python 3)
        logger.error("Test execution failed: %s" % e)
        logger.error("-" * 60)
        trace = traceback.format_exc().splitlines()
        for line in trace:
            logger.error(line)
        logger.error("-" * 60)
        ok = False
    finally:
        test.tear_down()
    return ok
```
Note that we simply call all three methods from the base class in the right order. In case there is a test failure, the test should throw an exception. The exception will be caught, the test execution will be marked as a failure, and we’ll print the stack trace for debugging purposes. After each test, we call its tear_down method, which shouldn’t throw any exception. If we want to protect ourselves, we can simply wrap it with try…except without taking any further action.
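That protective wrapper could look like the sketch below (the helper name and log message are my own, not part of the runner above):

```python
import logging

logger = logging.getLogger(__name__)


def safe_tear_down(test):
    """Call tear_down without letting a cleanup error mask the test result."""
    try:
        test.tear_down()
    except Exception as e:
        # Log and move on; the test outcome was already decided by run_test
        logger.error("tear_down of %s failed: %s" % (test.__name__, e))
```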
Now, we can add some logic that lets us choose which tests to run in the current execution. This can be done easily by taking a pattern and filtering the list of tests with it. For example, if we have three tests called “TestCase1”, “MyShortTest” and “MyVeryComplicatedTest”, we can run the test runner with patterns such as “*”, “TestCase1,MyShortTest”, or “My*”.
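The matching itself is handled by `fnmatch` from the standard library, which supports shell-style wildcards:

```python
from fnmatch import fnmatch

names = ["TestCase1", "MyShortTest", "MyVeryComplicatedTest"]

print([n for n in names if fnmatch(n, "*")])          # all three tests
print([n for n in names if fnmatch(n, "My*")])        # ['MyShortTest', 'MyVeryComplicatedTest']
print([n for n in names if fnmatch(n, "TestCase1")])  # ['TestCase1']
```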
The filtering function is simple and looks like this:
```python
def filter_tests(tests, pattern):
    filtered = list()
    if not pattern:
        logger.info("No filtering pattern was supplied")
        return filtered

    patterns = pattern.split(",")
    for t in tests:
        for p in patterns:
            if fnmatch(t.__name__, p.strip()):
                filtered.append(t)
                break  # avoid adding the same test twice if it matches several patterns

    logger.info("After filtering tests by the given pattern, found %d tests to execute" % len(filtered))
    for t in filtered:
        logger.info("+ %s" % t.__name__)
    return filtered
```
We can get the pattern from command line arguments and provide more interesting options as well, for example measuring test durations or continuing execution after failures. A simple runner can look like this:
```python
if __name__ == "__main__":
    parser = OptionParser()
    parser.add_option("-t", "--tests", dest="tests",
                      help="Comma separated list of tests")
    parser.add_option("-c", "--continue-after-failure", dest="continue_after_failure",
                      action="store_true",
                      help="Continue to the next test in case of test failure")
    (options, args) = parser.parse_args()

    # Load available tests from "tests/" directory
    tests = get_available_tests()
    if len(tests) == 0:
        logger.error("No tests were found, aborting execution!")
        sys.exit(1)

    # In case no tests were supplied, assume the purpose was to list all available tests
    if not options.tests:
        logger.warn("No tests were selected, exiting!")
        sys.exit(0)

    # Filter the tests we want to run
    to_run = filter_tests(tests, options.tests)

    # Run filtered tests
    passed_tests = 0
    failed_tests = 0
    total_duration = 0
    for test in to_run:
        # Run test and measure execution time
        start_time = time.time()
        ok = run_single_test(test)
        test_duration = time.time() - start_time
        logger.info("Test execution took %.2f seconds" % test_duration)
        total_duration += test_duration

        if ok:
            passed_tests += 1
        else:
            failed_tests += 1
            if not options.continue_after_failure:
                logger.error("Discarding other tests due to failure")
                break

    logger.info("=" * 60)
    logger.info("Ran %d tests (%d passed, %d failed) in %.2f seconds"
                % (passed_tests + failed_tests, passed_tests, failed_tests, total_duration))
    sys.exit(1 if failed_tests > 0 else 0)
```
You can find the full source code, with some example tests and usage output, available for download at the following link.
Bonus: If you are using a Linux machine, here is a nice way to get a poor man’s auto-completion for test names. I use it where I have more than 100 tests, and it’s really convenient.
Create a new bash file, for example let’s call it “load_tests_to_autocompletion.sh”, and source it from your ~/.bashrc file (don’t forget to set the execution bit). The file should look like this:
```bash
#!/bin/bash

FILES="`ls /<full-path>/tests/*.py`"
TESTS="`grep -o -P '(?<=class ).*(?=\(TestBase\):)' ${FILES} | cut -d':' -f2`"

function tests {
    python /<full-path>/tests_runner.py -t $*
}

complete -W "${TESTS}" tests
```
In this script we simply list all the Python files in the “tests” directory and grep the names of the classes that inherit from “TestBase”. The usage is simple: type “tests ” and press Tab to complete test names.
Hope I gave you some useful ideas for building your own testing framework.
– Alexander
3 thoughts on “Building simple testing framework in Python”
How does this compare to Python’s built-in unit testing framework?
I’m not really familiar with the Python unit testing framework; I’ve never used it. I guess it is almost the same. One of my purposes was to give some basic ideas, and I chose Python for the implementation just because it’s easy 🙂
This framework can be easily extended and modified for your own use cases (for example, I’ve added a JSON file output for each suite run that contains the list of all the executed tests and their durations, and I’m using it for load balancing tests between builders, so tests can run simultaneously).
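A sketch of what such a JSON report writer could look like; the function name, file name and field layout here are illustrative, not the exact format used in the original extension:

```python
import json
import time


def write_suite_report(results, path="suite_report.json"):
    """Write a per-suite JSON report.

    results: list of (test_name, passed, duration_seconds) tuples,
    e.g. as collected by the runner loop. The layout is a hypothetical
    example, not the author's exact format.
    """
    report = {
        "generated_at": time.time(),
        "tests": [
            {"name": name, "passed": passed, "duration": duration}
            for (name, passed, duration) in results
        ],
    }
    with open(path, "w") as f:
        json.dump(report, f, indent=2)
```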
Of course! I did like your approach to the unit testing framework also! 🙂