Radiance Testing Framework
--------------------------

A toolkit to test (eventually) all components of the Radiance
synthetic image generation system for conformance to their
specification.


Limitations

We use the Python unittest module to run our tests. This means
that we're currently restricted to testing only complete programs,
and not actual units (since unittest was designed to test Python
units, not C). A C-level testing framework may be added later.

There's no good way to automatically test GUI programs like
rview. We have to rely on good human testers to check whether
those work correctly or not.


Requirements

You need a working installation of Python 2.7 or 3.x on your
system. Radiance must be either built with the executables still
in the source tree (preferable to test before installing), or as
a live installation.


How to run tests

The simplest way to run tests is to use the SCons build system.
The file ray/INSTALL.scons explains the requirements and details.
Once you have SCons working, go to the ray directory and type

  $> scons build
  $> scons test

The first command will build Radiance, and place the executables
in a platform-specific directory below ray/scbuild/.
The second command will automatically execute all available tests
in the environment created by the build.

Other build systems may choose to integrate the tests in a similar
way. The file "run_tests.py" can either be invoked as a script or
imported as a module. Note that in either case, you probably need
to supply the correct paths to the Radiance binaries and library.

As a script:

  usage: run_tests.py [-V] [-H] [-p bindir] [-l radlib] [-c cat]

  optional arguments:
  -V         Verbose: Print all executed test cases to stderr
  -H         Help: print this text to stderr and exit
  -p bindir  Path to Radiance binaries
  -l radlib  Path to Radiance library
  -c cat     Category of tests to run (else all)
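
For example, to run a single category against your own build
(the paths and category name below are placeholders):

  $> python run_tests.py -p /path/to/radiance/bin -l /path/to/radiance/lib -c <category>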

As a module:

  Call the class run_tests.RadianceTests(...) with suitable arguments:
  bindir=[directory ...] - will be prepended to PATH during tests
  radlib=[directory ...] - will be prepended to RAYPATH during tests
  cat=[category ...]     - only test those categories (else TESTCATS)
  V=False                - if True, verbose listing of executed tests
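
For instance, running the tests from another Python build script
might look roughly like this (directory names are placeholders;
the keyword arguments are the ones listed above):

  import run_tests

  # Instantiating the class runs the tests with the given settings.
  run_tests.RadianceTests(
          bindir=['/path/to/radiance/bin'],   # prepended to PATH
          radlib=['/path/to/radiance/lib'],   # prepended to RAYPATH
          cat=['<category>'],                 # omit to run all TESTCATS
          V=True)                             # list each executed test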

Both methods will run all the tests, or just the category given
as the value of the "cat" argument.


What gets tested

There are several test categories, each containing a number of test
suites, each containing one or more tests. When running tests, each
test category will be printed to the console. Depending on the
settings, the individual test cases may also be listed, or just
indicated with a dot. Finally, the total results for each category
are shown.

If any test fails, there will be diagnostic output about the
nature of the failure, but the remaining tests will continue to
be executed. Note that several utility programs may be used to
access the results of other calculations, so if e.g. getinfo is
broken, that may cause a number of seemingly unrelated tests to
fail as well.


How to report failures

If any of the tests fail on your platform, please report your
results (and as much ancillary information about your system and
Radiance version as possible) to the Radiance code development
mailing list at http://www.radiance-online.org/
The developers will then either try to fix the bug, or instruct
you on how to refine your testing to get more information about
what went wrong.


How to contribute test cases

The selection of tests to run is still very much incomplete, but
will hopefully grow over time. You can contribute by creating
tests too! Please ask on the code development mailing list first,
so that we can avoid overlaps between the work of different
contributors.

There are two classes of tests to be considered:

- Testing individual executables
  This means that an individual program like ev, xform, or getinfo
  is tested with typical input data, and the output is compared
  against the expected result (see the sketch after this list).

- Testing specific calculations
  This will mainly affect the actual simulation programs rpict
  and rtrace. For example, there should be a test suite for every
  material (and modifier) type, which uses rtrace to shoot a
  series of rays against a surface under varying angles, in order
  to verify material behaviour under different parameters. Tests
  of this kind may require a custom script.
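
To illustrate the first kind, a self-contained test module might
look roughly like the following sketch (the expression, expected
value, and class name are only examples; the framework normally
wraps contributed cases into modules like this for you):

  import subprocess
  import unittest

  class EvTestCase(unittest.TestCase):
      def test_simple_expression(self):
          # Run the complete program; 'ev' must be on the PATH,
          # which the framework arranges during a test run.
          output = subprocess.check_output(['ev', '2*3+4'],
                                           universal_newlines=True)
          # Compare numerically rather than by exact string, since
          # number formatting may vary between platforms.
          self.assertAlmostEqual(float(output.split()[0]), 10.0)

  if __name__ == '__main__':
      unittest.main()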

Contributed tests can be of two kinds. In the simplest case, you
can contribute a small(!) set of test data, the command line(s)
used to run your tests on them, and a list of expected results.
Result comparisons are typically done in text form (by line).
If the result is a picture, we'll use ttyimage to pick out a few
scan lines for comparison (the image dimensions must be less than
128 pixels). Other binary data needs to be converted into a
suitable text representation as well. If you're not sure what to
use, the developers will be happy to assist you. They will then
also wrap your test case into a Python module for integration
with the framework.

Contributors sufficiently familiar with the Python programming
language and the unittest framework can also submit complete
test suites in Python. Please use the existing tests below the
"testcases" directory as a template, and check out the helper
modules in ".../lib/pyradlib" (where ".../lib" is the location of
the Radiance support library).

Also note the pseudo-builtin module "testsupport" temporarily
created by the RadianceTests() class (see the docstrings there),
which provides information about the various required directory
locations.
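
A skeleton for such a suite might look roughly like this (class
and method names are placeholders; see the existing test cases
and the "testsupport" docstrings for the real conventions):

  import unittest

  # Pseudo-builtin module injected by the framework at run time;
  # its docstrings describe the directory locations it provides.
  import testsupport

  class MyToolTestCase(unittest.TestCase):
      def test_something(self):
          # Build the command line from the directories provided by
          # testsupport, run the program with subprocess, and compare
          # its text output line by line against the expected result.
          self.skipTest('placeholder - replace with a real check')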

And lastly, you'll find that we have deliberately included a space
character in the name of the "test data" directory, because it is
a design requirement that all our executables can handle path
names with spaces.
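
Python-level tests handle this naturally, because subprocess
passes arguments without any shell quoting; for example (the
program name and path are placeholders):

  import subprocess

  # A path containing a space needs no quoting when it is passed
  # as a separate argument list element (no shell is involved).
  subprocess.check_call(['getinfo', '/path/to/test data/example.hdr'])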

In any case, remember that we can't use any shell scripts or
similar tools in our tests. All tests should be able to run on
all supported platforms, where your favourite shell may not be
available. The Python programming language is available for
pretty much any platform, so we decided to use only that.