Radiance Testing Framework
--------------------------

A toolkit to test (eventually) all components of the Radiance
synthetic image generation system for conformance to their
specification.


Limitations

For the moment, we use PyUnit to run our tests. This means that
we're restricted to testing only complete programs, and not actual
units (since PyUnit was designed to test Python units, not C).
A C-level testing framework may be added later.
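
To illustrate what testing a complete program looks like, here is
a minimal sketch of a PyUnit test case that runs an executable and
checks its output. The data file name and the checked condition are
invented for illustration; the real suites live in "py_tests":

  import os
  import unittest

  class GetinfoTestCase(unittest.TestCase):
      def test_dimensions(self):
          # Run the complete program and capture its text output.
          # "scene.hdr" is a stand-in for a real test data file.
          cmd = '../bin/getinfo -d "./test data/scene.hdr"'
          output = os.popen(cmd).read()
          # A real test would compare this against the expected
          # result line by line; here we just demand some output.
          self.assert_(output.strip() != '')

  if __name__ == '__main__':
      unittest.main()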


Requirements

You need a working installation of Python 2.1 (or newer) on your
system. The reason for this is that the PyUnit framework isn't
included with earlier versions. If you prefer to use an older
Python (back to 1.5.2), you can get PyUnit here, and install it
somewhere on your PYTHONPATH:
  http://pyunit.sourceforge.net/
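
If you're not sure what your installation provides, a throwaway
snippet like this (not part of the framework) will tell you:

  import sys
  print sys.version
  try:
      import unittest
      print 'PyUnit (module "unittest") found.'
  except ImportError:
      print 'PyUnit missing - install it from the URL above.'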

Our testing framework currently assumes that the Radiance files
reside in the following local file tree (seen from the "test/"
subdirectory where this file resides):

  executables:    ../bin/*[.exe]
  support files:  ../lib/*
  data files:     ./test data/*

This is the location where the experimental SCons build system
will place everything, so it might be easiest to compile Radiance
using SCons for testing.

The space character in the name of the test data directory is
deliberate, because it is a design requirement that all our
executables can handle path names with spaces.
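
As an illustration of both points, a helper along the following
lines (hypothetical, not the actual framework code) can locate
executables relative to the "test/" directory and quote paths so
that embedded spaces survive:

  import os

  def radiance_bin(name):
      # Executables live one level up from the "test/" directory;
      # Windows binaries carry an ".exe" suffix.
      path = os.path.join('..', 'bin', name)
      if os.name == 'nt':
          path = path + '.exe'
      return path

  def quoted(path):
      # Always quote command line arguments, since our path
      # names may legitimately contain spaces.
      return '"%s"' % path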


How to run tests

On Unix systems, just type "run_all.py" in this directory to
test everything. If that file doesn't have execute rights, you
can supply it to the Python interpreter as its single argument:
"python run_all.py". You can also run individual test suites from
the "py_tests" directory directly: "python test_getinfo.py".

On Windows, this should usually work as well. As an alternative,
use the "winrun.bat" script. WARNING: You need to change the
paths in this script to match your Python installation first.


What gets tested

There are several test groups, each containing a number of test
suites, each containing one or more tests. When running tests,
the names of the test groups and test suites are printed to
the console, the latter followed by an "ok" if they succeed.

If any test fails, there will be diagnostic output about the
nature of the failure, but the remaining tests will continue to
be executed. Note that several utility programs may be used to
access the results of other calculations, so if e.g. getinfo is
broken, that may cause a number of seemingly unrelated tests to
fail as well.


How to report failures

If any of the tests fail on your platform, please report your
results (with as much ancillary information about your system and
Radiance version as possible) to the Radiance code development
mailing list:
   http://www.radiance-online.org/
The developers will then either try to fix the bug, or instruct
you on how to refine your testing to get more information about
what went wrong.


How to contribute test cases

The list of tests run is still very much incomplete, but will
hopefully grow quickly. You can contribute by creating tests too!
Please ask on the code development mailing list first, so that we
can avoid overlaps between the work of different contributors.

There are two classes of tests to be considered:

- Testing individual executables
  This means that an individual program like ev, xform, or getinfo
  is tested with typical input data, and the output is compared
  against the expected result.

- Testing specific calculations
  This will mainly affect the actual simulation programs rpict
  and rtrace. For example, there should be a test suite for every
  material (and modifier) type, which uses rtrace to shoot a
  series of rays against a surface under varying angles, in order
  to verify material behaviour under different parameters. Tests
  of this kind may require a custom script (see the sketch below).
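
As a sketch of what such a script might do, a material test could
feed rays to rtrace and check the returned radiance values. The
octree name and the expected reflectance value here are invented
for illustration:

  import os
  import unittest

  class PlasticTestCase(unittest.TestCase):
      def test_normal_incidence(self):
          # Shoot one ray straight down at a plastic sample;
          # "-h" suppresses the header, "-ov" outputs the value.
          cmd = '../bin/rtrace -h -ov "./test data/plastic.oct"'
          stdin, stdout = os.popen2(cmd)
          stdin.write('0 0 1 0 0 -1\n')   # ray origin, direction
          stdin.close()
          rgb = map(float, stdout.read().split()[:3])
          # 0.25 stands in for the expected reflectance value.
          self.assert_(abs(rgb[0] - 0.25) < 0.01)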

There's no good way to automatically test GUI programs like
rview. We have to rely on good human testers to check whether
those work correctly or not.

Contributed tests can be of two kinds. In the simplest case, you
can contribute a small(!) set of test data, the command line(s)
used to run your tests on them, and a list of expected results.
Result comparisons are typically done in text form (line by line).
If the result is a picture, we'll use ttyimage to pick out a few
scan lines for comparison (the image dimensions must be less than
128 pixels). Other binary data needs to be converted into a
suitable text representation as well. If you're not sure what to
use, the developers can help you with that point. They will then
also wrap your test case into a Python module for integration
with the framework.
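
A line-by-line comparison in that spirit might look roughly like
this hypothetical helper (the actual helper modules are in
"py_tests/unit_tools"):

  def lines_match(expected, actual):
      # Compare two result texts line by line, ignoring
      # leading and trailing whitespace on each line.
      exp = expected.strip().split('\n')
      act = actual.strip().split('\n')
      if len(exp) != len(act):
          return 0
      for i in range(len(exp)):
          if exp[i].strip() != act[i].strip():
              return 0
      return 1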

Contributors sufficiently familiar with the Python programming
language and the PyUnit test framework can also submit complete
test suites in Python. Please use the existing tests in the
"py_tests" directory as a template, and check out the helper
modules in "py_tests/unit_tools".

In any case, please note that we can't use any shell scripts or
similar tools in our tests. All tests should be able to run on
all supported platforms, where your favourite shell may not be
available. The Python programming language is available for
pretty much any platform, so we decided to use only that.