[Radiance-general] Re: CVS, ANSI, C++

Georg Mischler [email protected]
Wed, 12 Jun 2002 08:00:07 -0400 (EDT)


Randolph Fritz wrote:

> [Copied to both lists.  Until a few more people--including Greg
> Ward!--sign up for radiance-dev I don't feel right about using it
> exclusively for this discussion.]

I didn't even know that list existed. When was it created?
I'm also posting to both for the moment, but I'd suggest that we
move this discussion over completely from now on.


> How do people feel about Cygwin/gcc
> <http://sources.redhat.com/cygwin/> under Windows?  I don't know the
> package, and it probably still needs an MS compiler for the header
> files, but otherwise we can use the familiar Unix tools on Windows.  I
> have, however, only a bit of experience with cygwin, and don't
> know how well it works for a large project.

Cygwin is a wonderful hacker's tool, but I wouldn't recommend it
to the typical Windows user. I think the MS header files can be
downloaded somewhere, so that wouldn't necessarily be a problem.
But there are still a lot of issues that require custom coding
for Windows, as cygwin doesn't support all the unix APIs, and
some are only supported in relatively hackish ways. Apart from
that, as much as I hate to admit it, the MS compilers still
optimize a lot better than any gcc version I have seen so far.

This just reminds me of another problem that we'll have to solve
in this context. Since Windows doesn't support NFS file locking
(and neither did cygwin, last time I looked), we'll need to find
a better solution for concurrent access to ambient files. I can
think of two portable ways to do this: Either we invent a
file-based locking mechanism, or we establish a separate server
process that accepts network store and retrieval requests from
the actual simulation processes. The latter would be more
technically involved, but probably a lot more robust. Any thoughts?


> Georg Mischler wrote:
> > It could also be interesting to consider another implementation
> > framework for all the helper programs circulating around the actual
> > Radiance core. Some of the csh scripts have been rewritten in C for
> > Windows, with limited flexibility and robustness. My personal
> > favourite language for this kind of task would be Python (with some
> > parts already existing).
>
> I like this very much, but am a bit concerned about requiring all
> users to install Python libraries.  Also, some users will want other
> scripting languages; TCL, Perl, and (gack!) Visual Basic are likely
> candidates.  Ummm...probably Mathematica.

Python is trivial to install on every supported platform (which
are a lot more than even Radiance supports). Most unix systems
nowadays come with a preinstalled Python interpreter (plus Tcl
and Perl, of course). The reason why I prefer Python in this
context is that it can be easily understood even by normal users
(in contrast to Perl), and that it has a very robust and stable
feature set, so that programs usually don't break with the next
release (in contrast to Tcl, as Greg's heroic efforts with trad
have demonstrated).

On top of that, a large body of related code in Python already
exists, parts of which may eventually be released as Open Source
as well (though I can't make any promises just yet). In any case,
this is more a general thought for the future than an immediate
requirement. The basic core of Radiance should probably continue
to work without too many external dependencies. But a second
layer of tools could profit a lot from some improved flexibility
and portability.

Anyone who wants to use proprietary stuff like VB is always free
to do so, but I'm not sure about the chances of such code being
included in the core distribution of Radiance even in the long run.


> Autoconf...scares me.  It's one of the most difficult scripting
> languages and it actively encourages #ifdef-laden code.  Personally, I
> favor the Kernighan and Pike (*The Practice of Programming*) approach
> to portability; write the base code portably, and bottle up the OS
> dependencies in separate libraries and APIs.

I don't think that those two approaches are mutually exclusive.
Some more complex dependencies certainly belong in separate
modules with a thick layer of barbed wire around them. But there
are also many other small variations and bugs among different
systems with no clear borderlines between vendors, kernel and
library versions, etc. Keeping track of those without a tool like
autoconf is a real pain for both developers and users.

Have you had a look at the makeall script lately? This is
complexity that the user has to handle when something goes wrong.
Autoconf is generally a one-time effort that only needs to be
handled by one or two of the developers. Once that is done, the
trusty mantra of "./configure; make; make install" just magically
works on pretty much any system, whether its specific quirks have
been cataloged before or not. Not every user can grant Greg ssh
access to solve compile problems, and Greg probably wouldn't have
the time to do this for every user anyway.
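To make the contrast concrete, a minimal configure.ac in this
style might look roughly like the following (a sketch only; the
checked headers and functions are placeholders, not a proposal
for the actual Radiance list):

```
AC_INIT([radiance], [3.4])
AC_PROG_CC
AC_CHECK_HEADERS([sys/time.h])
AC_CHECK_FUNCS([gettimeofday nice])
AC_CONFIG_HEADERS([config.h])
AC_OUTPUT([Makefile])
```

Every function that AC_CHECK_FUNCS finds results in a
HAVE_<FUNCTION> symbol in config.h, which is exactly the kind of
feature macro I argue for below.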

The Radiance sources are currently littered with hundreds of
instances of preprocessor symbols referencing more than a dozen
individual operating systems. This almost worked yesterday; it's
already breaking today with very current systems, and it's
guaranteed to break in the future, unless someone constantly
maintains a list of all the systems out there and their specific
bugs and other nonstandard behaviour. I will choose #ifdefs of
the form "HAS_<feature>" any time over the alternative of
multiple nested OS-specific conditionals in the same place.


> Peter Apian-Bennewitz wrote:
> > However, that interface is not changed solely by prototyping functions.
> > Doing more than that risks new bugs- well, we'll get them out. Maybe
> > there's a core structure between just-prototypes and a full rewrite ?
>
> Hmmmm...Radiance plug-ins.  Most Unices support some version of
> dynamic loading these days.  Windows does.  I don't think Plan 9 does...

Prototypes and the elimination of global variables will make the
*internal* interfaces of Radiance a lot clearer and more obvious
than they are right now. After that, it will be much easier to
isolate those parts that need to be changed to better accommodate
any present or future extensions, and the risk of breaking all
the rest when doing so will become much smaller.

The default compile and installation should probably be designed
for static linking, but on most systems it will be relatively
easy to generate dynamic options from there. Creating dynamically
loadable extension modules for Python from existing C libraries
is almost trivial, by the way (the image converter module in Rayfront
is just one practical example involving Radiance code).


> Now, I'm interested in ways to standardize the GUI API.  In my
> opinion, it would be useful if we could customize ximage and rview to
> native OS conventions easily, perhaps by providing an OS specific
> library.  It might also be useful to embed the core rendering tools in
> a dynamic loading environment.  But, again, I don't know what it would
> take.

You're not the first one to think that thought.

In the end it won't really take a lot of effort, but only after
the above steps have been taken. Despite all its shortcomings,
the Windows version of rview already points in the right
direction here, by demonstrating approximately where the
interfaces between the simulation core and a display framework
should be placed. Unfortunately, the existing implementation is a
horrible mess, due to the difficulties of integrating the current
Radiance code on one hand, and some other obstacles the original
developers were facing on the other. I realize that most of you
haven't seen those sources yet, so you'll simply have to take my
word for it... ;)


-schorsch

-- 
Georg Mischler  --  simulations developer  --  schorsch at schorsch.com
+schorsch.com+  --  lighting design tools  --  http://www.schorsch.com/