[Radiance-general] Evaluation

Martin Moeck MMoeck at engr.psu.edu
Thu Jan 20 19:55:49 CET 2005


For me, the most important determinants of accuracy (besides accurate input) are the number of interreflections, the ambient density, and extensive use of mkillum. Five ambient bounces (-ab 5) is pretty much a must; this has been known ever since people started writing lighting programs. -ad 4096 can't hurt, either. 
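The settings above translate into rpict options roughly as follows. This is a sketch only; the scene, view, and output file names (scene.oct, view.vf, out.hdr) are placeholders, not from the post.

```shell
# Sketch of an rpict call using the settings discussed above (-ab 5, -ad 4096).
# scene.oct, view.vf and out.hdr are placeholder names.
CMD="rpict -ab 5 -ad 4096 -vf view.vf scene.oct"
echo "$CMD > out.hdr"
```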

Martin Moeck


-----Original Message-----
From:	Jack de Valpine [mailto:jedev at visarc.com]
Sent:	Thu 1/20/2005 1:27 PM
To:	Radiance general discussion
Cc:	
Subject:	Re: [Radiance-general] Evaluation
Hi Rob and Alexa,

I will follow-on Rob's excellent comments with a few thoughts of my own.

As Rob has indicated, the "accuracy" of a given simulation is highly 
dependent on the accuracy of the input data (geometry, materials, and 
lighting). But I think that accuracy also has to be evaluated in terms 
of project scope and objectives. A simulation project early in the design 
process will necessarily have less resolved information to work with 
(i.e., decisions have not yet been made on a lot of things), whereas a 
simulation later in the design process will potentially have a different 
degree of accuracy due to the design decisions made by that point.

Another thing to consider is what is being compared against. While the 
real world is what we ultimately try to measure against, studying design 
scenarios is another way to use a simulation tool such as Radiance. In 
comparative studies, that is, one design scenario against another, most 
things can be held constant while a few things vary (geometry, materials, 
or lighting). The question I would be inclined to ask in this case is 
whether we can make reasonable judgments about design scenarios even 
though we may not be using the most accurate data (for a variety of 
reasons); for example, if we are using a simple sky model (e.g. gensky), 
can we still make reasonable design judgments? I would suggest that even 
with unknowns or limited data, performing a Radiance-based simulation is 
going to be far more useful from a design evaluation standpoint than 
using one of the multitude of shrink-wrap renderers on the market.
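A comparative-study setup with a simple sky model might be sketched as follows; the date, time, and file names (sky.rad, scheme_a.rad, scheme_b.rad) are illustrative placeholders, not from this thread.

```shell
# Two design scenarios under the same simple CIE sky (gensky),
# so that only the design variable changes between runs.
SKY="gensky 3 21 12:00 -c"      # overcast CIE sky, March 21, noon (placeholder date)
for scheme in scheme_a scheme_b; do
  echo "$SKY > sky.rad && oconv sky.rad $scheme.rad > $scheme.oct"
done
```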

Best,

-Jack de Valpine


Rob Guglielmetti wrote:

>Hi Alexa,
>
>  
>
>>triggered by Jan's and Richard's problem of evaluating glare, I was
>>wondering how the architectural & lighting community (obviously I'm not a
>>member of it) uses RADIANCE.
>>    
>>
>
>OK, so I guess I'll go first.  But I'm not standing out here all by
>myself; I'd love to hear from the rest of you...
>
>  
>
>>a) how do you know that the results you come up with 'reflect the true
>>values' (I assume 'true' is +/- an error)? I agree the simulation
>>results are not completely out of order in terms of luminance, otherwise
>>people wouldn't use RADIANCE.
>>    
>>
>
>Good question.  Radiance itself has been the subject of many validation
>studies, and has been proven to be quite capable of coming up with the
>"true values" for most scenes, assuming valid, high-quality input. 
>There's the rub though; skies are variable, and every project brings with
>it new materials -- often materials unavailable for accurate sampling at
>the time of the simulation.  So, often I *don't* know I'm looking at "true
>values", but I do know (hope) that the values are close enough to make
>evaluations with.  Many times we are evaluating several different
>schemes, and when they are all simulated in Radiance with the same kind of
>what I call "accuracy settings" -- you know, the myriad values used for
>rpict & rtrace -- I know for certain that I can say scheme-a is (insert
>criterion here: brighter, more uniform, what have you) than scheme-b.  Often
>all that is needed is for Radiance to guide us in a direction
>that can be explored more fully, either with Radiance or with physical
>mockups.
>
>But yes, the temptation is there, to treat the numbers generated by
>Radiance as THE numbers.  I have to fight it all the time; I submit a
>report showing 290 Lux on a plan, and people go "oh, this doesn't work, we
>can't have more than 270 Lux there."  That's my cue to ease into the
>discussion of how a mathematical model of the sky's luminance distribution
>is NOT /the sky's/ luminance distribution, etc.
>
>  
>
>>b) once a building has been built, has anyone gone back inside the
>>office they simulated and obtained measurements to compare with their
>>simulation results?
>>    
>>
>
>Many of the validation studies do just that.  My first big project
>simulated with Radiance is still under construction, but we have done
>similar tests with projects simulated with Lightscape and AGI and have
>been generally pleased with the outcome.  Typically, the light levels are
>not the same, but neither is the real space as compared to the simulation
>model.  But the values are all in the ballpark and the clients have been
>happy. Indeed, the last big museum project I did with Lightscape at my
>previous firm was astonishingly accurate; I believe the light levels on
>the day my boss measured them were within 5% of the calculation.  But I
>also know a thing or two about luck.  I don't tell clients to expect 5%
>accuracy and neither should you.  Barring luck, the only way to get that
>close is to do a simulation with measured sky data (and take readings of
>the space under that same sky that you are measuring).  Right, John M.? 
>This of course requires a finished building, which sorta misses the point
>of the simulation!  But John's thesis work provides the basis for many of
>us using Radiance to achieve real restful sleep at night. =8-)
>
>  
>
>>c) what magnitude of error is acceptable for your work?
>>    
>>
>
>Ian Ashdown says it better than I can, in his (excellent) "Thinking
>Photometrically" coursenotes:
>
>"As for daylighting calculations, it is likely that Jongewaard (1993) is
>correct - the results are only as accurate as the accuracy of the input
>data. Done with care, it should be possible to obtain ±20 percent accuracy
>in the photometric predictions. However, this requires detailed knowledge
>and accurate modeling of both the indoor and outdoor environments. If this
>cannot be done, it may be advisable to walk softly and carry a calibrated
>photometer."
>
>  
>
>>d) I've come across two opposing views on the accuracy of luminaire
>>descriptor files provided by manufacturers. One states that these can be
>>off quite a bit (I think I read that in the 'Rendering with Radiance'
>>book), while other authors tout how careful and accurate their simulation
>>is because it uses manufacturer-provided luminaire descriptors.
>>    
>>
>
>Photometry data from the manufacturers is a far better way to describe the
>performance of a luminaire than most of the built-in tools in simulation
>programs.  But yes, there are still problems, primarily the issue of
>far-field photometry.  Linear cove fixtures are treated as point sources
>when photometered, and misuse of these IES files in a simulation can lead
>to very inaccurate results.  Of course, in Radiance you can tighten the
>-ds (source substructuring) value -- a smaller value gives finer
>subdivision -- to at least help the situation, by taking that "point"
>distribution and sort-of arraying it along the fixture's axis.  As long as
>the distribution is the same along the length of the luminaire, and your
>-ds is suitably small, you can get good results this way with
>manufacturer-supplied data.  The other big lighting simulation packages
>like AGI & Lumen Micro (and the dear departed Lightscape) also allow you
>to do this, in their own ways.
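The workaround described above might be sketched like this; the file names (cove.ies, scene.oct, view.vf, out.hdr) and the particular -ds value are placeholders, not from the post.

```shell
# Hypothetical handling of a linear cove fixture photometered as a point source:
# convert the IES data, then render with a small -ds so the long source
# gets finely subdivided along its length.
CONVERT="ies2rad -o cove cove.ies"              # IES photometry -> Radiance description
RENDER="rpict -ds 0.02 -vf view.vf scene.oct"   # small -ds = finer source substructuring
echo "$CONVERT"
echo "$RENDER > out.hdr"
```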
>
>But sometimes the boast of accuracy simply because manufacturer-supplied
>photometry is being used should be a warning sign...  I recently received
>a mailer from one of the manufacturers of a popular lighting simulation
>program, featuring a rendering that was supposed to impress upon me
>how amazing and accurate the software is.  The thing is, the linear
>uplight pendants in the image were casting this ridiculous round spot on
>the ceiling, bearing no resemblance to the linear nature of the fixture --
>in fact, it looked a heck of a lot like the operator knew nothing about
>far-field photometry and the workarounds one must use when using
>photometry files based on that method.  And this was the featured
>rendering for the product's promotional literature -- worse, the rendering
>was created by one of the company's in-house tech support/training people.
>(!)
>
>I think there is a lot of naivete in the industry -- once you get beyond
>this group, which is obviously much more concerned with accuracy -- when it
>comes to these photometry files.  Many designers just download the files,
>plug them into their programs, and hit the "do my job" button.  In fact,
>these files are really just ASCII dumps of a test report: a test that used
>a certain lamp, with a certain lumen depreciation factor, which may be
>different from the one in your spec; other light loss factors need to be
>considered, and the orientation may not even be what you expected.  So I
>guess it just goes back to garbage in, garbage out.  Manufacturer-supplied
>files are only as good as the person integrating them into the simulation.
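The light-loss arithmetic described above can be sketched as follows. The factor values (0.9 lamp lumen depreciation, 0.85 luminaire dirt depreciation) and the 1000 cd test value are made-up placeholders for illustration.

```shell
# Scale a tested candela value by the product of assumed light loss factors.
# 0.9 (LLD) and 0.85 (LDD) are illustrative, not from any standard or test report.
TOTAL_LLF=$(awk 'BEGIN { printf "%.3f", 0.9 * 0.85 }')                   # LLD x LDD
MAINTAINED=$(awk -v f="$TOTAL_LLF" 'BEGIN { printf "%.0f", 1000 * f }')  # maintained cd
echo "total LLF = $TOTAL_LLF, maintained cd = $MAINTAINED"
```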
>
>- Rob Guglielmetti
>www.rumblestrip.org
>


