[Radiance-general] Various texture mappings at rendertime

Greg Ward gregoryjward at gmail.com
Sun Jul 30 21:11:57 PDT 2017


Hi Dion,

I've put a few responses inline, below...

> From: Dion Moult <dion at thinkmoult.com>
> Date: July 30, 2017 4:15:08 PM PDT
> 
> Good day all, 
> 
> I've got a few questions about the texture mappings that are so prevalent in other, less validated rendering engines. I understand that these mappings aren't usually part of the Radiance vocabulary, since simple materials (or even greyscale materials, with a focus purely on luminance) are typically used. But hey :) 
> 
> Displacement maps:
> 
> Some other rendering engines (which may not be scientifically validated, such as Pixar's RenderMan) support a form of displacement mapping that is applied at render time. That is, a lower-resolution mesh is fed into the rendering engine along with a bitmap representing a height map, which is then used to displace the mesh's geometry. 
> 
> Please note that this is not the same as a normal map / bump map, which simply perturbs the surface normals but does not actually shift the geometry itself. 
> 
> Is this possible in Radiance?

Not as such.  The ray intersection routines in Radiance don't offer the surface subdivision capabilities necessary for implementing displacement maps, and the complexity of adding this feature would be considerable.

> Normal maps:
> 
> As a side question, I came across the original 1992 discussion between Simon and Greg about bump mapping / normal mapping, which used texdata along with a custom script to create the data files. I've recreated this method successfully, but am curious whether there is now something more built-in :) (yes, I saw the texpict patch, but it is now outdated). Is there?

I don't really see the problem with converting images to data files if you want that kind of texture/bump map.  Is something broken with your method, or do you just want to do it the way other renderers do it?  

> Occlusion maps:
> 
> From my understanding, the use of occlusion maps to express ambient occlusion is not a thing in Radiance.  Although it is perhaps technically possible to achieve, it only serves to impose artificial caps on the RGB reflectance values that are ignorant of the actual lighting in the scene. Instead, the occlusion should be calculated by Radiance itself, by virtue of fewer bounces reaching surfaces near corners. Is this understanding correct? If so, I am curious why it is so popular in other "photorealistic" rendering engines. 

The interreflection calculation gives you something close to the right answer, whereas occlusion maps offer no guarantees and, in fact, no strict relation to real lighting. They are merely a convenient shortcut that "looks OK" because your eye is a lousy photometer.  They are much easier to compute, because you don't need to consider the other objects in your scene.  That's also why they're wrong.
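As a minimal sketch of turning the interreflection calculation on (the parameter values and file names here are only illustrative, not recommendations):

    rpict -ab 2 -ad 1024 -aa .15 -av .01 .01 .01 -vf view.vf scene.oct > scene.hdr

The -ab option sets the number of diffuse ambient bounces; at its default of 0 there is no interreflection at all, which is the regime where other engines reach for occlusion maps.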

> Diffuse maps:
> 
> This is basically colorpict in action. From my understanding, there is no problem in using this to add further realism and meaningful RGB analysis to a Radiance render, as long as the picture's pixels aren't merely artistic but actually represent the RGB reflectance values of the material. Where can I learn more about how to create these types of pictures, and where can I find repositories of them? I am wary that if I simply use any old picture found online, it may incorrectly skew the luminance analysis. 

True enough.  So long as your image values multiplied against the diffuse RGB of the material are not greater than 1.0, you stay within what is physically plausible.  Since most images convert to Radiance pictures in the 0-1 range, this is fairly easy to ensure.  You should still choose an RGB value that, when multiplied against the average given by "pvalue -h -b -d texture.hdr | total -m", gives you the desired average diffuse surface reflectance.
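To make that concrete, here is a minimal sketch (the file name, the measured average, and the 0.5 target reflectance are all made-up for illustration).  Say the command reports an average of:

    pvalue -h -b -d texture.hdr | total -m
    0.62

For a desired average diffuse reflectance of 0.5, the multiplier is 0.5/0.62 ≈ 0.806, so the pattern and material might look like:

    void colorpict tex_pat
    7 red green blue texture.hdr picture.cal pic_u pic_v
    0
    0

    tex_pat plastic tex_mat
    0
    0
    5 .806 .806 .806 0 0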

> UV mapping with coordinates:
> 
> I played a bit with maps and got familiar with the various transforms and the pic_u/v and tile_u/v options for placing maps to scale. At first glance I did not notice anything in the refman about using more sophisticated UV coordinates (for example, mapping UV coordinates directly to coordinates in a polygon). Is this possible? 

If you have (u,v) coordinates in a Wavefront .OBJ file, you can use obj2mesh to preserve them and utilize the "Lu" and "Lv" built-in variables to access them in the .cal file associated with patterns or textures.  This is currently the only way to import (u,v) coordinates in Radiance, unfortunately.
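For instance, a rough sketch (the file names here are placeholders): compile the .obj, keeping its (u,v) coordinates, with

    obj2mesh -a materials.rad model.obj model.rtm

where materials.rad could look the picture up by those coordinates directly:

    void colorpict uv_pat
    7 red green blue texture.hdr . Lu Lv
    0
    0

and the scene then references the compiled mesh as:

    void mesh mymodel
    1 model.rtm
    0
    0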

> Sorry for the barrage of questions. 
> 
> Kind regards, 
> Dion
> 