[Radiance-general] Rendering for off-centre presentation in a VR lab

P George Lovell p.g.lovell at st-andrews.ac.uk
Tue Jan 29 08:36:35 PST 2013


Thanks Giovanni and Greg,

I'll try those rpict options; they sound like just what I need.

I won't be using high dynamic range, which means I may have to simulate a 
more muted illumination pattern - more like the usual Scottish weather!

Rholo would be cool! Currently everything is visualized in code I've 
written in MATLAB, as I'm using a PC with MinGW...
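Just to get the geometry straight in my own head, here's a rough sketch of how I think
the view shift and lift could be computed from the tracked head position. This is only
a sketch: it assumes the screen is the plane z=0, centred on the origin with +y up and
+x to the viewer's right, that the model is built in the same coordinates, and all the
names and numbers are just placeholders.

import math

def window_view(head, screen_w=6.0, screen_h=2.0):
    """rpict view options for looking 'through' the screen at a model behind it.

    head is the tracked eye position (x, y, z) in metres, with the screen
    in the plane z = 0, centred on the origin, and z > 0 in front of it.
    """
    hx, hy, hz = head
    # Frustum angles that make the image exactly screen-sized at the
    # screen distance.
    vh = 2.0 * math.degrees(math.atan((screen_w / 2.0) / hz))
    vv = 2.0 * math.degrees(math.atan((screen_h / 2.0) / hz))
    # Shift/lift the image (in units of image widths/heights) so it covers
    # the physical screen rather than the region straight ahead of the eye.
    vs = -hx / screen_w
    vl = -hy / screen_h
    return ("-vtv -vp %.4f %.4f %.4f -vd 0 0 -1 -vu 0 1 0 "
            "-vh %.4f -vv %.4f -vs %.4f -vl %.4f"
            % (hx, hy, hz, vh, vv, vs, vl))

# e.g. a viewer standing 1 m to the right of centre, 2.5 m back from the screen:
print(window_view((1.0, 0.0, 2.5)))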

George






On 29/01/2013 09:59, Giovanni Betti wrote:
> Hi George,
>
> This seems like a very interesting project! Does this also include trying to simulate the high dynamic range found in real life?
>
> For your problem I guess that the options -vs and -vl might help you.
> From the rpict man page:
>
> -vs val
> Set the view shift to val. This is the amount the actual image will be shifted to the right of the specified view. This option is useful for generating skewed perspectives or rendering an image a piece at a time. A value of 1 means that the rendered image starts just to the right of the normal view. A value of -1 would be to the left. Larger or fractional values are permitted as well.
>
> -vl val
> Set the view lift to val. This is the amount the actual image will be lifted up from the specified view, similar to the -vs option.
>
>
> Hope this helps,
>
> Giovanni
>
>
> -----Original Message-----
> From: P George Lovell [mailto:p.g.lovell at st-andrews.ac.uk]
> Sent: 29 January 2013 09:36
> To: Radiance general discussion
> Subject: [Radiance-general] Rendering for off-centre presentation in a VR lab
>
> Hi Everyone,
>
> I'm attempting to use Radiance to generate images for presentation in a large VR space. The system currently uses a game engine, but I'm not happy with the quality of its rendering, as I'm interested in the visual perception of shadows and shading.
>
> The space* is approximately a 6 x 6 x 2 metre volume with a stereo
> (polarized) screen at one end; the screen is roughly 6 x 2 metres.
>
>
> I want to present a rendered scene as if it lies behind the screen, i.e.
> the screen is a window through which we view a rendered object. The
> viewer can then walk through the space and I can present updated views
> relative to the current viewing location (I have 6 DOF head tracking).
> Obviously this requires a lot of offline rendering and a relatively
> narrow movement range - just to cut the rendering overhead.
>
> It's easy enough to see how I might render images for presentation when
> the viewer and the viewing target are positioned centrally within the world,
> i.e. looking straight ahead towards the middle of the screen. What I
> don't understand is how I render scenes for when the viewer has moved
> off-centre to the left or right. Firstly, perspective is going to make one
> side of the screen smaller than the other; I'd need to correct for this so
> that the rendered image fits the actual screen.
>
> I think I could build in some markers that denote the corners of the
> large VR screen and then stretch the image so that these markers lie on the
> corners of the screen, but this seems a little clumsy.
>
> Is there a better way?
>
> George
>
>
> *<http://www.abertay.ac.uk/about/news/pickoftheweek/2010/name,5454,en.html>
>
>
>
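P.S. Writing this down before I forget: once the view options are computed as in the
sketch above, I assume they just get dropped into an ordinary rpict call, something
along these lines (the octree name, resolution and output file are only placeholders):

rpict -vtv -vp 1.0 0.0 2.5 -vd 0 0 -1 -vu 0 1 0 \
      -vh 100.39 -vv 43.60 -vs -0.1667 -vl 0.0 \
      -x 2048 -y 683 -pa 0 scene.oct > head_x+1.0.hdr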


-- 
Dr P. George Lovell,

Lecturer in Psychology
University of Abertay Dundee
Dundee
DD1 1HG

Tel 01382-308581
Fax 01382-308749

Researcher/Co-investigator,
School of Psychology, University of St Andrews

RM 2.01 (The Tower Room).
Phone      (01334) 462085



