Parallel rendering

Parallel rendering (or distributed rendering) is a method used to improve the performance of computer graphics applications by dividing the rendering workload across multiple processors or machines. Rendering demands massive computational resources for complex scenes, such as those encountered in medical visualization, iso-surface generation, and some CAD applications, and traditional techniques like ray tracing or 3D texturing are extremely slow on a single ordinary machine. Virtual reality and visual simulation programs, which render to multiple display systems concurrently, are further applications of parallel rendering.

Subdivision of work

Parallel rendering divides the work to be done and processes it in parallel. For example, a non-parallel ray-casting application sends rays one at a time to every pixel in the view frustum. Instead, the frustum can be divided into x parts, with x threads or processes sending rays to their respective tiles in parallel. A cluster of machines can perform such work, with the partial results composited into the final image. This is parallel rendering; a minimal sketch of the idea follows.
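
The following sketch (in Python, using only the standard multiprocessing module) illustrates this tile decomposition under stated assumptions: render_pixel is a hypothetical placeholder for casting a single ray, and the frame dimensions and tile count are illustrative. A real application would substitute its own ray caster.

    # Minimal sketch of tile-based parallel ray casting. render_pixel is a
    # hypothetical placeholder for casting one ray; WIDTH, HEIGHT and TILES
    # are illustrative assumptions.
    from multiprocessing import Pool

    WIDTH, HEIGHT, TILES = 640, 480, 8

    def render_pixel(x, y):
        # Placeholder: cast a ray through pixel (x, y), return a grey value.
        return (x ^ y) & 0xFF

    def render_tile(tile_index):
        # Each worker renders one horizontal band of the frame.
        rows = HEIGHT // TILES
        y0 = tile_index * rows
        return [[render_pixel(x, y) for x in range(WIDTH)]
                for y in range(y0, y0 + rows)]

    if __name__ == "__main__":
        with Pool(TILES) as pool:
            bands = pool.map(render_tile, range(TILES))
        # Compositing is trivial here: concatenate the bands in order.
        frame = [row for band in bands for row in band]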

Traditional parallel rendering is a good example of an "embarrassingly parallel" workload: the frames to be rendered are simply distributed amongst the available compute nodes, each frame being rendered entirely on one node, with multiple frames processed concurrently because there are multiple nodes. A more tightly coupled parallel approach instead distributes a single frame across multiple nodes, using cross-communication between them, and can reduce the time to render each frame by orders of magnitude. In this way a rendering job consisting of multiple frames can be reviewed and adjusted interactively, enabling designers to iterate faster. A sketch of simple frame distribution follows.
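
As an illustration, the following Python sketch hands out whole frames round-robin to a pool of independent workers with no communication between them; render_frame is a hypothetical placeholder for an application's per-frame renderer, and the node and frame counts are assumed example values.

    # Minimal sketch of "embarrassingly parallel" frame distribution: whole
    # frames go to independent workers with no cross-communication.
    # render_frame is a hypothetical placeholder.
    from multiprocessing import Pool

    NUM_NODES, NUM_FRAMES = 4, 16  # illustrative sizes

    def render_frame(frame_number):
        # Placeholder: render one complete frame on one node.
        return "frame %d rendered" % frame_number

    if __name__ == "__main__":
        with Pool(NUM_NODES) as pool:
            results = pool.map(render_frame, range(NUM_FRAMES))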

In interactive parallel rendering there are several approaches to distributing the rendering work, each with its own advantages and disadvantages. Sort-first rendering decomposes the final view in screen space: each contributor renders a 2D tile of the final view. Its scalability is limited by the parallel overhead caused by objects that span multiple tiles. Sort-last rendering, on the other hand, decomposes the rendered database across all rendering units and recombines the partially rendered frames. This mode scales the rendering very well, but the recomposition step is expensive due to the amount of pixel data it must process (see the sketch after this paragraph). DPlex rendering distributes full, alternating frames to the individual rendering nodes. It scales very well but increases the latency between user input and final display, which is often irritating for the user. Stereo decomposition is used for immersive applications, where the passes for the individual eyes are rendered by different rendering units; passive stereo systems are a typical example of this mode.
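
A minimal sketch of the sort-last recomposition step, assuming NumPy and random placeholder frame contents: each rendering unit supplies a colour and a depth value per pixel, and the composite keeps, at each pixel, the fragment nearest to the viewer.

    # Minimal sketch of sort-last depth compositing (assumes NumPy; frame
    # contents here are random placeholders). The nearer fragment wins at
    # each pixel.
    import numpy as np

    def depth_composite(color_a, depth_a, color_b, depth_b):
        nearer = depth_a < depth_b  # pixels where node A's fragment is closer
        color = np.where(nearer[..., None], color_a, color_b)
        depth = np.where(nearer, depth_a, depth_b)
        return color, depth

    # Example: merge two partially rendered 480x640 RGB frames.
    h, w = 480, 640
    color_a, depth_a = np.random.rand(h, w, 3), np.random.rand(h, w)
    color_b, depth_b = np.random.rand(h, w, 3), np.random.rand(h, w)
    final_color, final_depth = depth_composite(color_a, depth_a,
                                               color_b, depth_b)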

Parallel rendering can thus be used by graphics-intensive applications to visualize data more efficiently simply by adding resources such as additional machines.

Open source applications

The open source software package Chromium (http://chromium.sourceforge.net) provides a parallel rendering mechanism for existing applications. It intercepts OpenGL calls and processes them, typically to send them to multiple rendering units driving a display wall.

The Equalizer project (http://www.equalizergraphics.com) is an open source rendering framework and resource management system for multipipe applications. Equalizer provides an API for writing parallel, scalable visualization applications that are configured at run time by a resource server.
