Spatial Streaming and Compositing

In my last blog, I talked about spatial streaming in VTK. The example I covered demonstrated how a pipeline consisting of a structured data source and a contour filter can be streamed in smaller chunks to create a collection of polydata objects, which can then be rendered. The downside of this approach is that the entirety of the contour geometry needs to be stored in memory for rendering. One way of getting around this limitation is to stream into a view. Here is an example.

Note that this is very similar to the examples in the previous blog. The main difference is that this algorithm is a sink rather than a filter: instead of producing a polydata output representing the contour, it produces an image. For this, we create a rendering pipeline in the constructor and then, during each pipeline execution, render the current contour geometry. Finally, we save the output image after the last render. In skeleton code, this looks like the following.

The trick that makes this example work is in these two lines:

When the Erase mode is disabled, the render window draws each frame on top of the same frame buffer and z-buffer without clearing them. OpenGL decides whether to draw a particular pixel based on the existing value in the z-buffer: if the stored z value is smaller, it keeps the existing pixel; otherwise, it overwrites the pixel. This works perfectly because the object we are rendering is opaque. Handling transparency would require doing the passes in a particular order and blending the pixels; I leave that to you as an exercise. The output looks as follows. Click on the picture to see the streaming in action.

Animation

This example was mainly for demonstration. The same can be achieved in VTK without writing an algorithm. Here is the code.

The only special thing in this example is the mapper.SetNumberOfSubPieces(20) line, which tells the mapper to stream its input in 20 steps. This uses the same logic as our example above: the mapper performs multiple render passes when it is asked to render.

Parallel Compositing

The examples we covered so far are only one step away from parallel compositing, so we might as well cover that too. In general, parallel sort-last compositing involves rendering images with the geometry local to each process, transferring the frame buffers and z-buffers over the network, and then comparing the z values to decide which pixels to keep from which frame buffer. For more details on compositing, I recommend checking out some of the papers out there, for example “An Image Compositing Solution at Scale” by Moreland et al. You should also check out IceT, the open-source compositing library used by many parallel tools, including VTK and ParaView.

Here is my simple two process example demonstrating compositing.
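The full listing is missing from this copy; the following is my reconstruction of what a two-process version might look like, to be run with something like mpiexec -n 2. It assumes VTK built with MPI support; the contour value, message tags, and global bounds of the wavelet source (-10 to 10 in each direction) are my assumptions.

```python
import vtk
from vtk.util.numpy_support import vtk_to_numpy

contr = vtk.vtkMPIController()
contr.Initialize()
rank = contr.GetLocalProcessId()
size = contr.GetNumberOfProcesses()

source = vtk.vtkRTAnalyticSource()
contour = vtk.vtkContourFilter()
contour.SetInputConnection(source.GetOutputPort())
contour.SetValue(0, 180)

mapper = vtk.vtkPolyDataMapper()
mapper.SetInputConnection(contour.GetOutputPort())
mapper.SetPiece(rank)                    # this rank renders piece `rank`
mapper.SetNumberOfPieces(size)           # out of `size` pieces total
mapper.SetNumberOfSubPieces(20 // size)  # streamed in 20/size steps

actor = vtk.vtkActor()
actor.SetMapper(mapper)
ren = vtk.vtkRenderer()
ren.AddActor(actor)
renWin = vtk.vtkRenderWindow()
renWin.SetOffScreenRendering(1)
renWin.AddRenderer(ren)

# Identical camera on every rank, derived from the *global* bounds,
# so the normalized z values are comparable across ranks.
ren.ResetCamera(-10, 10, -10, 10, -10, 10)
ren.ResetCameraClippingRange(-10, 10, -10, 10, -10, 10)
renWin.Render()

# Grab the local frame buffer and z-buffer.
w, h = renWin.GetSize()
rgb = vtk.vtkUnsignedCharArray()
renWin.GetPixelData(0, 0, w - 1, h - 1, 1, rgb)
z = vtk.vtkFloatArray()
renWin.GetZbufferData(0, 0, w - 1, h - 1, z)

if rank == 1:
    contr.Send(rgb, 0, 101)
    contr.Send(z, 0, 102)
else:
    rgb1 = vtk.vtkUnsignedCharArray()
    z1 = vtk.vtkFloatArray()
    contr.Receive(rgb1, 1, 101)
    contr.Receive(z1, 1, 102)
    # Composite: where rank 1's fragment is closer, take its pixel.
    p0, d0 = vtk_to_numpy(rgb), vtk_to_numpy(z)
    p1, d1 = vtk_to_numpy(rgb1), vtk_to_numpy(z1)
    mask = d1 < d0
    p0[mask] = p1[mask]
    # Push the composited pixels back into the window for saving/display.
    renWin.SetPixelData(0, 0, w - 1, h - 1, rgb, 1)

contr.Finalize()
```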

There are a few interesting bits to this example. First, the following sets up the distributed-processing pipeline as well as the streaming:

Note how each process is asked to process the piece with index rank among a group of size pieces. In addition, each process is asked to stream its piece using 20/size sub-pieces, so with 2 ranks we still have 20 pieces total. The only special thing we have to do for compositing to work is this.

If we don’t set a global clipping range, each rank will compute clipping planes from its local data, which will lead to inconsistent z-buffer values (which are normalized to the 0 to 1 range).

We then grab the frame buffer and z-buffer with

Next, we transfer the buffers from rank 1 to rank 0 with
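This fragment is also missing; it would be point-to-point sends and receives on the controller, along these lines (contr, rank, and the grabbed rgb/z arrays come from the surrounding program; the tags 101 and 102 are arbitrary choices of mine):

```python
if rank == 1:
    # Ship the local buffers to rank 0.
    contr.Send(rgb, 0, 101)
    contr.Send(z, 0, 102)
else:
    # Receive rank 1's buffers into fresh arrays.
    rgb1 = vtk.vtkUnsignedCharArray()
    z1 = vtk.vtkFloatArray()
    contr.Receive(rgb1, 1, 101)
    contr.Receive(z1, 1, 102)
```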

Finally, the compositing itself is only two lines:
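Those two lines would be a z-test expressed as NumPy masking, assuming the buffers have been converted to NumPy arrays (e.g. with vtk.util.numpy_support.vtk_to_numpy). The tiny three-pixel stand-in values below are mine, only to make the fragment self-contained:

```python
import numpy as np

# Stand-ins for the buffers: rgb0/z0 from rank 0, rgb1/z1 received from rank 1.
rgb0 = np.array([[255, 0, 0], [0, 255, 0], [0, 0, 255]], dtype=np.uint8)
z0 = np.array([0.5, 0.2, 1.0], dtype=np.float32)
rgb1 = np.array([[10, 10, 10], [20, 20, 20], [30, 30, 30]], dtype=np.uint8)
z1 = np.array([0.4, 0.6, 1.0], dtype=np.float32)

# The two compositing lines: where rank 1's fragment is strictly closer,
# take its pixel; otherwise keep rank 0's.
mask = z1 < z0
rgb0[mask] = rgb1[mask]
```

Only the first pixel is overwritten in this toy input, since only there is rank 1's z value smaller.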

I leave it up to the reader as an exercise to extend this to more than 2 ranks. The paper by Moreland et al. describes common ways of doing this, including using a binary-tree pattern.

Here are the two local images and the final composited image:


In a future blog, I will talk about block-based streaming. Until then, happy coding.

Questions or comments are always welcome!