Streaming in VTK: Spatial

In my last two blogs (1, 2), I covered temporal streaming in VTK. Let’s check out how these ideas can be applied to spatial streaming. By spatial streaming, I mean processing a large dataset in multiple pipeline executions, where each execution processes a spatial subset of the data. There are three ways of doing spatial streaming in VTK:

  1. Extent-based,
  2. Piece-based,
  3. Block-based.

In this blog, we’ll cover 1 and 2. I’ll talk about 3 later. Let’s dive into an example right away.

The code is almost identical to the temporal streaming example, so I will not cover the pipeline details here. The key pieces are the following.
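The full pipeline follows the same VTKPythonAlgorithmBase pattern as the temporal streaming examples, so here is only a minimal sketch of the source setup; the wavelet source (vtkRTAnalyticSource) is my assumption here, and any structured source with a settable whole extent would work the same way.

```python
import vtk

# Synthetic image source; vtkRTAnalyticSource (the wavelet) is an assumption.
source = vtk.vtkRTAnalyticSource()
# Advertise a whole extent of (-100, 100, -100, 100, -100, 100), i.e. a
# 201 x 201 x 201 image, without actually generating any of it yet.
source.SetWholeExtent(-100, 100, -100, 100, -100, 100)
```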

This is where we configure a synthetic data source to (potentially) produce an image with a whole extent of (-100, 100, -100, 100, -100, 100). Now, let’s say that this volume is too big to fit into memory and we want to process it in smaller chunks. To achieve this, we can use the vtkStreamingDemandDrivenPipeline.UPDATE_EXTENT() key. This key allows a consumer to ask a producer for a subset of what it can produce. So in our RequestUpdateExtent(), we do the following:
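Below is a minimal sketch of such a RequestUpdateExtent(), assuming (as in the temporal streaming examples) that the consumer is a VTKPythonAlgorithmBase subclass and that it keeps hypothetical self.CurrentPiece and self.NumberOfPieces attributes to track where it is in the stream.

```python
def RequestUpdateExtent(self, request, inInfo, outInfo):
    info = inInfo[0].GetInformationObject(0)
    # Split the producer's whole extent and request only the current chunk.
    et = vtk.vtkExtentTranslator()
    et.SetWholeExtent(info.Get(
        vtk.vtkStreamingDemandDrivenPipeline.WHOLE_EXTENT()))
    et.SetNumberOfPieces(self.NumberOfPieces)  # hypothetical attribute
    et.SetPiece(self.CurrentPiece)             # hypothetical attribute
    et.PieceToExtent()
    info.Set(vtk.vtkStreamingDemandDrivenPipeline.UPDATE_EXTENT(),
             et.GetExtent(), 6)
    return 1
```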

The key part here is the use of the extent translator. vtkExtentTranslator is a simple class that breaks an extent into smaller chunks given two parameters: NumberOfPieces and Piece. If we print out the extent in RequestUpdateExtent(), we see a different sub-extent of the whole (-100, 100, -100, 100, -100, 100) extent during each execution.
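As an illustration, the translator can be exercised on its own; this standalone sketch assumes four pieces (the exact sub-extents it prints depend on the translator's split mode).

```python
import vtk

et = vtk.vtkExtentTranslator()
et.SetWholeExtent(-100, 100, -100, 100, -100, 100)
et.SetNumberOfPieces(4)  # assumed piece count, for illustration only
for piece in range(4):
    et.SetPiece(piece)
    et.PieceToExtent()
    # Prints one structured sub-extent of the whole extent per piece.
    print(piece, et.GetExtent())
```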

Finally, we use the following to create the output:
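Here is a sketch of what that RequestData() might look like, again assuming a VTKPythonAlgorithmBase subclass with a vtkMultiBlockDataSet output and the hypothetical self.CurrentPiece and self.NumberOfPieces attributes; the contour value is arbitrary.

```python
def RequestData(self, request, inInfo, outInfo):
    info = inInfo[0].GetInformationObject(0)
    # The input contains only the sub-extent requested in RequestUpdateExtent().
    inp = vtk.vtkImageData.GetData(info)

    # Contour the current chunk; the value 180 is an arbitrary assumption.
    contour = vtk.vtkContourFilter()
    contour.SetInputData(inp)
    contour.SetValue(0, 180)
    contour.Update()

    # Add the resulting surface as one block of the multi-block output.
    output = vtk.vtkMultiBlockDataSet.GetData(outInfo)
    surface = vtk.vtkPolyData()
    surface.ShallowCopy(contour.GetOutput())
    output.SetBlock(self.CurrentPiece, surface)

    if self.CurrentPiece < self.NumberOfPieces - 1:
        # Not done yet: ask the executive to execute the pipeline again.
        request.Set(vtk.vtkStreamingDemandDrivenPipeline.CONTINUE_EXECUTING(), 1)
        self.CurrentPiece += 1
    else:
        # Last piece: stop looping and reset for the next update.
        request.Remove(vtk.vtkStreamingDemandDrivenPipeline.CONTINUE_EXECUTING())
        self.CurrentPiece = 0
    return 1
```

Setting CONTINUE_EXECUTING() asks the executive to run the pipeline again, which triggers another RequestUpdateExtent() / RequestData() pass for the next piece, just as in the temporal streaming case.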

This code contours the current block and adds the result to the output, which is a multi-block dataset.

The output looks like this:

[Figure: multi extent, the contoured output of the streamed extents]

Next, let’s see how we can do piece-based streaming. This is actually almost identical to extent-based streaming, so I will only show what changes.

The biggest difference is in RequestUpdateExtent(), where we do the following:
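Roughly, and with the same hypothetical self.CurrentPiece and self.NumberOfPieces attributes as in the extent-based sketch:

```python
def RequestUpdateExtent(self, request, inInfo, outInfo):
    info = inInfo[0].GetInformationObject(0)
    # Ask the producer for one piece out of N and let the pipeline decide
    # which spatial subset that piece corresponds to.
    info.Set(vtk.vtkStreamingDemandDrivenPipeline.UPDATE_NUMBER_OF_PIECES(),
             self.NumberOfPieces)
    info.Set(vtk.vtkStreamingDemandDrivenPipeline.UPDATE_PIECE_NUMBER(),
             self.CurrentPiece)
    return 1
```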

instead of
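the extent-translator request from the first example, which (as sketched earlier) was along these lines:

```python
et = vtk.vtkExtentTranslator()
et.SetWholeExtent(info.Get(vtk.vtkStreamingDemandDrivenPipeline.WHOLE_EXTENT()))
et.SetNumberOfPieces(self.NumberOfPieces)
et.SetPiece(self.CurrentPiece)
et.PieceToExtent()
info.Set(vtk.vtkStreamingDemandDrivenPipeline.UPDATE_EXTENT(),
         et.GetExtent(), 6)
```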

Since the data source, which is a simple image source, did not change, the behavior is actually identical in both cases. In fact, the executive uses the extent translator under the covers to ask the source for the appropriate subset during each execution. This is not the case for all data sources, however. For unstructured data sources, the only choice is to use piece-based streaming.

That’s it for now, folks. In my next blog, I will talk about how we can reduce the memory usage of this pipeline further by streaming onto an image rather than a set of polydata objects. Note that the polydata objects produced by the contour filter can get fairly large, sometimes larger than the original image, so it may not always be possible to keep all of the polydata in memory for rendering. We’ll see how we can avoid this in some cases. Hint: check out vtkWindow::SetErase() and vtkWindow::SetDoubleBuffer().

Questions or comments are always welcome!