Block-Based Streaming

This is it. My last blog on streaming, at least for a while. Looking back, I have been writing about the VTK pipeline and how it can be leveraged to do cool things since September. This is a fitting topic to wrap the series up with.

In the last blog, I demonstrated how to write a dataset in blocks using h5py and how to read it back with a simple reader. When writing the blocks, we also saved meta-data recording the spatial bounds of each block. We will leverage that meta-data here. Here is the reader.
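As a quick reminder of what that meta-data looks like, here is a minimal sketch of how per-block bounds could be computed before being written as an HDF5 attribute. The helper name is hypothetical and the h5py calls appear only in comments; the bounds ordering matches VTK's (xmin, xmax, ymin, ymax, zmin, zmax).

```python
def compute_bounds(points):
    """Return (xmin, xmax, ymin, ymax, zmin, zmax) for a sequence of
    (x, y, z) tuples -- the same ordering VTK uses for bounds."""
    xs, ys, zs = zip(*points)
    return (min(xs), max(xs), min(ys), max(ys), min(zs), max(zs))

# In the writer, each block's bounds would be stored roughly like this
# (h5py calls shown for context only):
#   grp = f.create_group("piece%d" % piece)
#   grp.attrs["bounds"] = compute_bounds(block_points)
```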

The part that reads data is almost identical to the reader in the last blog, so I won’t cover it here. The interesting parts are the ones that deal with meta-data and requests. Here is the meta-data part.

The meat of this function is the loop over the HDF5 groups. First, we create a multi-block dataset that will be used to hold the meta-data to be sent downstream. Note that this object is used for transmitting meta-data only; it carries no actual data. For each group, we extract the block number from the group’s name and read the bounds for that block. The bounds are assigned to the corresponding block’s meta-data object. Finally, we set this dataset as the COMPOSITE_DATA_META_DATA() on the output information. This information entry is propagated downstream automatically by the pipeline during the RequestInformation() pass.
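The core of this pass can be sketched without the VTK plumbing. The helpers below are hypothetical stand-ins: the group-name parsing and the index-to-bounds mapping are the pure-Python part, while the comments show the VTK keys (vtkCompositeDataPipeline.COMPOSITE_DATA_META_DATA(), vtkDataObject.BOUNDING_BOX()) they would feed.

```python
def parse_block_index(group_name):
    """Extract the block number from an HDF5 group name like 'piece12'."""
    return int(group_name[len("piece"):])

def collect_block_bounds(groups):
    """Map block index -> bounds for a {group_name: bounds} mapping,
    standing in for a loop over the h5py file's groups."""
    return {parse_block_index(name): tuple(bounds)
            for name, bounds in groups.items()}

# In RequestInformation(), each entry would go into a vtkMultiBlockDataSet
# that serves purely as a meta-data container, roughly:
#   metadata = vtk.vtkMultiBlockDataSet()
#   metadata.SetBlock(idx, None)
#   metadata.GetMetaData(idx).Set(vtk.vtkDataObject.BOUNDING_BOX(), bounds, 6)
#   outInfo.GetInformationObject(0).Set(
#       vtk.vtkCompositeDataPipeline.COMPOSITE_DATA_META_DATA(), metadata)
```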

Once this meta-data is propagated downstream, consumers can ask for a subset of blocks to be read during the RequestUpdateExtent() pass. This pass will likely set an UPDATE_COMPOSITE_INDICES() key, which the reader has to respond to. We will demonstrate how this key is set later. Let’s first look at how the reader uses it in RequestData().

The main difference from a standard reader is that we look at which blocks (referred to as pieces in this example, not to be confused with pipeline pieces) are requested to decide what to read. This code handles three different cases:

  • No UPDATE_COMPOSITE_INDICES() is set. In this case, we read all of the blocks: pieces == f.
  • UPDATE_COMPOSITE_INDICES() is set but is empty. We read nothing: pieces == [].
  • UPDATE_COMPOSITE_INDICES() is set to a non-empty list. We read what is requested: pieces == ["piece%d" % num for num in uci].
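These three cases fit in a few lines. Below is a sketch with hypothetical names; in the reader, has_key would come from checking Has(vtkCompositeDataPipeline.UPDATE_COMPOSITE_INDICES()) on the output information and uci from the corresponding Get().

```python
def pieces_to_read(all_piece_names, uci, has_key):
    """Pick which HDF5 groups to read for one update.

    all_piece_names: every group in the file (iterating an h5py File
    yields exactly this). uci: the UPDATE_COMPOSITE_INDICES() value,
    meaningful only when has_key is True."""
    if not has_key:
        # No key set: a non-streaming consumer wants everything.
        return list(all_piece_names)
    # Key present: read exactly what was asked for, possibly nothing.
    return ["piece%d" % num for num in uci]
```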

This is it for the reader. Now let’s look at a simple streaming writer. This is very similar to the writer in the previous blog but uses blocks instead of pieces to stream.

If you compare this writer to the previous one, you will see that there are only a few minor differences. The biggest is in RequestUpdateExtent():

Note how the writer sets UPDATE_COMPOSITE_INDICES() rather than UPDATE_PIECE_NUMBER() to achieve streaming.
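The streaming loop itself can be sketched in plain Python. The function below is a hypothetical stand-in for the writer's bookkeeping; the comments name the actual pipeline keys involved.

```python
def next_request(current_block, num_blocks):
    """One streaming pass: request a single block and report whether
    another pass is needed."""
    indices = [current_block]
    keep_going = current_block < num_blocks - 1
    return indices, keep_going

# In RequestUpdateExtent(), indices would be pushed upstream with:
#   inInfo.GetInformationObject(0).Set(
#       vtk.vtkCompositeDataPipeline.UPDATE_COMPOSITE_INDICES(),
#       indices, len(indices))
# and, as in the previous post's writer, the pipeline keeps looping by
# setting vtk.vtkStreamingDemandDrivenPipeline.CONTINUE_EXECUTING() on
# the request while keep_going is True.
```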

Let’s make things a bit more interesting. Say we want to insert a planar cutter (slice) between the writer and the reader. This filter will need only a subset of the input blocks – the ones that the slice plane intersects. Since the spatial bounds of each block are available at the meta-data stage (RequestInformation), this filter can make a smart decision about which blocks the reader needs to load. Here is such a filter.

This filter is more complicated than what we have seen so far. Let’s break it into smaller pieces to study.

First, the meta-data pass. Since this filter will ask for only a subset of the input blocks, it needs to replace the COMPOSITE_DATA_META_DATA() object with one that leaves out the blocks that will not be needed. This way, filters downstream will not ask for unnecessary blocks. The following code identifies the blocks that may be loaded.

It also makes a map from the output block id to the input block id. This will be needed in RequestUpdateExtent() as we will see later. Then we create a new multi-block meta-data object as follows.
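Both steps, picking the blocks that the plane touches and recording the output-to-input block map, can be sketched as follows. The intersection test uses signed distances of the eight box corners: the plane cuts the box exactly when the corner distances do not all lie on one side. Function names are hypothetical; in the filter, the per-block bounds would come out of the input's COMPOSITE_DATA_META_DATA().

```python
from itertools import product

def plane_intersects_bounds(origin, normal, bounds):
    """True if the plane (origin, normal) touches the axis-aligned box
    given as (xmin, xmax, ymin, ymax, zmin, zmax)."""
    xmin, xmax, ymin, ymax, zmin, zmax = bounds
    dists = [normal[0] * (x - origin[0]) +
             normal[1] * (y - origin[1]) +
             normal[2] * (z - origin[2])
             for x, y, z in product((xmin, xmax), (ymin, ymax), (zmin, zmax))]
    # Corners on both sides (or exactly on the plane) mean intersection.
    return min(dists) <= 0.0 <= max(dists)

def select_blocks(block_bounds, origin, normal):
    """Return the input block ids to keep and the output->input map."""
    keep = [idx for idx, b in sorted(block_bounds.items())
            if plane_intersects_bounds(origin, normal, b)]
    block_map = {out_id: in_id for out_id, in_id in enumerate(keep)}
    return keep, block_map
```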

Fairly straightforward. Using the map, we copy meta-data from input to output. Now the output meta-data contains only the blocks that intersect the plane. There is one minor issue, however. Downstream filters will use an index space to request blocks that is different from the index space of the input. The good news is that we have a map to convert from one to the other. We use it in RequestUpdateExtent() as follows.

Pretty easy. For each requested block (in uci), ask for the corresponding input block by mapping it through the BlockMap data member. By the way, uci is computed in a similar way to the reader so it shouldn’t need explanation.
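The translation is just a lookup per requested index; a one-line sketch (hypothetical helper name):

```python
def map_requested_blocks(uci, block_map):
    """Translate downstream (output) block indices into the input
    block indices the upstream reader understands."""
    return [block_map[out_id] for out_id in uci]
```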

Here you go. We can exercise these algorithms in the following ways.

This will produce a file identical to the input (test.h5).

For a test.h5 created with 33 blocks, this will print a dataset with 7 blocks, showing that only 7 blocks were loaded. These are the blocks whose bounds intersect the plane.

The following pipeline, with minor modifications to the writer, would also work.

For this to work, you have to change StreamBlocks() to write a polydata instead of an unstructured grid (left to you as an exercise). Once that change is made, this pipeline will write 7 blocks of slices for a 33-block input dataset.

This is it folks. If you read all my blogs on the VTK pipeline, you pretty much know everything necessary to put together very sophisticated pipelines to solve all kinds of problems. Please be sure to let us know on the VTK mailing list about any interesting pipelines you put together.

In the future, I will switch gears and start talking about VTK’s data model as well as various ways of developing parallel algorithms.

One Response to Block-Based Streaming

  1. Jean Favre says:

    Great article. Thanks!

    question: I have developed an AMR reader for ParaView. Works well so far. When enabling streaming (--enable-streaming), I am confused by the fact that my reader must already load all its pieces (after hitting Apply), before I have a chance to create a SliceAMRData filter. So, am I actually throwing away all the pieces at all levels read during the first execution, in order to satisfy the UPDATE_COMPOSITE_INDICES list? Seems to work, I can color by AMRLevel, but cannot unfortunately color by my scalars.

    Would you also comment on the mechanics of the “Enable prefetching” button? how different is the meta-data pushed upstream?


Questions or comments are always welcome!