We use ParaView for many of our point cloud projects: object detection, mobile LiDAR, advanced visualization, and more. ParaView provides a platform for interactive visualization and processing pipelines with Python scriptability. Last year we wrote a blog post describing how to load point cloud data into ParaView from the first-generation Project Tango development kit. In this article we describe an updated method that works with the current development kit from Google. We are just getting started with the device and SDK and wanted to share our initial experiences and results.
Using the new device, we have begun mapping and analyzing our 3D environment. The video below shows the recorded information from a walk around our office. This video uses the eye-dome lighting feature of ParaView to visualize the point cloud over the whole sequence. The individual point cloud captured at each acquisition is highlighted in red, and the path of the tablet is shown in yellow.
Installing the recorder app
We used the Point Cloud Java example from Project Tango Java API Example Projects as the basis for our recording tool. We modified the application to record device pose and point cloud data from the Project Tango API. This data is saved to VTK files which can be directly loaded into ParaView. The source code for this application is available here under the Apache 2.0 license. Improvements or fixes to this app are welcome as pull requests. A binary download of the .apk is available here (updated). The current version is built against the Fermat release of the Project Tango SDK.
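The files the app writes are plain legacy ASCII VTK polydata, which is why ParaView can open them directly. As an illustration of that format (this is our own simplified Python sketch, not the app's actual Java writer; the title-line convention for the timestamp is our choice), a single point cloud snapshot could be written like this:

```python
def write_vtk_point_cloud(path, points, timestamp):
    """Write XYZ points to a legacy ASCII VTK polydata file.

    `points` is a list of (x, y, z) tuples; `timestamp` is stored in the
    title line so the acquisition time survives a round trip.
    """
    with open(path, "w") as f:
        f.write("# vtk DataFile Version 3.0\n")
        f.write("point cloud t=%f\n" % timestamp)
        f.write("ASCII\nDATASET POLYDATA\n")
        f.write("POINTS %d float\n" % len(points))
        for x, y, z in points:
            f.write("%f %f %f\n" % (x, y, z))
        # One VERTEX cell per point so ParaView renders the points
        # directly, without needing a glyph filter.
        f.write("VERTICES %d %d\n" % (len(points), 2 * len(points)))
        for i in range(len(points)):
            f.write("1 %d\n" % i)
```

Any file in this shape loads in ParaView via File > Open, which makes it easy to inspect individual snapshots.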
Using the recorder app
To start recording, open the ParaView Tango Recorder on the device. After starting the app, hold the device still for several seconds while it initializes. You can start using the app when you see a point cloud appear. The app listens to callbacks from the Project Tango SDK for updates of the device pose and for new point cloud data. Recorded data is written to the device's SD card storage.
The app can record two different types of data:
- Point Cloud – the 3D coordinates of each point in the point cloud, along with a timestamp. Currently we record a subset of the available point cloud data (every third frame) to reduce overall storage requirements.
- Device Pose – the 3D position and orientation of the device, along with a timestamp, recorded separately.
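To make the layout of the pose data concrete, here is a hedged sketch of how position, orientation, and timestamp samples could be stored together in one legacy VTK polydata file. The array names (`timestamp`, `orientation`) and the polyline cell connecting the samples are our own illustrative choices and may differ from what the app actually writes:

```python
def write_vtk_poses(path, poses):
    """Write pose samples to a legacy ASCII VTK polydata file.

    `poses` is a list of (timestamp, (x, y, z), (qx, qy, qz, qw)) tuples.
    """
    n = len(poses)
    with open(path, "w") as f:
        f.write("# vtk DataFile Version 3.0\ndevice poses\nASCII\nDATASET POLYDATA\n")
        f.write("POINTS %d float\n" % n)
        for _, p, _ in poses:
            f.write("%f %f %f\n" % p)
        # Connect successive samples into a single polyline that traces
        # the path of the device through the scene.
        f.write("LINES 1 %d\n" % (n + 1))
        f.write("%d %s\n" % (n, " ".join(str(i) for i in range(n))))
        # Per-point attributes: a timestamp scalar and the orientation
        # quaternion as a 4-component field array.
        f.write("POINT_DATA %d\n" % n)
        f.write("SCALARS timestamp float 1\nLOOKUP_TABLE default\n")
        for t, _, _ in poses:
            f.write("%f\n" % t)
        f.write("FIELD FieldData 1\norientation 4 %d float\n" % n)
        for _, _, q in poses:
            f.write("%f %f %f %f\n" % q)
```

Storing orientation and time as point data keeps every sample self-describing, so the path can later be colored or glyphed by either attribute in ParaView.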
To start recording, toggle on the Record switch; this begins recording the pose information only. Point cloud data can be acquired either by tapping the Snapshot button to record a single point cloud, or by enabling Auto Mode, which automatically records every third point cloud that becomes available. The First, Third, and Top buttons control the camera viewpoint of the live point cloud stream, showing first-person, third-person, and top-down views. If you have trouble viewing the point cloud data, try restarting the app and reviewing the troubleshooting procedures.
When recording is stopped by switching off the Record switch, a dialog will appear offering to share the scan data, compressed as a .zip file, with another application such as Google Drive or Email.
There will be a delay before this dialog appears, so be patient; longer scans will require more time to launch the Send Data To… dialog.
The data is also recorded on the SD card, in the folder /sdcard/Tango/MyPointCloudData. There is no feature within the current app to clear that data from disk. If you want to free disk space you can use a file manager app or adb to delete the files manually.
Loading data in ParaView
The ParaView Tango Recorder app stores pose and point cloud information in the VTK file format, packaged in a .zip file. Sample data is available here. After extracting the .zip file, the contents may be opened in ParaView. For this tutorial we used ParaView 4.3.1. After opening ParaView, use File > Open… to load the pose and point cloud data. The point cloud files should appear as a single collapsed entry. If you select and load this entry without expanding the filenames, the data will load as a point cloud animation sequence. After loading, we toggle off the visibility of the scan data to show only the file pc_14143861_poses.vtk, which shows the XYZ position of the device during recording; in this case it traces a path through our office. The orientation and time information are stored as point data within the object.
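ParaView reads these files directly, but for quick sanity checks outside ParaView the legacy ASCII format is simple enough to parse by hand. The following sketch (pure standard-library Python; the helper names are our own) extracts the recorded device positions from a poses file such as pc_14143861_poses.vtk and computes the length of the walked path:

```python
import math

def read_vtk_points(path):
    """Extract the POINTS block from a legacy ASCII VTK file."""
    lines = open(path).read().splitlines()
    for i, line in enumerate(lines):
        if line.startswith("POINTS"):
            n = int(line.split()[1])
            vals = []
            j = i + 1
            # Coordinates may be wrapped across lines; keep reading
            # until we have 3 floats per point.
            while len(vals) < 3 * n:
                vals.extend(float(v) for v in lines[j].split())
                j += 1
            return [tuple(vals[k:k + 3]) for k in range(0, 3 * n, 3)]
    raise ValueError("no POINTS section found in %s" % path)

def path_length(points):
    """Total length of the polyline through the recorded positions."""
    return sum(math.dist(a, b) for a, b in zip(points, points[1:]))
```

For example, `path_length(read_vtk_points("pc_14143861_poses.vtk"))` gives a rough estimate, in meters, of how far the tablet traveled during the walk.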
Next, we can view and animate the point cloud data. The Animation toolbar at the top enables sequential playback of the point cloud sequence. In the figure below, we see a 3D scan of a person wearing boxing gloves from the perspective of the depth camera.
In a follow-up post we will show how to align the depth information with the device pose, visualize the orientation of the device as glyphs, overlay multiple depth scans into a single scene, and other techniques needed to create visualizations like the video at the top of this post.