2018 Military Sensing Symposia (MSS) Parallel Meeting, March 19-23

Event Details

Kitware will be briefing at this year’s 2018 Military Sensing Symposia (MSS) Parallel Meeting under the MSS Specialty Group on Passive Sensors in Gaithersburg, MD, March 19-23.

This event is the annual parallel meeting of the MSS Specialty Groups on Detectors, Passive Sensors, Materials, and Battlefield Survivability and Discrimination. According to the MSS website, it will include five days of presentations, discussions, and dissemination of information on technologies and capabilities related to each Group, all conducted at the SECRET classification level. Sessions will include, but are not limited to, Image Processing and Real-Time Implementation, Multidimensional and Diversity Sensors, Sensor System Design & Development, Small Unmanned Aircraft Systems (SUAS) Sensors and Systems, Emerging Technologies and System Concepts, and a Special Session on Passive MMW Imaging Through Degraded Visual Environment. Check the MSS website for more details on the different Groups, and register by the deadline of March 3, 2018. Experts from the Department of Defense (DoD), industry, commercial organizations, and academia will be in attendance to collaborate and learn more about technology, capabilities, and needs.

Presentation Details:

Matt Leotta, Ph.D., a Team Principal and Technical Leader within the Computer Vision Group at Kitware, will speak on TeleSculptor: Dense 3D Models from Uncalibrated Full Motion Video (FMV). Accurate and dense 3D models are an important resource in mission planning for special operations. Full Motion Video (FMV) is often readily available and up-to-date when active sensor data, such as lidar, are not. Current techniques for producing 3D models from FMV are manual and labor-intensive, and cannot respond quickly enough to support real-time operations and decision making. Automatically building accurate 3D models from FMV is possible, and Kitware will present recent work supported by the U.S. Air Force (USAF) to automate the reconstruction of accurate and dense 3D models from operational FMV. The talk will address the challenges of applying structure from motion (SfM) and multi-view stereo (MVS) to FMV, such as missing metadata, cloud cover, and metadata "burned" into the video. Kitware's approaches to overcoming these challenges will be broken down, and preliminary results will show that semantic segmentation of video key frames can improve 3D reconstruction. Experimental results showing detailed 3D models produced from FMV data collected by Air Force Special Operations Command (AFSOC) will demonstrate that fine-scale details, such as utility poles and tree trunks, are visible. Fine-scale details greatly improve mission planning, scene analysis, and accuracy. The authors of this paper are Matt Leotta, Eric Smith, Ph.D., and David Russell, all with the Computer Vision Group at Kitware, Inc.
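To give a flavor of the geometry underlying SfM/MVS reconstruction, the sketch below shows linear (DLT) triangulation: recovering a 3D point from its projections in two cameras. The cameras and point here are synthetic and purely illustrative; this is not Kitware's TeleSculptor implementation, which must additionally handle the metadata and imaging challenges described above.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Recover a 3D point from its 2D projections x1, x2 in two
    cameras with 3x4 projection matrices P1, P2 (DLT method)."""
    # Each image observation contributes two linear constraints on the
    # homogeneous 3D point X: x * (P[2] @ X) = P[0] @ X, and similarly for y.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # Solve A @ X = 0 via SVD; the solution is the right singular
    # vector associated with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize

# Two synthetic pinhole cameras: identity intrinsics, with the second
# camera translated one unit along the x axis (a stereo baseline).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.5, -0.2, 4.0])   # a 3D point in front of both cameras

# Project the point into each camera and normalize to pixel coordinates.
x1 = P1 @ np.append(X_true, 1.0)
x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0)
x2 = x2[:2] / x2[2]

X_est = triangulate(P1, P2, x1, x2)
print(np.allclose(X_est, X_true))  # noise-free observations recover the point
```

In a real pipeline, observations are noisy and come from many frames, so triangulation is followed by bundle adjustment, and dense MVS then estimates depth for every pixel rather than for sparse feature matches.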

Kitware’s Computer Vision Group recognizes the value of advancing computer vision and deep learning to push capabilities beyond their current limits in support of the DoD and Intelligence Communities. We have worked with various agencies, such as the Defense Advanced Research Projects Agency (DARPA), the Air Force Research Laboratory (AFRL), the Office of Naval Research (ONR), the Intelligence Advanced Research Projects Activity (IARPA), and the U.S. Air Force. Kitware has developed and deployed operational Wide Area Motion Imagery (WAMI) tracking systems for Intelligence, Surveillance, and Reconnaissance (ISR) in theatre, providing analysts with exploitation capabilities that fuse sensors, platforms, and people. Our work with DARPA on Squad-X has led to extensive research, development, and deployment of robust methods to more accurately identify and track objects and people, delivered straight to the soldier on the ground. In addition, Kitware is continually improving its Kitware Image and Video Exploitation and Retrieval (KWIVER) toolkit, an open source framework for video and image analytics built from Kitware’s years of experience developing analytic systems for customers in multiple domains. KWIVER has grown to include our Motion-imagery Aerial Photogrammetry Toolkit (MAP-TK), an open source toolkit for making measurements from aerial video. Kitware also provides an end-user application called TeleSculptor, which uses KWIVER to generate 3D models from FMV. Please visit our computer vision and KWIVER webpages for more information on our key focus areas and experience.

Contact:

Please reach out to computervision@kitware.com to schedule meetings throughout this event. Matt Leotta will be on hand for in-depth conversations. We look forward to engaging with this community and sharing information on Kitware’s ongoing research and capability development in computer vision and deep learning, as well as our cutting-edge open source vision software, KWIVER.

Time

March 19 (Monday) - 23 (Friday), EST

Location

Gaithersburg, MD

Questions or comments are always welcome!
