Leaders in deep learning for computer vision since 2014

Kitware at WACV

As a Silver Level Sponsor of this year’s virtual WACV conference, Kitware continues its active participation in and support of the computer vision community. Check out this video to learn more about our capabilities and the deep learning expertise we have pioneered in computer vision.

Schedule of Events

Tuesday, January 5 through Saturday, January 9

WACV is the premier meeting on applications of computer vision, sponsored by the Institute of Electrical and Electronics Engineers (IEEE) and the Technical Committee on Pattern Analysis and Machine Intelligence (PAMI-TC). The conference promotes collaboration, research and development, and insight into computer vision applications and technology through workshops, tutorials, and main sessions, providing in-depth coverage of cutting-edge research advances in applications of computer vision technology. WACV attendees include representatives from academia, industry, and government.

Join us virtually for these featured events:

Tuesday, January 5, 2021, from 11 am – 3:15 pm ET (Virtual)

Workshop: Third International Workshop on Human Activity Detection in Multi-camera, Continuous, Long-duration Video

When humans interact with machines and other agents, advanced pattern recognition techniques are predominantly used to interpret their complex behavioral patterns. Computer vision is key to the analysis and synthesis of human behavior, yet it stands to gain much from multimodal and multi-source processing to improve accuracy, resource use, robustness, and contextualization.

This workshop is for researchers looking to model human behavior in its multiple facets, with particular attention to multi-source aspects, including multi-sensor, multi-participant, and multimodal settings. The diversity of human behavior, the richness of the multimodal data that arises from its analysis, and the multitude of applications demanding rapid progress in this area make this workshop a timely and relevant platform for discussion and dissemination.

The workshop is organized by Afzal Godil, Jonathan Fiscus, Yooyoung Lee, Anthony Hoogs, and Reuven Meth.



Wednesday, January 6, 2021, from 9:45 – 11 pm ET (Virtual)

Paper Presentation: MEVA: A Large-Scale Multiview, Multimodal Video Dataset for Activity Detection

Authors: Kellie Corona, Katie Osterdahl, Roddy Collins, Anthony Hoogs

We present MEVA, a new and very-large-scale dataset for human activity recognition. Existing surveillance datasets either focus on activity counts by aggregating public video disseminated due to its content, which typically excludes same-scene background video, or they achieve persistence by observing public areas and thus cannot control for activity content. Our dataset is over 9300 hours of untrimmed, continuous video, scripted to include diverse, simultaneous activities, along with spontaneous background activity. We have annotated 144 hours for 37 activity types, marking bounding boxes of actors and props.

Our collection observed approximately 100 actors performing scripted scenarios and spontaneous background activity over a three-week period at an access-controlled venue, collecting in multiple modalities with overlapping and non-overlapping indoor and outdoor viewpoints. The resulting data includes video from 38 RGB and thermal IR cameras and 42 hours of UAV footage, as well as GPS locations for the actors. Of the annotations, 122 hours are sequestered in support of the NIST Activities in Extended Video (ActEV) challenge; the other 22 hours are available on our website, along with 328 hours of ground camera data, 4.6 hours of UAV data, and 9.6 hours of GPS logs.

Additional derived data includes camera models geo-registering the outdoor cameras and a dense 3D point cloud model of the outdoor scene. The data was collected with IRB oversight and approval and released under a CC-BY-4.0 license.
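
For those who want to explore the released portion of the dataset, the sketch below shows one way to tally how often each activity type appears in an annotation file. It is a minimal sketch assuming the annotations follow the KPF (KWIVER Packet Format) YAML layout used by KWIVER-based tools; the field names (act, act2) and the example filename are illustrative assumptions, not code from the paper, so verify them against the released files.

```python
# Minimal sketch: tally activity types in a single MEVA annotation file.
# ASSUMPTIONS: the released annotations use the KPF (KWIVER Packet Format)
# YAML layout, where a file parses to a list of packets and activity packets
# look like {act: {act2: {"person_opens_facility_door": 1.0}, ...}}.
# Check the real field names against the released files before relying on this.
from collections import Counter

import yaml  # pip install pyyaml


def count_activities(kpf_path):
    """Return a Counter mapping activity label -> number of instances."""
    with open(kpf_path) as f:
        packets = yaml.safe_load(f)  # a KPF file parses to a list of dicts

    counts = Counter()
    for packet in packets:
        act = packet.get("act")
        if not act:
            continue  # skip meta and geometry packets
        # "act2" maps each activity label to a confidence (1.0 in truth files)
        for label in act.get("act2", {}):
            counts[label] += 1
    return counts


if __name__ == "__main__":
    # Hypothetical filename; substitute a real annotation file from the release.
    for label, n in count_activities("example.activities.yml").most_common():
        print(f"{n:5d}  {label}")
```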



Looking for expertise in deep learning for computer vision?

Are you interested in joining our Computer Vision Team?

We are committed to furthering research and development in computer vision.

Our Deep Learning Open Source Platforms

VIAME is an open source, do-it-yourself AI system for analyzing imagery and video; it is designed for general use and includes specialized tools for the marine environment.

TeleSculptor is a cross-platform application for aerial photogrammetry.

KWIVER is an open source toolkit that solves challenging image and video analysis problems.

SMQTK is an open source toolkit for exploring large archives of image and video data that enables users to easily and dynamically train custom object classifiers for retrieval.

For more information on how you can leverage these platforms for your project, send us a message at kitware@kitware.com.