Category Archives: IEEE TVCG

VAST Best Paper Award

Congratulations to Gary Tam and co-authors for their Best Paper Award at VAST 2016: An Analysis of Machine- and Human-Analytics in Classification, Gary K. L. Tam, Vivek Kothari, Min Chen, http://dx.doi.org/10.1109/TVCG.2016.2598829.

Abstract
In this work, we present a study that traces the technical and cognitive processes in two visual analytics applications to a common theoretic model of soft knowledge that may be added into a visual analytics process for constructing a decision-tree model. Both case studies involved the development of classification models based on the “bag of features” approach. Both compared a visual analytics approach using parallel coordinates with a machine-learning approach using information theory. Both found that the visual analytics approach had some advantages over the machine-learning approach, especially when sparse datasets were used as the ground truth. We examine various possible factors that may have contributed to such advantages, and collect empirical evidence supporting the observation and reasoning of these factors. We propose an information-theoretic model as a common theoretic basis to explain the phenomena exhibited in these two case studies. Together we provide interconnected empirical and theoretical evidence to support the usefulness of visual analytics.
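The machine-learning baseline in the abstract ranks features by information theory when growing a decision tree. As a minimal sketch of that idea (not the paper's actual implementation), the standard information-gain criterion for a threshold split can be computed as follows; all function names here are illustrative:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy H(Y) of a list of class labels, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(labels, feature_values, threshold):
    """Gain of splitting on feature <= threshold: H(Y) - H(Y | split)."""
    left = [y for y, x in zip(labels, feature_values) if x <= threshold]
    right = [y for y, x in zip(labels, feature_values) if x > threshold]
    n = len(labels)
    conditional = (len(left) / n) * entropy(left) + (len(right) / n) * entropy(right)
    return entropy(labels) - conditional
```

A decision-tree learner would pick, at each node, the feature and threshold maximizing this gain; the study compares that automated choice against a human using parallel coordinates.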

Transformation of an Uncertain Video Search Pipeline to a Sketch-based Visual Analytics Loop

Video search interface

Traditional sketch-based image or video search systems rely on machine learning concepts as their core technology. However, in many applications, machine learning alone is impractical since videos may not be semantically annotated sufficiently, there may be a lack of suitable training data, and the search requirements of the user may frequently change for different tasks. In this work, we develop a visual analytics system that overcomes the shortcomings of the traditional approach. We make use of a sketch-based interface to enable users to specify search requirements in a flexible manner without depending on semantic annotation. We employ active machine learning to train different analytical models for different types of search requirements. We use visualization to facilitate knowledge discovery at the different stages of visual analytics. This includes visualizing the parameter space of the trained model, visualizing the search space to support interactive browsing, visualizing candidate search results to support rapid interaction for active learning while minimizing the need to watch videos, and visualizing aggregated information of the search results. We demonstrate the system for searching spatio-temporal attributes in sports videos to identify key instances of team and player performance.
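The active-learning stage described above (train a model, show the user the candidates it is least sure about, fold their labels back in, retrain) can be sketched as a generic uncertainty-sampling loop. This is an illustrative outline only, not the paper's system; `train`, `predict_proba`, and `ask_user` are hypothetical callbacks standing in for the model fitting, scoring, and user-labelling steps:

```python
def active_learning_loop(pool, train, predict_proba, ask_user, rounds=5, batch=3):
    """Uncertainty-sampling loop: in each round, ask the user to label only
    the few items the current model is least confident about, then retrain.
    All callback interfaces are hypothetical placeholders."""
    labelled = {}
    for _ in range(rounds):
        model = train(labelled)
        unlabelled = [x for x in pool if x not in labelled]
        if not unlabelled:
            break
        # Least-confident sampling: smallest maximum class probability first.
        unlabelled.sort(key=lambda x: max(predict_proba(model, x)))
        for x in unlabelled[:batch]:
            labelled[x] = ask_user(x)  # the user inspects only these candidates
    return train(labelled)
```

The point of the loop is the one the abstract makes: the user's viewing effort is concentrated on a small, informative batch of candidates per round instead of the whole video collection.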

Phil A. Legg, David H. S. Chung, Matt L. Parry, Rhodri Bown, Mark W. Jones, Iwan W. Griffiths, Min Chen.
IEEE Transactions on Visualization and Computer Graphics, 19(12), 2109-2118, 2013.