Tag Archives: TVCG

Transformation of an Uncertain Video Search Pipeline to a Sketch-based Visual Analytics Loop

Video search interface

Traditional sketch-based image or video search systems rely on machine learning as their core technology. However, in many applications, machine learning alone is impractical since videos may not be sufficiently annotated semantically, suitable training data may be lacking, and the user's search requirements may change frequently from task to task. In this work, we develop a visual analytics system that overcomes these shortcomings. We use a sketch-based interface to let users specify search requirements flexibly without depending on semantic annotation. We employ active machine learning to train different analytical models for different types of search requirements. We use visualization to facilitate knowledge discovery at the different stages of visual analytics, including visualizing the parameter space of the trained model, visualizing the search space to support interactive browsing, visualizing candidate search results to support rapid interaction for active learning while minimizing the need to watch videos, and visualizing aggregated information about the search results. We demonstrate the system by searching for spatio-temporal attributes in sports videos to identify key instances of team and player performance.
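For readers curious how such a sketch-query-and-refine cycle works at a basic level, here is a minimal, illustrative active-learning loop in Python. It is a sketch under assumed choices (synthetic feature vectors, a logistic-regression model, uncertainty sampling), not the pipeline used in the paper.

```python
# Hedged sketch of a pool-based active-learning loop in the spirit of the
# sketch-then-refine cycle described above. Features, model, and "oracle"
# are illustrative assumptions, not the authors' pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-in for per-clip spatio-temporal feature vectors.
pool = rng.normal(size=(500, 8))
true_labels = (pool[:, 0] + pool[:, 1] > 0).astype(int)  # hidden "oracle" (the user)

# Seed the loop with a few labelled clips from each class.
pos = rng.choice(np.where(true_labels == 1)[0], 5, replace=False)
neg = rng.choice(np.where(true_labels == 0)[0], 5, replace=False)
labelled = list(pos) + list(neg)

model = LogisticRegression()
for round_no in range(5):
    model.fit(pool[labelled], true_labels[labelled])
    proba = model.predict_proba(pool)[:, 1]
    # Uncertainty sampling: show the user the clips the model is least sure
    # about, so labelling effort (i.e. watching videos) goes where it helps most.
    uncertain = np.argsort(np.abs(proba - 0.5))
    new = [i for i in uncertain if i not in labelled][:10]
    labelled.extend(new)
    print(f"round {round_no}: {len(labelled)} labelled clips")
```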

Phil A. Legg, David H. S. Chung, Matt L. Parry, Rhodri Bown, Mark W. Jones, Iwan W. Griffiths, Min Chen.
IEEE Transactions on Visualization and Computer Graphics, 19(12), 2109-2118, 2013.

An Empirical Study on Using Visual Embellishments in Visualization

In written and spoken communications, figures of speech (e.g., metaphors and synecdoche) are often used as an aid to help convey abstract or less tangible concepts. However, the benefits of using rhetorical illustrations or embellishments in visualization have so far been inconclusive. In this work, we report an empirical study evaluating the hypotheses that visual embellishments may aid memorization, visual search and concept comprehension. One major departure from related experiments in the literature is that we use a dual-task methodology in our experiment. This design abstracts typical situations where viewers do not have their full attention focused on the visualization (e.g., in meetings and lectures). The secondary task introduces “divided attention” and makes the effects of visual embellishments more observable; it also serves as additional masking in memory-based trials. The results of this study show that visual embellishments can help participants better remember the information depicted in a visualization. On the other hand, visual embellishments can have a negative impact on the speed of visual search. The results show a complex pattern as to how visual embellishments help participants grasp key concepts from a visualization.

Rita Borgo, Alfie Abdul-Rahman, Farhan Mohamed, Philip W. Grant, Irene Reppa, Luciano Floridi, and Min Chen.
IEEE Transactions on Visualization and Computer Graphics, 18(12), December 2012.

Similarity Measures for Enhancing Interactive Streamline Seeding

Focus and context rakes

Streamline seeding rakes are widely used in vector field visualization. We present new approaches for calculating similarity between integral curves (streamlines and pathlines). While others have used similarity distance measures, the computational expense involved with existing techniques is relatively high due to the vast number of Euclidean distance tests, restricting interactivity and their use for streamline seeding rakes. We introduce the novel idea of computing streamline signatures based on a set of curve-based attributes. A signature produces a compact representation for describing a streamline. Similarity comparisons are performed by using a popular statistical measure on the derived signatures. We demonstrate that this novel scheme, including a hierarchical variant, produces good clustering results and is computed over two orders of magnitude faster than previous methods. Similarity-based clustering enables filtering of the streamlines to provide a non-uniform seeding distribution along the seeding object. We show that this method preserves the overall flow behavior while using only a small subset of the original streamline set. We apply focus + context rendering using the clusters, which allows for faster and easier analysis in cases of high visual complexity and occlusion. The method provides a high level of interactivity and allows the user to easily fine-tune the clustering results at run-time while avoiding any time-consuming re-computation. Our method maintains interactive rates even when hundreds of streamlines are used.
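As an informal illustration of the signature idea (not the attribute set or statistical measure used in the paper), the sketch below reduces each streamline to a few curve-based attributes and compares signatures with a chi-squared-style distance instead of pointwise Euclidean tests.

```python
# Hedged sketch: reduce each streamline (a polyline of 3D points) to a small
# attribute vector, then compare the compact signatures. The particular
# attributes and distance below are illustrative assumptions.
import numpy as np

def signature(points):
    """Compact descriptor for one streamline given as an (n, 3) array of points."""
    seg_len = np.linalg.norm(np.diff(points, axis=0), axis=1)
    arc_length = seg_len.sum()
    chord = np.linalg.norm(points[-1] - points[0])
    tortuosity = arc_length / max(chord, 1e-9)   # how much the curve winds
    extent = np.ptp(points, axis=0)              # bounding-box edge lengths
    return np.array([arc_length, tortuosity, *extent])

def chi_squared_distance(a, b):
    """Symmetric chi-squared-style distance between two signatures."""
    return 0.5 * np.sum((a - b) ** 2 / (np.abs(a) + np.abs(b) + 1e-9))

# Two toy streamlines: a straight line and a helix of similar length.
t = np.linspace(0, 4 * np.pi, 200)
line  = np.c_[t, np.zeros_like(t), np.zeros_like(t)]
helix = np.c_[np.cos(t), np.sin(t), 0.3 * t]

print(chi_squared_distance(signature(line), signature(helix)))
```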

Tony McLoughlin, Mark W. Jones, Robert S. Laramee, Rami Malki, Ian Masters, and Charles D. Hansen.
IEEE Transactions on Visualization and Computer Graphics, 19(8), 1342-1353, 2013. [doi]

Automatic Generation of 3D Caricatures based on Artistic Deformation Styles

Caricatures are a form of humorous visual art, usually created by skilled artists for amusement and entertainment. In this paper, we present a novel approach for the automatic generation of digital caricatures from facial photographs, which captures artistic deformation styles from hand-drawn caricatures. We introduce a pseudo stress-strain model to encode the parameters of an artistic deformation style using “virtual” physical and material properties. We have also developed a software system for performing the caricaturistic deformation in 3D, which eliminates the undesirable artifacts of 2D caricaturization. We employ a Multilevel Free-Form Deformation (MFFD) technique to optimize a 3D head model reconstructed from an input facial photograph and to control the caricaturistic deformation. Our results demonstrate the effectiveness and usability of the proposed approach, which allows ordinary users to apply captured and stored deformation styles to a variety of facial photographs.
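The following toy sketch illustrates only the general caricature principle of exaggerating a face's deviation from an average face, with a per-vertex weight loosely standing in for a material-like property. Both the weighting and the function names are assumptions for illustration; this is not the paper's pseudo stress-strain model or its MFFD pipeline.

```python
# Hedged sketch of the generic exaggeration principle behind caricature:
# push each vertex further along its displacement from the mean face, with
# softer (low-"stiffness") regions deforming more than stiff ones.
import numpy as np

def exaggerate(face, mean_face, stiffness, gain=1.5):
    """face, mean_face: (n, 3) vertex arrays; stiffness: (n,) values in (0, 1]."""
    displacement = face - mean_face                    # deviation from the average face
    return mean_face + displacement * (1.0 + gain / stiffness[:, None])

face      = np.array([[0.0, 1.2, 0.0], [0.5, 0.9, 0.1]])   # toy vertices
mean_face = np.array([[0.0, 1.0, 0.0], [0.5, 1.0, 0.1]])
stiffness = np.array([1.0, 0.5])
print(exaggerate(face, mean_face, stiffness))
```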

Lindsay Clarke, Min Chen and Ben Mora.
IEEE Transactions on Visualization and Computer Graphics, 17(5), 808-821, 2011.

Smooth Graphs for Visual Exploration of Higher-Order State Transitions

Representing higher-order paths with smooth lines

In this paper, we present a new visual way of exploring state sequences in large observational time-series. A key advantage of our method is that it can directly visualize higher-order state transitions. A standard first-order state transition is a sequence of two states that are linked by a transition. A higher-order state transition is a sequence of three or more states in which the participating states are linked together by consecutive first-order state transitions. Our method extends current state-graph exploration methods by employing a two-dimensional graph in which higher-order state transitions are visualized as curved lines. All transitions are bundled into thick splines, so that the thickness of an edge represents the frequency of instances. The bundling between two states takes into account the state transitions before and after the transition. This is done in such a way that it forms a continuous representation in which any subsequence of the time-series is represented by a continuous smooth line. The edge bundles in these graphs can be explored interactively through our incremental selection algorithm. We demonstrate our method with an application exploring labelled time-series data from a biological survey, where clustering has assigned a single label to the data at each time-point. In these sequences, a large number of cyclic patterns occur, which in turn are linked to specific activities. We demonstrate how our method helps to find these cycles, and how the interactive selection process helps to find and investigate activities.
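Extracting the higher-order transitions themselves amounts to counting runs of consecutive states (n-grams) in the label sequence. The short sketch below shows only that counting step, which would determine edge thickness; the spline bundling and rendering are omitted.

```python
# Hedged sketch: count higher-order state transitions in a labelled time-series.
# An order-k transition is a run of k+1 consecutive states, so a first-order
# transition is a pair of states, as in the abstract above.
from collections import Counter

def transition_counts(states, order=2):
    """Count subsequences of `order + 1` consecutive states."""
    n = order + 1
    return Counter(tuple(states[i:i + n]) for i in range(len(states) - n + 1))

sequence = list("ABCABCABDABC")          # toy cluster-label sequence
for transition, count in transition_counts(sequence, order=2).most_common(3):
    print(" -> ".join(transition), count)
```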

Jorik Blaas, Charl P. Botha, Ed Grundy, Mark W. Jones, Robert S. Laramee and Frits H. Post.
IEEE Transactions on Visualization and Computer Graphics 15(6), 969-976, 2009. [doi] [BibTeX]

Visualization and Computer Graphics on Isotropically Emissive Volumetric Displays


The availability of commodity volumetric displays provides ordinary users with a new means of visualizing 3D data. Many of these displays are in the class of isotropically emissive light devices, which are designed to directly illuminate voxels in a 3D frame buffer, producing x-ray-like visualizations. While this technology can offer intuitive insight into a 3D object, the visualizations are perceptually different from what a computer graphics or visualization system would render on a 2D screen. This paper formalizes rendering on isotropically emissive displays and introduces a novel technique that emulates traditional rendering effects on isotropically emissive volumetric displays, delivering results that are much closer to what is traditionally rendered on regular 2D screens. Such a technique can significantly broaden the capability and usage of isotropically emissive volumetric displays. Our method takes a 3D data set or object as the input, creates an intermediate light field, and outputs a special 3D volume data set called a lumi-volume. This lumi-volume encodes approximated rendering effects in a form suitable for display with accumulative integrals along unobtrusive rays. When a lumi-volume is fed directly into an isotropically emissive volumetric display, it creates a 3D visualization with surface shading effects that are familiar to the users. The key to this technique is an algorithm for creating a 3D lumi-volume from a 4D light field. In this paper, we discuss a number of technical issues, including transparency effects due to the dimension reduction and sampling rates for light fields and lumi-volumes. We show the effectiveness and usability of this technique with a selection of experimental results captured from an isotropically emissive volumetric display, and we demonstrate its potential capability and scalability with computer-simulated high-resolution results.
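To make the perceptual difference concrete, the toy example below contrasts the two ray integrals involved: the plain accumulation an isotropically emissive display performs along a ray, and conventional front-to-back compositing with opacity. The values are made up for illustration, and the example shows only the two integration models, not the paper's lumi-volume construction.

```python
# Hedged sketch: along a single ray, an isotropically emissive display simply
# accumulates voxel emission (no occlusion), whereas conventional volume
# rendering attenuates later samples by accumulated opacity. A lumi-volume
# would precompute per-voxel values whose plain accumulation approximates the
# shaded result; that precomputation is not shown here.
import numpy as np

emission = np.array([0.1, 0.8, 0.2, 0.05])   # per-sample colour along one ray (toy values)
opacity  = np.array([0.1, 0.7, 0.3, 0.1])    # per-sample opacity (conventional model)

# Isotropically emissive display: every sample contributes fully.
display_value = emission.sum()

# Conventional front-to-back compositing with opacity attenuation.
composited, transmittance = 0.0, 1.0
for c, a in zip(emission, opacity):
    composited += transmittance * a * c
    transmittance *= (1.0 - a)

print(display_value, composited)
```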

Benjamin Mora, Ross Maciejewski, Min Chen and David S. Ebert.
IEEE Transactions on Visualization and Computer Graphics, 15(2), 221-234, March/April 2009.

Hierarchical Photon Mapping

Photon mapping is an efficient method for producing high-quality photorealistic images with full global illumination. In this paper, we present a more accurate and efficient approach to final gathering using the photon map, based upon hierarchical evaluation of the photons over each surface. We use the footprint of each gather ray to calculate the irradiance estimate area, rather than deriving it from the local photon density. We then describe an efficient method for computing the irradiance from the photon map given an arbitrary estimate area. Finally, we demonstrate how the technique may be used to reduce variance and increase efficiency when sampling diffuse and glossy-specular BRDFs.
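For illustration, a brute-force, radius-based irradiance estimate in which the estimate area is supplied by the gather ray's footprint might look as follows. This is only a sketch of the idea of decoupling the estimate area from photon density; the paper's hierarchical evaluation, which provides the accuracy and efficiency gains, is omitted.

```python
# Hedged sketch: radius-based irradiance estimate from a photon map where the
# estimate area comes from the gather ray's footprint rather than from the
# local photon density. Brute-force search over all photons for simplicity.
import numpy as np

def irradiance_estimate(photon_pos, photon_power, point, footprint_area):
    """photon_pos: (n, 3); photon_power: (n, 3) RGB flux; footprint_area in m^2."""
    radius = np.sqrt(footprint_area / np.pi)          # disc radius matching the footprint
    dist = np.linalg.norm(photon_pos - point, axis=1)
    inside = dist <= radius
    # Irradiance ~ total flux landing in the disc divided by its area.
    return photon_power[inside].sum(axis=0) / footprint_area

photons = np.random.default_rng(1).uniform(-1, 1, size=(1000, 3))   # toy photon positions
power = np.full((1000, 3), 0.002)                                   # uniform toy photon flux
print(irradiance_estimate(photons, power, np.zeros(3), footprint_area=0.5))
```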

Ben Spencer and Mark W. Jones.
IEEE Transactions on Visualization and Computer Graphics, 15(1), 49-61, Jan/Feb 2009. [doi] [BibTeX]