Category Archives: Publication

Subject specific 3D human pose interaction classification

In this work, we investigate whether it is possible to distinguish conversational interactions from observing human motion alone, in particular subject-specific gestures in 3D. We adopt Kinect sensors to obtain 3D displacement and velocity measurements, followed by wavelet decomposition to extract low-level temporal features. These features are then generalized to form a visual vocabulary, which is further generalized to a set of topics derived from temporal distributions of the visual words. A subject-specific supervised learning approach based on Random Forests is used to classify the test sequences into seven different conversational scenarios. The scenarios considered in this work have rather subtle differences among them. Unlike typical action or event recognition, each interaction in our case contains many instances of primitive motions and actions, many of which are shared among different conversational scenarios. That is, the interactions we are concerned with are not micro or instant events, such as hugging or a high-five, but rather interactions over a period of time that consist of rather similar individual motions, micro actions, and interactions. We believe this is among the first works devoted to subject-specific conversational interaction classification using 3D pose features, and it shows that this task is indeed possible.
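The bag-of-words stage above can be sketched as follows. This is a toy NumPy illustration only: the vocabulary, the 4-D "wavelet" features, and the nearest-histogram comparison (standing in for the paper's Random Forest classifier and topic model) are all assumptions made for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize(features, vocabulary):
    """Assign each low-level feature vector to its nearest visual word."""
    d = ((features[:, None, :] - vocabulary[None, :, :]) ** 2).sum(-1)
    return d.argmin(axis=1)

def bag_of_words(features, vocabulary):
    """Histogram of visual-word occurrences, normalised to a distribution."""
    words = quantize(features, vocabulary)
    hist = np.bincount(words, minlength=len(vocabulary)).astype(float)
    return hist / hist.sum()

# Toy 'wavelet' features for two conversational scenarios (4-D descriptors).
vocab = rng.normal(size=(8, 4))             # 8 visual words
seq_a = rng.normal(loc=0.0, size=(200, 4))  # scenario A
seq_b = rng.normal(loc=2.0, size=(200, 4))  # scenario B

h_a, h_b = bag_of_words(seq_a, vocab), bag_of_words(seq_b, vocab)

# Compare a new sequence from scenario B against the two histograms.
query = bag_of_words(rng.normal(loc=2.0, size=(200, 4)), vocab)
pred = "A" if np.abs(query - h_a).sum() < np.abs(query - h_b).sum() else "B"
print(pred)
```

In the paper the per-sequence histograms are further generalised to topics and fed to a Random Forest; the nearest-histogram comparison here merely shows that the bag-of-words representation separates the scenarios.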

J. Deng, X. Xie, and B. Daubney, A bag of words approach to subject specific 3D human pose interaction classification with random decision forests, Graphical Models, Volume 76, Issue 3, Pages 162–171, May 2014.

More details can be found at the Swansea Vision website.

Diffusion pruning for rapidly and robustly selecting global correspondences using local isometry

Finding correspondences between two surfaces is a fundamental operation in various applications in computer graphics and related fields. Candidate correspondences can be found by matching local signatures, but as they only consider local geometry, many are globally inconsistent. We provide a novel algorithm to prune a set of candidate correspondences to those most likely to be globally consistent. Our approach can handle articulated surfaces, and ones related by a deformation which is globally nonisometric, provided that the deformation is locally approximately isometric. Our approach uses an efficient diffusion framework, and only requires geodesic distance calculations in small neighbourhoods, unlike many existing techniques which require computation of global geodesic distances. We demonstrate that, for typical examples, our approach provides significant improvements in accuracy, yet also reduces time and memory costs by a factor of several hundred compared to existing pruning techniques. Our method is furthermore insensitive to holes, unlike many other methods.
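The pruning idea can be sketched in a few lines: score each candidate correspondence by how consistently it preserves distances with the other candidates, diffuse those scores, and keep the strongest. Everything below is illustrative: Euclidean distances stand in for the paper's local geodesic distances, and plain power iteration stands in for its diffusion framework.

```python
import numpy as np

def diffusion_prune(src, dst, cand, keep=3, iters=50):
    """Score candidate correspondences by diffusing pairwise consistency.

    src, dst : (n,2)/(m,2) point sets; cand : list of (i, j) index pairs.
    Euclidean distance is a stand-in for local geodesic distance.
    """
    k = len(cand)
    A = np.zeros((k, k))
    for a, (i, j) in enumerate(cand):
        for b, (p, q) in enumerate(cand):
            if a == b:
                continue
            # Two matches are consistent if together they preserve distance.
            d_src = np.linalg.norm(src[i] - src[p])
            d_dst = np.linalg.norm(dst[j] - dst[q])
            A[a, b] = np.exp(-abs(d_src - d_dst))
    x = np.ones(k) / k
    for _ in range(iters):             # power iteration ~ diffusion process
        x = A @ x
        x /= x.sum()
    return [cand[a] for a in np.argsort(-x)[:keep]]

# Identity map between two identical point sets, plus one bad match.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
cand = [(0, 0), (1, 1), (2, 2), (0, 2)]    # last pair is inconsistent
print(diffusion_prune(pts, pts, cand))
```

The inconsistent match (0, 2) receives little support from its neighbours in the diffusion, so only the three consistent correspondences survive the pruning.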

Gary K. L. Tam, Ralph R. Martin, Paul L. Rosin, Yu-Kun Lai
ACM Transactions on Graphics (TOG) 33(1):4, January 2014

Segmentation of biomedical images using shape prior

In this article, a new level set model is proposed for the segmentation of biomedical images. The image energy of the proposed model is derived from a robust image gradient feature which gives the active contour a global representation of the geometric configuration, making it more robust in dealing with image noise, weak edges, and initial configurations. Statistical shape information is incorporated using a nonparametric shape density distribution, which allows the shape model to handle relatively large shape variations. The segmentation of various shapes from both synthetic and real images demonstrates the robustness and efficiency of the proposed method.
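The nonparametric shape prior can be illustrated with a Parzen-style density over shape vectors: the prior force pulls the evolving shape towards the kernel-weighted mean of nearby training shapes rather than a single global mean, which is what lets it handle multi-modal shape variation. The 3-D shape vectors, the bandwidth, and the two-cluster training set below are all toy assumptions.

```python
import numpy as np

def shape_prior_force(shape, training, h=0.5):
    """Gradient of a nonparametric (Parzen) shape density: pulls the
    evolving shape vector towards the kernel-weighted mean of the
    training shapes, allowing multi-modal shape variation."""
    d2 = ((training - shape) ** 2).sum(axis=1)
    w = np.exp(-d2 / (2 * h * h))
    w /= w.sum()
    return (w[:, None] * (training - shape)).sum(axis=0)

# Two clusters of training shapes (e.g. two anatomical variants).
training = np.vstack([np.zeros((5, 3)), np.full((5, 3), 4.0)])
shape = np.array([0.5, 0.5, 0.5])          # near the first variant
force = shape_prior_force(shape, training)
print(force)  # points back towards the nearby cluster, not the global mean
```

A single-Gaussian prior would instead pull towards the mean of all ten training shapes, i.e. towards the empty space between the two variants.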

S. Y. Yeo, X. Xie, I. Sazonov, and P. Nithiarasu, Segmentation of biomedical images using active contour model with robust image feature and shape prior, International Journal for Numerical Methods in Biomedical Engineering, Volume 30, Issue 2, pages 232–248, February 2014.

More details can be found at the Swansea Vision website.

Integrated Segmentation and Interpolation of Sparse Data

We address the two inherently related problems of segmentation and interpolation of 3D and 4D sparse data and propose a new method to integrate these stages in a level set framework.
The interpolation process uses segmentation information rather than pixel intensities for increased robustness and accuracy. The method supports any spatial configurations of sets of 2D slices having arbitrary positions and orientations. We achieve this by introducing a new level set scheme based on the interpolation of the level set function by radial basis functions. The proposed method is validated quantitatively and/or subjectively on artificial data and MRI and CT scans and is compared against the traditional sequential approach, which interpolates the images first, using a state-of-the-art image interpolation method, and then segments the interpolated volume in 3D or 4D. In our experiments, the proposed framework yielded similar segmentation results to the sequential approach but provided a more robust and accurate interpolation. In particular, the interpolation was more satisfactory in cases of large gaps, due to the method taking into account the global shape of the object, and it recovered better topologies at the extremities of the shapes where the objects disappear from the image slices. As a result, the complete integrated framework provided more satisfactory shape reconstructions than the sequential approach.
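The core numerical tool — interpolating a level set function from scattered samples with radial basis functions — can be sketched as below. This is a simplified 2D stand-in: Gaussian RBFs, a hand-picked bandwidth, and a signed distance to the unit circle replace the paper's interpolation of the level set function between arbitrarily positioned and oriented slices.

```python
import numpy as np

def rbf_interpolate(centres, values, queries, eps=1.0):
    """Interpolate a function sampled at scattered centres using Gaussian
    radial basis functions: solve for per-centre weights, then evaluate
    the weighted kernel sum at the query points."""
    def kernel(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-eps * d2)
    weights = np.linalg.solve(kernel(centres, centres), values)
    return kernel(queries, centres) @ weights

# Signed distance to the unit circle, sampled at scattered 'slice' points.
rng = np.random.default_rng(1)
centres = rng.uniform(-2.0, 2.0, size=(40, 2))
phi = np.linalg.norm(centres, axis=1) - 1.0
queries = np.array([[0.0, 0.0], [1.0, 0.0]])
print(rbf_interpolate(centres, phi, queries))
```

At the sample points themselves the interpolant reproduces phi exactly; away from them it fills in a smooth level set function whose zero crossing approximates the circle, which is the property the integrated segmentation exploits between slices.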

A. Paiement, M. Mirmehdi, X. Xie, and M. Hamilton, Integrated Segmentation and Interpolation of Sparse Data, IEEE Transactions on Image Processing (T-IP), volume 23, issue 1, pages 110-125, January 2014.

More details can be found at the Swansea Vision website.

Transformation of an Uncertain Video Search Pipeline to a Sketch-based Visual Analytics Loop

Video search interface

Traditional sketch-based image or video search systems rely on machine learning concepts as their core technology. However, in many applications, machine learning alone is impractical since videos may not be sufficiently semantically annotated, there may be a lack of suitable training data, and the search requirements of the user may frequently change between tasks. In this work, we develop a visual analytics system that overcomes the shortcomings of the traditional approach. We make use of a sketch-based interface to enable users to specify search requirements in a flexible manner without depending on semantic annotation. We employ active machine learning to train different analytical models for different types of search requirements. We use visualization to facilitate knowledge discovery at the different stages of visual analytics. This includes visualizing the parameter space of the trained model, visualizing the search space to support interactive browsing, visualizing candidate search results to support rapid interaction for active learning while minimizing the need to watch videos, and visualizing aggregated information about the search results. We demonstrate the system by searching spatio-temporal attributes in sports video to identify key instances of team and player performance.

Phil A. Legg, David H. S. Chung, Matt L. Parry, Rhodri Bown, Mark W. Jones, Iwan W. Griffiths, Min Chen.
IEEE Transactions on Visualization and Computer Graphics, 19(12), 2109-2118, 2013.

Shape and Appearance Priors for Level Set-based LV Segmentation

We propose a novel spatiotemporal constraint based on shape and appearance and combine it with a level-set deformable model for left ventricle (LV) segmentation in four-dimensional gated cardiac SPECT, particularly in the presence of perfusion defects. The model incorporates appearance and shape information into a ‘soft-to-hard’ probabilistic constraint, and utilises spatiotemporal regularisation via a maximum a posteriori framework. This constraint force allows more flexibility than the rigid forces of shape constraint-only schemes, as well as other state-of-the-art joint shape and appearance constraints. The combined model can hypothesise defective LV borders based on prior knowledge. We present comparative results to illustrate the improvement gained. A brief defect detection example is finally presented as an application of the proposed method.

IET Computer Vision, vol. 7, no. 3, pp. 170-183, 2013.

Follow this link to see more publications on Computer Vision and Medical Image Analysis.

Probabilistic illumination-aware filtering for Monte Carlo rendering

Path traced, 16 samples using Probabilistic illumination-aware filtering

Noise removal for Monte Carlo global illumination rendering is a well known problem, and has seen significant attention from image-based filtering methods. However, many state-of-the-art methods break down in the presence of high frequency features, complex lighting and materials. In this work we present a probabilistic image-based noise removal and irradiance filtering framework that preserves high frequency detail such as hard shadows and glossy reflections, and imposes no restrictions on the characteristics of the light transport or materials. We maintain per-pixel clusters of the path traced samples and, using statistics from these clusters, derive an illumination-aware filtering scheme based on the discrete Poisson probability distribution. Furthermore, we filter the incident radiance of the samples, allowing us to preserve and filter across high frequency and complex textures without limiting the effectiveness of the filter.
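The illumination-aware weighting can be illustrated with a one-dimensional toy filter: each pixel keeps a count of light-carrying samples, and a neighbour only contributes if its count is plausible under a discrete Poisson model of this pixel's illumination. The symmetric weight, the scalar radiance values, and the tiny scanline are illustrative assumptions, not the paper's actual cluster statistics.

```python
import math

def poisson_pmf(k, lam):
    """Discrete Poisson probability P(N = k) for rate lam."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

def illumination_weight(n_self, n_neigh):
    """Symmetric weight: how plausibly the neighbour's light-hit count
    could have been drawn from this pixel's illumination, and vice versa
    (a simplified stand-in for the paper's per-pixel cluster statistics)."""
    return min(poisson_pmf(n_neigh, n_self) if n_self > 0 else float(n_neigh == 0),
               poisson_pmf(n_self, n_neigh) if n_neigh > 0 else float(n_self == 0))

def filter_pixel(counts, values, centre):
    """Weighted average of neighbour radiance values, guided by the
    Poisson similarity of their per-pixel light-hit counts."""
    weights = [illumination_weight(counts[centre], c) for c in counts]
    total = sum(weights)
    return sum(w * v for w, v in zip(weights, values)) / total

# A hard shadow boundary: left pixels see ~0 light hits, right see ~9.
counts = [0, 0, 0, 9, 10, 8]
values = [0.02, 0.01, 0.03, 0.9, 1.0, 0.85]
print(filter_pixel(counts, values, centre=1))  # stays dark: shadow preserved
```

A naive box filter over the same scanline would blur the shadow edge; here the Poisson weights drop to zero across the boundary, so each side is denoised only from its own illumination regime.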

Ian C. Doidge and Mark W. Jones.
CGI 2013, The Visual Computer 29(6-8), 707-716, 2013. The final publication is available at www.springerlink.com.

Photon Parameterisation for Robust Relaxation Constraints

This paper presents a novel approach to detecting and preserving fine illumination structure within photon maps. Data derived from each photon’s primal trajectory is encoded and used to build a high-dimensional kd-tree. Incorporation of these new parameters allows for precise differentiation between intersecting ray envelopes, thus minimizing detail degradation when combined with photon relaxation. We demonstrate how parameter-aware querying is beneficial in both detecting and removing noise. We also propose a more robust structure descriptor based on principal components analysis that better identifies anisotropic detail at the sub-kernel level. We illustrate the effectiveness of our approach in several example scenes and show significant improvements when rendering complex caustics compared to previous methods.
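The benefit of parameter-aware querying can be shown with a minimal sketch: augment each photon's 3D position with encoded trajectory parameters (here simply the incident direction with an ad-hoc scale), and nearest-neighbour queries in the augmented space separate photons from intersecting ray envelopes that are spatially indistinguishable. Brute-force search stands in for the paper's high-dimensional kd-tree, and the direction encoding and weight are illustrative assumptions.

```python
import numpy as np

def parameter_aware_knn(photons, query, k, w_dir=1.0):
    """k-nearest photons in an augmented space: 3D position plus encoded
    trajectory parameters (incident direction scaled by w_dir). Brute
    force stands in for the paper's high-dimensional kd-tree."""
    aug = np.hstack([photons[:, :3], w_dir * photons[:, 3:]])
    q = np.hstack([query[:3], w_dir * query[3:]])
    d = np.linalg.norm(aug - q, axis=1)
    return np.argsort(d)[:k]

# Two caustic 'envelopes' crossing at the same surface region:
# tightly packed positions, but two distinct incident directions.
rng = np.random.default_rng(2)
pos = rng.uniform(size=(20, 3)) * 0.01
dir_a = np.tile([0.0, 0.0, 1.0], (10, 1))   # envelope A
dir_b = np.tile([1.0, 0.0, 0.0], (10, 1))   # envelope B
photons = np.vstack([np.hstack([pos[:10], dir_a]),
                     np.hstack([pos[10:], dir_b])])

query = np.array([0.005, 0.005, 0.005, 0.0, 0.0, 1.0])  # an envelope-A photon
idx = parameter_aware_knn(photons, query, k=5)
print(idx)  # all indices < 10: only envelope-A photons are returned
```

A purely spatial query over the same photons would mix the two envelopes, which is exactly the detail degradation the paper's relaxation step needs to avoid.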

Ben Spencer and Mark W. Jones
Computer Graphics Forum, Volume 32, Issue 2pt1, pages 83–92, May 2013. [doi]

Best paper, Eurographics 2013.

InK-Compact: In-Kernel Stream Compaction and Its Application to Multi-Kernel Data Visualization on General-Purpose GPUs

Stream compaction is an important parallel computing primitive that produces a reduced (compacted) output stream consisting of only valid elements from an input stream containing both invalid and valid elements. Computing on this compacted stream rather than the mixed input stream leads to improvements in performance, load balancing, and memory footprint. Stream compaction has numerous applications in a wide range of domains: e.g., deferred shading, isosurface extraction, and surface voxelization in computer graphics and visualization. We present a novel In-Kernel stream compaction method, where compaction is completed before leaving an operating kernel. This contrasts with conventional parallel compaction methods that require leaving the kernel and running a prefix sum kernel followed by a scatter kernel. We apply our compaction methods to ray-tracing-based visualization of volumetric data. We demonstrate that the proposed In-Kernel Compaction outperforms the standard out-of-kernel Thrust parallel-scan method for performing stream compaction in this real-world application. For the data visualization, we also propose a novel multi-kernel ray-tracing pipeline for increased thread coherency and show that it outperforms a conventional single-kernel approach.
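The conventional baseline the paper improves upon — prefix sum followed by scatter — can be sketched in NumPy. This is the out-of-kernel scheme only; the paper's contribution is performing the equivalent work inside the operating GPU kernel, which a CPU sketch cannot show.

```python
import numpy as np

def compact(stream, valid):
    """Conventional two-pass stream compaction: an exclusive prefix sum
    over the validity flags gives each valid element its output slot,
    then a scatter writes it there."""
    valid = np.asarray(valid, dtype=np.int64)
    offsets = np.cumsum(valid) - valid              # exclusive prefix sum
    out = np.empty(valid.sum(), dtype=np.asarray(stream).dtype)
    out[offsets[valid == 1]] = np.asarray(stream)[valid == 1]  # scatter
    return out

stream = np.array([7, -1, 3, -1, -1, 9, 2])
valid = (stream >= 0).astype(int)
print(compact(stream, valid))  # [7 3 9 2]
```

On a GPU, each of these two passes is a separate kernel launch with a round trip through global memory; the In-Kernel method removes that overhead.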

D. M. Hughes, I. S. Lim, M. W. Jones, A. Knoll and B. Spencer
Computer Graphics Forum, 2013, 32(6), 178-188. [doi]

Progressive Photon Relaxation

We introduce a novel algorithm for progressively removing noise from view-independent photon maps while simultaneously minimizing residual bias. Our method refines a primal set of photons using data from multiple successive passes to estimate the incident flux local to each photon. We show how this information can be used to guide a relaxation step with the goal of enforcing a constant, per-photon flux. Using a reformulation of the radiance estimate, we demonstrate how the resulting blue noise photon distribution yields a radiance reconstruction in which error is significantly reduced. Our approach has an open-ended runtime of the same order as unbiased and asymptotically consistent rendering methods, converging over time to a stable result. We demonstrate its effectiveness at storing caustic illumination within a view-independent framework and at a fidelity visually comparable to reference images rendered using progressive photon mapping.
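The relaxation idea — nudging photons so that each carries roughly equal flux, producing a blue-noise distribution — can be reduced to a one-dimensional toy. The midpoint-seeking update, the step size, and the iteration count are all simplifications; the paper works progressively in higher dimensions, guided by multi-pass flux estimates rather than direct neighbour gaps.

```python
import numpy as np

def relax_1d(photons, step=0.25, iters=200):
    """1D sketch of photon relaxation: each interior photon is nudged
    towards the midpoint of its two neighbours, equalising local density
    so that per-photon flux becomes (approximately) constant."""
    p = np.sort(photons.astype(float))
    for _ in range(iters):
        left = np.diff(p, prepend=2 * p[0] - p[1])    # gap to left neighbour
        right = np.diff(p, append=2 * p[-1] - p[-2])  # gap to right neighbour
        p[1:-1] += step * (right[1:-1] - left[1:-1]) / 2
        p = np.sort(p)
    return p

rng = np.random.default_rng(3)
photons = rng.uniform(size=30)
relaxed = relax_1d(photons)

# Spacing variance drops: the distribution moves towards blue noise.
before = np.var(np.diff(np.sort(photons)))
after = np.var(np.diff(relaxed))
print(before > after)  # True
```

The reduced gap variance is the 1D analogue of the blue-noise property that makes the paper's radiance reconstruction less noisy for the same photon count.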

pdficon_largeBen Spencer and Mark W. Jones
ACM Transactions on Graphics. 32(1), January 2013 [doi] [bibtex]