IEEE Visualization Conference and IEEE Information Visualization Conference Proceedings 2011 title page
Pages: i-ii
doi>10.1109/TVCG.2011.210

Message from the Editor-in-Chief
Ming C. Lin
Pages: ix-ix
doi>10.1109/TVCG.2011.221

Message from the Paper Chairs and Guest Editors
Frank van Ham, Raghu Machiraju, Klaus Mueller, Gerik Scheuermann, Chris Weaver
Pages: x-x
doi>10.1109/TVCG.2011.222

Committees, Reviewers, and Supporting Organizations
Pages: xii-xii
doi>10.1109/TVCG.2011.180

The 2011 Visualization Career Award: Frits Post
Pages: xxi-xxi
doi>10.1109/TVCG.2011.240

The 2011 Visualization Technical Achievement Award: Daniel Keim
Pages: xxii-xxii
doi>10.1109/TVCG.2011.241

VisWeek Keynote Address
Daniel Keim
Pages: xxiii-xxiii
doi>10.1109/TVCG.2011.257

VisWeek Capstone Address
Amanda Cox
Pages: xxiv-xxiv
doi>10.1109/TVCG.2011.256

Saliency-Assisted Navigation of Very Large Landscape Images
Cheuk Yiu Ip, Amitabh Varshney
Pages: 1737-1746
doi>10.1109/TVCG.2011.231

The field of visualization has addressed navigation of very large datasets, usually meshes and volumes. Significantly less attention has been devoted to the issues surrounding navigation of very large images. In the last few years the explosive growth in the resolution of camera sensors and robotic image acquisition techniques has widened the gap between the display and image resolutions to three orders of magnitude or more. This paper presents the first steps towards navigation of very large images, particularly landscape images, from an interactive visualization perspective. The grand challenge in navigation of very large images is identifying regions of potential interest. In this paper we outline a three-step approach. In the first step we use multi-scale saliency to narrow down the potential areas of interest. In the second step we outline a method based on statistical signatures to further cull out regions of high conformity. In the final step we allow a user to interactively identify the exceptional regions of high interest that merit further attention. We show that our approach of progressive elicitation is fast and allows rapid identification of regions of interest. Unlike previous work in this area, our approach is scalable and computationally reasonable on very large images. We validate the results of our approach by comparing them to user-tagged regions of interest on several very large landscape images from the Internet.

Hierarchical Event Selection for Video Storyboards with a Case Study on Snooker Video Visualization
Matthew L. Parry, Philip A. Legg, David H. S. Chung, Iwan W. Griffiths, Min Chen
Pages: 1747-1756
doi>10.1109/TVCG.2011.208

Video storyboard, which is a form of video visualization, summarizes the major events in a video using illustrative visualization. There are three main technical challenges in creating a video storyboard: (a) event classification, (b) event selection and (c) event illustration. Among these challenges, (a) is highly application-dependent and requires a significant amount of application specific semantics to be encoded in a system or manually specified by users. This paper focuses on challenges (b) and (c). In particular, we present a framework for hierarchical event representation, and an importance-based selection algorithm for supporting the creation of a video storyboard from a video. We consider the storyboard to be an event summarization for the whole video, whilst each individual illustration on the board is also an event summarization but for a smaller time window. We utilized a 3D visualization template for depicting and annotating events in illustrations. To demonstrate the concepts and algorithms developed, we use Snooker video visualization as a case study, because it has a concrete and agreeable set of semantic definitions for events and can make use of existing techniques of event detection and 3D reconstruction in a reliable manner. Nevertheless, most of our concepts and algorithms developed for challenges (b) and (c) can be applied to other application areas.

Artificial Defocus for Displaying Markers in Microscopy Z-Stacks
Alessandro Giusti, Pierluigi Taddei, Giorgio Corani, Luca Gambardella, Cristina Magli, Luca Gianaroli
Pages: 1757-1764
doi>10.1109/TVCG.2011.168

As microscopes have a very shallow depth of field, Z-stacks (i.e. sets of images shot at different focal planes) are often acquired to fully capture a thick sample. Such stacks are viewed by users by navigating them through the mouse wheel. We propose a new technique of visualizing 3D point, line or area markers in such focus stacks, by displaying them with a depth-dependent defocus, simulating the microscope's optics; this leverages the microscopists' ability to continuously twiddle focus, while implicitly performing a shape-from-focus reconstruction of the 3D structure of the sample. User studies confirm that the approach is effective, and can complement more traditional techniques such as color-based cues. We provide two implementations, one of which computes defocus in real time on the GPU, and examples of their application.

Visualization of Topological Structures in Area-Preserving Maps
Xavier Tricoche, Christoph Garth, Allen Sanderson
Pages: 1765-1774
doi>10.1109/TVCG.2011.254

Area-preserving maps are found across a wide range of scientific and engineering problems. Their study is made challenging by the significant computational effort typically required for their inspection but more fundamentally by the fractal complexity of salient structures. The visual inspection of these maps reveals a remarkable topological picture consisting of fixed (or periodic) points embedded in so-called island chains, invariant manifolds, and regions of ergodic behavior. This paper is concerned with the effective visualization and precise topological analysis of area-preserving maps with two degrees of freedom from numerical or analytical data. Specifically, a method is presented for the automatic extraction and characterization of fixed points and the computation of their invariant manifolds, also known as separatrices, to yield a complete picture of the structures present within the scale and complexity bounds selected by the user. This general approach offers a significant improvement over the visual representations that are so far available for area-preserving maps. The technique is demonstrated on a numerical simulation of magnetic confinement in a fusion reactor.

Multi-Touch Table System for Medical Visualization: Application to Orthopedic Surgery Planning
Claes Lundstrom, Thomas Rydell, Camilla Forsell, Anders Persson, Anders Ynnerman
Pages: 1775-1784
doi>10.1109/TVCG.2011.224

Medical imaging plays a central role in a vast range of healthcare practices. The usefulness of 3D visualizations has been demonstrated for many types of treatment planning. Nevertheless, full access to 3D renderings outside of the radiology department is still scarce even for many image-centric specialties. Our work stems from the hypothesis that this under-utilization is partly due to existing visualization systems not taking the prerequisites of this application domain fully into account. We have developed a medical visualization table intended to better fit the clinical reality. The overall design goals were two-fold: similarity to a real physical situation and a very low learning threshold. This paper describes the development of the visualization table with focus on key design decisions. The developed features include two novel interaction components for touch tables. A user study including five orthopedic surgeons demonstrates that the system is appropriate and useful for this application domain.

Load-Balanced Parallel Streamline Generation on Large Scale Vector Fields
Boonthanome Nouanesengsy, Teng-Yok Lee, Han-Wei Shen
Pages: 1785-1794
doi>10.1109/TVCG.2011.219

Because of the ever increasing size of output data from scientific simulations, supercomputers are increasingly relied upon to generate visualizations. One use of supercomputers is to generate field lines from large scale flow fields. When generating field lines in parallel, the vector field is generally decomposed into blocks, which are then assigned to processors. Since various regions of the vector field can have different flow complexity, processors will require varying amounts of computation time to trace their particles, causing load imbalance, and thus limiting the performance speedup. To achieve load-balanced streamline generation, we propose a workload-aware partitioning algorithm to decompose the vector field into partitions with near equal workloads. Since actual workloads are unknown beforehand, we propose a workload estimation algorithm to predict the workload in the local vector field. A graph-based representation of the vector field is employed to generate these estimates. Once the workloads have been estimated, our partitioning algorithm is hierarchically applied to distribute the workload to all partitions. We examine the performance of our workload estimation and workload-aware partitioning algorithm in several timing studies, which demonstrate that by employing these methods, better scalability can be achieved with little overhead.

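The core load-balancing idea in the abstract above, assigning vector-field blocks to processors so that estimated workloads come out nearly equal, can be illustrated with a small sketch. This is not the authors' hierarchical algorithm: the function name and the greedy largest-first strategy are stand-ins, and the per-block cost list takes the place of the paper's graph-based workload predictions.

```python
# Hypothetical sketch: greedily assign blocks (with estimated costs) to the
# currently least-loaded processor, so per-processor totals stay close.
import heapq

def partition_blocks(workloads, num_procs):
    """workloads: estimated cost per block. Returns proc id -> block ids."""
    heap = [(0.0, p) for p in range(num_procs)]  # (total load, proc id)
    heapq.heapify(heap)
    assignment = {p: [] for p in range(num_procs)}
    # Placing the largest blocks first keeps the final totals close together.
    for block_id, cost in sorted(enumerate(workloads), key=lambda x: -x[1]):
        load, proc = heapq.heappop(heap)
        assignment[proc].append(block_id)
        heapq.heappush(heap, (load + cost, proc))
    return assignment
```

For example, six blocks with costs [5, 3, 3, 2, 2, 1] split across two processors end up with 8 units of estimated work each.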
Extinction-Based Shading and Illumination in GPU Volume Ray-Casting
Philipp Schlegel, Maxim Makhinya, Renato Pajarola
Pages: 1795-1802
doi>10.1109/TVCG.2011.198

Direct volume rendering has become a popular method for visualizing volumetric datasets. Even though computers are continually getting faster, it remains a challenge to incorporate sophisticated illumination models into direct volume rendering while maintaining interactive frame rates. In this paper, we present a novel approach for advanced illumination in direct volume rendering based on GPU ray-casting. Our approach features directional soft shadows taking scattering into account, ambient occlusion and color bleeding effects while achieving very competitive frame rates. In particular, multiple dynamic lights and interactive transfer function changes are fully supported. Commonly, direct volume rendering is based on a very simplified discrete version of the original volume rendering integral, including the development of the original exponential extinction into α-blending. In contrast to α-blending forming a product when sampling along a ray, the original exponential extinction coefficient is an integral and its discretization a Riemann sum. The fact that it is a sum can cleverly be exploited to implement volume lighting effects, i.e. soft directional shadows, ambient occlusion and color bleeding. We will show how this can be achieved and how it can be implemented on the GPU.

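The sum-versus-product relationship the abstract relies on can be written out in a few lines. A minimal numerical sketch, not the paper's GPU implementation and with illustrative function names: the transmittance exp(-Σ σᵢΔt) computed from the Riemann sum of extinction samples equals the product of per-sample (1 - αᵢ) factors, with αᵢ = 1 - exp(-σᵢΔt), that front-to-back compositing accumulates.

```python
# Transmittance along a ray from extinction samples sigma_i at step size dt.
import math

def transmittance_sum(sigmas, dt):
    # Exponential of the (negated) Riemann sum of the extinction integral.
    return math.exp(-sum(s * dt for s in sigmas))

def transmittance_blend(sigmas, dt):
    # Equivalent product form, as accumulated by front-to-back alpha blending.
    t = 1.0
    for s in sigmas:
        alpha = 1.0 - math.exp(-s * dt)
        t *= 1.0 - alpha
    return t
```

Since exp(-Σ σᵢΔt) = Π exp(-σᵢΔt), the two routines agree to floating-point precision; the paper's observation is that the sum form is the one that lends itself to shadowing and occlusion effects.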
GPU-Based Interactive Cut-Surface Extraction From High-Order Finite Element Fields
Blake Nelson, Robert M. Kirby, Robert Haimes
Pages: 1803-1811
doi>10.1109/TVCG.2011.206

We present a GPU-based ray-tracing system for the accurate and interactive visualization of cut-surfaces through 3D simulations of physical processes created from spectral/hp high-order finite element methods. When used by the numerical analyst to debug the solver, the ability for the imagery to precisely reflect the data is critical. In practice, the investigator interactively selects from a palette of visualization tools to construct a scene that can answer a query of the data. This is effective as long as the implicit contract of image quality between the individual and the visualization system is upheld. OpenGL rendering of scientific visualizations has worked remarkably well for exploratory visualization for most solver results. This is due to the consistency between the use of first-order representations in the simulation and the linear assumptions inherent in OpenGL (planar fragments and color-space interpolation). Unfortunately, the contract is broken when the solver discretization is of higher-order. There have been attempts to mitigate this through the use of spatial adaptation and/or texture mapping. These methods do a better job of approximating what the imagery should be but are not exact and tend to be view-dependent. This paper introduces new rendering mechanisms that specifically deal with the kinds of native data generated by high-order finite element solvers. The exploratory visualization tools are reassessed and cast in this system with the focus on image accuracy. This is accomplished in a GPU setting to ensure interactivity.

GPU-based Real-Time Approximation of the Ablation Zone for Radiofrequency Ablation
Christian Rieder, Tim Kroeger, Christian Schumann, Horst K. Hahn
Pages: 1812-1821
doi>10.1109/TVCG.2011.207

Percutaneous radiofrequency ablation (RFA) is becoming a standard minimally invasive clinical procedure for the treatment of liver tumors. However, planning the applicator placement such that the malignant tissue is completely destroyed is a demanding task that requires considerable experience. In this work, we present a fast GPU-based real-time approximation of the ablation zone incorporating the cooling effect of liver vessels. Weighted distance fields of varying RF applicator types are derived from complex numerical simulations to allow a fast estimation of the ablation zone. Furthermore, the heat-sink effect of the cooling blood flow close to the applicator's electrode is estimated by means of a preprocessed thermal equilibrium representation of the liver parenchyma and blood vessels. Utilizing the graphics card, the weighted distance field incorporating the cooling blood flow is calculated using a modular shader framework, which facilitates the real-time visualization of the ablation zone in projected slice views and in volume rendering. The proposed methods are integrated in our software assistant prototype for planning RFA therapy. The software allows the physician to interactively place virtual RF applicator models. The real-time visualization of the corresponding approximated ablation zone facilitates interactive evaluation of the tumor coverage in order to optimize the applicator's placement such that all cancer cells are destroyed by the ablation.

Feature-Based Statistical Analysis of Combustion Simulation Data
Janine C. Bennett, Vaidyanathan Krishnamoorthy, Shusen Liu, Ray W. Grout, Evatt R. Hawkes, Jacqueline H. Chen, Jason Shepherd, Valerio Pascucci, Peer-Timo Bremer
Pages: 1822-1831
doi>10.1109/TVCG.2011.199

We present a new framework for feature-based statistical analysis of large-scale scientific data and demonstrate its effectiveness by analyzing features from Direct Numerical Simulations (DNS) of turbulent combustion. Turbulent flows are ubiquitous and account for transport and mixing processes in combustion, astrophysics, fusion, and climate modeling among other disciplines. They are also characterized by coherent structure or organized motion, i.e. nonlocal entities whose geometrical features can directly impact molecular mixing and reactive processes. While traditional multi-point statistics provide correlative information, they lack nonlocal structural information, and hence, fail to provide mechanistic causality information between organized fluid motion and mixing and reactive processes. Hence, it is of great interest to capture and track flow features and their statistics together with their correlation with relevant scalar quantities, e.g. temperature or species concentrations. In our approach we encode the set of all possible flow features by pre-computing merge trees augmented with attributes, such as statistical moments of various scalar fields, e.g. temperature, as well as length-scales computed via spectral analysis. The computation is performed in an efficient streaming manner in a pre-processing step and results in a collection of meta-data that is orders of magnitude smaller than the original simulation data. This meta-data is sufficient to support a fully flexible and interactive analysis of the features, allowing for arbitrary thresholds, providing per-feature statistics, and creating various global diagnostics such as Cumulative Density Functions (CDFs), histograms, or time-series. We combine the analysis with a rendering of the features in a linked-view browser that enables scientists to interactively explore, visualize, and analyze the equivalent of one terabyte of simulation data. We highlight the utility of this new framework for combustion science; however, it is applicable to many other science domains.

Quasi Interpolation With Voronoi Splines
Mahsa Mirzargar, Alireza Entezari
Pages: 1832-1841
doi>10.1109/TVCG.2011.230

We present a quasi interpolation framework that attains the optimal approximation-order of Voronoi splines for reconstruction of volumetric data sampled on general lattices. The quasi interpolation framework of Voronoi splines provides an unbiased reconstruction method across various lattices. Therefore this framework allows us to analyze and contrast the sampling-theoretic performance of general lattices, using signal reconstruction, in an unbiased manner. Our quasi interpolation methodology is implemented as an efficient FIR filter that can be applied online or as a preprocessing step. We present visual and numerical experiments that demonstrate the improved accuracy of reconstruction across lattices, using the quasi interpolation framework.

Topological Spines: A Structure-preserving Visual Representation of Scalar Fields
Carlos Correa, Peter Lindstrom, Peer-Timo Bremer
Pages: 1842-1851
doi>10.1109/TVCG.2011.244

We present topological spines, a new visual representation that preserves the topological and geometric structure of a scalar field. This representation encodes the spatial relationships of the extrema of a scalar field together with the local volume and nesting structure of the surrounding contours. Unlike other topological representations, such as contour trees, our approach preserves the local geometric structure of the scalar field, including structural cycles that are useful for exposing symmetries in the data. To obtain this representation, we describe a novel mechanism based on the extraction of extremum graphs, sparse subsets of the Morse-Smale complex that retain the important structural information without the clutter and occlusion problems that arise from visualizing the entire complex directly. Extremum graphs form a natural multiresolution structure that allows the user to suppress noise and enhance topological features via the specification of a persistence range. Applications of our approach include the visualization of 3D scalar fields without occlusion artifacts, and the exploratory analysis of high-dimensional functions.

Towards Robust Topology of Sparsely Sampled Data
Carlos Correa, Peter Lindstrom
Pages: 1852-1861
doi>10.1109/TVCG.2011.245

Sparse, irregular sampling is becoming a necessity for reconstructing large and high-dimensional signals. However, the analysis of this type of data remains a challenge. One issue is the robust selection of neighborhoods, a crucial part of analytic tools such as topological decomposition, clustering and gradient estimation. When extracting the topology of sparsely sampled data, common neighborhood strategies such as k-nearest neighbors may lead to inaccurate results, either due to missing neighborhood connections, which introduce false extrema, or due to spurious connections, which conceal true extrema. Other neighborhoods, such as the Delaunay triangulation, are costly to compute and store even in relatively low dimensions. In this paper, we address these issues. We present two new types of neighborhood graphs: a variation on and a generalization of empty region graphs, which considerably improve the robustness of neighborhood-based analysis tools, such as topological decomposition. Our findings suggest that these neighborhood graphs lead to more accurate topological representations of low- and high-dimensional data sets at relatively low cost, both in terms of storage and computation time. We describe the implications of our work in the analysis and visualization of scalar functions, and provide general strategies for computing and applying our neighborhood graphs towards robust data analysis.

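The empty region graphs mentioned in the abstract above can be illustrated with the Gabriel graph, a classic member of that family: points p and q are connected exactly when the disk with diameter pq contains no other sample. This brute-force O(n³) sketch is purely illustrative; the paper's graphs are a variation on and a generalization of this idea, not this construction.

```python
# Illustrative Gabriel graph: connect p, q iff no third point lies inside
# (or on) the disk whose diameter is the segment pq.
def gabriel_edges(points):
    edges = []
    for i, p in enumerate(points):
        for j in range(i + 1, len(points)):
            q = points[j]
            mid = [(a + b) / 2 for a, b in zip(p, q)]
            r2 = sum((a - m) ** 2 for a, m in zip(p, mid))  # squared radius
            if all(sum((a - m) ** 2 for a, m in zip(x, mid)) > r2
                   for k, x in enumerate(points) if k not in (i, j)):
                edges.append((i, j))
    return edges
```

For three points (0,0), (2,0), (1,0.1), the long edge between the outer points is pruned because the middle point falls inside its diametral disk, which is exactly the kind of spurious connection the empty-region test removes.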
Visualization of AMR Data With Multi-Level Dual-Mesh Interpolation
Patrick Moran, David Ellsworth
Pages: 1862-1871
doi>10.1109/TVCG.2011.252

We present a new technique for providing interpolation within cell-centered Adaptive Mesh Refinement (AMR) data that achieves C^0 continuity throughout the 3D domain. Our technique improves on earlier work in that it does not require that adjacent patches differ by at most one refinement level. Our approach takes the dual of each mesh patch and generates "stitching cells" on the fly to fill the gaps between dual meshes. We demonstrate applications of our technique with data from Enzo, an AMR cosmological structure formation simulation code. We show ray-cast visualizations that include contributions from particle data (dark matter and stars, also output by Enzo) and gridded hydrodynamic data. We also show results from isosurface studies, including surfaces in regions where adjacent patches differ by more than one refinement level.

Nodes on Ropes: A Comprehensive Data and Control Flow for Steering Ensemble Simulations
Jurgen Waser, Hrvoje Ribicic, Raphael Fuchs, Christian Hirsch, Benjamin Schindler, Gunther Bloschl, Eduard Groller
Pages: 1872-1881
doi>10.1109/TVCG.2011.225

Flood disasters are the most common natural risk and tremendous efforts are spent to improve their simulation and management. However, simulation-based investigation of actions that can be taken in case of flood emergencies is rarely done. This is in part due to the lack of a comprehensive framework which integrates and facilitates these efforts. In this paper, we tackle several problems which are related to steering a flood simulation. One issue is related to uncertainty. We need to account for uncertain knowledge about the environment, such as levee-breach locations. Furthermore, the steering process has to reveal how these uncertainties in the boundary conditions affect the confidence in the simulation outcome. Another important problem is that the simulation setup is often hidden in a black-box. We expose system internals and show that simulation steering can be comprehensible at the same time. This is important because the domain expert needs to be able to modify the simulation setup in order to include local knowledge and experience. In the proposed solution, users steer parameter studies through the World Lines interface to account for input uncertainties. The transport of steering information to the underlying data-flow components is handled by a novel meta-flow. The meta-flow is an extension to a standard data-flow network, comprising additional nodes and ropes to abstract parameter control. The meta-flow has a visual representation to inform the user about which control operations happen. Finally, we present the idea to use the data-flow diagram itself for visualizing steering information and simulation results. We discuss a case-study in collaboration with a domain expert who proposes different actions to protect a virtual city from imminent flooding. The key to choosing the best response strategy is the ability to compare different regions of the parameter space while retaining an understanding of what is happening inside the data-flow system.

Interactive, Graph-based Visual Analysis of High-dimensional, Multi-parameter Fluorescence Microscopy Data in Toponomics
Steffen Oeltze, Wolfgang Freiler
Pages: 1882-1891
doi>10.1109/TVCG.2011.217

In Toponomics, the functional protein pattern in cells or tissue (the toponome) is imaged and analyzed for applications in toxicology, new drug development and patient-drug-interaction. The most advanced imaging technique is robot-driven multi-parameter fluorescence microscopy. This technique is capable of co-mapping hundreds of proteins and their distribution and assembly in protein clusters across a cell or tissue sample by running cycles of fluorescence tagging with monoclonal antibodies or other affinity reagents, imaging, and bleaching in situ. The imaging results in complex multi-parameter data composed of one slice or a 3D volume per affinity reagent. Biologists are particularly interested in the localization of co-occurring proteins, the frequency of co-occurrence and the distribution of co-occurring proteins across the cell. We present an interactive visual analysis approach for the evaluation of multi-parameter fluorescence microscopy data in toponomics. Multiple, linked views facilitate the definition of features by brushing multiple dimensions. The feature specification result is linked to all views establishing a focus+context visualization in 3D. In a new attribute view, we integrate techniques from graph visualization. Each node in the graph represents an affinity reagent while each edge represents two co-occurring affinity reagent bindings. The graph visualization is enhanced by glyphs which encode specific properties of the binding. The graph view is equipped with brushing facilities. By brushing in the spatial and attribute domain, the biologist achieves a better understanding of the functional protein patterns of a cell. Furthermore, an interactive table view is integrated which summarizes unique fluorescence patterns. We discuss our approach with respect to a cell probe containing lymphocytes and a prostate tissue section.

Tuner: Principled Parameter Finding for Image Segmentation Algorithms Using Visual Response Surface Exploration
Thomas Torsney-Weir, Ahmed Saad, Torsten Moller, Hans-Christian Hege, Britta Weber, Jean-Marc Verbavatz
Pages: 1892-1901
doi>10.1109/TVCG.2011.248

In this paper we address the difficult problem of parameter-finding in image segmentation. We replace a tedious manual process that is often based on guess-work and luck by a principled approach that systematically explores the parameter space. Our core idea is the following two-stage technique: We start with a sparse sampling of the parameter space and apply a statistical model to estimate the response of the segmentation algorithm. The statistical model incorporates a model of uncertainty of the estimation which we use in conjunction with the actual estimate in (visually) guiding the user towards areas that need refinement by placing additional sample points. In the second stage the user navigates through the parameter space in order to determine areas where the response value (goodness of segmentation) is high. In our exploration we rely on existing ground-truth images in order to evaluate the "goodness" of an image segmentation technique. We evaluate its usefulness by demonstrating this technique on two image segmentation algorithms: a three parameter model to detect microtubules in electron tomograms and an eight parameter model to identify functional regions in dynamic Positron Emission Tomography scans.

Branching and Circular Features in High Dimensional Data
Bei Wang, Brian Summa, Valerio Pascucci, Mikael Vejdemo-Johansson
Pages: 1902-1911
doi>10.1109/TVCG.2011.177

Large
observations and simulations in scientific research give rise to
high-dimensional data sets that present many challenges and
opportunities in data analysis and visualization. Researchers in
application domains such as engineering, computational biology, climate
study, imaging and motion capture are faced with the problem of how to
discover compact representations of high-dimensional data while
preserving their intrinsic structure. In many applications, the original
data is projected onto low-dimensional space via dimensionality
reduction techniques prior to modeling. One problem with this approach
is that the projection step in the process can fail to preserve
structure in the data that is only apparent in high dimensions.
Conversely, such techniques may create structural illusions in the
projection, implying structure not present in the original
high-dimensional data. Our solution is to utilize topological techniques
to recover important structures in high-dimensional data that contains
non-trivial topology. Specifically, we are interested in
high-dimensional branching structures. We construct local circle-valued
coordinate functions to represent such features. Subsequently, we
perform dimensionality reduction on the data while ensuring such
structures are visually preserved. Additionally, we study the effects of
global circular structures on visualizations. Our results reveal
never-before-seen structures on real-world data sets from a variety of
applications.
|
|
Features in Continuous Parallel Coordinates |
|
Dirk J. Lehmann,
Holger Theisel
|
|
Pages: 1912-1921 |
|
doi>10.1109/TVCG.2011.200 |
|
Continuous
Parallel Coordinates (CPC) are a contemporary visualization technique
for combining several scalar fields given over a common domain.
They facilitate a continuous view of parallel coordinates by
considering a smooth scalar field instead of a finite number of straight
lines. We show that there are feature curves in CPC which appear to be
the dominant structures of a CPC. We present methods to extract and
classify them and demonstrate their usefulness to enhance the
visualization of CPCs. In particular, we show that these feature curves
are related to discontinuities in Continuous Scatterplots (CSP). We show
this by exploiting a curve-curve duality between parallel and Cartesian
coordinates, which is a generalization of the well-known point-line
duality. Furthermore, we illustrate the theoretical considerations.
Finally, we discuss how the features of CPCs and CSPs relate to
data analysis.
|
|
About the Influence of Illumination Models on Image Comprehension in Direct Volume Rendering |
|
Florian Lindemann,
Timo Ropinski
|
|
Pages: 1922-1931 |
|
doi>10.1109/TVCG.2011.161 |
|
In
this paper, we present a user study in which we have investigated the
influence of seven state-of-the-art volumetric illumination models on
the spatial perception of volume rendered images. Within the study, we
have compared gradient-based shading with half angle slicing,
directional occlusion shading, multidirectional occlusion shading,
shadow volume propagation, spherical harmonic lighting as well as
dynamic ambient occlusion. To evaluate these models, users had to solve
three tasks relying on correct depth as well as size perception. Our
motivation for these three tasks was to find relations between the used
illumination model, user accuracy and the elapsed time. In an additional
task, users had to subjectively judge the output of the tested models.
After first reviewing the models and their features, we will introduce
the individual tasks and discuss their results. We discovered
statistically significant differences in the testing performance of the
techniques. Based on these findings, we have analyzed the models and
extracted those features which are possibly relevant for the improved
spatial comprehension in a relational task. We believe that a
combination of these distinctive features could pave the way for a novel
illumination model, which would be optimized based on our findings.
|
|
Automatic Transfer Functions Based on Informational Divergence |
|
Marc Ruiz,
Anton Bardera,
Imma Boada,
Ivan Viola
|
|
Pages: 1932-1941 |
|
doi>10.1109/TVCG.2011.173 |
|
In
this paper we present a framework to define transfer functions from a
target distribution provided by the user. A target distribution can
reflect data importance, a highly relevant data value interval, or a
spatial segmentation. Our approach is based on a communication channel
between a set of viewpoints and a set of bins of a volume data set, and
it supports 1D as well as 2D transfer functions including the gradient
information. The transfer functions are obtained by minimizing the
informational divergence or Kullback-Leibler distance between the
visibility distribution captured by the viewpoints and a target
distribution selected by the user. The use of the derivative of the
informational divergence allows for a fast optimization process.
Different target distributions for 1D and 2D transfer functions are
analyzed together with importance-driven and view-based techniques.
|
|
The Effect of Colour and Transparency on the Perception of Overlaid Grids |
|
Lyn Bartram,
Billy Cheung,
Maureen Stone
|
|
Pages: 1942-1948 |
|
doi>10.1109/TVCG.2011.242 |
|
Overlaid
reference elements need to be sufficiently visible to effectively
relate to the underlying information, but not so obtrusive that they
clutter the presentation. We seek to create guidelines for presenting
such structures through experimental studies to define boundary
conditions for visual intrusiveness. We base our work on the practice of
designers, who use transparency to integrate overlaid grids with their
underlying imagery. Previous work discovered a useful range of alpha
values for black or white grids overlaid on scatterplot images rendered
in shades of gray over gray backgrounds of different lightness values.
This work compares black grids to blue and red ones on different image
types of scatterplots and maps. We expected that the coloured grids over
grayscale images would be more visually salient than black ones,
resulting in lower alpha values. Instead, we found that there was no
significant difference between the boundaries set for red and black
grids, but that the boundaries for blue grids were set consistently
higher (more opaque). As in our previous study, alpha values are
affected by image density rather than image type, and are consistently
lower than many default settings. These results have implications for
the design of subtle reference structures.
|
|
Flow Radar Glyphs—Static Visualization of Unsteady Flow with Uncertainty |
|
Marcel Hlawatsch,
Philipp Leube,
Wolfgang Nowak,
Daniel Weiskopf
|
|
Pages: 1949-1958 |
|
doi>10.1109/TVCG.2011.203 |
|
A
new type of glyph is introduced to visualize unsteady flow with static
images, allowing easier analysis of time-dependent phenomena compared to
animated visualization. Adopting the visual metaphor of radar displays,
this glyph represents flow directions by angles and time by radius in
spherical coordinates. Dense seeding of flow radar glyphs on the flow
domain naturally lends itself to multi-scale visualization: zoomed-out
views show aggregated overviews, zooming-in enables detailed analysis of
spatial and temporal characteristics. Uncertainty visualization is
supported by extending the glyph to display possible ranges of flow
directions. The paper focuses on 2D flow, but includes a discussion of
3D flow as well. Examples from CFD and the field of stochastic
hydrogeology show that it is easy to discriminate regions of different
spatiotemporal flow behavior and regions of different uncertainty
variations in space and time. The examples also demonstrate that
parameter studies can be analyzed because the glyph design facilitates
comparative visualization. Finally, different variants of interactive
GPU-accelerated implementations are discussed.
|
|
iView: A Feature Clustering Framework for Suggesting Informative Views in Volume Visualization |
|
Ziyi Zheng,
Nafees Ahmed,
Klaus Mueller
|
|
Pages: 1959-1968 |
|
doi>10.1109/TVCG.2011.218 |
|
The
unguided visual exploration of volumetric data can be both a
challenging and a time-consuming undertaking. Identifying a set of
favorable vantage points at which to start exploratory expeditions can
greatly reduce this effort and can also ensure that no important
structures are being missed. Recent research efforts have focused on
entropy-based viewpoint selection criteria that depend on scalar values
describing the structures of interest. In contrast, we propose a
viewpoint suggestion pipeline that is based on feature-clustering in
high-dimensional space. We use gradient/normal variation as a metric to
identify interesting local events and then cluster these via k-means to
detect important salient composite features. Next, we compute the
maximum possible exposure of these composite features for different
viewpoints and calculate a 2D entropy map parameterized in longitude and
latitude to point out promising view orientations. Superimposed onto an
interactive track-ball interface, users can then directly use this
entropy map to quickly navigate to potentially interesting viewpoints
where visibility-based transfer functions can be employed to generate
volume renderings that minimize occlusions. To give full exploration
freedom to the user, the entropy map is updated on the fly whenever a
view has been selected, pointing to new and promising but so far unseen
view directions. Alternatively, our system can also use a set-cover
optimization algorithm to provide a minimal set of views needed to
observe all features. The views so generated could then be saved into a
list for further inspection or into a gallery for a summary
presentation.
|
|
Volume Analysis Using Multimodal Surface Similarity |
|
Martin Haidacher,
Stefan Bruckner,
Eduard Groller
|
|
Pages: 1969-1978 |
|
doi>10.1109/TVCG.2011.258 |
|
The
combination of volume data acquired by multiple modalities has been
recognized as an important but challenging task. Modalities often differ
in the structures they can delineate and their joint information can be
used to extend the classification space. However, they frequently
exhibit differing types of artifacts which makes the process of
exploiting the additional information non-trivial. In this paper, we
present a framework based on an information-theoretic measure of
isosurface similarity between different modalities to overcome these
problems. The resulting similarity space provides a concise overview of
the differences between the two modalities, and also serves as the basis
for an improved selection of features. Multimodal classification is
expressed in terms of similarities and dissimilarities between the
isosurfaces of individual modalities, instead of data value
combinations. We demonstrate that our approach can be used to robustly
extract features in applications such as dual energy computed tomography
of parts in industrial manufacturing.
|
|
Asymmetric Tensor Field Visualization for Surfaces |
|
Darrel Palke,
Zhongzang Lin,
Guoning Chen,
Harry Yeh,
Paul Vincent,
Robert Laramee,
Eugene Zhang
|
|
Pages: 1979-1988 |
|
doi>10.1109/TVCG.2011.170 |
|
Asymmetric
tensor field visualization can provide important insight into fluid
flows and solid deformations. Existing techniques for asymmetric tensor
fields focus on the analysis, and simply use evenly-spaced
hyperstreamlines on surfaces following eigenvectors and
dual-eigenvectors in the tensor field. In this paper, we describe a
hybrid visualization technique in which hyperstreamlines and elliptical
glyphs are used in real and complex domains, respectively. This enables a
more faithful representation of flow behaviors inside complex domains.
In addition, we encode tensor magnitude, an important quantity in tensor
field analysis, using the density of hyperstreamlines and sizes of
glyphs. This allows colors to be used to encode other important tensor
quantities. To facilitate quick visual exploration of the data from
different viewpoints and at different resolutions, we employ an
efficient image-space approach in which hyperstreamlines and glyphs are
generated quickly in the image plane. The combination of these
techniques leads to an efficient tensor field visualization system for
domain scientists. We demonstrate the effectiveness of our visualization
technique through applications to complex simulated engine fluid flow
and earthquake deformation data. Feedback from domain expert scientists,
who are also co-authors, is provided.
|
|
An Interactive Local Flattening Operator to Support Digital Investigations on Artwork Surfaces |
|
Nico Pietroni,
Massimiliano Corsini,
Paolo Cignoni,
Roberto Scopigno
|
|
Pages: 1989-1996 |
|
doi>10.1109/TVCG.2011.165 |
|
Analyzing
either high-frequency shape detail or any other 2D fields (scalar or
vector) embedded over a 3D geometry is a complex task, since detaching
the detail from the overall shape can be tricky. An alternative approach
is to move to the 2D space, resolving shape reasoning to easier image
processing techniques. In this paper we propose a novel framework for
the analysis of 2D information distributed over 3D geometry, based on a
locally smooth parametrization technique that allows us to treat local
3D data in terms of image content. The proposed approach has been
implemented as a sketch-based system that allows the user to design,
with a few gestures, a set of (possibly overlapping) parameterizations of
rectangular portions of the surface. We demonstrate that, due to the
locality of the parametrization, the distortion is under an acceptable
threshold, while discontinuities can be avoided since the parametrized
geometry is always homeomorphic to a disk. We show the effectiveness of
the proposed technique to solve specific Cultural Heritage (CH) tasks:
the analysis of chisel marks over the surface of an unfinished sculpture
and the local comparison of multiple photographs mapped over the surface
of an artwork. For this very difficult task, we believe that our
framework and the corresponding tool are the first steps toward a
computer-based shape reasoning system, able to support CH scholars with a
medium they are more used to.
|
|
Context Preserving Maps of Tubular Structures |
|
Joseph Marino
|
|
Pages: 1997-2004 |
|
doi>10.1109/TVCG.2011.182 |
|
When
visualizing tubular 3D structures, external representations are often
used for guidance and display, and such views in 2D can often contain
occlusions. Virtual dissection methods have been proposed where the
entire 3D structure can be mapped to the 2D plane, though these will
lose context by straightening curved sections. We present a new method
of creating maps of 3D tubular structures that yield a succinct view
while preserving the overall geometric structure. Given a dominant view
plane for the structure, its curve skeleton is first projected to a 2D
skeleton. This 2D skeleton is adjusted to account for distortions in
length, modified to remove intersections, and optimized to preserve the
shape of the original 3D skeleton. Based on this shaped 2D skeleton, a
boundary for the map of the object is obtained based on a slicing path
through the structure and the radius around the skeleton. The sliced
structure is conformally mapped to a rectangle and then deformed via
harmonic mapping to match the boundary placement. This flattened map
preserves the general geometric context of a 3D object in a 2D display,
and rendering of this flattened map can be accomplished using volumetric
ray casting. We have evaluated our method on real datasets of human
colon models.
|
|
Authalic Parameterization of General Surfaces Using Lie Advection |
|
Guangyu Zou,
Jiaxi Hu,
Xianfeng Gu,
Jing Hua
|
|
Pages: 2005-2014 |
|
doi>10.1109/TVCG.2011.171 |
|
Parameterization
of complex surfaces constitutes a major means of visualizing highly
convoluted geometric structures as well as other properties associated
with the surface. It also provides users with the ability to navigate,
orient, and focus on regions of interest within a global view and
overcome the occlusions to inner concavities. In this paper, we propose a
novel area-preserving surface parameterization method which is rigorous
in theory, moderate in computation, yet easily extendable to surfaces
of non-disc and closed-boundary topologies. Starting from the distortion
induced by an initial parameterization, an area restoring diffeomorphic
flow is constructed as a Lie advection of differential 2-forms along
the manifold, which yields equality of the area elements between the
domain and the original surface at its final state. Existence and
uniqueness of result are assured through an analytical derivation. Based
upon a triangulated surface representation, we also present an
efficient algorithm in line with discrete differential modeling. As an
exemplar application, the utilization of this method for the effective
visualization of brain cortical imaging modalities is presented.
Compared with conformal methods, our method can reveal more subtle
surface patterns in a quantitative manner. It, therefore, provides a
competitive alternative to the existing parameterization techniques for
better surface-based analysis in various scenarios.
|
|
TransGraph: Hierarchical Exploration of Transition Relationships in Time-Varying Volumetric Data |
|
Yi Gu,
Chaoli Wang
|
|
Pages: 2015-2024 |
|
doi>10.1109/TVCG.2011.246 |
|
A
fundamental challenge for time-varying volume data analysis and
visualization is the lack of capability to observe and track data change
or evolution in an occlusion-free, controllable, and adaptive fashion.
In this paper, we propose to organize a time-varying data set into a
hierarchy of states. By deriving transition probabilities among states,
we construct a global map that captures the essential transition
relationships in the time-varying data. We introduce the TransGraph, a
graph-based representation to visualize hierarchical state transition
relationships. The TransGraph not only provides a visual mapping that
abstracts data evolution over time in different levels of detail, but
also serves as a navigation tool that guides data exploration and
tracking. The user interacts with the TransGraph and makes connections to
the volumetric data through brushing and linking. A set of intuitive
queries is provided to enable knowledge extraction from time-varying
data. We test our approach with time-varying data sets of different
characteristics and the results show that the TransGraph can effectively
augment our ability to understand time-varying data.
|
|
Voronoi-Based Extraction and Visualization of Molecular Paths |
|
Norbert Lindow,
Daniel Baum,
Hans-Christian Hege
|
|
Pages: 2025-2034 |
|
doi>10.1109/TVCG.2011.259 |
|
Visual
analysis is widely used to study the behavior of molecules. Of
particular interest are the analysis of molecular interactions and the
investigation of binding sites. For large molecules, however, it is
difficult to detect possible binding sites and paths leading to these
sites by pure visual inspection. In this paper, we present new methods
for the computation and visualization of potential molecular paths.
Using a novel filtering method, we extract the significant paths from
the Voronoi diagram of spheres. For the interactive visualization of
molecules and their paths, we present several methods using deferred
shading and other state-of-the-art techniques. To allow for a fast
overview of reachable regions of the molecule, we illuminate the
molecular surface using a large number of light sources placed on the
extracted paths. We also provide a method to compute the extension
surface of selected paths and visualize it using the skin surface.
Furthermore, we use the extension surface to clip the molecule to allow
easy visual tracking of even deeply buried paths. The methods are
applied to several proteins to demonstrate their usefulness.
|
|
Symmetry in Scalar Field Topology |
|
Dilip Mathew Thomas,
Vijay Natarajan
|
|
Pages: 2035-2044 |
|
doi>10.1109/TVCG.2011.236 |
|
Study
of symmetric or repeating patterns in scalar fields is important in
scientific data analysis because it gives deep insights into the
properties of the underlying phenomenon. Though geometric symmetry has
been well studied within areas like shape processing, identifying
symmetry in scalar fields has remained largely unexplored due to the
high computational cost of the associated algorithms. We propose a
computationally efficient algorithm for detecting symmetric patterns in a
scalar field distribution by analysing the topology of level sets of
the scalar field. Our algorithm computes the contour tree of a given
scalar field and identifies subtrees that are similar. We define a
robust similarity measure for comparing subtrees of the contour tree and
use it to group similar subtrees together. Regions of the domain
corresponding to subtrees that belong to a common group are extracted
and reported to be symmetric. Identifying symmetry in scalar fields
finds applications in visualization, data exploration, and feature
detection. We describe two applications in detail: symmetry-aware
transfer function design and symmetry-aware isosurface extraction.
|
|
A Scale Space Based Persistence Measure for Critical Points in 2D Scalar Fields |
|
Jan Reininghaus,
Natallia Kotava,
David Guenther,
Jens Kasten,
Hans Hagen,
Ingrid Hotz
|
|
Pages: 2045-2052 |
|
doi>10.1109/TVCG.2011.159 |
|
This
paper introduces a novel importance measure for critical points in 2D
scalar fields. This measure is based on a combination of the deep
structure of the scale space with the well-known concept of homological
persistence. We enhance the noise-robust persistence measure by
implicitly taking the hill-, ridge- and outlier-like spatial extent of
maxima and minima into account. This allows for the distinction between
different types of extrema based on their persistence at multiple
scales. Our importance measure can be computed efficiently in an
out-of-core setting. To demonstrate the practical relevance of our
method we apply it to a synthetic and a real-world data set and evaluate
its performance and scalability.
|
|
Evaluation of Trend Localization with Multi-Variate Visualizations |
|
Mark Livingston,
Jonathan Decker
|
|
Pages: 2053-2062 |
|
doi>10.1109/TVCG.2011.194 |
|
Multi-valued
data sets are increasingly common, with the number of dimensions
growing. A number of multi-variate visualization techniques have been
presented to display such data. However, evaluating the utility of such
techniques for general data sets remains difficult. Thus most techniques
are studied on only one data set. Another criticism that could be
levied against previous evaluations of multi-variate visualizations is
that the task does not require the presence of multiple variables. At the
same time, the taxonomy of tasks that users may perform visually is
extensive. We designed a task, trend localization, that required
comparison of multiple data values in a multi-variate visualization. We
then conducted a user study with this task, evaluating five multi-variate
visualization techniques from the literature (Brush Strokes,
Data-Driven Spots, Oriented Slivers, Color Blending, Dimensional
Stacking) and juxtaposed grayscale maps. We report the results and
discuss the implications for both the techniques and the task.
|
|
Straightening Tubular Flow for Side-by-Side Visualization |
|
Paolo Angelelli,
Helwig Hauser
|
|
Pages: 2063-2070 |
|
doi>10.1109/TVCG.2011.235 |
|
Flows
through tubular structures are common in many fields, including blood
flow in medicine and tubular fluid flows in engineering. The analysis of
such flows is often done with a strong reference to the main flow
direction along the tubular boundary. In this paper we present an
approach for straightening the visualization of tubular flow. By
aligning the main reference direction of the flow, i.e., the center line
of the bounding tubular structure, with one axis of the screen, we are
able to natively juxtapose (1.) different visualizations of the same
flow, either utilizing different flow visualization techniques, or by
varying parameters of a chosen approach such as the choice of seeding
locations for integration-based flow visualization, (2.) the different
time steps of a time-dependent flow, (3.) different projections around
the center line, and (4.) quantitative flow visualizations in immediate
spatial relation to the more qualitative classical flow visualization.
We describe how to utilize this approach for an informative interactive
visual analysis. We demonstrate the potential of our approach by
visualizing two datasets from two different fields: an arterial blood
flow measurement and a tubular gas flow simulation from the automotive
industry.
|
|
Vortex Visualization in Ultra Low Reynolds Number Insect Flight |
|
Christopher Koehler,
Thomas Wischgoll,
Haibo Dong,
Zachary Gaston
|
|
Pages: 2071-2079 |
|
doi>10.1109/TVCG.2011.260 |
|
We
present the visual analysis of a biologically inspired CFD simulation
of the deformable flapping wings of a dragonfly as it takes off and
begins to maneuver, using vortex detection and integration-based flow
lines. The additional seed placement and perceptual challenges
introduced by having multiple dynamically deforming objects in the
highly unsteady 3D flow domain are addressed. A brief overview of the
high speed photogrammetry setup used to capture the dragonfly takeoff,
parametric surfaces used for wing reconstruction, CFD solver and
underlying flapping flight theory is presented to clarify the importance
of several unsteady flight mechanisms, such as the leading edge vortex,
that are captured visually. A novel interactive seed placement method
is used to simplify the generation of seed curves that stay in the
vicinity of relevant flow phenomena as they move with the flapping
wings. This method allows a user to define and evaluate the quality of a
seed's trajectory over time while working with a single time step. The
seed curves are then used to place particles, streamlines and
generalized streak lines. The novel concept of flowing seeds is also
introduced in order to add visual context about the instantaneous vector
fields surrounding smoothly animated streak lines. Tests show this
method to be particularly effective at visually capturing vortices that
move quickly or that exist for a very brief period of time. In addition,
an automatic camera animation method is used to address occlusion
issues caused when animating the immersed wing boundaries alongside many
geometric flow lines. Each visualization method is presented at
multiple time steps during the up-stroke and down-stroke to highlight
the formation, attachment and shedding of the leading edge vortices in
pairs of wings. Also, the visualizations show evidence of wake capture
at stroke reversal which suggests the existence of previously unknown
unsteady lift generation mechanisms that are unique to quad wing
insects.
|
|
Two-Dimensional Time-Dependent Vortex Regions Based on the Acceleration Magnitude |
|
Jens Kasten,
Jan Reininghaus,
Ingrid Hotz,
Hans-Christian Hege
|
|
Pages: 2080-2087 |
|
doi>10.1109/TVCG.2011.249 |
|
Acceleration
is a fundamental quantity of flow fields that captures Galilean
invariant properties of particle motion. Considering the magnitude of
this field, minima represent characteristic structures of the flow that
can be classified as saddle- or vortex-like. We made the interesting
observation that vortex-like minima are enclosed by particularly
pronounced ridges. This makes it possible to define boundaries of vortex
regions in a parameter-free way. Utilizing scalar field topology, a
robust algorithm can be designed to extract such boundaries. They can be
arbitrarily shaped. An efficient tracking algorithm allows us to
display the temporal evolution of vortices. Various vortex models are
used to evaluate the method. We apply our method to two-dimensional
model systems from computational fluid dynamics and compare the results
to those arising from existing definitions.
|
|
Adaptive Extraction and Quantification of Geophysical Vortices |
|
Sean Williams,
Mark Petersen,
Peer-Timo Bremer,
Matthew Hecht,
Valerio Pascucci,
James Ahrens,
Mario Hlawitschka,
Bernd Hamann
|
|
Pages: 2088-2095 |
|
doi>10.1109/TVCG.2011.162 |
|
We
consider the problem of extracting discrete two-dimensional vortices
from a turbulent flow. In our approach we use a reference model
describing the expected physics and geometry of an idealized vortex. The
model allows us to derive a novel correlation between the size of the
vortex and its strength, measured as the square of its strain minus the
square of its vorticity. For vortex detection in real models we use the
strength parameter to locate potential vortex cores, then measure the
similarity of our ideal analytical vortex and the real vortex core for
different strength thresholds. This approach provides a metric for how
well a vortex core is modeled by an ideal vortex. Moreover, this
provides insight into the problem of choosing the thresholds that
identify a vortex. By selecting a target coefficient of determination
(i.e., statistical confidence), we determine on a per-vortex basis what
threshold of the strength parameter would be required to extract that
vortex at the chosen confidence. We validate our approach on real data
from a global ocean simulation and derive from it a map of expected
vortex strengths over the global ocean.
|
|
FoamVis: Visualization of 2D Foam Simulation Data |
|
Dan Lipsa,
Robert Laramee,
Simon Cox,
Tudur Davies
|
|
Pages: 2096-2105 |
|
doi>10.1109/TVCG.2011.204 |
|
Research
in the field of complex fluids such as polymer solutions, particulate
suspensions and foams studies how the flow of fluids with different
material parameters changes as a result of various constraints. Surface
Evolver, the standard solver software used to generate foam simulations,
provides large, complex, time-dependent data sets with hundreds or
thousands of individual bubbles and thousands of time steps. However,
this software has limited visualization capabilities, and no
foam-specific visualization software exists. We describe the foam research
application area where, we believe, visualization has an important role
to play. We present a novel application that provides various techniques
for visualization, exploration and analysis of time-dependent 2D foam
simulation data. We show new features in foam simulation data and new
insights into foam behavior discovered using our application.
|
|
WYSIWYG (What You See is What You Get) Volume Visualization |
|
Hanqi Guo,
Ningyu Mao,
Xiaoru Yuan
|
|
Pages: 2106-2114 |
|
doi>10.1109/TVCG.2011.261 |
|
In
this paper, we propose a volume visualization system that accepts
direct manipulation through a sketch-based What You See Is What You Get
(WYSIWYG) approach. Similar to the operations in painting applications
for 2D images, in our system, a full set of tools has been developed to
enable direct volume rendering manipulation of color, transparency,
contrast, brightness, and other optical properties by brushing a few
strokes on top of the rendered volume image. To be able to smartly
identify the targeted features of the volume, our system matches the
sparse sketching input with the clustered features both in image space
and volume space. To achieve interactivity, special algorithms that
accelerate both input identification and feature matching have been
developed and implemented in our system. Without resorting to tuning
transfer function parameters, our proposed system accepts sparse stroke
inputs and provides users with intuitive, flexible and effective
interaction during volume data exploration and visualization.
|
|
Interactive Volume Visualization of General Polyhedral Grids |
|
Philipp Muigg,
Markus Hadwiger,
Helmut Doleisch,
Eduard Groller
|
|
Pages: 2115-2124 |
|
doi>10.1109/TVCG.2011.216 |
|
This
paper presents a novel framework for visualizing volumetric data
specified on complex polyhedral grids, without the need to perform any
kind of a priori tetrahedralization. These grids are composed of
polyhedra that often are non-convex and have an arbitrary number of
faces, where the faces can be non-planar with an arbitrary number of
vertices. The importance of such grids in state-of-the-art simulation
packages is increasing rapidly. We propose a very compact, face-based
data structure for representing such meshes for visualization, called
two-sided face sequence lists (TSFSL), as well as an algorithm for
direct GPU-based ray-casting using this representation. The TSFSL data
structure is able to represent the entire mesh topology in a 1D TSFSL
data array of face records, which facilitates the use of efficient 1D
texture accesses for visualization. In order to scale to large data
sizes, we employ a mesh decomposition into bricks that can be handled
independently, where each brick is then composed of its own TSFSL array.
This bricking enables memory savings and performance improvements for
large meshes. We illustrate the feasibility of our approach with
real-world application results, by visualizing highly complex polyhedral
data from commercial state-of-the-art simulation packages.
|
|
Image Plane Sweep Volume Illumination |
|
Erik Sunden,
Anders Ynnerman,
Timo Ropinski
|
|
Pages: 2125-2134 |
|
doi>10.1109/TVCG.2011.211 |
|
In
recent years, many volumetric illumination models have been proposed,
which have the potential to simulate advanced lighting effects and thus
support improved image comprehension. Although volume ray-casting is
widely accepted as the volume rendering technique which achieves the
highest image quality, so far no volumetric illumination algorithm has
been designed to be directly incorporated into the ray-casting process.
In this paper we propose image plane sweep volume illumination (IPSVI),
which allows the integration of advanced illumination effects into a
GPU-based volume ray-caster by exploiting the plane sweep paradigm.
Thus, we are able to reduce the problem complexity and achieve
interactive frame rates, while supporting scattering as well as
shadowing. Since all illumination computations are performed directly
within a single rendering pass, IPSVI does not require any preprocessing
nor does it need to store intermediate results within an illumination
volume. It therefore has a significantly lower memory footprint than
other techniques. This makes IPSVI directly applicable to large data
sets. Furthermore, the integration into a GPU-based ray-caster allows
for high image quality as well as improved rendering performance by
exploiting early ray termination. This paper discusses the theory behind
IPSVI, describes its implementation, demonstrates its visual results
and provides performance measurements.
|
|
Interactive Multiscale Tensor Reconstruction for Multiresolution Volume Visualization |
|
Susanne K. Suter,
Jose A. Iglesias Guitian,
Fabio Marton,
Marco Agus,
Andreas Elsener,
Christoph P. E. Zollikofer,
M. Gopi,
Enrico Gobbetti,
Renato Pajarola
|
|
Pages: 2135-2143 |
|
doi>10.1109/TVCG.2011.214 |
|
Large
scale and structurally complex volume datasets from high-resolution 3D
imaging devices or computational simulations pose a number of technical
challenges for interactive visual analysis. In this paper, we present
the first integration of a multiscale volume representation based on
tensor approximation within a GPU-accelerated out-of-core
multiresolution rendering framework. Specific contributions include (a) a
hierarchical brick-tensor decomposition approach for pre-processing
large volume data, (b) a GPU accelerated tensor reconstruction
implementation exploiting CUDA capabilities, and (c) an effective
tensor-specific quantization strategy for reducing data transfer
bandwidth and out-of-core memory footprint. Our multiscale
representation allows for the extraction, analysis and display of
structural features at variable spatial scales, while adaptive
level-of-detail rendering methods make it possible to interactively
explore large datasets within a constrained memory footprint. The
quality and performance of our prototype system are evaluated on large
structurally complex datasets, including gigabyte-sized
micro-tomographic volumes.
|
|
An Efficient Direct Volume Rendering Approach for Dichromats |
|
Weifeng Chen,
Wei Chen,
Hujun Bao
|
|
Pages: 2144-2152 |
|
doi>10.1109/TVCG.2011.164 |
|
Color
vision deficiency (CVD) affects a high percentage of the population
worldwide. When seeing a volume visualization result, persons with CVD
may be incapable of discriminating the classification information
expressed in the image if the color transfer function or the color
blending used in the direct volume rendering is not appropriate.
Conventional methods used to address this problem adopt advanced image
recoloring techniques to enhance the rendering results frame-by-frame;
unfortunately, problematic perceptual results may still be generated.
This paper proposes an alternative solution that complements the image
recoloring scheme by reconfiguring the components of the direct volume
rendering (DVR) pipeline. Our approach optimizes the mapped colors of a
transfer function to simulate the CVD-friendly effect that is generated by
applying the image recoloring to the results with the initial transfer
function. The optimization process has a low computational complexity,
and only needs to be performed once for a given transfer function. To
achieve detail-preserving and perceptually natural semi-transparent
effects, we introduce a new color composition mode that works in the
color space of dichromats. Experimental results and a pilot study
demonstrate that our approach can yield dichromat-friendly and
consistent volume visualization in real time.
|
|
Interactive Virtual Probing of 4D MRI Blood-Flow |
|
Roy van Pelt,
Javier Olivan Bescos,
Marcel Breeuwer,
Rachel E. Clough,
M. Eduard Groller,
Bart ter Haar Romeny,
Anna Vilanova
|
|
Pages: 2153-2162 |
|
doi>10.1109/TVCG.2011.215 |
|
Better
understanding of hemodynamics conceivably leads to improved diagnosis
and prognosis of cardiovascular diseases. Therefore, an elaborate
analysis of the blood-flow in heart and thoracic arteries is essential.
Contemporary MRI techniques enable acquisition of quantitative
time-resolved flow information, resulting in 4D velocity fields that
capture the blood-flow behavior. Visual exploration of these fields
provides comprehensive insight into the unsteady blood-flow behavior,
and precedes a quantitative analysis of additional blood-flow
parameters. The complete inspection requires accurate segmentation of
anatomical structures, encompassing a time-consuming and
hard-to-automate process, especially for malformed morphologies. We
present a way to avoid the laborious segmentation process in case of
qualitative inspection, by introducing an interactive virtual probe.
This probe is positioned semi-automatically within the blood-flow field,
and serves as a navigational object for visual exploration. The
difficult task of determining position and orientation along the
view-direction is automated by a fitting approach, aligning the probe
with the orientations of the velocity field. The aligned probe provides
an interactive seeding basis for various flow visualization approaches.
We demonstrate illustration-inspired particles, integral lines and
integral surfaces, conveying distinct characteristics of the unsteady
blood-flow. Lastly, we present the results of an evaluation with domain
experts, valuing the practical use of our probe and flow visualization
techniques.
|
|
Crepuscular Rays for Tumor Accessibility Planning |
|
Rostislav Khlebnikov,
Bernhard Kainz,
Judith Muehl,
Dieter Schmalstieg
|
|
Pages: 2163-2172 |
|
doi>10.1109/TVCG.2011.184 |
|
In
modern clinical practice, planning access paths to volumetric target
structures remains one of the most important and most complex tasks, and
a physician's insufficient experience in this can lead to severe
complications or even the death of the patient. In this paper, we
present a method for safety evaluation and the visualization of access
paths to assist physicians during preoperative planning. As a metaphor
for our method, we employ a well-known, and thus intuitively
perceivable, natural phenomenon that is usually called crepuscular rays.
Using this metaphor, we propose several ways to compute the safety of
paths from the region of interest to all tumor voxels and show how this
information can be visualized in real-time using a multi-volume
rendering system. Furthermore, we show how to estimate the extent of
connected safe areas to improve common medical 2D multi-planar
reconstruction (MPR) views. We evaluate our method by means of expert
interviews, an online survey, and a retrospective evaluation of 19 real
abdominal radio-frequency ablation (RFA) interventions, with expert
decisions serving as a gold standard. The evaluation results show clear
evidence that our method can be successfully applied in clinical
practice without introducing substantial overhead work for the acting
personnel. Finally, we show that our method is not limited to medical
applications and that it can also be useful in other fields.
|
|
Distance Visualization for Interactive 3D Implant Planning |
|
Christian Dick,
Rainer Burgkart,
Rudiger Westermann
|
|
Pages: 2173-2182 |
|
doi>10.1109/TVCG.2011.189 |
|
An
instant and quantitative assessment of spatial distances between two
objects plays an important role in interactive applications such as
virtual model assembly, medical operation planning, or computational
steering. While some research has been done on the development of
distance-based measures between two objects, only very few attempts have
been reported to visualize such measures in interactive scenarios. In
this paper we present two different approaches for this purpose, and we
investigate the effectiveness of these approaches for intuitive 3D
implant positioning in a medical operation planning system. The first
approach uses cylindrical glyphs to depict distances, which smoothly
adapt their shape and color to changing distances when the objects are
moved. This approach computes distances directly on the polygonal object
representations by means of ray/triangle mesh intersection. The second
approach introduces a set of slices as additional geometric structures,
and uses color coding on surfaces to indicate distances. This approach
obtains distances from a precomputed distance field of each object. The
major findings of the performed user study indicate that a visualization
that can facilitate an instant and quantitative analysis of distances
between two objects in interactive 3D scenarios is demanding, yet can be
achieved by including additional monocular cues into the visualization.
|
|
The FLOWLENS: A Focus-and-Context Visualization Approach for Exploration of Blood Flow in Cerebral Aneurysms |
|
Rocco Gasteiger,
Mathias Neugebauer,
Oliver Beuing,
Bernhard Preim
|
|
Pages: 2183-2192 |
|
doi>10.1109/TVCG.2011.243 |
|
Blood
flow and derived data are essential to investigate the initiation and
progression of cerebral aneurysms as well as their risk of rupture. An
effective visual exploration of several hemodynamic attributes like the
wall shear stress (WSS) and the inflow jet is necessary to understand
the hemodynamics. Moreover, the correlation between focus-and-context
attributes is of particular interest. An expressive visualization of
these attributes and anatomic information requires appropriate
visualization techniques to minimize visual clutter and occlusions. We
present the FLOWLENS as a focus-and-context approach that addresses
these requirements. We group relevant hemodynamic attributes to pairs of
focus-and-context attributes and assign them to different anatomic
scopes. For each scope, we propose several FLOWLENS visualization
templates to provide a flexible visual filtering of the involved
hemodynamic pairs. A template consists of the visualization of the focus
attribute and the additional depiction of the context attribute inside
the lens. Furthermore, the FLOWLENS supports local probing and the
exploration of attribute changes over time. The FLOWLENS minimizes
visual clutter and occlusions, and provides a flexible exploration of a
region of interest. We have applied our approach to seven representative
datasets, including steady and unsteady flow data from CFD simulations
and 4D PC-MRI measurements. Informal user interviews with three domain
experts confirm the usefulness of our approach.
|
|
Projection-Based Metal-Artifact Reduction for Industrial 3D X-ray Computed Tomography |
|
Artem Amirkhanov,
Christoph Heinzl,
Michael Reiter,
Johann Kastner,
Eduard Groller
|
|
Pages: 2193-2202 |
|
doi>10.1109/TVCG.2011.228 |
|
Multi-material
components, which contain metal parts surrounded by plastic materials,
are highly interesting for inspection using industrial 3D X-ray computed
tomography (3DXCT). Examples of this application scenario are
connectors or housings with metal inlays in the electronic or automotive
industry. A major problem of this type of components is the presence of
metal, which causes streaking artifacts and distorts the surrounding
media in the reconstructed volume. Streaking artifacts and dark-band
artifacts around metal components significantly influence the material
characterization (especially for the plastic components). In specific
cases these artifacts even prevent a further analysis. Due to the nature
and the different characteristics of artifacts, the development of an
efficient artifact-reduction technique in reconstruction-space is rather
complicated. In this paper we present a projection-space pipeline for
metal-artifact reduction. The proposed technique first segments the
metal in the spatial domain of the reconstructed volume in order to
separate it from the other materials. Then metal parts are
forward-projected on the set of projections in a way that
metal-projection regions are treated as voids. Subsequently the voids,
which are left by the removed metal, are interpolated in the 2D
projections. Finally, the metal is inserted back into the reconstructed
3D volume during the fusion stage. We present a visual analysis tool,
allowing for interactive parameter estimation of the metal segmentation.
The results of the proposed artifact-reduction technique are
demonstrated on a test part as well as on real world components. For
these specimens we achieve a significant reduction of metal artifacts,
allowing an enhanced material characterization.
|
|
Quality Metrics in High-Dimensional Data Visualization: An Overview and Systematization |
|
Enrico Bertini
|
|
Pages: 2203-2212 |
|
doi>10.1109/TVCG.2011.229 |
|
In
this paper, we present a systematization of techniques that use quality
metrics to help in the visual exploration of meaningful patterns in
high-dimensional data. In a number of recent papers, different quality
metrics are proposed to automate the demanding search through large
spaces of alternative visualizations (e.g., alternative projections or
ordering), allowing the user to concentrate on the most promising
visualizations suggested by the quality metrics. Over the last decade,
this approach has witnessed a remarkable development but few reflections
exist on how these methods are related to each other and how the
approach can be developed further. For this purpose, we provide an
overview of approaches that use quality metrics in high-dimensional data
visualization and propose a systematization based on a thorough
literature review. We carefully analyze the papers and derive a set of
factors for discriminating the quality metrics, visualization
techniques, and the process itself. The process is described through a
reworked version of the well-known information visualization pipeline.
We demonstrate the usefulness of our model by applying it to several
existing approaches that use quality metrics, and we provide reflections
on implications of our model for future research.
|
|
Benefitting InfoVis with Visual Difficulties |
|
Jessica Hullman,
Eytan Adar,
Priti Shah
|
|
Pages: 2213-2222 |
|
doi>10.1109/TVCG.2011.175 |
|
Many
well-cited theories for visualization design state that a visual
representation should be optimized for quick and immediate
interpretation by a user. Distracting elements like decorative
"chartjunk" or extraneous information are avoided so as not to slow
comprehension. Yet several recent studies in visualization research
provide evidence that non-efficient visual elements may benefit
comprehension and recall on the part of users. Similarly, findings from
studies related to learning from visual displays in various subfields of
psychology suggest that introducing cognitive difficulties to
visualization interaction can improve a user's understanding of
important information. In this paper, we synthesize empirical results
from cross-disciplinary research on visual information representations,
providing a counterpoint to efficiency-based design theory with
guidelines that describe how visual difficulties can be introduced to
benefit comprehension and recall. We identify conditions under which the
application of visual difficulties is appropriate based on underlying
factors in visualization interaction like active processing and
engagement. We characterize effective graph design as a trade-off
between efficiency and learning difficulties in order to provide
Information Visualization (InfoVis) researchers and practitioners with a
framework for organizing explorations of graphs for which comprehension
and recall are crucial. We identify implications of this view for the
design and evaluation of information visualizations.
|
|
Product Plots |
|
Hadley Wickham,
Heike Hofmann
|
|
Pages: 2223-2230 |
|
doi>10.1109/TVCG.2011.227 |
|
We
propose a new framework for visualising tables of counts, proportions
and probabilities. We call our framework product plots, alluding to the
computation of area as a product of height and width, and the
statistical concept of generating a joint distribution from the product
of conditional and marginal distributions. The framework, with
extensions, is sufficient to encompass over 20 visualisations previously
described in fields of statistical graphics and infovis, including bar
charts, mosaic plots, treemaps, equal area plots and fluctuation
diagrams.
|
|
Visualization Rhetoric: Framing Effects in Narrative Visualization |
|
Jessica Hullman,
Nick Diakopoulos
|
|
Pages: 2231-2240 |
|
doi>10.1109/TVCG.2011.255 |
|
Narrative
visualizations combine conventions of communicative and exploratory
information visualization to convey an intended story. We demonstrate
visualization rhetoric as an analytical framework for understanding how
design techniques that prioritize particular interpretations in
visualizations that "tell a story" can significantly affect end-user
interpretation. We draw a parallel between narrative visualization
interpretation and evidence from framing studies in political messaging,
decision-making, and literary studies. Devices for understanding the
rhetorical nature of narrative information visualizations are presented,
informed by the rigorous application of concepts from critical theory,
semiotics, journalism, and political theory. We draw attention to how
design tactics represent additions or omissions of information at
various levels (the data, visual representation, textual annotations, and
interactivity) and how visualizations denote and connote phenomena with
reference to unstated viewing conventions and codes. Classes of
rhetorical techniques identified via a systematic analysis of recent
narrative visualizations are presented, and characterized according to
their rhetorical contribution to the visualization. We describe how
designers and researchers can benefit from the potentially positive
aspects of visualization rhetoric in designing engaging, layered
narrative visualizations and how our framework can shed light on how a
visualization design prioritizes specific interpretations. We identify
areas where future inquiry into visualization rhetoric can improve
understanding of visualization interpretation.
|
|
Adaptive Privacy-Preserving Visualization Using Parallel Coordinates |
|
Aritra Dasgupta,
Robert Kosara
|
|
Pages: 2241-2248 |
|
doi>10.1109/TVCG.2011.163 |
|
Current
information visualization techniques assume unrestricted access to
data. However, privacy protection is a key issue for a lot of real-world
data analyses. Corporate data, medical records, etc. are rich in
analytical value but cannot be shared without first going through a
transformation step where explicit identifiers are removed and the data
is sanitized. Researchers in the field of data mining have proposed
different techniques over the years for privacy-preserving data
publishing and subsequent mining techniques on such sanitized data. A
well-known drawback in these methods is that for even a small guarantee
of privacy, the utility of the datasets is greatly reduced. In this
paper, we propose an adaptive technique for privacy preservation in
parallel coordinates. Based on knowledge about the sensitivity of the
data, we compute a clustered representation on the fly, which allows the
user to explore the data without breaching privacy. Through the use of
screen-space privacy metrics, the technique adapts to the user's screen
parameters and interaction. We demonstrate our method in a case study
and discuss potential attack scenarios.
|
|
Context-Preserving Visual Links |
|
Markus Steinberger,
Manuela Waldner,
Marc Streit,
Alexander Lex,
Dieter Schmalstieg
|
|
Pages: 2249-2258 |
|
doi>10.1109/TVCG.2011.183 |
|
Evaluating,
comparing, and interpreting related pieces of information are tasks
that are commonly performed during visual data analysis and in many
kinds of information-intensive work. Synchronized visual highlighting of
related elements is a well-known technique used to assist this task. An
alternative approach, which is more invasive but also more expressive,
is visual linking, in which line connections are rendered between related
elements. In this work, we present context-preserving visual links as a
new method for generating visual links. The method specifically aims to
fulfill the following two goals: first, visual links should minimize
the occlusion of important information; second, links should visually
stand out from surrounding information by minimizing visual
interference. We employ an image-based analysis of visual saliency to
determine the important regions in the original representation. A
consequence of the image-based approach is that our technique is
application-independent and can be employed in a large number of visual
data analysis scenarios in which the underlying content cannot or should
not be altered. We conducted a controlled experiment that indicates
that users can find linked elements in complex visualizations more
quickly and with greater subjective satisfaction than in complex
visualizations in which plain highlighting is used. Context-preserving
visual links were perceived as visually more attractive than traditional
visual links that do not account for the context information.
|
|
Design Study of LineSets, a Novel Set Visualization Technique |
|
Basak Alper,
Nathalie Riche,
Gonzalo Ramos,
Mary Czerwinski
|
|
Pages: 2259-2267 |
|
doi>10.1109/TVCG.2011.186 |
|
Computing
and visualizing sets of elements and their relationships is one of the
most common tasks one performs when analyzing and organizing large
amounts of data. Common representations of sets such as convex or
concave geometries can become cluttered and difficult to parse when
these sets overlap in multiple or complex ways, e.g., when multiple
elements belong to multiple sets. In this paper, we present a design
study of a novel set visual representation, LineSets, consisting of a
curve connecting all of the set's elements. Our approach to designing the
visualization differs from the traditional methodology used by the InfoVis
community. We first explored the potential of the visualization concept
by running a controlled experiment comparing our design sketches to
results from the state-of-the-art technique. Our results demonstrated
that LineSets are advantageous for certain tasks when compared to
concave shapes. We discuss an implementation of LineSets based on simple
heuristics and present a study demonstrating that our generated curves
do as well as human-drawn ones. Finally, we present two applications of
our technique in the context of search tasks on a map and community
analysis tasks in social networks.
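The core of LineSets, a single curve visiting every element of a set, reduces to ordering the set's points. The abstract mentions simple heuristics; a greedy nearest-neighbor tour is one such heuristic, assumed here for illustration rather than taken from the paper.

```python
# Hypothetical heuristic for ordering a set's elements so that one curve
# can visit them all: a greedy nearest-neighbor tour (not necessarily the
# authors' exact heuristic).
import math

def nn_order(points):
    """Greedy nearest-neighbor ordering starting from the leftmost point."""
    remaining = sorted(points)           # leftmost point first
    tour = [remaining.pop(0)]
    while remaining:
        last = tour[-1]
        nxt = min(remaining, key=lambda p: math.dist(last, p))
        remaining.remove(nxt)
        tour.append(nxt)
    return tour

pts = [(0, 0), (5, 0), (1, 1), (2, 0)]
print(nn_order(pts))   # visits near neighbors before the far point
```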
|
|
Developing and Evaluating Quilts for the Depiction of Large Layered Graphs |
|
Juhee Bae,
Benjamin Watson
|
|
Pages: 2268-2275 |
|
doi>10.1109/TVCG.2011.187 |
|
Traditional
layered graph depictions such as flow charts are in wide use. Yet as
graphs grow more complex, these depictions can become difficult to
understand. Quilts are matrix-based depictions for layered graphs
designed to address this problem. In this research, we first improve
Quilts by developing three design alternatives, and then compare the
best of these alternatives to better-known node-link and matrix
depictions. A primary weakness in Quilts is their depiction of skip
links, links that do not simply connect to a succeeding layer. Therefore
in our first study, we compare Quilts using color-only, text-only, and
mixed (color and text) skip link depictions, finding that path finding
with the color-only depiction is significantly slower and less accurate,
and that in certain cases, the mixed depiction offers an advantage over
the text-only depiction. In our second study, we compare Quilts using
the mixed depiction to node-link diagrams and centered matrices. Overall
results show that users can find paths through graphs significantly
faster with Quilts (46.6 secs) than with node-link (58.3 secs) or matrix
(71.2 secs) diagrams. This speed advantage is still greater in large
graphs (e.g., in 200-node graphs, 55.4 secs vs. 71.1 secs for node-link
and 84.2 secs for matrix depictions).
|
|
Arc Length-Based Aspect Ratio Selection |
|
Pages: 2276-2282 |
|
doi>10.1109/TVCG.2011.167 |
|
The
aspect ratio of a plot has a dramatic impact on our ability to perceive
trends and patterns in the data. Previous approaches for automatically
selecting the aspect ratio have been based on adjusting the orientations
or angles of the line segments in the plot. In contrast, we recommend a
simple, effective method for selecting the aspect ratio: minimize the
arc length of the data curve while keeping the area of the plot
constant. The approach is parameterization invariant, robust to a wide
range of inputs, preserves visual symmetries in the data, and is a
compromise between previously proposed techniques. Further, we
demonstrate that it can be effectively used to select the aspect ratio
of contour plots. We believe arc length should become the default aspect
ratio selection method.
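The recommended criterion is directly computable: scaling x by s and y by 1/s keeps the plot area constant, so one can scan s for the minimum arc length. The coarse grid search below is a stand-in for whatever optimizer an implementation would actually use.

```python
# Sketch of the paper's criterion: pick the area-preserving scale that
# minimizes the arc length of the data curve. Scaling x by s and y by 1/s
# preserves plot area; a brute-force grid scan stands in for a real optimizer.
import math

def arc_length(xs, ys):
    return sum(math.dist((xs[i], ys[i]), (xs[i + 1], ys[i + 1]))
               for i in range(len(xs) - 1))

def best_aspect(xs, ys, grid=None):
    """Return the area-preserving scale s minimizing arc length."""
    grid = grid or [i / 100 for i in range(10, 1000)]
    return min(grid, key=lambda s: arc_length([x * s for x in xs],
                                              [y / s for y in ys]))

xs = [0, 1, 2, 3, 4]
ys = [0, 10, 0, 10, 0]     # steep zig-zag: wants to be stretched in x
s = best_aspect(xs, ys)
print(round(s, 2))
```

For this zig-zag the analytic optimum is s = sqrt(10), which the grid scan recovers to two decimals.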
|
|
Asymmetric Relations in Longitudinal Social Networks |
|
Ulrik Brandes,
Bobo Nick
|
|
Pages: 2283-2290 |
|
doi>10.1109/TVCG.2011.169 |
|
In
modeling and analysis of longitudinal social networks, visual
exploration is used in particular to complement and inform other
methods. The most common graphical representations for this purpose
appear to be animations and small multiples of intermediate states,
depending on the type of media available. We present an alternative
approach based on matrix representation of gestaltlines (a combination
of Tufte's sparklines with glyphs based on gestalt theory). As a result,
we obtain static, compact, yet data-rich diagrams that support
specifically the exploration of evolving dyadic relations and persistent
group structure, although at the expense of cross-sectional network
views and indirect linkages.
|
|
VisBricks: Multiform Visualization of Large, Inhomogeneous Data |
|
Alexander Lex,
Hans-Jorg Schulz,
Marc Streit,
Christian Partl,
Dieter Schmalstieg
|
|
Pages: 2291-2300 |
|
doi>10.1109/TVCG.2011.250 |
|
Large
volumes of real-world data often exhibit inhomogeneities: vertically in
the form of correlated or independent dimensions and horizontally in
the form of clustered or scattered data items. In essence, these
inhomogeneities form the patterns in the data that researchers are
trying to find and understand. Sophisticated statistical methods are
available to reveal these patterns, however, the visualization of their
outcomes is mostly still performed in a one-view-fits-all manner. In
contrast, our novel visualization approach, VisBricks, acknowledges the
inhomogeneity of the data and the need for different visualizations that
suit the individual characteristics of the different data subsets. The
overall visualization of the entire data set is patched together from
smaller visualizations: there is one VisBrick for each cluster in each
group of interdependent dimensions. Whereas the total impression of all
VisBricks together gives a comprehensive high-level overview of the
different groups of data, each VisBrick independently shows the details
of the group of data it represents. State-of-the-art brushing and visual
linking between all VisBricks furthermore allows the comparison of the
groupings and the distribution of data items among them. In this paper,
we introduce the VisBricks visualization concept, discuss its design
rationale and implementation, and demonstrate its usefulness by applying
it to a use case from the field of biomedicine.
|
|
D3 Data-Driven Documents |
|
Michael Bostock,
Vadim Ogievetsky,
Jeffrey Heer
|
|
Pages: 2301-2309 |
|
doi>10.1109/TVCG.2011.185 |
|
Data-Driven
Documents (D3) is a novel representation-transparent approach to
visualization for the web. Rather than hide the underlying scenegraph
within a toolkit-specific abstraction, D3 enables direct inspection and
manipulation of a native representation: the standard document object
model (DOM). With D3, designers selectively bind input data to arbitrary
document elements, applying dynamic transforms to both generate and
modify content. We show how representational transparency improves
expressiveness and better integrates with developer tools than prior
approaches, while offering comparable notational efficiency and
retaining powerful declarative components. Immediate evaluation of
operators further simplifies debugging and allows iterative development.
Additionally, we demonstrate how D3 transforms naturally enable
animation and interaction with dramatic performance improvements over
intermediate representations.
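The data join at the heart of D3's selective binding can be illustrated without a browser: given elements currently bound by key and a new data array, compute the enter, update, and exit sets. Plain Python dicts stand in for DOM elements here; this mirrors the concept only, not D3's API.

```python
# Illustration (not D3's API) of the data-join idea at D3's core: partition
# new data against currently bound elements into enter (new), update
# (existing), and exit (stale) sets.
def data_join(bound, data, key=lambda d: d["id"]):
    incoming = {key(d): d for d in data}
    enter = [d for k, d in incoming.items() if k not in bound]
    update = [d for k, d in incoming.items() if k in bound]
    exit_ = [el for k, el in bound.items() if k not in incoming]
    return enter, update, exit_

bound = {"a": {"id": "a", "v": 1}, "b": {"id": "b", "v": 2}}
data = [{"id": "b", "v": 5}, {"id": "c", "v": 7}]
enter, update, exit_ = data_join(bound, data)
print([d["id"] for d in enter], [d["id"] for d in update],
      [el["id"] for el in exit_])
```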
|
|
Flexible Linked Axes for Multivariate Data Visualization |
|
Jarry H. T. Claessen,
Jarke J. van Wijk
|
|
Pages: 2310-2316 |
|
doi>10.1109/TVCG.2011.201 |
|
Multivariate
data visualization is a classic topic, for which many solutions have
been proposed, each with its own strengths and weaknesses. In standard
solutions the structure of the visualization is fixed; we explore how to
give the user more freedom to define visualizations. Our new approach
is based on the usage of Flexible Linked Axes: The user is enabled to
define a visualization by drawing and linking axes on a canvas. Each
axis has an associated attribute and range, which can be adapted. Links
between pairs of axes are used to show data in either scatter plot- or
Parallel Coordinates Plot-style. Flexible Linked Axes enable users to
define a wide variety of different visualizations. These include
standard methods, such as scatter plot matrices, radar charts, and PCPs
[11]; less well known approaches, such as Hyperboxes [1], TimeWheels
[17], and many-to-many relational parallel coordinate displays [14]; and
also custom visualizations, consisting of combinations of scatter plots
and PCPs. Furthermore, our method allows users to define composite
visualizations that automatically support brushing and linking. We have
discussed our approach with ten prospective users, who found the concept
easy to understand and highly promising.
|
|
Synthetic Generation of High-Dimensional Datasets |
|
Georgia Albuquerque,
Thomas Lowe,
Marcus Magnor
|
|
Pages: 2317-2324 |
|
doi>10.1109/TVCG.2011.237 |
|
Generation
of synthetic datasets is a common practice in many research areas. Such
data is often generated to meet specific needs or certain conditions
that may not be easily found in the original, real data. The nature of
the data varies according to the application area and includes text,
graphs, social or weather data, among many others. The common process to
create such synthetic datasets is to implement small scripts or
programs, restricted to small problems or to a specific application. In
this paper we propose a framework designed to generate high-dimensional
datasets. Users can interactively create and navigate through
multidimensional datasets using a suitable graphical user-interface. The data
creation is driven by statistical distributions based on a few
user-defined parameters. First, a grounding dataset is created according
to given inputs, and then structures and trends are included in
selected dimensions and orthogonal projection planes. Furthermore, our
framework supports the creation of complex non-orthogonal trends and
classified datasets. It can successfully be used to create synthetic
datasets simulating important trends such as multidimensional clusters,
correlations, and outliers.
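A toy version of this pipeline: create a uniform grounding dataset, then embed a Gaussian cluster in two selected dimensions and a few outliers. All parameters (dimension count, cluster center, spread, outlier range) are illustrative, not defaults of the authors' framework.

```python
# Minimal sketch of the framework's idea: start from a uniform grounding
# dataset, then embed structure (a Gaussian cluster in two chosen dimensions
# and a few outliers). Parameters below are illustrative, not the tool's.
import random

random.seed(42)
DIMS, N = 5, 200

def grounding(n=N, dims=DIMS):
    return [[random.uniform(0, 1) for _ in range(dims)] for _ in range(n)]

def add_cluster(data, dims=(0, 1), center=(0.7, 0.3), spread=0.03, frac=0.25):
    """Pull a fraction of rows toward a Gaussian cluster in selected dims."""
    for row in random.sample(data, int(len(data) * frac)):
        for d, c in zip(dims, center):
            row[d] = random.gauss(c, spread)
    return data

def add_outliers(data, k=3):
    for row in random.sample(data, k):
        row[0] = random.uniform(2.0, 3.0)   # far outside the [0, 1] range
    return data

data = add_outliers(add_cluster(grounding()))
print(len(data), len(data[0]))   # 200 rows, 5 dimensions
```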
|
|
Stereoscopic Highlighting: 2D Graph Visualization on Stereo Displays |
|
Basak Alper,
Tobias Hollerer,
JoAnn Kuchera-Morin,
Angus Forbes
|
|
Pages: 2325-2333 |
|
doi>10.1109/TVCG.2011.234 |
|
In
this paper we present a new technique and prototype graph visualization
system, stereoscopic highlighting, to help answer accessibility and
adjacency queries when interacting with a node-link diagram. Our
technique utilizes stereoscopic depth to highlight regions of interest
in a 2D graph by projecting these parts onto a plane closer to the
viewpoint of the user. This technique aims to isolate and magnify
specific portions of the graph that need to be explored in detail
without resorting to other highlighting techniques like color or motion,
which can then be reserved to encode other data attributes. This
mechanism of stereoscopic highlighting also enables focus+context views
by juxtaposing a detailed image of a region of interest with the overall
graph, which is visualized at a further depth with correspondingly less
detail. In order to validate our technique, we ran a controlled
experiment with 16 subjects comparing static visual highlighting to
stereoscopic highlighting on 2D and 3D graph layouts for a range of
tasks. Our results show that while for most tasks the difference in
performance between stereoscopic highlighting alone and static visual
highlighting is not statistically significant, users performed better
when both highlighting methods were used concurrently. In more
complicated tasks, 3D layout with static visual highlighting
outperformed 2D layouts with a single highlighting method. However, it
did not outperform the 2D layout utilizing both highlighting techniques
simultaneously. Based on these results, we conclude that stereoscopic
highlighting is a promising technique that can significantly enhance
graph visualizations for certain use cases.
|
|
In Situ Exploration of Large Dynamic Networks |
|
Steffen Hadlak,
Hans-Jorg Schulz,
Heidrun Schumann
|
|
Pages: 2334-2343 |
|
doi>10.1109/TVCG.2011.213 |
|
The
analysis of large dynamic networks poses a challenge in many fields,
ranging from large bot-nets to social networks. As dynamic networks
exhibit different characteristics, e.g., being of sparse or dense
structure, or having a continuous or discrete time line, a variety of
visualization techniques have been specifically designed to handle these
different aspects of network structure and time. This wide range of
existing techniques is well justified, as rarely a single visualization
is suitable to cover the entire visual analysis. Instead, visual
representations are often switched in the course of the exploration of
dynamic graphs as the focus of analysis shifts between the temporal and
the structural aspects of the data. To support such a switching in a
seamless and intuitive manner, we introduce the concept of in situ
visualization – a novel strategy that tightly integrates existing
visualization techniques for dynamic networks. It does so by allowing
the user to interactively select in a base visualization a region for
which a different visualization technique is then applied and embedded
in the selection made. This makes it possible to change the way a locally selected
group of data items, such as nodes or time points, is shown – right in
the place where they are positioned, thus supporting the user's overall
mental map. Using this approach, a user can switch seamlessly between
different visual representations to adapt a region of a base
visualization to the specifics of the data within it or to the current
analysis focus. This paper presents and discusses the in situ
visualization strategy and its implications for dynamic graph
visualization. Furthermore, it illustrates its usefulness by employing
it for the visual exploration of dynamic networks from two different
fields: model versioning and wireless mesh networks.
|
|
Parallel Edge Splatting for Scalable Dynamic Graph Visualization |
|
Michael Burch,
Corinna Vehlow,
Fabian Beck,
Stephan Diehl,
Daniel Weiskopf
|
|
Pages: 2344-2353 |
|
doi>10.1109/TVCG.2011.226 |
|
We
present a novel dynamic graph visualization technique based on
node-link diagrams. The graphs are drawn side-by-side from left to right
as a sequence of narrow stripes that are placed perpendicular to the
horizontal time line. The hierarchically organized vertices of the
graphs are arranged on vertical, parallel lines that bound the stripes;
directed edges connect these vertices from left to right. To address
massive overplotting of edges in huge graphs, we employ a splatting
approach that transforms the edges to a pixel-based scalar field. This
field represents the edge densities in a scalable way and is depicted by
non-linear color mapping. The visualization method is complemented by
interaction techniques that support data exploration by aggregation,
filtering, brushing, and selective data zooming. Furthermore, we
formalize graph patterns so that they can be interactively highlighted
on demand. A case study on software releases explores the evolution of
call graphs extracted from the JUnit open source software project. In a
second application, we demonstrate the scalability of our approach by
applying it to a bibliography dataset containing more than 1.5 million
paper titles from 60 years of research history producing a vast amount
of relations between title words.
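The splatting step is straightforward to sketch: sample each edge, accumulate a per-pixel density in a scalar field, and compress the range non-linearly before color mapping. The square-root compression, grid resolution, and sampling rate below are assumptions for illustration; the paper maps densities through a non-linear color scale on graphics hardware.

```python
# Sketch of the splatting step: rasterize each edge (given in normalized
# [0, 1] coordinates) into a coarse pixel grid, accumulating a scalar
# density per cell, then compress the range non-linearly.
import math

W = H = 8

def splat(edges, w=W, h=H, samples=32):
    """Accumulate edge density into a w x h scalar field."""
    field = [[0.0] * w for _ in range(h)]
    for (x0, y0), (x1, y1) in edges:
        for i in range(samples + 1):
            t = i / samples
            px = min(w - 1, int((x0 + t * (x1 - x0)) * w))
            py = min(h - 1, int((y0 + t * (y1 - y0)) * h))
            field[py][px] += 1.0 / samples
    return field

def compress(field):
    """Square-root compression as a stand-in for non-linear color mapping."""
    return [[math.sqrt(v) for v in row] for row in field]

edges = [((0.0, 0.5), (1.0, 0.5))] * 10   # ten coincident edges: a dense bundle
field = compress(splat(edges))
print(max(max(row) for row in field))
```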
|
|
Divided Edge Bundling for Directional Network Data |
|
David Selassie,
Brandon Heller,
Jeffrey Heer
|
|
Pages: 2354-2363 |
|
doi>10.1109/TVCG.2011.190 |
|
The
node-link diagram is an intuitive and venerable way to depict a graph.
To reduce clutter and improve the readability of node-link views, Holten
& van Wijk's force-directed edge bundling employs a physical
simulation to spatially group graph edges. While both useful and
aesthetic, this technique has shortcomings: it bundles spatially
proximal edges regardless of direction, weight, or graph connectivity.
As a result, high-level directional edge patterns are obscured. We
present divided edge bundling to tackle these shortcomings. By modifying
the forces in the physical simulation, directional lanes appear as an
emergent property of edge direction. By considering graph topology, we
only bundle edges related by graph structure. Finally, we aggregate edge
weights in bundles to enable more accurate visualization of total
bundle weights. We compare visualizations created using our technique to
standard force-directed edge bundling, matrix diagrams, and clustered
graphs; we find that divided edge bundling leads to visualizations that
are easier to interpret and reveal both familiar and previously obscured
patterns.
|
|
Skeleton-Based Edge Bundling for Graph Visualization |
|
Ozan Ersoy,
Christophe Hurter,
Fernando Paulovich,
Gabriel Cantareiro,
Alex Telea
|
|
Pages: 2364-2373 |
|
doi>10.1109/TVCG.2011.233 |
|
In
this paper, we present a novel approach for constructing bundled
layouts of general graphs. As layout cues for bundles, we use medial
axes, or skeletons, of edges which are similar in terms of position
information. We combine edge clustering, distance fields, and 2D
skeletonization to construct progressively bundled layouts for general
graphs by iteratively attracting edges towards the centerlines of level
sets of their distance fields. Apart from clustering, our entire
pipeline is image-based with an efficient implementation in graphics
hardware. Besides speed and implementation simplicity, our method allows
explicit control of the emphasis on structure of the bundled layout,
i.e. the creation of strongly branching (organic-like) or smooth
bundles. We demonstrate our method on several large real-world graphs.
|
|
BirdVis: Visualizing and Understanding Bird Populations |
|
Nivan Ferreira,
Lauro Lins,
Daniel Fink,
Steve Kelling,
Christopher Wood,
Juliana Freire,
Claudio Silva
|
|
Pages: 2374-2383 |
|
doi>10.1109/TVCG.2011.176 |
|
Birds
are unrivaled windows into biotic processes at all levels and are
proven indicators of ecological well-being. Understanding the
determinants of species distributions and their dynamics is an important
aspect of ecology and is critical for conservation and management.
Through crowdsourcing, since 2002, the eBird project has been collecting
bird observation records. These observations, together with local-scale
environmental covariates such as climate, habitat, and vegetation
phenology have been a valuable resource for a global community of
educators, land managers, ornithologists, and conservation biologists.
By associating environmental inputs with observed patterns of bird
occurrence, predictive models have been developed that provide a
statistical framework to harness available data for predicting species
distributions and making inferences about species-habitat associations.
Understanding these models, however, is challenging because they require
scientists to quantify and compare multiscale spatio-temporal patterns.
A large series of coordinated or sequential plots must be generated,
individually programmed, and manually composed for analysis. This
hampers the exploration and is a barrier to making the cross-species
comparisons that are essential for coordinating conservation and
extracting important ecological information. To address these
limitations, as part of a collaboration among computer scientists,
statisticians, biologists and ornithologists, we have developed BirdVis,
an interactive visualization system that supports the analysis of
spatio-temporal bird distribution models. BirdVis leverages
visualization techniques and uses them in a novel way to better assist
users in the exploration of interdependencies among model parameters.
Furthermore, the system allows for comparative visualization through
coordinated views, providing an intuitive interface to identify relevant
correlations and patterns. We justify our design decisions and present
case studies that show how BirdVis has helped scientists obtain new
evidence for existing hypotheses, as well as formulate new hypotheses in
their domain.
|
|
BallotMaps: Detecting Name Bias in Alphabetically Ordered Ballot Papers |
|
Jo Wood,
Donia Badawood,
Jason Dykes,
Aidan Slingsby
|
|
Pages: 2384-2391 |
|
doi>10.1109/TVCG.2011.174 |
|
The
relationship between candidates' position on a ballot paper and vote
rank is explored in the case of 5000 candidates for the UK 2010 local
government elections in the Greater London area. This design study uses
hierarchical spatially arranged graphics to represent two locations that
affect candidates at very different scales: the geographical areas for
which they seek election and the spatial location of their names on the
ballot paper. This approach allows the effect of position bias to be
assessed; that is, the degree to which the position of a candidate's
name on the ballot paper influences the number of votes received by the
candidate, and whether this varies geographically. Results show that
position bias was significant enough to influence rank order of
candidates, and in the case of many marginal electoral wards, to
influence who was elected to government. Position bias was observed most
strongly for Liberal Democrat candidates but present for all major
political parties. Visual analysis of classification of candidate names
by ethnicity suggests that this too had an effect on votes received by
candidates, in some cases overcoming alphabetic name bias. The results
found contradict some earlier research suggesting that alphabetic name
bias was not sufficiently significant to affect electoral outcome and
add new evidence for the geographic and ethnicity influences on voting
behaviour. The visual approach proposed here can be applied to a wider
range of electoral data and the patterns identified and hypotheses
derived from them could have significant implications for the design of
ballot papers and the conduct of fair elections.
|
|
Sequence Surveyor: Leveraging Overview for Scalable Genomic Alignment Visualization |
|
Danielle Albers,
Colin Dewey,
Michael Gleicher
|
|
Pages: 2392-2401 |
|
doi>10.1109/TVCG.2011.232 |
|
In
this paper, we introduce overview visualization tools for large-scale
multiple genome alignment data. Genome alignment visualization and, more
generally, sequence alignment visualization are an important tool for
understanding genomic sequence data. As sequencing techniques improve
and more data become available, greater demand is being placed on
visualization tools to scale to the size of these new datasets. When
viewing such large data, we necessarily cannot convey details; rather, we
specifically design overview tools to help elucidate large-scale
patterns. Perceptual science, signal processing theory, and generality
provide a framework for the design of such visualizations that can scale
well beyond current approaches. We present Sequence Surveyor, a
prototype that embodies these ideas for scalable multiple whole-genome
alignment overview visualization. Sequence Surveyor visualizes sequences
in parallel, displaying data using variable color, position, and
aggregation encodings. We demonstrate how perceptual science can inform
the design of visualization techniques that remain visually manageable
at scale and how signal processing concepts can inform aggregation
schemes that highlight global trends, outliers, and overall data
distributions as the problem scales. These techniques allow us to
visualize alignments with over 100 whole bacterial-sized genomes.
|
|
Visualization of Parameter Space for Image Analysis |
|
A. Johannes Pretorius,
Mark-Anthony Bray,
Anne E. Carpenter,
Roy A. Ruddle
|
|
Pages: 2402-2411 |
|
doi>10.1109/TVCG.2011.253 |
|
Image
analysis algorithms are often highly parameterized and much human input
is needed to optimize parameter settings. This incurs a time cost of up
to several days. We analyze and characterize the conventional parameter
optimization process for image analysis and formulate user
requirements. With this as input, we propose a change in paradigm by
optimizing parameters based on parameter sampling and interactive visual
exploration. To save time and reduce memory load, users are only
involved in the first step - initialization of sampling - and the last
step - visual analysis of output. This helps users to more thoroughly
explore the parameter space and produce higher quality results. We
describe a custom sampling plug-in we developed for CellProfiler - a
popular biomedical image analysis framework. Our main focus is the
development of an interactive visualization technique that enables users
to analyze the relationships between sampled input parameters and
corresponding output. We implemented this in a prototype called
Paramorama. It provides users with a visual overview of parameters and
their sampled values. User-defined areas of interest are presented in a
structured way that includes image-based output and a novel layout
algorithm. To find optimal parameter settings, users can tag high- and
low-quality results to refine their search. We include two case studies
to illustrate the utility of this approach.
|
|
TextFlow: Towards Better Understanding of Evolving Topics in Text |
|
Weiwei Cui,
Shixia Liu,
Li Tan,
Conglei Shi,
Yangqiu Song,
Zekai Gao,
Huamin Qu,
Xin Tong
|
|
Pages: 2412-2421 |
|
doi>10.1109/TVCG.2011.239 |
|
Understanding
how topics evolve in text data is an important and challenging task.
Although much work has been devoted to topic analysis, the study of
topic evolution has largely been limited to individual topics. In this
paper, we introduce TextFlow, a seamless integration of visualization
and topic mining techniques, for analyzing various evolution patterns
that emerge from multiple topics. We first extend an existing analysis
technique to extract three-level features: the topic evolution trend,
the critical event, and the keyword correlation. Then a coherent
visualization that consists of three new visual components is designed
to convey complex relationships between them. Through interaction, the
topic mining model and visualization can communicate with each other to
help users refine the analysis result and gain insights into the data
progressively. Finally, two case studies are conducted to demonstrate
the effectiveness and usefulness of TextFlow in helping users understand
the major topic evolution patterns in time-varying text data.
|
|
Exploratory Analysis of Time-Series with ChronoLenses |
|
Jian Zhao,
Fanny Chevalier,
Emmanuel Pietriga,
Ravin Balakrishnan
|
|
Pages: 2422-2431 |
|
doi>10.1109/TVCG.2011.195 |
|
Visual
representations of time-series are useful for tasks such as identifying
trends, patterns and anomalies in the data. Many techniques have been
devised to make these visual representations more scalable, enabling the
simultaneous display of multiple variables, as well as the multi-scale
display of time-series of very high resolution or that span long time
periods. There has been comparatively little research on how to support
the more elaborate tasks associated with the exploratory visual analysis
of time-series, e.g., visualizing derived values, identifying
correlations, or discovering anomalies beyond obvious outliers. Such
tasks typically require deriving new time-series from the original data,
trying different functions and parameters in an iterative manner. We
introduce a novel visualization technique called ChronoLenses, aimed at
supporting users in such exploratory tasks. ChronoLenses perform
on-the-fly transformation of the data points in their focus area,
tightly integrating visual analysis with user actions, and enabling the
progressive construction of advanced visual analysis pipelines.
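The lens mechanic described in the abstract can be sketched in a few lines. All names and the transform are hypothetical illustrations, not the authors' implementation: a lens covers a time interval and maps only the points in its focus area through a user-chosen function.

```python
# Sketch of a ChronoLenses-style on-the-fly transform (names hypothetical):
# points inside the lens window are replaced by transform(prev_v, v),
# e.g. a first difference; points outside pass through unchanged.

def apply_lens(series, t_start, t_end, transform):
    """series: list of (t, v) pairs, sorted by t."""
    out = []
    prev_v = None
    for t, v in series:
        if t_start <= t <= t_end and prev_v is not None:
            out.append((t, transform(prev_v, v)))
        else:
            out.append((t, v))
        prev_v = v
    return out

series = [(0, 1.0), (1, 3.0), (2, 2.0), (3, 5.0)]
diffed = apply_lens(series, 1, 2, lambda a, b: b - a)
# points at t=1 and t=2 become first differences: (1, 2.0), (2, -1.0)
```

Chaining such lenses, each consuming the previous one's output, gives the progressive pipeline construction the abstract mentions.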
|
|
CloudLines: Compact Display of Event Episodes in Multiple Time-Series |
|
Milos Krstajic,
Enrico Bertini,
Daniel Keim
|
|
Pages: 2432-2439 |
|
doi>10.1109/TVCG.2011.179 |
|
We
propose an incremental logarithmic time-series technique as a way to deal
with time-based representations of large and dynamic event data sets in
limited space. Modern data visualization problems in the domains of news
analysis, network security and financial applications, require visual
analysis of incremental data, which poses specific challenges that are
normally not solved by static visualizations. The incremental nature of
the data implies that visualizations have to necessarily change their
content and still provide comprehensible representations. In particular,
in this paper we deal with the need to keep an eye on recent events
together with providing a context on the past and to make relevant
patterns accessible at any scale. Our technique adapts to the incoming
data by taking care of the rate at which data items occur and by using a
decay function to let the items fade away according to their relevance.
Since access to details is also important, we provide a novel
distortion magnifying lens technique which takes into account the
distortions introduced by the logarithmic time scale to augment
readability in selected areas of interest. We demonstrate the validity
of our techniques by applying them on incremental data coming from
online news streams in different time frames.
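A minimal sketch of the decay-and-compress idea, assuming an exponential decay and a log1p time mapping (the paper's exact functions may differ): each event's visual weight fades with age, and the logarithmic time axis gives recent events most of the space while older ones compress toward the edge.

```python
import math

# Sketch of CloudLines-style decay and logarithmic placement
# (function forms are assumptions, not the paper's exact model).

def decayed_weight(event_time, now, half_life):
    """Exponential decay: weight halves every `half_life` time units."""
    age = now - event_time
    return 0.5 ** (age / half_life)

def log_position(event_time, now, window, axis_width):
    """Map age to an x position on a logarithmic scale: recent events
    (small age) occupy most of the axis; old ones compress leftward."""
    age = max(now - event_time, 1e-9)
    return axis_width * (1 - math.log1p(age) / math.log1p(window))

w = decayed_weight(event_time=90, now=100, half_life=10)  # one half-life old
# w == 0.5
```

The distortion lens the abstract mentions would then invert this mapping locally to restore readable spacing in a selected interval.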
|
|
Evaluation of Traditional, Orthogonal, and Radial Tree Diagrams by an Eye Tracking Study |
|
Michael Burch,
Natalia Konevtsova,
Julian Heinrich,
Markus Hoeferlin,
Daniel Weiskopf
|
|
Pages: 2440-2448 |
|
doi>10.1109/TVCG.2011.193 |
|
Node-link
diagrams are an effective and popular visualization approach for
depicting hierarchical structures and for showing parent-child
relationships. In this paper, we present the results of an eye tracking
experiment investigating traditional, orthogonal, and radial node-link
tree layouts as an empirical basis for choosing between those
layouts. Eye tracking was used to identify visual exploration behaviors
of participants that were asked to solve a typical hierarchy exploration
task by inspecting a static tree diagram: finding the least common
ancestor of a given set of marked leaf nodes. To uncover exploration
strategies, we examined fixation points, duration, and saccades of
participants' gaze trajectories. For the non-radial diagrams, we
additionally investigated the effect of diagram orientation by switching
the position of the root node to each of the four main orientations. We
also recorded and analyzed correctness of answers as well as completion
times in addition to the eye movement data. We found that
traditional and orthogonal tree layouts significantly outperform radial
tree layouts for the given task. Furthermore, by applying trajectory
analysis techniques we uncovered that participants cross-checked their
task solution more often in the radial than in the non-radial layouts.
|
|
TreeNetViz: Revealing Patterns of Networks over Tree Structures |
|
Liang Gou,
Xiaolong (Luke) Zhang
|
|
Pages: 2449-2458 |
|
doi>10.1109/TVCG.2011.247 |
|
Network
data often contain important attributes from various dimensions such as
social affiliations and areas of expertise in a social network. If such
attributes exhibit a tree structure, visualizing a compound graph
consisting of tree and network structures becomes complicated. How to
visually reveal patterns of a network over a tree has not been fully
studied. In this paper, we propose a compound graph model, TreeNet, to
support visualization and analysis of a network at multiple levels of
aggregation over a tree. We also present a visualization design,
TreeNetViz, to offer the multiscale and cross-scale exploration and
interaction of a TreeNet graph. TreeNetViz uses a Radial, Space-Filling
(RSF) visualization to represent the tree structure, a circle layout
with novel optimization to show aggregated networks derived from
TreeNet, and an edge bundling technique to reduce visual complexity. Our
circular layout algorithm reduces both total edge-crossings and edge
length and also considers hierarchical structure constraints and edge
weight in a TreeNet graph. Our experiments illustrate that the
algorithm can reduce visual cluttering in TreeNet graphs. Our case study
also shows that TreeNetViz has the potential to support the analysis of
a compound graph by revealing multiscale and cross-scale network
patterns.
|
|
Improved Similarity Trees and their Application to Visual Data Classification |
|
Jose Gustavo Paiva,
Laura Florian,
Helio Pedrini,
Guilherme Telles,
Rosane Minghim
|
|
Pages: 2459-2468 |
|
doi>10.1109/TVCG.2011.212 |
|
An
alternative form to multidimensional projections for the visual
analysis of data represented in multidimensional spaces is the
deployment of similarity trees, such as Neighbor Joining trees. They
organize data objects on the visual plane emphasizing their levels of
similarity with high capability of detecting and separating groups and
subgroups of objects. Besides this similarity-based hierarchical data
organization, some of their advantages include the ability to decrease
point clutter; high precision; and a consistent view of the data set
during focusing, offering a very intuitive way to view the general
structure of the data set as well as to drill down to groups and
subgroups of interest. Disadvantages of similarity trees based on
neighbor joining strategies include their computational cost and the
presence of virtual nodes that utilize too much of the visual space.
This paper presents a highly improved version of the similarity tree
technique. The improvements in the technique are given by two
procedures. The first is a strategy that replaces virtual nodes by
promoting real leaf nodes to their place, saving large portions of space
in the display and maintaining the expressiveness and precision of the
technique. The second improvement is an implementation that
significantly accelerates the algorithm, impacting its use for larger
data sets. We also illustrate the applicability of the technique in
visual data mining, showing its advantages to support visual
classification of data sets, with special attention to the case of image
classification. We demonstrate the capabilities of the tree for
analysis and iterative manipulation and employ those capabilities to
support evolving to a satisfactory data organization and classification.
|
|
A Study on Dual-Scale Data Charts |
|
Petra Isenberg,
Anastasia Bezerianos
|
|
Pages: 2469-2478 |
|
doi>10.1109/TVCG.2011.160 |
|
We
present the results of a user study that compares different ways of
representing Dual-Scale data charts. Dual-Scale charts incorporate two
different data resolutions into one chart in order to emphasize data in
regions of interest or to enable the comparison of data from distant
regions. While some design guidelines exist for these types of charts,
there is currently little empirical evidence on which to base their
design. We fill this gap by discussing the design space of Dual-Scale
Cartesian-coordinate charts and by experimentally comparing the
performance of different chart types with respect to elementary
graphical perception tasks such as comparing lengths and distances. Our
study suggests that cut-out charts which include collocated full context
and focus are the best alternative, and that superimposed charts in
which focus and context overlap on top of each other should be avoided.
|
|
Evaluation of Artery Visualizations for Heart Disease Diagnosis |
|
Michelle Borkin,
Krzysztof Gajos,
Amanda Peters,
Dimitrios Mitsouras,
Simone Melchionna,
Frank Rybicki,
Charles Feldman,
Hanspeter Pfister
|
|
Pages: 2479-2488 |
|
doi>10.1109/TVCG.2011.192 |
|
Heart
disease is the number one killer in the United States, and finding
indicators of the disease at an early stage is critical for treatment
and prevention. In this paper we evaluate visualization techniques that
enable the diagnosis of coronary artery disease. A key physical quantity
of medical interest is endothelial shear stress (ESS). Low ESS has been
associated with sites of lesion formation and rapid progression of
disease in the coronary arteries. Having effective visualizations of a
patient's ESS data is vital for the quick and thorough non-invasive
evaluation by a cardiologist. We present a task taxonomy for
hemodynamics based on a formative user study with domain experts. Based
on the results of this study we developed HemoVis, an interactive
visualization application for heart disease diagnosis that uses a novel
2D tree diagram representation of coronary artery trees. We present the
results of a formal quantitative user study with domain experts that
evaluates the effect of 2D versus 3D artery representations and of color
maps on identifying regions of low ESS. We show statistically
significant results demonstrating that our 2D visualizations are more
accurate and efficient than 3D representations, and that a perceptually
appropriate color map leads to fewer diagnostic mistakes than a rainbow
color map.
|
|
Exploring Ambient and Artistic Visualization for Residential Energy Use Feedback |
|
Johnny Rodgers,
Lyn Bartram
|
|
Pages: 2489-2497 |
|
doi>10.1109/TVCG.2011.196 |
|
Providing
effective feedback on resource consumption in the home is a key
challenge of environmental conservation efforts. One promising approach
for providing feedback about residential energy consumption is the use
of ambient and artistic visualizations. Pervasive computing technologies
enable the integration of such feedback into the home in the form of
distributed point-of-consumption feedback devices to support
decision-making in everyday activities. However, introducing these
devices into the home requires sensitivity to the domestic context. In
this paper we describe three abstract visualizations and suggest four
design requirements that this type of device must meet to be effective:
pragmatic, aesthetic, ambient, and ecological. We report on the findings
from a mixed methods user study that explores the viability of using
ambient and artistic feedback in the home based on these requirements.
Our findings suggest that this approach is a viable way to provide
resource use feedback and that both the aesthetics of the representation
and the context of use are important elements that must be considered
in this design space.
|
|
Human-Centered Approaches in Geovisualization Design: Investigating Multiple Methods Through a Long-Term Case Study |
|
David Lloyd,
Jason Dykes
|
|
Pages: 2498-2507 |
|
doi>10.1109/TVCG.2011.209 |
|
Working
with three domain specialists we investigate human-centered approaches
to geovisualization following an ISO13407 taxonomy covering context of
use, requirements and early stages of design. Our case study, undertaken
over three years, draws attention to repeating trends: that generic
approaches fail to elicit adequate requirements for geovis application
design; that the use of real data is key to understanding needs and
possibilities; that trust and knowledge must be built and developed with
collaborators. These processes take time but modified human-centred
approaches can be effective. A scenario developed through contextual
inquiry but supplemented with domain data and graphics is useful to
geovis designers. Wireframe, paper and digital prototypes enable
successful communication between specialist and geovis domains when
incorporating real and interesting data, prompting exploratory behaviour
and eliciting previously unconsidered requirements. Paper prototypes
are particularly successful at eliciting suggestions, especially for
novel visualization. Enabling specialists to explore their data freely
with a digital prototype is as effective as using a structured task
protocol and is easier to administer. Autoethnography has potential for
framing the design process. We conclude that a common understanding of
context of use, domain data and visualization possibilities are
essential to successful geovis design and develop as this progresses. HC
approaches can make a significant contribution here. However, modified
approaches, applied with flexibility, are most promising. We advise
early, collaborative engagement with data – through simple, transient
visual artefacts supported by data sketches and existing designs –
before moving to successively more sophisticated data wireframes and
data prototypes.
|
|
Visual Thinking In Action: Visualizations As Used On Whiteboards |
|
Jagoda Walny,
Sheelagh Carpendale,
Nathalie Henry Riche,
Gina Venolia,
Philip Fawcett
|
|
Pages: 2508-2517 |
|
doi>10.1109/TVCG.2011.251 |
|
While
it is still most common for information visualization researchers to
develop new visualizations from a data- or task-driven perspective, there
is growing interest in understanding the types of visualizations people
create by themselves for personal use. As part of this recent
direction, we have studied a large collection of whiteboards in a
research institution, where people make active use of combinations of
words, diagrams and various types of visuals to help them further their
thought processes. Our goal is to arrive at a better understanding of
the nature of visuals that are created spontaneously during
brainstorming, thinking, communicating, and general problem solving on
whiteboards. We use the qualitative approaches of open coding,
interviewing, and affinity diagramming to explore the use of
recognizable and novel visuals, and the interplay between visualization
and diagrammatic elements with words, numbers and labels. We discuss the
potential implications of our findings on information visualization
design.
|
|
Composite Density Maps for Multivariate Trajectories |
|
Roeland Scheepens,
Niels Willems,
Huub van de Wetering,
Gennady Andrienko,
Natalia Andrienko,
Jarke J. van Wijk
|
|
Pages: 2518-2527 |
|
doi>10.1109/TVCG.2011.181 |
|
We
consider moving objects as multivariate time-series. By visually
analyzing the attributes, patterns may appear that explain why certain
movements have occurred. Density maps as proposed by Scheepens et al.
[25] are a way to reveal these patterns by means of aggregations of
filtered subsets of trajectories. Since filtering is often not
sufficient for analysts to express their domain knowledge, we propose to
use expressions instead. We present a flexible architecture for density
maps to enable custom, versatile exploration using multiple density
fields. The flexibility comes from a script, depicted in this paper as a
block diagram, which defines an advanced computation of a density
field. We define six different types of blocks to create, compose, and
enhance trajectories or density fields. Blocks are customized by means
of expressions that allow the analyst to model domain knowledge. The
versatility of our architecture is demonstrated with several maritime
use cases developed with domain experts. Our approach is expected to be
useful for the analysis of objects in other domains.
|
|
Focus+Context Metro Maps |
|
Yu-Shuen Wang,
Ming-Te Chi
|
|
Pages: 2528-2535 |
|
doi>10.1109/TVCG.2011.205 |
|
We
introduce a focus+context method to visualize a complicated metro map
of a modern city on a small display area. Our work is motivated by
the popularity of mobile devices. The best route to the
destination, which can be obtained from the arrival time of trains, is
highlighted. The stations on the route enjoy larger spaces, whereas the
other stations are rendered smaller and closer to fit the whole map into
a screen. To simplify the navigation and route planning for visitors,
we formulate various map characteristics such as octilinear
transportation lines and regular station distances into energy terms. We
then solve for the optimal layout in a least squares sense. In
addition, we label the names of stations that are on the route of a
passenger according to human preferences, occlusions, and consistencies
of label positions using the graph cuts method. Our system achieves
real-time performance by being able to report instant information
because of the carefully designed energy terms. We apply our method to
lay out a number of metro maps and show the results and timing statistics
to demonstrate the feasibility of our technique.
|
|
Flow Map Layout via Spiral Trees |
|
Kevin Buchin,
Bettina Speckmann,
Kevin Verbeek
|
|
Pages: 2536-2544 |
|
doi>10.1109/TVCG.2011.202 |
|
Flow
maps are thematic maps that visualize the movement of objects, such as
people or goods, between geographic regions. One or more sources are
connected to several targets by lines whose thickness corresponds to the
amount of flow between a source and a target. Good flow maps reduce
visual clutter by merging (bundling) lines smoothly and by avoiding
self-intersections. Most flow maps are still drawn by hand and only few
automated methods exist. Some of the known algorithms do not support
edge-bundling, and those that do cannot guarantee crossing-free flows. We
present a new algorithmic method that uses edge-bundling and computes
crossing-free flows of high visual quality. Our method is based on
so-called spiral trees, a novel type of Steiner tree which uses
logarithmic spirals. Spiral trees naturally induce a clustering on the
targets and smoothly bundle lines. Our flows can also avoid obstacles,
such as map features, region outlines, or even the targets. We
demonstrate our approach with extensive experiments.
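For reference, the logarithmic spirals the method builds on keep a constant angle to the radial direction while winding toward the root; a sketch of that curve follows (the textbook parametrisation, not the paper's Steiner-tree construction itself).

```python
import math

# Points along a logarithmic spiral from polar (r0, theta0) toward the
# origin (the flow map's root), holding a constant angle alpha between
# the curve and the radial direction.

def spiral_path(r0, theta0, alpha, steps=50):
    pts = []
    for i in range(steps + 1):
        t = 3.0 * i / steps              # parameter; radius shrinks by e^-3
        r = r0 * math.exp(-t)
        theta = theta0 + t * math.tan(alpha)
        pts.append((r * math.cos(theta), r * math.sin(theta)))
    return pts

path = spiral_path(r0=1.0, theta0=0.0, alpha=math.radians(25))
# path starts at (1, 0) and winds inward toward the root at the origin
```

Bounding a target's feasible connections by two such spirals (one per winding direction) is what induces the natural clustering and smooth bundling the abstract describes.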
|
|
Exploring Uncertainty in Geodemographics with Interactive Graphics |
|
Aidan Slingsby,
Jason Dykes,
Jo Wood
|
|
Pages: 2545-2554 |
|
doi>10.1109/TVCG.2011.197 |
|
Geodemographic
classifiers characterise populations by categorising geographical areas
according to the demographic and lifestyle characteristics of those who
live within them. The dimension-reducing quality of such classifiers
provides a simple and effective means of characterising population
through a manageable set of categories, but inevitably hides
heterogeneity, which varies within and between the demographic
categories and geographical areas, sometimes systematically. This may
have implications for their use, which is widespread in government and
commerce for planning, marketing and related activities. We use novel
interactive graphics to delve into OAC – a free and open geodemographic
classifier that classifies the UK population in over 200,000 small
geographical areas into 7 super-groups, 21 groups and 52 sub-groups. Our
graphics provide access to the original 41 demographic variables used
in the classification and the uncertainty associated with the
classification of each geographical area on demand. They also support
comparison geographically and by category. This serves the dual purpose
of helping users understand the classifier itself, leading to its more informed
use and providing a more comprehensive view of population in a
comprehensible manner. We assess the impact of these interactive
graphics on experienced OAC users who explored the details of the
classification, its uncertainty and the nature of between – and within –
class variation and then reflect on their experiences. Visualization of
the complexities and subtleties of the classification proved to be a
thought-provoking exercise both confirming and challenging users’
understanding of population, the OAC classifier and the way it is used
in their organisations. Users identified three contexts for which the
techniques were deemed useful in the context of local government,
confirming the validity of the proposed methods.
|
|
Drawing Road Networks with Focus Regions |
|
Jan-Henrik Haunert,
Leon Sering
|
|
Pages: 2555-2562 |
|
doi>10.1109/TVCG.2011.191 |
|
Mobile
users of maps typically need detailed information about their
surroundings plus some context information about remote places. To
avoid the map becoming too dense in places, cartographers have
designed mapping functions that enlarge a user-defined focus region -
such functions are sometimes called fish-eye projections. The extra map
space occupied by the enlarged focus region is compensated by distorting
other parts of the map. We argue that, in a map showing a network of
roads relevant to the user, distortion should preferably take place in
those areas where the network is sparse. Therefore, we do not apply a
predefined mapping function. Instead, we consider the road network as a
graph whose edges are the road segments. We compute a new spatial
mapping with a graph-based optimization approach, minimizing the sum of
squared distortions at edges. Our optimization method is based on a
convex quadratic program (CQP); CQPs can be solved in polynomial time.
Important requirements on the output map are expressed as linear
inequalities. In particular, we show how to forbid edge crossings. We
have implemented our method in a prototype tool. For instances of
different sizes, our method generated output maps that were far less
distorted than those generated with a predefined fish-eye projection.
Future work is needed to automate the selection of roads relevant to the
user. Furthermore, we aim at fast heuristics for application in
real-time systems.
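The least-squares core of this idea, without the paper's linear inequality constraints that forbid edge crossings, can be sketched as an ordinary least-squares problem over vertex positions. This is a toy illustration under stated assumptions, not the authors' CQP formulation.

```python
import numpy as np

# Toy fisheye layout: choose new vertex positions so each edge's vector
# matches a scaled copy of its original vector, enlarged for focus edges,
# unchanged elsewhere, in a least-squares sense. Vertex 0 is pinned.

def fisheye_layout(positions, edges, focus, scale=2.0):
    """positions: (n, 2) array; edges: list of (u, v) index pairs;
    focus: set of edge indices to enlarge by `scale`."""
    n = len(positions)
    rows, rhs = [], []
    for i, (u, v) in enumerate(edges):
        s = scale if i in focus else 1.0
        row = np.zeros(n)
        row[u], row[v] = 1.0, -1.0           # encodes p_u - p_v
        rows.append(row)
        rhs.append(s * (positions[u] - positions[v]))
    pin = np.zeros(n)
    pin[0] = 1.0                             # anchor vertex 0 in place
    rows.append(pin)
    rhs.append(positions[0])
    A, b = np.array(rows), np.array(rhs)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol

pos = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
new = fisheye_layout(pos, [(0, 1), (1, 2)], focus={0})
# edge (0,1) is doubled; edge (1,2) keeps its original length
```

Adding the crossing-prevention inequalities is what turns this unconstrained least-squares problem into the convex quadratic program the abstract describes.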
|
|
Local Affine Multidimensional Projection |
|
Paulo Joia,
Danilo Coimbra,
Jose A. Cuminato,
Fernando V. Paulovich,
Luis G. Nonato
|
|
Pages: 2563-2571 |
|
doi>10.1109/TVCG.2011.220 |
|
Multidimensional
projection techniques have experienced many improvements lately, mainly
regarding computational times and accuracy. However, existing methods
do not yet provide flexible enough mechanisms for visualization-oriented
fully interactive applications. This work presents a new
multidimensional projection technique designed to be more flexible and
versatile than other methods. This novel approach, called Local Affine
Multidimensional Projection (LAMP), relies on orthogonal mapping theory
to build accurate local transformations that can be dynamically modified
according to user knowledge. The accuracy, flexibility and
computational efficiency of LAMP are confirmed by a comprehensive set of
comparisons. LAMP's versatility is exploited in an application which
seeks to correlate data that, in principle, has no connection as well as
in visual exploration of textual documents.
|
|
Angular Histograms: Frequency-Based Visualizations for Large, High Dimensional Data |
|
Zhao Geng,
ZhenMin Peng,
Robert S. Laramee,
Jonathan C. Roberts,
Rick Walker
|
|
Pages: 2572-2580 |
|
doi>10.1109/TVCG.2011.166 |
|
Parallel
coordinates is a popular and well-known multivariate data visualization
technique. However, one of its inherent limitations concerns
the rendering of very large data sets. This often causes an overplotting
problem and the goal of the visual information seeking mantra is
hampered because of a cluttered overview and non-interactive update
rates. In this paper, we propose two novel solutions, namely, angular
histograms and attribute curves. These techniques are frequency-based
approaches to large, high-dimensional data visualization. They are able
to convey both the density of underlying polylines and their slopes.
Angular histograms and attribute curves offer an intuitive way for the
user to explore the clustering, linear correlations and outliers in
large data sets without the over-plotting and clutter problems
associated with traditional parallel coordinates. We demonstrate the
results on a wide variety of data sets including real-world,
high-dimensional biological data. Finally, we compare our methods with
the other popular frequency-based algorithms.
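The frequency-based idea can be sketched as binning segment angles between two adjacent parallel-coordinate axes; the binning scheme below is an assumption for illustration, not the authors' exact method.

```python
import math
from collections import Counter

# Sketch of an angular histogram: between two adjacent axes, every polyline
# segment has an angle. Counting segments per angle bin conveys both the
# density and the slope of the underlying lines without drawing each of the
# (possibly millions of) polylines.

def angular_histogram(left_vals, right_vals, axis_gap=1.0, n_bins=18):
    """left_vals/right_vals: normalised values on two adjacent axes.
    Returns a Counter mapping angle-bin index -> segment count."""
    bins = Counter()
    for y0, y1 in zip(left_vals, right_vals):
        angle = math.atan2(y1 - y0, axis_gap)          # in (-pi/2, pi/2)
        b = int((angle + math.pi / 2) / math.pi * n_bins)
        bins[min(b, n_bins - 1)] += 1
    return bins

hist = angular_histogram([0.1, 0.2, 0.9], [0.2, 0.3, 0.1])
```

Rendering each bin as a bar rooted at the axis, with length or opacity proportional to its count, then conveys density and dominant slope at a glance.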
|
|
DICON: Interactive Visual Analysis of Multidimensional Clusters |
|
Nan Cao,
David Gotz,
Jimeng Sun,
Huamin Qu
|
|
Pages: 2581-2590 |
|
doi>10.1109/TVCG.2011.188 |
|
Clustering
as a fundamental data analysis technique has been widely used in many
analytic applications. However, it is often difficult for users to
understand and evaluate multidimensional clustering results, especially
the quality of clusters and their semantics. For large and complex data,
high-level statistical information about the clusters is often needed
for users to evaluate cluster quality while a detailed display of
multidimensional attributes of the data is necessary to understand the
meaning of clusters. In this paper, we introduce DICON, an icon-based
cluster visualization that embeds statistical information into a
multi-attribute display to facilitate cluster interpretation,
evaluation, and comparison. We design a treemap-like icon to represent a
multidimensional cluster, and the quality of the cluster can be
conveniently evaluated with the embedded statistical information. We
further develop a novel layout algorithm which can generate similar
icons for similar clusters, making comparisons of clusters easier. User
interaction and clutter reduction are integrated into the system to help
users more effectively analyze and refine clustering results for large
datasets. We demonstrate the power of DICON through a user study and a
case study in the healthcare domain. Our evaluation shows the benefits
of the technique, especially in support of complex multidimensional
cluster analysis.
|
|
Brushing Dimensions—A Dual Visual Analysis Model for High-Dimensional Data |
|
Cagatay Turkay,
Peter Filzmoser,
Helwig Hauser
|
|
Pages: 2591-2599 |
|
doi>10.1109/TVCG.2011.178 |
|
In
many application fields, data analysts have to deal with datasets that
contain many expressions per item. The effective analysis of such
multivariate datasets is dependent on the user's ability to understand
both the intrinsic dimensionality of the dataset and the
distribution of the dependent values with respect to the dimensions. In
this paper, we propose a visualization model that enables the joint
interactive visual analysis of multivariate datasets with respect to
their dimensions as well as with respect to the actual data values. We
describe a dual setting of visualization and interaction in items space
and in dimensions space. The visualization of items is linked to the
visualization of dimensions with brushing and focus+context
visualization. With this approach, the user is able to jointly study the
structure of the dimensions space as well as the distribution of data
items with respect to the dimensions. Even though the proposed
visualization model is general, we demonstrate its application in the
context of a DNA microarray data analysis.
|
|
MoleView: An Attribute and Structure-Based Semantic Lens for Large Element-Based Plots |
|
Christophe Hurter,
Alexandru Telea,
Ozan Ersoy
|
|
Pages: 2600-2609 |
|
doi>10.1109/TVCG.2011.223 |
|
We
present MoleView, a novel technique for interactive exploration of
multivariate relational data. Given a spatial embedding of the data, in
terms of a scatter plot or graph layout, we propose a semantic lens
which selects a specific spatial and attribute-related data range. The
lens keeps the selected data in focus unchanged and continuously deforms
the data out of the selection range in order to maintain the context
around the focus. Specific deformations include distance-based repulsion
of scatter plot points, deforming straight-line node-link graph
drawings, and varying the simplification degree of bundled edge graph
layouts. Using a brushing-based technique, we further show the
applicability of our semantic lens for scenarios requiring a complex
selection of the zones of interest. Our technique is simple to implement
and provides real-time performance on large datasets. We demonstrate
our technique with actual data from air and road traffic control,
medical imaging, and software comprehension applications.
|
|
Author Index |
|
Pages: xxv-xxv |
|
doi>10.1109/TVCG.2011.172 |
|
|