|
TVCG Vis/InfoVis 2009 Front Matter |
|
Pages: i-xxviii |
|
doi: 10.1109/TVCG.2009.193 |
|
Full text available:
Publisher Site
|
|
|
|
ABySS-Explorer: Visualizing Genome Sequence Assemblies |
|
Cydney B. Nielsen,
Shaun D. Jackman,
Inanç Birol,
Steven J. M. Jones
|
|
Pages: 881-888 |
|
doi: 10.1109/TVCG.2009.116 |
|
Full text available:
Publisher Site
|
|
One
bottleneck in large-scale genome sequencing projects is reconstructing
the full genome sequence from the short subsequences produced by current
technologies. The final stages of the genome assembly process
inevitably require manual inspection of data inconsistencies and could
be greatly aided by visualization. This paper presents our design
decisions in translating key data features identified through
discussions with analysts into a concise visual encoding. Current
visualization tools in this domain focus on local sequence errors making
high-level inspection of the assembly difficult if not impossible. We
present a novel interactive graph display, ABySS-Explorer, that
emphasizes the global assembly structure while also integrating salient
data features such as sequence length. Our tool replaces manual and in
some cases pen-and-paper based analysis tasks, and we discuss how user
feedback was incorporated into iterative design refinements. Finally, we
touch on applications of this representation not initially considered
in our design phase, suggesting the generality of this encoding for DNA
sequence data.
|
|
Constructing Overview + Detail Dendrogram-Matrix Views |
|
Jin Chen,
Alan M. MacEachren,
Donna J. Peuquet
|
|
Pages: 889-896 |
|
doi: 10.1109/TVCG.2009.130 |
|
Full text available:
Publisher Site
|
|
A
dendrogram that visualizes a clustering hierarchy is often integrated
with a re-orderable matrix for pattern identification. The method is
widely used in many research fields including biology, geography,
statistics, and data mining. However, most dendrograms do not scale up
well, particularly with respect to problems of graphical and cognitive
information overload. This research proposes a strategy that links an
overview dendrogram and a detail-view dendrogram, each integrated with a
re-orderable matrix. The overview displays only a user-controlled,
limited number of nodes that represent the “skeleton” of a hierarchy.
The detail view displays the sub-tree represented by a selected
meta-node in the overview. The research presented here focuses on
constructing a concise overview dendrogram and its coordination with a
detail view. The proposed method has the following benefits: dramatic
alleviation of information overload, enhanced scalability and data
abstraction quality on the dendrogram, and the support of data
exploration at arbitrary levels of detail. The contribution of the paper
includes a new metric to measure the “importance” of nodes in a
dendrogram; the method to construct the concise overview dendrogram from
the dynamically-identified, important nodes; and a measure for evaluating
the data abstraction quality for dendrograms. We evaluate and compare
the proposed method to some related existing methods, and demonstrate
how the proposed method can help users find interesting patterns through
a case study on county-level U.S. cervical cancer mortality and
demographic data.
|
|
MizBee: A Multiscale Synteny Browser |
|
Miriah Meyer,
Tamara Munzner,
Hanspeter Pfister
|
|
Pages: 897-904 |
|
doi: 10.1109/TVCG.2009.167 |
|
Full text available:
Publisher Site
|
|
In the field of comparative genomics, scientists seek to answer questions
about evolution and genomic function by comparing the genomes of species
to find regions of shared sequences. Conserved syntenic blocks are an
important biological data abstraction for indicating regions of shared
sequences. The goal of this work is to show multiple types of
relationships at multiple scales in a way that is visually comprehensible
in accordance with known perceptual principles. We present a task
analysis for this domain where the fundamental questions asked by
biologists can be understood by a characterization of relationships into
the four types of proximity/location, size, orientation, and
similarity/strength, and the four scales of genome, chromosome, block,
and genomic feature. We also propose a new taxonomy of the design space
for visually encoding conservation data. We present MizBee, a multiscale
synteny browser with the unique property of providing interactive
side-by-side views of the data across the range of scales supporting
exploration of all of these relationship types. We conclude with case
studies from two biologists who used MizBee to augment their previous
automatic analysis workflow, providing anecdotal evidence about the
efficacy of the system for the visualization of syntenic data, the
analysis of conservation relationships, and the communication of
scientific insights.
|
|
GeneShelf: A Web-based Visual Interface for Large Gene Expression Time-Series Data Repositories |
|
Bohyoung Kim,
Bongshin Lee,
Susan Knoblach,
Eric Hoffman,
Jinwook Seo
|
|
Pages: 905-912 |
|
doi: 10.1109/TVCG.2009.146 |
|
Full text available:
Publisher Site
|
|
A
widespread use of high-throughput gene expression analysis techniques
enabled the biomedical research community to share a huge body of gene
expression datasets in many public databases on the web. However,
current gene expression data repositories provide static representations
of the data and support limited interactions. This hinders biologists
from effectively exploring shared gene expression datasets. Responding
to the growing need for better interfaces to improve the utility of the
public datasets, we have designed and developed a new web-based visual
interface entitled GeneShelf
(http://bioinformatics.cnmcresearch.org/GeneShelf). It builds upon a
zoomable grid display to represent two categorical dimensions. It also
incorporates an augmented timeline with expandable time points that
better shows multiple data values for the focused time point by
embedding bar charts. We applied GeneShelf to one of the largest
microarray datasets generated to study the progression and recovery
process of injuries at the spinal cord of mice and rats. We present a
case study and a preliminary qualitative user study with biologists to
show the utility and usability of GeneShelf.
|
|
Spatiotemporal Analysis of Sensor Logs using Growth Ring Maps |
|
Peter Bak,
Florian Mansmann,
Halldor Janetzko,
Daniel Keim
|
|
Pages: 913-920 |
|
doi: 10.1109/TVCG.2009.182 |
|
Full text available:
Publisher Site
|
|
Spatiotemporal
analysis of sensor logs is a challenging research field due to three
facts: a) traditional two-dimensional maps do not support multiple
events to occur at the same spatial location, b) three-dimensional
solutions introduce ambiguity and are hard to navigate, and c) map
distortions to solve the overlap problem are unfamiliar to most users.
This paper introduces a novel approach to represent spatial data
changing over time by plotting a number of non-overlapping pixels, close
to the sensor positions in a map. Thereby, we encode the amount of time
that a subject spent at a particular sensor as the number of plotted
pixels. Color is used in a twofold manner; while distinct colors
distinguish between sensor nodes in different regions, the colors’
intensity is used as an indicator of the temporal property of the
subjects’ activity. The resulting visualization technique, called Growth
Ring Maps, enables users to find similarities and extract patterns of
interest in spatiotemporal data by using humans’ perceptual abilities.
We demonstrate the newly introduced technique on a dataset that shows
the behavior of healthy and Alzheimer transgenic, male and female mice.
We motivate the new technique by showing that the temporal analysis
based on hierarchical clustering and the spatial analysis based on
transition matrices only reveal limited results. Results and findings
are cross-validated using multidimensional scaling. While the focus of
this paper is to apply our visualization for monitoring animal behavior,
the technique is also applicable for analyzing data, such as packet
tracing, geographic monitoring of sales development, or mobile phone
capacity planning.
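
To make the pixel-placement idea above concrete, the following minimal Python
sketch places one pixel per unit of dwell time in the free grid cells nearest
to a sensor, so that occupied rings grow outward; the grid size, ordering, and
example values are illustrative assumptions, not the authors' implementation.

    from math import hypot

    def growth_ring_positions(center, count, occupied, width, height):
        """Return `count` free grid cells nearest to `center`, nearest first."""
        cx, cy = center
        # Candidate cells sorted by distance to the sensor (ring order).
        candidates = sorted(
            ((x, y) for x in range(width) for y in range(height)),
            key=lambda p: hypot(p[0] - cx, p[1] - cy),
        )
        placed = []
        for cell in candidates:
            if cell not in occupied:
                occupied.add(cell)          # pixels never overlap across sensors
                placed.append(cell)
                if len(placed) == count:
                    break
        return placed

    # Two nearby sensors with different dwell times share the plane without
    # overplotting; the longer dwell time simply claims more rings.
    occupied = set()
    a = growth_ring_positions((10, 10), 12, occupied, 40, 40)
    b = growth_ring_positions((13, 10), 30, occupied, 40, 40)
    print(len(a), len(b), len(occupied))    # 12 30 42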
|
|
A Nested Model for Visualization Design and Validation |
|
Tamara Munzner
|
|
Pages: 921-928 |
|
doi: 10.1109/TVCG.2009.111 |
|
Full text available:
Publisher Site
|
|
We present a nested model for the visualization design process with four
layers: characterize the problem domain, abstract into operations on data
types, design visual encoding and interaction techniques, and create
algorithms to execute techniques efficiently. The output from a level
above is input to the level below, bringing attention to the design
challenge that an upstream error inevitably cascades to all downstream
levels. This model provides prescriptive guidance for determining
appropriate evaluation approaches by identifying threats to validity
unique to each level. We call attention to specific steps in the design
and evaluation process that are often given short shrift. We also provide
three recommendations motivated by this model: authors should
distinguish between these levels when claiming contributions at more than
one of them, authors should explicitly state upstream assumptions at
levels above the focus of a paper, and visualization venues should accept
more papers on domain characterization.
|
|
Conjunctive Visual Forms |
|
Chris Weaver
|
|
Pages: 929-936 |
|
doi: 10.1109/TVCG.2009.129 |
|
Full text available:
Publisher Site
|
|
Visual
exploration of multidimensional data is a process of isolating and
extracting relationships within and between dimensions. Coordinated
multiple view approaches are particularly effective for visual
exploration because they support precise expression of heterogeneous
multidimensional queries using simple interactions. Recent visual
analytics research has made significant progress in identifying and
understanding patterns of composed views and coordinations that support
fast, flexible, and open-ended data exploration. What is missing is
formalization of the space of expressible queries in terms of visual
representation and interaction. This paper introduces the Conjunctive
Visual Form model in which visual exploration consists of
interactively-driven sequences of transitions between visual states that
correspond to conjunctive normal forms in boolean logic. The model
predicts several new and useful ways to extend the space of rapidly
expressible queries through addition of simple interactive capabilities
to existing compositional patterns. Two recent related visual tools
offer a subset of these capabilities, providing a basis for conjecturing
about such extensions.
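
As a rough illustration of the query model, the Python sketch below evaluates a
conjunctive-normal-form query (a conjunction of disjunctive clauses of
per-record predicates) over a small record set; the records and predicates are
invented examples, and the paper's model concerns how such queries arise from
interactions with coordinated views rather than this evaluation code.

    def cnf_select(records, clauses):
        """A record matches if every clause (a list of predicates) has at
        least one predicate that holds for it: an AND of ORs."""
        return [
            r for r in records
            if all(any(pred(r) for pred in clause) for clause in clauses)
        ]

    records = [
        {"year": 2008, "venue": "InfoVis", "cites": 12},
        {"year": 2009, "venue": "Vis", "cites": 40},
        {"year": 2009, "venue": "InfoVis", "cites": 3},
    ]
    # (venue = InfoVis OR venue = Vis) AND (year = 2009)
    clauses = [
        [lambda r: r["venue"] == "InfoVis", lambda r: r["venue"] == "Vis"],
        [lambda r: r["year"] == 2009],
    ]
    print(cnf_select(records, clauses))     # the two 2009 records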
|
|
Interaction Techniques for Selecting and Manipulating Subgraphs in Network Visualizations |
|
Michael J. McGuffin,
Igor Jurisica
|
|
Pages: 937-944 |
|
doi: 10.1109/TVCG.2009.151 |
|
Full text available:
Publisher Site
|
|
We present a novel and extensible set of interaction techniques for
manipulating visualizations of networks by selecting subgraphs and then
applying various commands to modify their layout or graphical
properties. Our techniques integrate traditional rectangle and lasso
selection, and also support selecting a node's neighbourhood by dragging
out its radius (in edges) using a novel kind of radial menu. Commands
for translation, rotation, scaling, or modifying graphical properties
(such as opacity) and layout patterns can be performed by using a hotbox
(a transiently popped-up, semi-transparent set of widgets) that has
been extended in novel ways to integrate specification of commands with
1D or 2D arguments. Our techniques require only one mouse button and one
keyboard key, and are designed for fast, gestural, in-place
interaction. We present the design and integration of these interaction
techniques, and illustrate their use in interactive graph
visualization. Our techniques are implemented in NAViGaTOR, a software
package for visualizing and analyzing biological networks. An initial
usability study is also reported.
|
|
ActiviTree: Interactive Visual Exploration of Sequences in Event-Based Data Using Graph Similarity |
|
Katerina Vrotsou,
Jimmy Johansson,
Matthew Cooper
|
|
Pages: 945-952 |
|
doi: 10.1109/TVCG.2009.117 |
|
Full text available:
Publisher Site
|
|
The
identification of significant sequences in large and complex
event-based temporal data is a challenging problem with applications in
many areas of today's information intensive society. Pure visual
representations can be used for the analysis, but are constrained to
small data sets. Algorithmic search mechanisms used for larger data sets
become expensive as the data size increases and typically focus on
frequency of occurrence to reduce the computational complexity, often
overlooking important infrequent sequences and outliers. In this paper
we introduce an interactive visual data mining approach based on an
adaptation of techniques developed for web searching, combined with an
intuitive visual interface, to facilitate user-centred exploration of
the data and identification of sequences significant to that user. The
search algorithm used in the exploration executes in negligible time,
even for large data, and so no pre-processing of the selected data is
required, making this a completely interactive experience for the user.
Our particular application area is social science diary data but the
technique is applicable across many other disciplines.
|
|
“Search, Show Context, Expand on Demand”: Supporting Large Graph Exploration with Degree-of-Interest |
|
Frank van Ham,
Adam Perer
|
|
Pages: 953-960 |
|
doi: 10.1109/TVCG.2009.108 |
|
Full text available:
Publisher Site
|
|
A
common goal in graph visualization research is the design of novel
techniques for displaying an overview of an entire graph. However, there
are many situations where such an overview is not relevant or practical
for users, as analyzing the global structure may not be related to the
main task of the users that have semi-specific information needs.
Furthermore, users accessing large graph databases through an online
connection or users running on less powerful (mobile) hardware simply do
not have the resources needed to compute these overviews. In this
paper, we advocate an interaction model that allows users to remotely
browse the immediate context graph around a specific node of interest.
We show how Furnas’ original degree of interest function can be adapted
from trees to graphs and how we can use this metric to extract useful
contextual subgraphs, control the complexity of the generated
visualization and direct users to interesting datapoints in the context.
We demonstrate the effectiveness of our approach with an exploration of a
dense online database containing over 3 million legal citations.
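
Furnas' degree-of-interest formulation combines an a-priori interest term with
distance from the current focus, DOI(x | focus) = API(x) - D(x, focus). A
minimal Python sketch of applying such a function to a graph, using
breadth-first hop distance and an assumed interest term and threshold, is given
below; the paper's actual adaptation and parameters may differ.

    from collections import deque

    def graph_distances(adjacency, focus):
        """Breadth-first hop distance from `focus` to every reachable node."""
        dist = {focus: 0}
        queue = deque([focus])
        while queue:
            node = queue.popleft()
            for neighbour in adjacency[node]:
                if neighbour not in dist:
                    dist[neighbour] = dist[node] + 1
                    queue.append(neighbour)
        return dist

    def doi_subgraph(adjacency, focus, api, threshold=-2):
        """Keep nodes whose DOI = API(x) - D(x, focus) stays above a threshold."""
        dist = graph_distances(adjacency, focus)
        return {n for n, d in dist.items() if api.get(n, 0) - d >= threshold}

    # Toy citation graph (undirected here), with one a-priori interesting node.
    adj = {"a": {"b", "c"}, "b": {"a", "d"}, "c": {"a"}, "d": {"b", "e"}, "e": {"d"}}
    api = {"d": 1}
    print(doi_subgraph(adj, "a", api))      # context extracted around the focus "a"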
|
|
A Comparison of User-Generated and Automatic Graph Layouts |
|
Tim Dwyer,
Bongshin Lee,
Danyel Fisher,
Kori Inkpen Quinn,
Petra Isenberg,
George Robertson,
Chris North
|
|
Pages: 961-968 |
|
doi: 10.1109/TVCG.2009.109 |
|
Full text available:
Publisher Site
|
|
The
research presented in this paper compares user-generated and automatic
graph layouts. Following the methods suggested by van Ham et al. (2008),
a group of users generated graph layouts using both multi-touch
interaction on a tabletop display and mouse interaction on a desktop
computer. Users were asked to optimize their layout for aesthetics and
analytical tasks with a social network. We discuss characteristics of
the user-generated layouts and interaction methods employed by users in
this process. We then report on a web-based study to compare these
layouts with the output of popular automatic layout algorithms. Our
results demonstrate that the best of the user-generated layouts
performed as well as or better than the physics-based layout. Orthogonal
and circular automatic layouts were found to be considerably less
effective than either the physics-based layout or the best of the
user-generated layouts. We highlight several attributes of the various
layouts that led to high accuracy and improved task completion time, as
well as aspects in which traditional automatic layout methods were
unsuccessful for our tasks.
|
|
Smooth Graphs for Visual Exploration of Higher-Order State Transitions |
|
Jorik Blaas,
Charl Botha,
Edward Grundy,
Mark Jones,
Robert Laramee,
Frits Post
|
|
Pages: 969-976 |
|
doi: 10.1109/TVCG.2009.181 |
|
Full text available:
Publisher Site
|
|
In this paper, we present a new visual way of exploring state sequences in
large observational time-series. A key advantage of our method is that
it can directly visualize higher-order state transitions. A standard first
order state transition is a sequence of two states that are linked by a
transition. A higher-order state transition is a sequence of three or more
states where the sequence of participating states are linked together by
consecutive first order state transitions. Our method extends the current
state-graph exploration methods by employing a two dimensional graph, in
which higher-order state transitions are visualized as curved lines. All
transitions are bundled into thick splines, so that the thickness of an
edge represents the frequency of instances. The bundling between two
states takes into account the state transitions before and after the
transition. This is done in such a way that it forms a continuous
representation in which any subsequence of the time series is represented
by a continuous smooth line. The edge bundles in these graphs can be
explored interactively through our incremental selection algorithm. We
demonstrate our method with an application in exploring labeled
time-series data from a biological survey, where a clustering has
assigned a single label to the data at each time-point. In these
sequences, a large number of cyclic patterns occur, which in turn are
linked to specific activities. We demonstrate how our method helps to
find these cycles, and how the interactive selection process helps to
find and investigate activities.
|
|
Configuring Hierarchical Layouts to Address Research Questions |
|
Aidan Slingsby,
Jason Dykes,
Jo Wood
|
|
Pages: 977-984 |
|
doi: 10.1109/TVCG.2009.128 |
|
Full text available:
Publisher Site
|
|
We
explore the effects of selecting alternative layouts in hierarchical
displays that show multiple aspects of large multivariate datasets,
including spatial and temporal characteristics. Hierarchical displays of
this type condition a dataset by multiple discrete variable values,
creating nested graphical summaries of the resulting subsets in which
size, shape and colour can be used to show subset properties. These
'small multiples' are ordered by the conditioning variable values and
are laid out hierarchically using dimensional stacking. Crucially, we
consider the use of different layouts at different hierarchical levels,
so that the coordinates of the plane can be used more effectively to
draw attention to trends and anomalies in the data. We argue that these
layouts should be informed by the type of conditioning variable and by
the research question being explored. We focus on space-filling
rectangular layouts that provide data-dense and rich overviews of data
to address research questions posed in our exploratory analysis of
spatial and temporal aspects of property sales in London. We develop a
notation ('HiVE') that describes visualisation and layout states and
provides reconfiguration operators, demonstrate its use for
reconfiguring layouts to pursue research questions and provide
guidelines for this process. We demonstrate how layouts can be related
through animated transitions to reduce the cognitive load associated
with their reconfiguration whilst supporting the exploratory process.
|
|
Visualizing Social Photos on a Hasse Diagram for Eliciting Relations and Indexing New Photos |
|
Michel Crampes,
Jeremy de Oliveira-Kumar,
Sylvie Ranwez,
Jean Villerd
|
|
Pages: 985-992 |
|
doi: 10.1109/TVCG.2009.201 |
|
Full text available:
Publisher Site
|
|
Social
photos, which are taken during family events or parties, represent
individuals or groups of people. We show in this paper how a Hasse
diagram is an efficient visualization strategy for eliciting different
groups and navigating through them. However, we do not limit this
strategy to these traditional uses. Instead we show how it can also be
used for assisting in indexing new photos. Indexing consists of
identifying the event and people in photos. It is an integral phase that
takes place before searching and sharing. In our method we use existing
indexed photos to index new photos. This is performed through a manual
drag and drop procedure followed by a content fusion process that we
call ’propagation’. At the core of this process is the necessity to
organize and visualize the photos that will be used for indexing in a
manner that is easily recognizable and accessible by the user. In this
respect we make use of an Object Galois Sub-Hierarchy and display it
using a Hasse diagram. The need for an incremental display that
maintains the user’s mental map also leads us to propose a novel way of
building the Hasse diagram. To validate the approach, we present some
tests conducted with a sample of users that confirm the interest of this
organization, visualization and indexation approach. Finally, we
conclude by considering scalability, the possibility to extract social
networks and automatically create personalised albums.
|
|
Interactive Dimensionality Reduction Through User-defined Combinations of Quality Metrics |
|
Sara Johansson,
Jimmy Johansson
|
|
Pages: 993-1000 |
|
doi: 10.1109/TVCG.2009.153 |
|
Full text available:
Publisher Site
|
|
Multivariate
data sets including hundreds of variables are increasingly common in
many application areas. Most multivariate visualization techniques are
unable to display such data effectively, and a common approach is to
employ dimensionality reduction prior to visualization. Most existing
dimensionality reduction systems focus on preserving one or a few
significant structures in data. For many analysis tasks, however,
several types of structures can be of high significance and the
importance of a certain structure compared to the importance of another
is often task-dependent. This paper introduces a system for
dimensionality reduction by combining user-defined quality metrics using
weight functions to preserve as many important structures as possible.
The system aims at effective visualization and exploration of structures
within large multivariate data sets and provides enhancement of diverse
structures by supplying a range of automatic variable orderings.
Furthermore it enables a quality-guided reduction of variables through
an interactive display facilitating investigation of trade-offs between
loss of structure and the number of variables to keep. The generality
and interactivity of the system is demonstrated through a case scenario.
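
The weighted combination of quality metrics described above can be pictured as
ranking variables by a user-weighted sum of normalized per-variable scores, as
in the Python sketch below; the metrics, the normalization, and the weights are
illustrative assumptions rather than the formulation used in the paper.

    def normalize(scores):
        """Scale one metric to [0, 1] so metrics on different ranges are comparable."""
        lo, hi = min(scores.values()), max(scores.values())
        span = (hi - lo) or 1.0
        return {var: (s - lo) / span for var, s in scores.items()}

    def rank_variables(metric_scores, weights, keep):
        """metric_scores: {metric: {variable: score}}; weights: {metric: weight}."""
        combined = {}
        for name, scores in metric_scores.items():
            for var, s in normalize(scores).items():
                combined[var] = combined.get(var, 0.0) + weights.get(name, 0.0) * s
        return sorted(combined, key=combined.get, reverse=True)[:keep]

    metrics = {
        "variance": {"x1": 4.0, "x2": 0.5, "x3": 2.0},
        "correlation": {"x1": 0.1, "x2": 0.9, "x3": 0.7},
    }
    # Keep the two variables scoring highest under equal metric weights.
    print(rank_variables(metrics, {"variance": 0.5, "correlation": 0.5}, keep=2))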
|
|
Scattering Points in Parallel Coordinates |
|
Xiaoru Yuan,
Peihong Guo,
He Xiao,
Hong Zhou,
Huamin Qu
|
|
Pages: 1001-1008 |
|
doi: 10.1109/TVCG.2009.179 |
|
Full text available:
Publisher Site
|
|
In
this paper, we present a novel parallel coordinates design integrated
with points (Scattering Points in Parallel Coordinates, SPPC), by taking
advantage of both parallel coordinates and scatterplots. Different from
most multiple views visualization frameworks involving parallel
coordinates where each visualization type occupies an individual window,
we convert two selected neighboring coordinate axes into a scatterplot
directly. Multidimensional scaling is adopted to allow converting
multiple axes into a single subplot. The transition between two visual
types is designed in a seamless way. In our work, a series of
interaction tools has been developed. Uniform brushing functionality is
implemented to allow the user to perform data selection on both points
and parallel coordinate polylines without explicitly switching tools. A
GPU accelerated Dimensional Incremental Multidimensional Scaling (DIMDS)
has been developed to significantly improve the system performance. Our
case study shows that our scheme is more efficient than traditional
multi-view methods in performing visual analysis tasks.
|
|
Bubble Sets: Revealing Set Relations with Isocontours over Existing Visualizations |
|
Christopher Collins,
Gerald Penn,
Sheelagh Carpendale
|
|
Pages: 1009-1016 |
|
doi: 10.1109/TVCG.2009.122 |
|
Full text available:
Publisher Site
|
|
While
many data sets contain multiple relationships, depicting more than one
data relationship within a single visualization is challenging. We
introduce Bubble Sets as a visualization technique for data that has
both a primary data relation with a semantically significant spatial
organization and a significant set membership relation in which members
of the same set are not necessarily adjacent in the primary layout. In
order to maintain the spatial rights of the primary data relation, we
avoid layout adjustment techniques that improve set cluster continuity
and density. Instead, we use a continuous, possibly concave, isocontour
to delineate set membership, without disrupting the primary layout.
Optimizations minimize cluster overlap and provide for calculation of
the isocontours at interactive speeds. Case studies show how this
technique can be used to indicate multiple sets on a variety of common
visualizations.
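
A much-simplified Python sketch of the underlying idea is shown below: sum an
influence field from the fixed positions of one set's members and threshold it
to obtain the region that delineates membership, without moving the primary
layout. The kernel, grid resolution, and threshold are illustrative
assumptions; the actual technique additionally routes the field along virtual
edges between members and extracts a smooth isocontour.

    def energy_field(members, width, height, radius=3.0):
        """Sum a simple distance-falloff kernel from each set member over a grid."""
        field = [[0.0] * width for _ in range(height)]
        for (mx, my) in members:
            for y in range(height):
                for x in range(width):
                    d2 = (x - mx) ** 2 + (y - my) ** 2
                    field[y][x] += max(0.0, 1.0 - d2 / (radius * radius))
        return field

    def set_region(members, width, height, threshold=0.5):
        """Grid cells lying inside the set's contour (field value above threshold)."""
        field = energy_field(members, width, height)
        return {(x, y) for y in range(height) for x in range(width)
                if field[y][x] >= threshold}

    # Two set members whose items are not adjacent in the (fixed) primary layout.
    print(sorted(set_region([(2, 2), (7, 3)], width=10, height=6)))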
|
|
FromDaDy: Spreading Aircraft Trajectories Across Views to Support Iterative Queries |
|
Christophe Hurter,
Benjamin Tissoires,
Stéphane Conversy
|
|
Pages: 1017-1024 |
|
doi: 10.1109/TVCG.2009.145 |
|
Full text available:
Publisher Site
|
|
When
displaying thousands of aircraft trajectories on a screen, the
visualization is spoiled by a tangle of trails. The visual analysis is
therefore difficult, especially if a specific class of trajectories in
an erroneous dataset has to be studied. We designed FromDaDy, a
trajectory visualization tool that tackles the difficulties of exploring
the visualization of multiple trails. This multidimensional data
exploration is based on scatterplots, brushing, pick and drop,
juxtaposed views and rapid visual design. Users can organize the
workspace composed of multiple juxtaposed views. They can define the
visual configuration of the views by connecting data dimensions from the
dataset to Bertin’s visual variables. They can then brush trajectories,
and with a pick and drop operation they can spread the brushed
information across views. They can then repeat these interactions, until
they extract a set of relevant data, thus formulating complex queries.
Through two real-world scenarios, we show how FromDaDy supports
iterative queries and the extraction of trajectories in a dataset that
contains up to 5 million data points.
|
|
SellTrend: Inter-Attribute Visual Analysis of Temporal Transaction Data |
|
Zhicheng Liu,
John Stasko,
Timothy Sullivan
|
|
Pages: 1025-1032 |
|
doi: 10.1109/TVCG.2009.180 |
|
Full text available:
Publisher Site
|
|
We
present a case study of our experience designing SellTrend, a
visualization system for analyzing airline travel purchase requests. The
relevant transaction data can be characterized as multi-variate
temporal and categorical event sequences, and the chief problem
addressed is how to help company analysts identify complex combinations
of transaction attributes that contribute to failed purchase requests.
SellTrend combines a diverse set of techniques ranging from time series
visualization to faceted browsing and historical trend analysis in order
to help analysts make sense of the data. We believe that the
combination of views and interaction capabilities in SellTrend provides
an innovative approach to this problem and to other similar types of
multivariate, temporally driven transaction data analysis. Initial
feedback from company analysts confirms the utility and benefits of the
system.
|
|
Comparing Dot and Landscape Spatializations for Visual Memory Differences |
|
Melanie Tory,
Colin Swindells,
Rebecca Dreezer
|
|
Pages: 1033-1040 |
|
doi: 10.1109/TVCG.2009.127 |
|
Full text available:
Publisher Site
|
|
Spatialization displays use a geographic metaphor to arrange non-spatial
data. For example, spatializations are commonly applied to document
collections so that document themes appear as geographic features such as
hills. Many common spatialization interfaces use a 3-D landscape metaphor
to present data. However, it is not clear whether 3-D spatializations
afford improved speed and accuracy for user tasks compared to similar 2-D
spatializations. We describe a user study comparing users’ ability to
remember dot displays, 2-D landscapes, and 3-D landscapes for two
different data densities (500 vs. 1000 points). Participants’ visual
memory was statistically more accurate when viewing dot displays and 3-D
landscapes compared to 2-D landscapes. Furthermore, accuracy
remembering a spatialization was significantly better overall for denser
spatializations. These results are of benefit to visualization designers
who are contemplating the best ways to present data using
spatialization techniques.
|
|
Flow Mapping and Multivariate Visualization of Large Spatial Interaction Data |
|
Diansheng Guo
|
|
Pages: 1041-1048 |
|
doi: 10.1109/TVCG.2009.143 |
|
Full text available:
Publisher Site
|
|
Spatial
interactions (or flows), such as population migration and disease
spread, naturally form a weighted location-to-location network (graph).
Such geographically embedded networks (graphs) are usually very large.
For example, the county-to-county migration data in the U.S. has
thousands of counties and about a million migration paths. Moreover,
many variables are associated with each flow, such as the number of
migrants for different age groups, income levels, and occupations. It is
a challenging task to visualize such data and discover network
structures, multivariate relations, and their geographic patterns
simultaneously. This paper addresses these challenges by developing an
integrated interactive visualization framework that consists of three
coupled components: (1) a spatially constrained graph partitioning
method that can construct a hierarchy of geographical regions
(communities), where there are more flows or connections within regions
than across regions; (2) a multivariate clustering and visualization
method to detect and present multivariate patterns in the aggregated
region-to-region flows; and (3) a highly interactive flow mapping
component to map both flow and multivariate patterns in the geographic
space, at different hierarchical levels. The proposed approach can
process relatively large data sets and effectively discover and
visualize major flow structures and multivariate relations at the same
time. User interactions are supported to facilitate the understanding of
both an overview and detailed patterns.
|
|
Temporal Summaries: Supporting Temporal Categorical Searching, Aggregation and Comparison |
|
Taowei David Wang,
Catherine Plaisant,
Ben Shneiderman,
Neil Spring,
David Roseman,
Greg Marchand,
Vikramjit Mukherjee,
Mark Smith
|
|
Pages: 1049-1056 |
|
doi: 10.1109/TVCG.2009.187 |
|
Full text available:
Publisher Site
|
|
When analyzing thousands of event histories, analysts often want to see the
events as an aggregate to detect insights and generate new hypotheses
about the data. An analysis tool must emphasize both the prevalence and
the temporal ordering of these events. Additionally, the analysis tool
must also support flexible comparisons to allow analysts to gather
visual evidence. In a previous work, we introduced align, rank, and
filter (ARF) to accentuate temporal ordering. In this paper, we present
temporal summaries, an interactive visualization technique that
highlights the prevalence of event occurrences. Temporal summaries
dynamically aggregate events in multiple granularities (year, month,
week, day, hour, etc.) for the purpose of spotting trends over time and
comparing several groups of records. They provide affordances for
analysts to perform temporal range filters. We demonstrate the
applicability of this approach in two extensive case studies with
analysts who applied temporal summaries to search, filter, and look for
patterns in electronic health records and academic records.
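
The aggregation step described above amounts to binning timestamped events at a
chosen granularity so that groups of records can be compared over time. A
minimal Python sketch, with invented example data and only three granularities,
is given here for illustration; it is not the tool's implementation.

    from collections import Counter
    from datetime import datetime

    def temporal_summary(events, granularity="month"):
        """Count events per time bin; `events` is a list of datetime objects."""
        fmt = {"year": "%Y", "month": "%Y-%m", "day": "%Y-%m-%d"}[granularity]
        return Counter(e.strftime(fmt) for e in events)

    group_a = [datetime(2009, 1, 3), datetime(2009, 1, 20), datetime(2009, 2, 5)]
    group_b = [datetime(2009, 2, 11), datetime(2009, 2, 14)]

    # Compare the two groups at monthly and yearly granularity.
    print(temporal_summary(group_a))              # Counter({'2009-01': 2, '2009-02': 1})
    print(temporal_summary(group_b, "year"))      # Counter({'2009': 2})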
|
|
ResultMaps: Visualization for Search Interfaces |
|
Edward Clarkson,
Krishna Desai,
James Foley
|
|
Pages: 1057-1064 |
|
doi: 10.1109/TVCG.2009.176 |
|
Full text available:
Publisher Site
|
|
Hierarchical representations are common in digital repositories, yet are
not always fully leveraged in their online search interfaces. This work
describes ResultMaps, which use hierarchical treemap representations with
query string-driven digital library search engines. We describe two lab
experiments, which find that ResultMap users yield significantly better
results over a control condition on some subjective measures, and we
find evidence that ResultMaps have ancillary benefits via increased
understanding of some aspects of repository content. The ResultMap
system and experiments contribute an understanding of the
benefits—direct and indirect—of the ResultMap approach to repository
search visualization.
|
|
Lark: Coordinating Co-located Collaboration with Information Visualization |
|
Matthew Tobiasz,
Petra Isenberg,
Sheelagh Carpendale
|
|
Pages: 1065-1072 |
|
doi: 10.1109/TVCG.2009.162 |
|
Full text available:
Publisher Site
|
|
Large
multi-touch displays are expanding the possibilities of
multiple-coordinated views by allowing multiple people to interact with
data in concert or independently. We present Lark, a system that
facilitates the coordination of interactions with information
visualizations on shared digital workspaces. We focus on supporting this
coordination according to four main criteria: scoped interaction,
temporal flexibility, spatial flexibility, and changing collaboration
styles. These are achieved by integrating a representation of the
information visualization pipeline into the shared workspace, thus
explicitly indicating coordination points on data, representation,
presentation, and view levels. This integrated meta-visualization
supports both the awareness of how views are linked and the freedom to
work in concert or independently. Lark incorporates these four main
criteria into a coherent visualization collaboration interaction
environment by providing direct visual and algorithmic support for the
coordination of data analysis actions over shared large displays.
|
|
The Benefits of Synchronous Collaborative Information Visualization: Evidence from an Experimental Evaluation |
|
Sabrina Bresciani,
Martin J. Eppler
|
|
Pages: 1073-1080 |
|
doi: 10.1109/TVCG.2009.188 |
|
Full text available:
Publisher Site
|
|
A
great corpus of studies reports empirical evidence of how information
visualization supports comprehension and analysis of data. The benefits
of visualization for synchronous group knowledge work, however, have not
been addressed extensively. Anecdotal evidence and use cases illustrate
the benefits of synchronous collaborative information visualization, but
very few empirical studies have rigorously examined the impact of
visualization on group knowledge work. We have consequently designed and
conducted an experiment in which we have analyzed the impact of
visualization on knowledge sharing in situated work groups. Our
experimental study consists of evaluating the performance of 131
subjects (all experienced managers) in groups of 5 (for a total of 26
groups), working together on a real-life knowledge sharing task. We
compare (1) the control condition (no visualization provided), with two
visualization supports: (2) optimal and (3) suboptimal visualization
(based on a previous survey). The facilitator of each group was asked to
populate the provided interactive visual template with insights from
the group, and to organize the contributions according to the group
consensus. We have evaluated the results through both objective and
subjective measures. Our statistical analysis clearly shows that
interactive visualization has a statistically significant, objective and
positive impact on the outcomes of knowledge sharing, but that the
subjects seem not to be aware of this. In particular, groups supported
by visualization achieved higher productivity, higher quality of outcome
and greater knowledge gains. No statistically significant results could
be found between an optimal and a suboptimal visualization though (as
classified by the pre-experiment survey). Subjects also did not seem to
be aware of the benefits that the visualizations provided as no
difference between the visualization and the control conditions was
found for the self-reported measures of satisfaction and participation.
An implication of our study for information visualization applications
is to extend them by using real-time group annotation functionalities
that aid in the group sense making process of the represented data.
|
|
Harnessing the Information Ecosystem with Wiki-based Visualization Dashboards |
|
Matt McKeon
|
|
Pages: 1081-1088 |
|
doi: 10.1109/TVCG.2009.148 |
|
Full text available:
Publisher Site
|
|
We
describe the design and deployment of Dashiki, a public website where
users may collaboratively build visualization dashboards through a
combination of a wiki-like syntax and interactive editors. Our goals are
to extend existing research on social data analysis into presentation
and organization of data from multiple sources, explore new metaphors
for these activities, and participate more fully in the web’s
information ecology by providing tighter integration with real-time
data. To support these goals, our design includes novel and low-barrier
mechanisms for editing and layout of dashboard pages and visualizations,
connection to data sources, and coordinating interaction between
visualizations. In addition to describing these technologies, we provide
a preliminary report on the public launch of a prototype based on this
design, including a description of the activities of our users derived
from observation and interviews.
|
|
SpicyNodes: Radial Layout Authoring for the General Public |
|
Michael Douma,
Grzegorz Ligierko,
Ovidiu Ancuta,
Pavel Gritsai,
Sean Liu
|
|
Pages: 1089-1096 |
|
doi: 10.1109/TVCG.2009.183 |
|
Full text available:
Publisher Site
|
|
Trees
and graphs are relevant to many online tasks such as visualizing social
networks, product catalogs, educational portals, digital libraries, the
semantic web, concept maps and personalized information management.
SpicyNodes is an information-visualization technology that builds upon
existing research on radial tree layouts and graph structures. Users can
browse a tree, clicking from node to node, as well as successively
viewing a node, immediately related nodes and the path back to the
“home” nodes. SpicyNodes’ layout algorithms maintain balanced layouts
using a hybrid mixture of a geometric layout (a succession of spanning
radial trees) and force-directed layouts to minimize overlapping nodes,
plus several other improvements over prior art. It provides an XML-based
API and GUI authoring tools. The goal of the SpicyNodes project is to
implement familiar principles of radial maps and focus+context with an
attractive and inviting look and feel in an open system that is
accessible to virtually any Internet user.
|
|
code_swarm: A Design Study in Organic Software Visualization |
|
Michael Ogawa,
Kwan-Liu Ma
|
|
Pages: 1097-1104 |
|
doi: 10.1109/TVCG.2009.123 |
|
Full text available:
Publisher Site
|
|
In
May of 2008, we published online a series of software visualization
videos using a method called code_swarm. Shortly thereafter, we made the
code open source and its popularity took off. This paper is a study of
our code_swarm application, comprising its design, results and public
response. We share our design methodology, including why we chose the
organic information visualization technique, how we designed for both
developers and a casual audience, and what lessons we learned from our
experiment. We validate the results produced by code_swarm through a
qualitative analysis and by gathering online user comments. Furthermore,
we successfully released the code as open source, and the software
community used it to visualize their own projects and shared their
results as well. In the end, we believe code_swarm has positive
implications for the future of organic information design and open
source information visualization practice.
|
|
Towards Utilizing GPUs in Information Visualization: A Model and Implementation of Image-Space Operations |
|
Bryan McDonnel,
Niklas Elmqvist
|
|
Pages: 1105-1112 |
|
doi: 10.1109/TVCG.2009.191 |
|
Full text available:
Publisher Site
|
|
Modern
programmable GPUs represent a vast potential in terms of performance
and visual flexibility for information visualization research, but
surprisingly few applications even begin to utilize this potential. In
this paper, we conjecture that this may be due to the mismatch between
the high-level abstract data types commonly visualized in our field, and
the low-level floating-point model supported by current GPU shader
languages. To help remedy this situation, we present a refinement of the
traditional information visualization pipeline that is amenable to
implementation using GPU shaders. The refinement consists of a final
image-space step in the pipeline where the multivariate data of the
visualization is sampled in the resolution of the current view. To
concretize the theoretical aspects of this work, we also present a
visual programming environment for constructing visualization shaders
using a simple drag-and-drop interface. Finally, we give some examples
of the use of shaders for well-known visualization techniques.
|
|
A Multi-Threading Architecture to Support Interactive Visual Exploration |
|
Harald Piringer,
Christian Tominski,
Philipp Muigg,
Wolfgang Berger
|
|
Pages: 1113-1120 |
|
doi: 10.1109/TVCG.2009.110 |
|
Full text available:
Publisher Site
|
|
During
continuous user interaction, it is hard to provide rich visual feedback
at interactive rates for datasets containing millions of entries. The
contribution of this paper is a generic architecture that ensures
responsiveness of the application even when dealing with large data and
that is applicable to most types of information visualizations. Our
architecture builds on the separation of the main application thread and
the visualization thread, which can be cancelled early due to user
interaction. In combination with a layer mechanism, our architecture
facilitates generating previews incrementally to provide rich visual
feedback quickly. To help avoid common pitfalls of multi-threading,
we discuss synchronization and communication in detail. We explicitly
denote design choices to control trade-offs. A quantitative evaluation
based on the system Visplore shows fast visual feedback during
continuous interaction even for millions of entries. We describe
instantiations of our architecture in additional tools.
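
The central mechanism, a visualization worker that can be cancelled early by
user interaction while posting incremental preview layers back to the main
thread, can be sketched in Python as follows. The class name, chunk size, and
the stand-in for rendering work are illustrative assumptions, not the
architecture's actual implementation.

    import queue
    import threading
    import time

    class VisWorker:
        """Renders in chunks, checks a cancel flag between chunks, posts previews."""
        def __init__(self, data, chunk=100_000):
            self.data, self.chunk = data, chunk
            self.cancel = threading.Event()
            self.previews = queue.Queue()      # the main thread drains this for drawing

        def run(self):
            total = 0
            for start in range(0, len(self.data), self.chunk):
                if self.cancel.is_set():       # user interacted: stop this pass early
                    return
                total += sum(self.data[start:start + self.chunk])  # stand-in for rendering
                self.previews.put(("partial", start + self.chunk, total))
            self.previews.put(("done", len(self.data), total))

    worker = VisWorker(list(range(1_000_000)))
    thread = threading.Thread(target=worker.run, daemon=True)
    thread.start()
    time.sleep(0.01)
    worker.cancel.set()                        # simulate a new user interaction
    thread.join()
    while not worker.previews.empty():         # previews that finished remain usable
        print(worker.previews.get())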
|
|
Protovis: A Graphical Toolkit for Visualization |
|
Michael Bostock,
Jeffrey Heer
|
|
Pages: 1121-1128 |
|
doi: 10.1109/TVCG.2009.174 |
|
Full text available:
Publisher Site
|
|
Despite
myriad tools for visualizing data, there remains a gap between the
notational efficiency of high-level visualization systems and the
expressiveness and accessibility of low-level graphical systems.
Powerful visualization systems may be inflexible or impose abstractions
foreign to visual thinking, while graphical systems such as rendering
APIs and vector-based drawing programs are tedious for complex work. We
argue that an easy-to-use graphical system tailored for visualization is
needed. In response, we contribute Protovis, an extensible toolkit for
constructing visualizations by composing simple graphical primitives. In
Protovis, designers specify visualizations as a hierarchy of marks with
visual properties defined as functions of data. This representation
achieves a level of expressiveness comparable to low-level graphics
systems, while improving efficiency--the effort required to specify a
visualization--and accessibility--the effort required to learn and
modify the representation. We substantiate this claim through a diverse
collection of examples and comparative analysis with popular
visualization tools.
|
|
Visual Analysis of Inter-Process Communication for Large-Scale Parallel Computing |
|
Chris Muelder,
Francois Gygi,
Kwan-Liu Ma
|
|
Pages: 1129-1136 |
|
doi: 10.1109/TVCG.2009.196 |
|
Full text available:
Publisher Site
|
|
In
serial computation, program profiling is often helpful for optimization
of key sections of code. When moving to parallel computation, not only
does the code execution need to be considered but also communication
between the different processes which can induce delays that are
detrimental to performance. As the number of processes increases, so
does the impact of the communication delays on performance. For
large-scale parallel applications, it is critical to understand how the
communication impacts performance in order to make the code more
efficient. There are several tools available for visualizing program
execution and communications on parallel systems. These tools generally
provide either views which statistically summarize the entire program
execution or process-centric views. However, process-centric
visualizations do not scale well as the number of processes gets very
large. In particular, the most common representation of parallel
processes is a Gantt chart with a row for each process. As the number
of processes increases, these charts can become difficult to work with
and can even exceed screen resolution. We propose a new visualization
approach that affords more scalability and then demonstrate it on
systems running with up to 16,384 processes.
|
|
Participatory Visualization with Wordle |
|
Fernanda B. Viegas,
Martin Wattenberg,
Jonathan Feinberg
|
|
Pages: 1137-1144 |
|
doi: 10.1109/TVCG.2009.171 |
|
Full text available:
Publisher Site
|
|
We
discuss the design and usage of “Wordle,” a web-based tool for
visualizing text. Wordle creates tag-cloud-like displays that give
careful attention to typography, color, and composition. We describe the
algorithms used to balance various aesthetic criteria and create the
distinctive Wordle layouts. We then present the results of a study of
Wordle usage, based both on spontaneous behaviour observed in the wild,
and on a large-scale survey of Wordle users. The results suggest that
Wordles have become a kind of medium of expression, and that a
“participatory culture” has arisen around them.
|
|
Document Cards: A Top Trumps Visualization for Documents |
|
Hendrik Strobelt,
Daniela Oelke,
Christian Rohrdantz,
Andreas Stoffel,
Daniel A. Keim,
Oliver Deussen
|
|
Pages: 1145-1152 |
|
doi: 10.1109/TVCG.2009.139 |
|
Full text available:
Publisher Site
|
|
Finding
suitable, less space consuming views for a document’s main content is
crucial to provide convenient access to large document collections on
display devices of different size. We present a novel compact
visualization which represents the document’s key semantic as a mixture
of images and important key terms, similar to cards in a top trumps
game. The key terms are extracted using an advanced text mining approach
based on a fully automatic document structure extraction. The images
and their captions are extracted using a graphical heuristic and the
captions are used for a semi-semantic image weighting. Furthermore, we
use the image color histogram for classification and show at least one
representative from each non-empty image class. The approach is
demonstrated for the IEEE InfoVis publications of a complete year. The
method can easily be applied to other publication collections and sets
of documents which contain images.
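The sketch below illustrates only the key-term half of such a card, using a
plain TF-IDF ranking as a stand-in for the paper's structure-aware text
mining; the three toy documents are invented.

from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "streak surfaces are integrated on the gpu for unsteady flow",
    "tag clouds emphasize typography color and composition",
    "isosurface extraction is verified by convergence of geometric features",
]

vec = TfidfVectorizer(stop_words="english")
tfidf = vec.fit_transform(docs)                 # documents x terms sparse matrix
terms = vec.get_feature_names_out()

for d in range(tfidf.shape[0]):
    row = tfidf[d].toarray().ravel()
    top = row.argsort()[::-1][:4]               # four highest-weighted terms
    print(f"card {d}:", [terms[i] for i in top if row[i] > 0])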
|
|
Visualizing the Intellectual Structure with Paper-Reference Matrices |
|
Jian Zhang,
Chaomei Chen,
Jiexun Li
|
|
Pages: 1153-1160 |
|
doi>10.1109/TVCG.2009.202 |
|
Full text available:
Publisher Site
|
|
Visualizing
the intellectual structure of scientific domains using co-cited units
such as references or authors has become a routine for domain analysis.
In previous studies, paper-reference matrices are usually transformed
into reference-reference matrices to obtain co-citation relationships,
which are then visualized in different representations, typically as
node-link networks, to represent the intellectual structures of
scientific domains. Such network visualizations sometimes contain
tightly knit components, which make visual analysis of the intellectual
structure a challenging task. In this study, we propose a new approach
to reveal co-citation relationships. Instead of using a
reference-reference matrix, we directly use the original paper-reference
matrix as the information source, and transform the paper-reference
matrix into an FP-tree and visualize it in a Java-based prototype
system. We demonstrate the usefulness of our approach through visual
analyses of the intellectual structure of two domains: Information
Visualization and Sloan Digital Sky Survey (SDSS). The results show that
our visualization not only retains the major information of co-citation
relationships, but also reveals more detailed sub-structures of tightly
knit clusters than a conventional node-link network visualization.
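A much-simplified illustration of the underlying idea follows: a toy
paper-reference matrix is inserted into an FP-tree-style prefix tree, with
references ordered by global citation count so that shared prefixes
(co-cited references) collapse into common branches. The data and node
layout are invented and not the authors' prototype.

from collections import Counter

papers = {                       # paper -> set of cited references (invented)
    "P1": {"R1", "R2", "R3"},
    "P2": {"R1", "R2"},
    "P3": {"R1", "R3", "R4"},
}

freq = Counter(r for refs in papers.values() for r in refs)

class Node:
    def __init__(self, ref):
        self.ref, self.count, self.children = ref, 0, {}

root = Node(None)
for refs in papers.values():
    # order each reference list by descending global frequency (ties by name)
    ordered = sorted(refs, key=lambda r: (-freq[r], r))
    node = root
    for r in ordered:
        node = node.children.setdefault(r, Node(r))
        node.count += 1

def show(node, depth=0):
    if node.ref is not None:
        print("  " * depth + f"{node.ref} ({node.count})")
    for child in node.children.values():
        show(child, depth + 1)

show(root)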
|
|
Exemplar-based Visualization of Large Document Corpus |
|
Yanhua Chen,
Lijun Wang,
Ming Dong,
Jing Hua
|
|
Pages: 1161-1168 |
|
doi>10.1109/TVCG.2009.140 |
|
Full text available:
Publisher Site
|
|
With the rapid growth of the World Wide Web and electronic information
services, text corpora are becoming available on-line at an incredible
rate. By displaying text data in a logical layout (e.g., color graphs),
text visualization presents a direct way to observe the documents as
well as understand the relationships between them. In this paper, we
propose a novel technique, Exemplar-based Visualization (EV), to
visualize an extremely large text corpus. Capitalizing on recent
advances in matrix approximation and decomposition, EV presents a
probabilistic multidimensional projection model in the low-rank text
subspace with a sound objective function. The probability of each
document with respect to the topics is obtained through iterative
optimization and embedded into a low-dimensional space using parameter
embedding. By selecting the representative exemplars, we obtain a
compact approximation of the data. This makes the visualization highly
efficient and flexible. In addition, the selected exemplars neatly
summarize the entire data set and greatly reduce the cognitive overload
in the visualization, leading to an easier interpretation of large text
corpora. Empirically, we demonstrate the superior performance of EV
through extensive experiments performed on the publicly available text
data sets.
|
|
Mapping Text with Phrase Nets |
|
Frank van Ham,
Martin Wattenberg,
Fernanda B. Viegas
|
|
Pages: 1169-1176 |
|
doi>10.1109/TVCG.2009.165 |
|
Full text available:
Publisher Site
|
|
We
present a new technique, the phrase net, for generating visual
overviews of unstructured text. A phrase net displays a graph whose
nodes are words and whose edges indicate that two words are linked by a
user-specified relation. These relations may be defined either at the
syntactic or lexical level; different relations often produce very
different perspectives on the same text. Taken together, these
perspectives often provide an illuminating visual overview of the key
concepts and relations in a document or set of documents.
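As a minimal illustration of a lexical phrase-net relation, the snippet
below scans a toy text for the pattern "X and Y" and counts each word pair
as a weighted directed edge; the sample text is invented.

import re
from collections import Counter

text = ("bread and butter, salt and pepper, bread and cheese, "
        "thunder and lightning, salt and pepper")

edges = Counter()
for x, y in re.findall(r"\b(\w+) and (\w+)\b", text.lower()):
    edges[(x, y)] += 1

# Each key is an edge of the phrase net; its count can drive edge thickness.
for (x, y), n in edges.most_common():
    print(f"{x} -> {y}  (weight {n})")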
|
|
Loop surgery for volumetric meshes: Reeb graphs reduced to contour trees |
|
Julien Tierny,
Attila Gyulassy,
Eddie Simon,
Valerio Pascucci
|
|
Pages: 1177-1184 |
|
doi>10.1109/TVCG.2009.163 |
|
Full text available:
Publisher Site
|
|
This
paper introduces an efficient algorithm for computing the Reeb graph of
a scalar function f defined on a volumetric mesh M in R^3. We introduce
a procedure called "loop surgery" that transforms M into a mesh M' by a
sequence of cuts and guarantees the Reeb graph of f(M') to be loop
free. Therefore, loop surgery reduces Reeb graph computation to the
simpler problem of computing a contour tree, for which well-known
algorithms exist that are theoretically efficient (O(n log n)) and fast
in practice. Inverse cuts reconstruct the loops removed at the
beginning. The time complexity of our algorithm is that of a contour tree
computation plus a loop surgery overhead, which depends on the number
of handles of the mesh. Our systematic experiments confirm that for
real-life data, this overhead is comparable to the computation of the
contour tree, demonstrating virtually linear scalability on meshes
ranging from 70 thousand to 3.5 million tetrahedra. Performance numbers
show that our algorithm, although restricted to volumetric data, has an
average speedup factor of 6,500 over the previous fastest techniques,
handling larger and more complex data-sets. We demonstrate the versatility
of our approach by extending fast topologically clean isosurface
extraction to non-simply-connected domains. We apply this technique in
the context of pressure analysis for mechanical design. In this case,
our technique produces results in a matter of seconds even for the largest
meshes. For the same models, previous Reeb graph techniques do not
produce a result.
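The sketch below is not the loop-surgery algorithm itself; it only
illustrates the sublevel-set join (merge) tree sweep on a small
vertex-valued graph, the kind of union-find pass that standard contour
tree algorithms, and hence this reduction, build on. The graph and values
are invented.

def join_tree(values, edges):
    def neighbors(v):
        for a, b in edges:
            if a == v:
                yield b
            elif b == v:
                yield a

    parent = {}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]   # path compression
            v = parent[v]
        return v

    merges = []
    for v in sorted(values, key=values.get):        # sweep in increasing value
        parent[v] = v
        for u in neighbors(v):
            if u in parent and find(u) != find(v):
                merges.append((values[v], find(u), find(v)))  # join event at v
                parent[find(u)] = find(v)
    return merges

values = {"a": 0.1, "b": 0.4, "c": 0.2, "d": 0.9, "e": 0.3}
edges = [("a", "b"), ("b", "d"), ("c", "d"), ("c", "e")]
print(join_tree(values, edges))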
|
|
Applying Manifold Learning to Plotting Approximate Contour Trees |
|
Shigeo Takahashi,
Issei Fujishiro,
Masato Okada
|
|
Pages: 1185-1192 |
|
doi>10.1109/TVCG.2009.119 |
|
Full text available:
Publisher Site
|
|
A
contour tree is a powerful tool for delineating the topological
evolution of isosurfaces of a single-valued function, and thus has been
frequently used as a means of extracting features from volumes and their
time-varying behaviors. Several sophisticated algorithms have been
proposed for constructing contour trees while they often complicate the
software implementation especially for higher-dimensional cases such as
time-varying volumes. This paper presents a simple yet effective approach
to plotting, in 3D space, approximate contour trees from a set of
scattered samples embedded in the high-dimensional space. Our main idea
is to take advantage of manifold learning so that we can elongate the
distribution of high-dimensional data samples to embed it into a
low-dimensional space while respecting its local proximity of sample
points. The contribution of this paper lies in the introduction of new
distance metrics to manifold learning, which allows us to reformulate
existing algorithms as a variant of currently available dimensionality
reduction schemes. Efficient reduction of data sizes together with
segmentation capability is also developed to equip our approach with a
coarse-to-fine analysis even for large-scale datasets. Examples are
provided to demonstrate that our proposed scheme can successfully
traverse the features of volumes and their temporal behaviors through
the constructed contour trees.
|
|
Intrinsic Geometric Scale Space by Shape Diffusion |
|
Guangyu Zou,
Jing Hua,
Zhaoqiang Lai,
Xianfeng Gu,
Ming Dong
|
|
Pages: 1193-1200 |
|
doi>10.1109/TVCG.2009.159 |
|
Full text available:
Publisher Site
|
|
This
paper formalizes a novel, intrinsic geometric scale space (IGSS) of 3D
surface shapes. The intrinsic geometry of a surface is diffused by means
of the Ricci flow for the generation of a geometric scale space. We
rigorously prove that this multiscale shape representation satisfies the
axiomatic causality property. Within the theoretical framework, we
further present a feature-based shape representation derived from IGSS
processing, which is shown to be theoretically plausible and practically
effective. By integrating the concept of scale-dependent saliency into
the shape description, this representation is not only highly
descriptive of the local structures, but also exhibits several desired
characteristics of global shape representations, such as being compact,
robust to noise and computationally efficient. We demonstrate the
capabilities of our approach through salient geometric feature detection
and highly discriminative matching of 3D scans.
|
|
Multi-Scale Surface Descriptors |
|
Gregory Cipriano,
George N. Phillips Jr.,
Michael Gleicher
|
|
Pages: 1201-1208 |
|
doi>10.1109/TVCG.2009.168 |
|
Full text available:
Publisher Site
|
|
Local
shape descriptors compactly characterize regions of a surface, and have
been applied to tasks in visualization, shape matching, and analysis.
Classically, curvature has been used as a shape descriptor; however, this
differential property characterizes only an infinitesimal neighborhood.
In this paper, we provide shape descriptors for surface meshes designed
to be multi-scale, that is, capable of characterizing regions of varying
size. These descriptors capture statistically the shape of a
neighborhood around a central point by fitting a quadratic surface. They
therefore mimic differential curvature, are efficient to compute, and
encode anisotropy. We show how simple variants of mesh operations can be
used to compute the descriptors without resorting to expensive
parameterizations, and additionally provide a statistical approximation
for reduced computational cost. We show how these descriptors apply to a
number of uses in visualization, analysis, and matching of surfaces,
particularly to tasks in protein surface analysis.
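A minimal sketch of the quadric-fitting idea follows: a quadratic height
field is fit to neighborhood points (assumed here to be expressed already
in a local tangent frame), and the eigenvalues of its second-derivative
matrix serve as curvature-like, anisotropy-aware descriptors. Real
descriptors would also weight points and vary the neighborhood radius.

import numpy as np

def quadric_descriptor(pts):
    # Fit z = a x^2 + b x y + c y^2 + d x + e y + f by least squares.
    x, y, z = pts[:, 0], pts[:, 1], pts[:, 2]
    A = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    a, b, c = coeffs[:3]
    # Second-derivative (shape) matrix of the fitted height function.
    H = np.array([[2 * a, b], [b, 2 * c]])
    k1, k2 = sorted(np.linalg.eigvalsh(H), reverse=True)
    return k1, k2            # principal-curvature-like pair, k1 >= k2

# Toy neighborhood sampled from z = x^2 + 0.5 * y^2 (expect roughly 2 and 1).
g = np.linspace(-0.5, 0.5, 9)
X, Y = np.meshgrid(g, g)
pts = np.column_stack([X.ravel(), Y.ravel(), (X**2 + 0.5 * Y**2).ravel()])
print(quadric_descriptor(pts))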
|
|
A User Study to Compare Four Uncertainty Visualization Methods for 1D and 2D Datasets |
|
Jibonananda Sanyal,
Song Zhang,
Gargi Bhattacharya,
Phil Amburn,
Robert Moorhead
|
|
Pages: 1209-1218 |
|
doi>10.1109/TVCG.2009.114 |
|
Full text available:
Publisher Site
|
|
Many
techniques have been proposed to show uncertainty in data
visualizations. However, very little is known about their effectiveness
in conveying meaningful information. In this paper, we present a user
study that evaluates the perception of uncertainty amongst four of the
most commonly used techniques for visualizing uncertainty in
one-dimensional and two-dimensional data. The techniques evaluated are
traditional errorbars, scaled size of glyphs, color-mapping on glyphs,
and color-mapping of uncertainty on the data surface. The study uses
generated data that was designed to represent the systematic and random
uncertainty components. Twenty-seven users performed two types of search
tasks and two types of counting tasks on 1D and 2D datasets. The search
tasks involved finding data points that were least or most uncertain.
The counting tasks involved counting data features or uncertainty
features. A 4x4 full-factorial ANOVA indicated a significant interaction
between the techniques used and the type of tasks assigned for both
datasets indicating that differences in performance between the four
techniques depended on the type of task performed. Several one-way
ANOVAs were computed to explore the simple main effects. Bonferroni’s
correction was used to control for the family-wise error rate for
alpha-inflation. Although we did not find a consistent order among the
four techniques for all the tasks, there are several findings from the
study that we think are useful for uncertainty visualization design. We
found a significant difference in user performance between searching for
locations of high and searching for locations of low uncertainty.
Errorbars consistently underperformed throughout the experiment. Scaling
the size of glyphs and color-mapping of the surface performed
reasonably well. The efficiency of most of these techniques was highly
dependent on the tasks performed. We believe that these findings can be
used in future uncertainty visualization design. In addition, the
framework developed in this user study presents a structured approach to
evaluate uncertainty visualization techniques, as well as provides a
basis for future research in uncertainty visualization.
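To make the style of analysis concrete, the snippet below runs a one-way
ANOVA over four techniques followed by Bonferroni-corrected pairwise
comparisons on synthetic scores; the technique names and numbers are
invented and do not reproduce the study's data.

import itertools
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
scores = {                              # synthetic task accuracy per technique
    "errorbars":   rng.normal(0.55, 0.1, 27),
    "glyph_size":  rng.normal(0.70, 0.1, 27),
    "glyph_color": rng.normal(0.68, 0.1, 27),
    "surface":     rng.normal(0.72, 0.1, 27),
}

f, p = stats.f_oneway(*scores.values())
print(f"one-way ANOVA: F = {f:.2f}, p = {p:.4f}")

pairs = list(itertools.combinations(scores, 2))
alpha_corrected = 0.05 / len(pairs)     # Bonferroni correction
for a, b in pairs:
    t, p = stats.ttest_ind(scores[a], scores[b])
    flag = "significant" if p < alpha_corrected else "n.s."
    print(f"{a} vs {b}: p = {p:.4f} ({flag} at corrected alpha {alpha_corrected:.4f})")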
|
|
Comparing 3D Vector Field Visualization Methods: A User Study |
|
Andrew Forsberg,
Jian Chen,
David Laidlaw
|
|
Pages: 1219-1226 |
|
doi>10.1109/TVCG.2009.126 |
|
Full text available:
Publisher Site
|
|
In
a user study comparing four visualization methods for three-dimensional
vector data, participants used visualizations from each method to
perform five simple but representative tasks: 1) determining whether a
given point was a critical point, 2) determining the type of a critical
point, 3) determining whether an integral curve would advect through two
points, 4) determining whether swirling movement is present at a point,
and 5) determining whether the vector field is moving faster at one
point than another. The visualization methods were line and tube
representations of integral curves with both monoscopic and stereoscopic
viewing. While participants reported a preference for stereo lines,
quantitative results showed performance among the tasks varied by
method. Users performed all tasks better with methods that: 1) gave a
clear representation with no perceived occlusion, 2) clearly visualized
curve speed and direction information, and 3) provided fewer rich 3D
cues (e.g., shading, polygonal arrows, overlap cues, and surface
textures). These results provide quantitative support for anecdotal
evidence on visualization methods. The tasks and testing framework also
give a basis for comparing other visualization methods, for creating
more effective methods, and for defining additional tasks to explore
further the tradeoffs among the methods.
|
|
Verifiable Visualization for Isosurface Extraction |
|
Tiago Etiene,
Carlos Scheidegger,
Luis Gustavo Nonato,
Robert Mike Kirby,
Cláudio Silva
|
|
Pages: 1227-1234 |
|
doi>10.1109/TVCG.2009.194 |
|
Full text available:
Publisher Site
|
|
Visual
representations of isosurfaces are ubiquitous in the scientific and
engineering literature. In this paper, we present techniques to assess
the behavior of isosurface extraction codes. Where applicable, these
techniques allow us to distinguish whether anomalies in isosurface
features can be attributed to the underlying physical process or to
artifacts from the extraction process. Such scientific scrutiny is at
the heart of verifiable visualization – subjecting visualization
algorithms to the same verification process that is used in other
components of the scientific pipeline. More concretely, we derive
formulas for the expected order of accuracy (or convergence rate) of
several isosurface features, and compare them to experimentally observed
results in the selected codes. This technique is practical: in two
cases, it exposed actual problems in implementations. We provide the
reader with the range of responses they can expect to encounter with
isosurface techniques, both under “normal operating conditions” and also
under adverse conditions. Armed with this information – the results of
the verification process – practitioners can judiciously select the
isosurface extraction technique appropriate for their problem of
interest, and have confidence in its behavior.
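A generic verification-style check in this spirit (not the paper's specific
derivation) estimates the observed order of accuracy from feature errors
measured at successively refined grid spacings,
p = log(E(h1) / E(h2)) / log(h1 / h2), and compares it with the expected
rate; the spacings and errors below are illustrative.

import math

h      = [0.08, 0.04, 0.02, 0.01]          # grid spacings (invented)
errors = [3.2e-3, 8.1e-4, 2.0e-4, 5.1e-5]  # e.g. error in an isosurface feature

for (h1, e1), (h2, e2) in zip(zip(h, errors), zip(h[1:], errors[1:])):
    p = math.log(e1 / e2) / math.log(h1 / h2)
    print(f"h {h1} -> {h2}: observed order ~ {p:.2f}")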
|
|
Curve-Centric Volume Reformation for Comparative Visualization |
|
Ove Daae Lampe,
Carlos Correa,
Kwan-Liu Ma,
Helwig Hauser
|
|
Pages: 1235-1242 |
|
doi>10.1109/TVCG.2009.136 |
|
Full text available:
Publisher Site
|
|
We
present two visualization techniques for curve-centric volume
reformation with the aim to create compelling comparative
visualizations. A curve-centric volume reformation deforms a volume,
with regards to a curve in space, to create a new space in which the
curve evaluates to zero in two dimensions and spans its arc-length in
the third. The volume surrounding the curve is deformed such that
spatial neighborhood to the curve is preserved. The result of the
curve-centric reformation produces images where one axis is aligned to
arc-length, and thus allows researchers and practitioners to apply their
arc-length parameterized data visualizations in parallel for
comparison. Furthermore we show that when visualizing dense data, our
technique provides an inside out projection, from the curve and out into
the volume, which allows for inspection of what is around the curve.
Finally we demonstrate the usefulness of our techniques in the context
of two application cases. We show that existing data visualizations of
arc-length parameterized data can be enhanced by using our techniques,
in addition to creating a new view and perspective on volumetric data
around curves. Additionally we show how volumetric data can be brought
into plotting environments that allow precise readouts. In the first
case we inspect streamlines in a flow field around a car, and in the
second we inspect seismic volumes and well logs from drilling.
|
|
Predictor-Corrector Schemes for Visualization of Smoothed Particle Hydrodynamics Data |
|
Benjamin Schindler,
Raphael Fuchs,
John Biddiscombe,
Ronald Peikert
|
|
Pages: 1243-1250 |
|
doi>10.1109/TVCG.2009.173 |
|
Full text available:
Publisher Site
|
|
In
this paper we present a method for vortex core line extraction which
operates directly on the smoothed particle hydrodynamics (SPH)
representation and, by this, generates smoother and more (spatially and
temporally) coherent results in an efficient way. The underlying
predictor-corrector scheme is general enough to be applied to other
line-type features and it is extendable to the extraction of surfaces
such as isosurfaces or Lagrangian coherent structures. The proposed
method exploits temporal coherence to speed up computation for
subsequent time steps. We show how the predictor-corrector formulation
can be specialized for several variants of vortex core line definitions
including two recent unsteady extensions, and we contribute a
theoretical and practical comparison of these. In particular, we reveal a
close relation between unsteady extensions of Fuchs et al. and Weinkauf
et al. and we give a proof of the Galilean invariance of the
latter. When visualizing SPH data, there is the possibility to use the
same interpolation method for visualization as has been used for the
simulation. This is different from the case of finite volume simulation
results, where it is not possible to recover from the results the
spatial interpolation that was used during the simulation. Such data are
typically interpolated using the basic trilinear interpolant, and if
smoothness is required, some artificial processing is added. In SPH
data, however, the smoothing kernels are specified from the simulation,
and they provide an exact and smooth interpolation of data or gradients
at arbitrary points in the domain.
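The snippet below is not the predictor-corrector extractor; it only sketches
the SPH evaluation the last sentences refer to, interpolating a field at an
arbitrary point as A(x) = sum_j (m_j / rho_j) A_j W(|x - x_j|, h) with a
common cubic-spline kernel. The particle data and smoothing length are
invented.

import numpy as np

def cubic_spline_W(r, h):
    """Standard 3D cubic-spline smoothing kernel."""
    q = r / h
    sigma = 1.0 / (np.pi * h**3)
    w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
        np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
    return sigma * w

def sph_interpolate(x, pos, mass, rho, A, h):
    r = np.linalg.norm(pos - x, axis=1)
    return np.sum(mass / rho * A * cubic_spline_W(r, h))

rng = np.random.default_rng(1)
pos = rng.uniform(0.0, 1.0, size=(500, 3))   # particle positions
mass = np.full(500, 1.0 / 500)
rho = np.full(500, 1.0)
A = pos[:, 0]                                # sample field: A equals the x coordinate
print(sph_interpolate(np.array([0.5, 0.5, 0.5]), pos, mass, rho, A, h=0.15))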
|
|
Exploring the Millennium Run - Scalable Rendering of Large-Scale Cosmological Datasets |
|
Roland Fraedrich,
Jens Schneider,
Rüdiger Westermann
|
|
Pages: 1251-1258 |
|
doi>10.1109/TVCG.2009.142 |
|
Full text available:
Publisher Site
|
|
In
this paper we investigate scalability limitations in the visualization
of large-scale particle-based cosmological simulations, and we present
methods to reduce these limitations on current PC architectures. To
minimize the amount of data to be streamed from disk to the graphics
subsystem, we propose a visually continuous level-of-detail (LOD)
particle representation based on a hierarchical quantization scheme for
particle coordinates and rules for generating coarse particle
distributions. Given the maximal world space error per level, our LOD
selection technique guarantees a sub-pixel screen space error during
rendering. A brick-based pagetree allows us to further reduce the number of
disk seek operations to be performed. Additional particle quantities
like density, velocity dispersion, and radius are compressed at no
visible loss using vector quantization of logarithmically encoded
floating point values. By fine-grain view-frustum culling and presence
acceleration in a geometry shader the required geometry throughput on
the GPU can be significantly reduced. We validate the quality and
scalability of our method by presenting visualizations of a
particle-based cosmological dark-matter simulation exceeding 10 billion
elements.
|
|
Interactive Streak Surface Visualization on the GPU |
|
Kai Buerger,
Florian Ferstl,
Holger Theisel,
Rüdiger Westermann
|
|
Pages: 1259-1266 |
|
doi>10.1109/TVCG.2009.154 |
|
Full text available:
Publisher Site
|
|
In
this paper we present techniques for the visualization of unsteady
flows using streak surfaces, which allow for the first time an adaptive
integration and rendering of such surfaces in real-time. The techniques
consist of two main components, which are both realized on the GPU to
exploit computational and bandwidth capacities for numerical particle
integration and to minimize bandwidth requirements in the rendering of
the surface. In the construction stage, an adaptive surface
representation is generated. Surface refinement and coarsening
strategies are based on local surface properties like distortion and
curvature. We compare two different methods to generate a streak
surface: a) by computing a patch-based surface representation that
avoids any interdependence between patches, and b) by computing a
particle-based surface representation including particle connectivity,
and by updating this connectivity during particle refinement and
coarsening. In the rendering stage, the surface is either rendered as a
set of quadrilateral surface patches using high-quality point-based
approaches, or a surface triangulation is built in turn from the given
particle connectivity and the resulting triangle mesh is rendered. We
perform a comparative study of the proposed techniques with respect to
surface quality, visual quality and performance by visualizing streak
surfaces in real flows using different rendering options.
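As a one-dimensional analogue of the surface construction (a streak line
rather than a streak surface, with no adaptive refinement or GPU work), the
sketch below repeatedly seeds particles at a fixed point and advects every
live particle with RK4 through an invented unsteady 2D field.

import numpy as np

def velocity(p, t):
    x, y = p
    return np.array([1.0, 0.4 * np.sin(2.0 * x - 3.0 * t)])  # invented unsteady field

def rk4_step(p, t, dt):
    k1 = velocity(p, t)
    k2 = velocity(p + 0.5 * dt * k1, t + 0.5 * dt)
    k3 = velocity(p + 0.5 * dt * k2, t + 0.5 * dt)
    k4 = velocity(p + dt * k3, t + dt)
    return p + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

seed, dt, steps = np.array([0.0, 0.0]), 0.05, 120
particles = []                       # ordered list = current streak line
for n in range(steps):
    t = n * dt
    particles = [rk4_step(p, t, dt) for p in particles]
    particles.append(seed.copy())    # inject a new particle at the seed point

streak = np.array(particles)
print(streak.shape, streak[0], streak[-1])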
|
|
Time and Streak Surfaces for Flow Visualization in Large Time-Varying Data Sets |
|
Hari Krishnan,
Christoph Garth,
Kenneth Joy
|
|
Pages: 1267-1274 |
|
doi>10.1109/TVCG.2009.190 |
|
Full text available:
Publisher Site
|
|
Time
and streak surfaces are ideal tools to illustrate time-varying vector
fields since they directly appeal to the intuition about coherently
moving particles. However, efficient generation of high-quality time and
streak surfaces for complex, large and time-varying vector field data
has been elusive due to the computational effort involved. In this work,
we propose a novel algorithm for computing such surfaces. Our approach
is based on a decoupling of surface advection and surface adaptation and
yields improved efficiency over other surface tracking methods, and
allows us to leverage inherent parallelization opportunities in the
surface advection, resulting in more rapid parallel computation.
Moreover, we obtain as a result of our algorithm the entire evolution of
a time or streak surface in a compact representation, allowing for
interactive, high-quality rendering, visualization and exploration of
the evolving surface. Finally, we discuss a number of ways to improve
surface depiction through advanced rendering and texturing, while
preserving interactivity, and provide a number of examples for
real-world datasets and analyze the behavior of our algorithm on them.
|
|
Hue-Preserving Color Blending |
|
Johnson Chuang,
Daniel Weiskopf,
Torsten Moller
|
|
Pages: 1275-1282 |
|
doi>10.1109/TVCG.2009.150 |
|
Full text available:
Publisher Site
|
|
We
propose a new perception-guided compositing operator for color
blending. The operator maintains the same rules for achromatic
compositing as standard operators (such as the over operator), but it
modifies the computation of the chromatic channels. Chromatic
compositing aims at preserving the hue of the input colors; color
continuity is achieved by reducing the saturation of colors that are to
change their hue value. The main benefit of hue preservation is that
color can be used for proper visual labeling, even under the constraint
of transparency rendering or image overlays. Therefore, the
visualization of nominal data is improved. Hue-preserving blending can
be used in any existing compositing algorithm, and it is particularly
useful for volume rendering. The usefulness of hue-preserving blending
and its visual characteristics are shown for several examples of volume
visualization.
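The snippet below gives one toy reading of the idea, not the paper's
operator: the achromatic channel is composited with the standard over
operator, the hue of the dominant contributor is kept unchanged, and
saturation is reduced in proportion to how strongly the suppressed color
disagrees in hue. All weighting choices are invented for illustration.

import colorsys

def over(a, b, alpha):                    # standard over for a scalar channel
    return alpha * a + (1.0 - alpha) * b

def hue_preserving_over(front_rgb, back_rgb, alpha):
    hf, sf, vf = colorsys.rgb_to_hsv(*front_rgb)
    hb, sb, vb = colorsys.rgb_to_hsv(*back_rgb)
    v = over(vf, vb, alpha)               # achromatic channel: plain over
    wf, wb = alpha * sf, (1.0 - alpha) * sb          # chromatic contributions
    h = hf if wf >= wb else hb            # keep the dominant hue unchanged
    dh = min(abs(hf - hb), 1.0 - abs(hf - hb)) * 2.0  # hue disagreement in [0, 1]
    s = over(sf, sb, alpha) * (1.0 - dh * min(wf, wb) / max(wf + wb, 1e-6))
    return colorsys.hsv_to_rgb(h, s, v)

# Red over blue at 50% opacity stays red in hue, but desaturated.
print(hue_preserving_over((1.0, 0.0, 0.0), (0.0, 0.0, 1.0), 0.5))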
|
|
Perception-Based Transparency Optimization for Direct Volume Rendering |
|
Ming-Yuen Chan,
Yingcai Wu,
Wai-Ho Mak,
Wei Chen,
Huamin Qu
|
|
Pages: 1283-1290 |
|
doi>10.1109/TVCG.2009.172 |
|
Full text available:
Publisher Site
|
|
The
semi-transparent nature of direct volume rendered images is useful to
depict layered structures in a volume. However, obtaining a
semi-transparent result with the layers clearly revealed is difficult
and may involve tedious adjustment on opacity and other rendering
parameters. Furthermore, the visual quality of layers also depends on
various perceptual factors. In this paper, we propose an auto-correction
method for enhancing the perceived quality of the semi-transparent
layers in direct volume rendered images. We introduce a suite of new
measures based on psychological principles to evaluate the perceptual
quality of transparent structures in the rendered images. By optimizing
rendering parameters within an adaptive and intuitive user interaction
process, the quality of the images is enhanced such that specific user
requirements can be met. Experimental results on various datasets
demonstrate the effectiveness and robustness of our method.
|
|
A Physiologically-based Model for Simulation of Color Vision Deficiency |
|
Gustavo M. Machado,
Manuel M. Oliveira,
Leandro A. F. Fernandes
|
|
Pages: 1291-1298 |
|
doi>10.1109/TVCG.2009.113 |
|
Full text available:
Publisher Site
|
|
Color
vision deficiency (CVD) affects approximately 200 million people
worldwide, compromising the ability of these individuals to effectively
perform color and visualization-related tasks. This has a significant
impact on their private and professional lives. We present a
physiologically-based model for simulating color vision. Our model is
based on the stage theory of human color vision and is derived from data
reported in electrophysiological studies. It is the first model to
consistently handle normal color vision, anomalous trichromacy, and
dichromacy in a unified way. We have validated the proposed model
through an experimental evaluation involving groups of color vision
deficient individuals and normal color vision ones. Our model can
provide insights and feedback on how to improve visualization
experiences for individuals with CVD. It also provides a framework for
testing hypotheses about some aspects of the retinal photoreceptors in
color vision deficient individuals.
|
|
Depth-Dependent Halos: Illustrative Rendering of Dense Line Data |
|
Maarten H. Everts,
Henk Bekker,
Jos B. T. M. Roerdink,
Tobias Isenberg
|
|
Pages: 1299-1306 |
|
doi>10.1109/TVCG.2009.138 |
|
Full text available:
Publisher Site
|
|
We
present a technique for the illustrative rendering of 3D line data at
interactive frame rates. We create depth-dependent halos around lines to
emphasize tight line bundles while less structured lines are
de-emphasized. Moreover, the depth-dependent halos combined with depth
cueing via line width attenuation increase depth perception, extending
techniques from sparse line rendering to the illustrative visualization
of dense line data. We demonstrate how the technique can be used, in
particular, for illustrating DTI fiber tracts but also show examples
from gas and fluid flow simulations and mathematics as well as describe
how the technique extends to point data. We report on an informal
evaluation of the illustrative DTI fiber tract visualizations with
domain experts in neurosurgery and tractography who commented positively
about the results and suggested a number of directions for future work.
|
|
Markerless View-Independent Registration of Multiple Distorted Projectors on Extruded Surfaces Using an Uncalibrated Camera |
|
Behzad Sajadi,
Aditi Majumder
|
|
Pages: 1307-1316 |
|
doi>10.1109/TVCG.2009.166 |
|
Full text available:
Publisher Site
|
|
In
this paper, we present the first algorithm to geometrically register
multiple projectors in a view-independent manner (i.e. wallpapered) on a
common type of curved surface, vertically extruded surface, using an
uncalibrated camera without attaching any obtrusive markers to the
display screen. Further, it can also tolerate large non-linear geometric
distortions in the projectors as is common when mounting short throw
lenses to allow a compact set-up. Our registration achieves sub-pixel
accuracy on a large number of different vertically extruded surfaces and
the image correction to achieve this registration can be run in real
time on the GPU. This simple markerless registration has the potential
to have a large impact on easy set-up and maintenance of large curved
multi-projector displays, common for visualization, edutainment,
training and simulation applications.
|
|
Color Seamlessness in Multi-Projector Displays Using Constrained Gamut Morphing |
|
Behzad Sajadi,
Maxim Lazarov,
M. Gopi,
Aditi Majumder
|
|
Pages: 1317-1326 |
|
doi>10.1109/TVCG.2009.124 |
|
Full text available:
Publisher Site
|
|
Multi-projector
displays show significant spatial variation in 3D color gamut due to
variation in the chromaticity gamuts across the projectors, vignetting
effect of each projector and also overlap across adjacent projectors. In
this paper we present a new constrained gamut morphing algorithm that
removes all these variations and results in true color seamlessness
across tiled multiprojector displays. Our color morphing algorithm
adjusts the intensities of light from each pixel of each projector
precisely to achieve a smooth morphing from one projector’s gamut to the
other’s through the overlap region. This morphing is achieved by
imposing precise constraints on the perceptual difference between the
gamuts of two adjacent pixels. In addition, our gamut morphing assures a
C1 continuity yielding visually pleasing appearance across the entire
display. We demonstrate our method successfully on a planar and a curved
display using both low and high-end projectors. Our approach is
completely scalable, efficient and automatic. We also demonstrate the
real-time performance of our image correction algorithm on GPUs for
interactive applications. To the best of our knowledge, this is the
first work that presents a scalable method with a strong foundation in
perception and realizes, for the first time, a truly seamless display
where the number of projectors cannot be deciphered.
|
|
Visual Human+Machine Learning |
|
Raphael Fuchs,
Jürgen Waser,
Meister Eduard Groller
|
|
Pages: 1327-1334 |
|
doi>10.1109/TVCG.2009.199 |
|
Full text available:
Publisher Site
|
|
In
this paper we describe a novel method to integrate interactive visual
analysis and machine learning to support the insight generation of the
user. The suggested approach combines the vast search and processing
power of the computer with the superior reasoning and pattern
recognition capabilities of the human user. An evolutionary search
algorithm has been adapted to assist in the fuzzy logic formalization of
hypotheses that aim at explaining features inside multivariate,
volumetric data. Up to now, users solely rely on their knowledge and
expertise when looking for explanatory theories. However, it often
remains unclear whether the selected attribute ranges represent the real
explanation for the feature of interest. Other selections hidden in the
large number of data variables could potentially lead to similar
features. Moreover, as simulation complexity grows, users are confronted
with huge multidimensional data sets making it almost impossible to
find meaningful hypotheses at all. We propose an interactive cycle of
knowledge-based analysis and automatic hypothesis generation. Starting
from initial hypotheses, created with linking and brushing, the user
steers a heuristic search algorithm to look for alternative or related
hypotheses. The results are analyzed in information visualization views
that are linked to the volume rendering. Individual properties as well
as global aggregates are visually presented to provide insight into the
most relevant aspects of the generated hypotheses. This novel approach
becomes computationally feasible due to a GPU implementation of the
time-critical parts in the algorithm. A thorough evaluation of search
times and noise sensitivity as well as a case study on data from the
automotive domain substantiate the usefulness of the suggested approach.
|
|
Interactive Visual Optimization and Analysis for RFID Benchmarking |
|
Yingcai Wu,
Ka-Kei Chung,
Huamin Qu,
Xiaoru Yuan,
S. C. Cheung
|
|
Pages: 1335-1342 |
|
doi>10.1109/TVCG.2009.156 |
|
Full text available:
Publisher Site
|
|
Radio
frequency identification (RFID) is a powerful automatic remote
identification technique that has wide applications. To facilitate RFID
deployment, an RFID benchmarking instrument called aGate has been
invented to identify the strengths and weaknesses of different RFID
technologies in various environments. However, the data acquired by
aGate are usually complex time varying multidimensional 3D volumetric
data, which are extremely challenging for engineers to analyze. In this
paper, we introduce a set of visualization techniques, namely, parallel
coordinate plots, orientation plots, a visual history mechanism, and a
3D spatial viewer, to help RFID engineers analyze benchmark data
visually and intuitively. With the techniques, we further introduce two
workflow procedures (a visual optimization procedure for finding the
optimum reader antenna configuration and a visual analysis procedure for
comparing the performance and identifying the flaws of RFID devices)
for the RFID benchmarking, with focus on the performance analysis of the
aGate system. The usefulness and usability of the system are
demonstrated in the user evaluation.
|
|
A Visual Approach to Efficient Analysis and Quantification of Ductile Iron and Reinforced Sprayed Concrete |
|
Laura Fritz,
Markus Hadwiger,
Georg Geier,
Gerhard Pittino,
M. Eduard Groller
|
|
Pages: 1343-1350 |
|
doi>10.1109/TVCG.2009.115 |
|
Full text available:
Publisher Site
|
|
This
paper describes advanced volume visualization and quantification for
applications in non-destructive testing (NDT), which results in novel
and highly effective interactive workflows for NDT practitioners. We
employ a visual approach to explore and quantify the features of
interest, based on transfer functions in the parameter spaces of
specific application scenarios. Examples are the orientations of fibres
or the roundness of particles. The applicability and effectiveness of
our approach is illustrated using two specific scenarios of high
practical relevance. First, we discuss the analysis of Steel Fibre
Reinforced Sprayed Concrete (SFRSpC). We investigate the orientations of
the enclosed steel fibres and their distribution, depending on the
concrete’s application direction. This is a crucial step in assessing
the material’s behavior under mechanical stress, which is still in its
infancy and therefore a hot topic in the building industry. The second
application scenario is the designation of the microstructure of ductile
cast irons with respect to the contained graphite. This corresponds to
the requirements of the ISO standard 945-1, which deals with 2D
metallographic samples. We illustrate how the necessary analysis steps
can be carried out much more efficiently using our system for 3D
volumes. Overall, we show that a visual approach with custom transfer
functions in specific application domains offers significant benefits
and has the potential of greatly improving and optimizing the workflows
of domain scientists and engineers.
|
|
Interactive Visual Analysis of Complex Scientific Data as Families of Data Surfaces |
|
Kresimir Matkovic,
Denis Gracanin,
Borislav Klarin,
Helwig Hauser
|
|
Pages: 1351-1358 |
|
doi>10.1109/TVCG.2009.155 |
|
Full text available:
Publisher Site
|
|
The
widespread use of computational simulation in science and engineering
provides challenging research opportunities. Multiple independent
variables are considered and large and complex data are computed,
especially in the case of multi-run simulation. Classical visualization
techniques deal well with 2D or 3D data and also with time-dependent
data. Additional independent dimensions, however, provide interesting
new challenges. We present an advanced visual analysis approach that
enables a thorough investigation of families of data surfaces, i.e.,
datasets, with respect to pairs of independent dimensions. While it is
almost trivial to visualize one such data surface, the visual
exploration and analysis of many such data surfaces is a grand
challenge, stressing the users’ perception and cognition. We propose an
approach that integrates projections and aggregations of the data
surfaces at different levels (one scalar aggregate per surface, a 1D
profile per surface, or the surface as such). We demonstrate the
necessity for a flexible visual analysis system that integrates many
different (linked) views for making sense of this highly complex data.
To demonstrate its usefulness, we exemplify our approach in the context
of a meteorological multi-run simulation data case and in the context of
the engineering domain, where our collaborators are working with the
simulation of elastohydrodynamic (EHD) lubrication bearing in the
automotive industry.
|
|
Visualization and Exploration of Temporal Trend Relationships in Multivariate Time-Varying Data |
|
Teng-Yok Lee,
Han-Wei Shen
|
|
Pages: 1359-1366 |
|
doi>10.1109/TVCG.2009.200 |
|
Full text available:
Publisher Site
|
|
We
present a new algorithm to explore and visualize multivariate
time-varying data sets. We identify important trend relationships among
the variables based on how the values of the variables change over time
and how those changes are related to each other in different spatial
regions and time intervals. The trend relationships can be used to
describe the correlation and causal effects among the different
variables. To identify the temporal trends from a local region, we
design a new algorithm called SUBDTW to estimate when a trend appears
and vanishes in a given time series. Based on the beginning and ending
times of the trends, their temporal relationships can be modeled as a
state machine representing the trend sequence. Since a scientific data
set usually contains millions of data points, we propose an algorithm to
extract important trend relationships in linear time complexity. We
design novel user interfaces to explore the trend relationships, to
visualize their temporal characteristics, and to display their spatial
distributions. We use several scientific data sets to test our algorithm
and demonstrate its utility.
|
|
Isosurface Extraction and View-Dependent Filtering from Time-Varying Fields Using Persistent Time-Octree (PTOT) |
|
Cong Wang,
Yi-Jen Chiang
|
|
Pages: 1367-1374 |
|
doi>10.1109/TVCG.2009.160 |
|
Full text available:
Publisher Site
|
|
We develop a new algorithm for isosurface extraction and view-dependent
filtering from large time-varying fields, by using a novel Persistent
Time-Octree (PTOT) indexing structure. Previously, the Persistent Octree
(POT) was proposed to perform isosurface extraction and view-dependent
filtering, which combines the advantages of the interval tree (for
optimal searches of active cells) and of the Branch-On-Need Octree
(BONO, for view-dependent filtering), but it only works for steady-state
(i.e., single time step) data. For time-varying fields, a 4D version of
POT, 4D-POT, was proposed for 4D isocontour slicing, where slicing on
the time domain gives all active cells in the queried time step and
isovalue. However, such slicing is not output sensitive and thus the
searching is sub-optimal. Moreover, it was not known how to support
view-dependent filtering in addition to time-domain slicing. In this
paper, we develop a novel Persistent Time-Octree (PTOT) indexing
structure, which has the advantages of POT and performs 4D isocontour
slicing on the time domain with an output-sensitive and optimal
searching. In addition, when we query the same isovalue q over m
consecutive time steps, there is no additional searching overhead
(except for reporting the additional active cells) compared to querying
just the first time step. Such searching performance for finding active
cells is asymptotically optimal, with asymptotically optimal space and
preprocessing time as well. Moreover, our PTOT supports view-dependent
filtering in addition to time-domain slicing. We propose a simple and
effective out-of-core scheme, where we integrate our PTOT with implicit
occluders, batched occlusion queries and batched CUDA computing tasks,
so that we can greatly reduce the I/O cost as well as increase the
amount of data being concurrently computed on the GPU. This results in
an efficient algorithm for isosurface extraction with view-dependent
filtering utilizing a state-of-the-art programmable GPU for time-varying
fields larger than main memory. Our experiments on datasets as large as
192GB (with 4GB per time step) having no more than 870MB of memory
footprint in both preprocessing and run-time phases demonstrate the
efficacy of our new technique.
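The sketch below is not the PTOT structure; it only shows, in its simplest
form, the query such indexes accelerate: reporting the active cells whose
[min, max] value range spans an isovalue q. Cells are pre-sorted by their
minimum so a binary search bounds the candidates; an interval tree or
persistent octree would make the search output-sensitive. The cell ranges
are invented.

import bisect

cells = [                      # (cell id, min value, max value), illustrative
    ("c0", 0.1, 0.4), ("c1", 0.3, 0.9), ("c2", 0.5, 0.7),
    ("c3", 0.05, 0.2), ("c4", 0.6, 1.0),
]
cells.sort(key=lambda c: c[1])            # sort once by each cell's minimum
mins = [c[1] for c in cells]

def active_cells(q):
    hi = bisect.bisect_right(mins, q)     # candidates with min <= q
    return [cid for cid, cmin, cmax in cells[:hi] if cmax >= q]

print(active_cells(0.55))                 # cells whose value range contains 0.55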
|
|
Visual Exploration of Climate Variability Changes Using Wavelet Analysis |
|
Heike Janicke,
Michael Bottinger,
Uwe Mikolajewicz,
Gerik Scheuermann
|
|
Pages: 1375-1382 |
|
doi>10.1109/TVCG.2009.197 |
|
Full text available:
Publisher Site
|
|
Due
to its nonlinear nature, the climate system shows quite high natural
variability on different time scales, including multiyear oscillations
such as the El Niño Southern Oscillation phenomenon. Besides a shift of
the mean states and of extreme values of climate variables, climate
change may also change the frequency or the spatial patterns of these
natural climate variations. Wavelet analysis is a well established tool
to investigate variability in the frequency domain. However, due to the
size and complexity of the analysis results, only few time series are
commonly analyzed concurrently. In this paper we will explore different
techniques to visually assist the user in the analysis of variability
and variability changes to allow for a holistic analysis of a global
climate model data set consisting of several variables and extending
over 250 years. Our new framework and data from the IPCC AR4 simulations
with the coupled climate model ECHAM5/MPI-OM are used to explore the
temporal evolution of El Niño due to climate change.
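To make the analysis step concrete, the sketch below computes a Morlet
continuous wavelet transform of a single synthetic time series and its
scale-time power map (scalogram); the signal, sampling, and scale range are
invented, whereas the paper applies such transforms to many series from the
model ensemble.

import numpy as np

def morlet(t, scale, w0=6.0):
    x = t / scale
    return np.pi ** -0.25 * np.exp(1j * w0 * x) * np.exp(-0.5 * x ** 2)

def cwt(signal, dt, scales):
    t = (np.arange(signal.size) - signal.size / 2.0) * dt   # centered time axis
    rows = []
    for s in scales:
        kernel = np.conj(morlet(t, s))[::-1] * dt / np.sqrt(s)
        rows.append(np.convolve(signal, kernel, mode="same"))
    return np.array(rows)

dt = 1.0 / 12.0                                   # monthly samples, in years
t = np.arange(0.0, 100.0, dt)
rng = np.random.default_rng(0)
signal = np.sin(2.0 * np.pi * t / 4.0) + 0.5 * rng.normal(size=t.size)

scales = np.geomspace(0.5, 16.0, 30)              # from months to decades
power = np.abs(cwt(signal, dt, scales)) ** 2      # scalogram: scale x time
# With w0 = 6 the Fourier period is roughly 1.03 * scale, so the dominant
# scale should come out near the 4-year oscillation in the synthetic signal.
print(power.shape, "dominant scale ~", scales[power.mean(axis=1).argmax()], "years")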
|
|
Interactive Coordinated Multiple-View Visualization of Biomechanical Motion Data |
|
Daniel Keefe,
Marcus Ewert,
William Ribarsky,
Remco Chang
|
|
Pages: 1383-1390 |
|
doi>10.1109/TVCG.2009.152 |
|
Full text available:
Publisher Site
|
|
We
present an interactive framework for exploring space-time and
form-function relationships in experimentally collected high-resolution
biomechanical data sets. These data describe complex 3D motions (e.g.
chewing, walking, flying) performed by animals and humans and captured
via high-speed imaging technologies, such as biplane fluoroscopy. In
analyzing these 3D biomechanical motions, interactive 3D visualizations
are important, in particular, for supporting spatial analysis. However,
as researchers in information visualization have pointed out, 2D
visualizations can also be effective tools for multi-dimensional data
analysis, especially for identifying trends over time. Our approach,
therefore, combines techniques from both 3D and 2D visualizations.
Specifically, it utilizes a multi-view visualization strategy including a
small multiples view of motion sequences, a parallel coordinates view,
and detailed 3D inspection views. The resulting framework follows an
overview first, zoom and filter, then details-on-demand style of
analysis, and it explicitly targets a limitation of current tools,
namely, supporting analysis and comparison at the level of a collection
of motions rather than sequential analysis of a single or small number
of motions. Scientific motion collections appropriate for this style of
analysis exist in clinical work in orthopedics and physical
rehabilitation, in the study of functional morphology within
evolutionary biology, and in other contexts. An application is described
based on a collaboration with evolutionary biologists studying the
mechanics of chewing motions in pigs. Interactive exploration of data
describing a collection of more than one hundred experimentally captured
pig chewing cycles is described.
|
|
Interactive Visualization of Molecular Surface Dynamics |
|
Michael Krone,
Katrin Bidmon,
Thomas Ertl
|
|
Pages: 1391-1398 |
|
doi>10.1109/TVCG.2009.157 |
|
Full text available:
Publisher Site
|
|
Molecular
dynamics simulations of proteins play a growing role in various fields
such as pharmaceutical, biochemical and medical research. Accordingly,
the need for high quality visualization of these protein systems rises.
Highly interactive visualization techniques are especially needed for
the analysis of time-dependent molecular simulations. Beside various
other molecular representations the surface representations are of high
importance for these applications. So far, users had to accept a
trade-off between rendering quality and performance—particularly when
visualizing trajectories of time-dependent protein data. We present a
new approach for visualizing the Solvent Excluded Surface of proteins
using a GPU ray casting technique and thus achieving interactive frame
rates even for long protein trajectories where conventional methods
based on precomputation are not applicable. Furthermore, we propose a
semantic simplification of the raw protein data to reduce the visual
complexity of the surface and thereby accelerate the rendering without
impeding perception of the protein’s basic shape. We also demonstrate
the application of our Solvent Excluded Surface method to visualize the
spatial probability density for the protein atoms over the whole period
of the trajectory in one frame, providing a qualitative analysis of the
protein flexibility.
|
|
Stress Tensor Field Visualization for Implant Planning in Orthopedics |
|
Christian Dick,
Joachim Georgii,
Rainer Burgkart,
Rüdiger Westermann
|
|
Pages: 1399-1406 |
|
doi>10.1109/TVCG.2009.184 |
|
Full text available:
Publisher Site
|
|
We
demonstrate the application of advanced 3D visualization techniques to
determine the optimal implant design and position in hip joint
replacement planning. Our methods take as input the physiological stress
distribution inside a patient's bone under load and the stress
distribution inside this bone under the same load after a simulated
replacement surgery. The visualization aims at showing principal stress
directions and magnitudes, as well as differences in both distributions.
By visualizing changes of normal and shear stresses with respect to the
principal stress directions of the physiological state, a comparative
analysis of the physiological stress distribution and the stress
distribution with implant is provided, and the implant parameters that
most closely replicate the physiological stress state in order to avoid
stress shielding can be determined. Our method combines volume rendering
for the visualization of stress magnitudes with the tracing of short
line segments for the visualization of stress directions. To improve
depth perception, transparent, shaded, and antialiased lines are
rendered in correct visibility order, and they are attenuated by the
volume rendering. We use a focus+context approach to visually guide the
user to relevant regions in the data, and to support a detailed stress
analysis in these regions while preserving spatial context information.
Since all of our techniques have been realized on the GPU, they can
immediately react to changes in the simulated stress tensor field and
thus provide an effective means for optimal implant selection and
positioning in a computational steering environment.
|
|
Visual Exploration of Nasal Airflow |
|
Stefan Zachow,
Philipp Muigg,
Thomas Hildebrandt,
Helmut Doleisch,
Hans-Christian Hege
|
|
Pages: 1407-1414 |
|
doi>10.1109/TVCG.2009.198 |
|
Full text available:
Publisher Site
|
|
Rhinologists
are often faced with the challenge of assessing nasal breathing from a
functional point of view to derive effective therapeutic interventions.
While the complex nasal anatomy can be revealed by visual inspection and
medical imaging, only vague information is available regarding the
nasal airflow itself: Rhinomanometry delivers rather unspecific integral
information on the pressure gradient as well as on total flow and nasal
flow resistance. In this article we demonstrate how the understanding
of physiological nasal breathing can be improved by simulating and
visually analyzing nasal airflow, based on an anatomically correct model
of the upper human respiratory tract. In particular we demonstrate how
various Information Visualization (InfoVis) techniques, such as a highly
scalable implementation of parallel coordinates, time series
visualizations, as well as unstructured grid multi-volume rendering, all
integrated within a multiple linked views framework, can be utilized to
gain a deeper understanding of nasal breathing. Evaluation is
accomplished by visual exploration of spatio-temporal airflow
characteristics that include not only information on flow features but
also on accompanying quantities such as temperature and humidity. To our
knowledge, this is the first in-depth visual exploration of the
physiological function of the nose over several simulated breathing
cycles under consideration of a complete model of the nasal airways,
realistic boundary conditions, and all physically relevant time-varying
quantities.
|
|
Sampling and Visualizing Creases with Scale-Space Particles |
|
Gordon L. Kindlmann,
Raúl San José Estepar,
Stephen M. Smith,
Carl-Fredrik Westin
|
|
Pages: 1415-1424 |
|
doi>10.1109/TVCG.2009.177 |
|
Full text available:
Publisher Site
|
|
Particle
systems have gained importance as a methodology for sampling implicit
surfaces and segmented objects to improve mesh generation and shape
analysis. We propose that particle systems have a significantly more
general role in sampling structure from unsegmented data. We describe a
particle system that computes samplings of crease features (i.e. ridges
and valleys, as lines or surfaces) that effectively represent many
anatomical structures in scanned medical data. Because structure
naturally exists at a range of sizes relative to the image resolution,
computer vision has developed the theory of scale-space, which considers
an n-D image as an (n + 1)-D stack of images at different blurring
levels. Our scale-space particles move through continuous
four-dimensional scale-space according to spatial constraints imposed by
the crease features, a particle-image energy that draws particles
towards scales of maximal feature strength, and an inter-particle energy
that controls sampling density in space and scale. To make scale-space
practical for large three-dimensional data, we present a spline-based
interpolation across scale from a small number of pre-computed blurrings
at optimally selected scales. The configuration of the particle system
is visualized with tensor glyphs that display information about the
local Hessian of the image, and the scale of the particle. We use
scale-space particles to sample the complex three-dimensional branching
structure of airways in lung CT, and the major white matter structures
in brain DTI.
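As a rough illustration of the scale axis described above, the sketch below (illustrative only; the scale values, the cubic spline, and interpolating whole volumes rather than individual sample points are assumptions, not the authors' scheme) precomputes a few Gaussian blurrings and interpolates between them along scale.

    import numpy as np
    from scipy.ndimage import gaussian_filter
    from scipy.interpolate import CubicSpline

    def build_scale_stack(volume, sigmas):
        """Precompute blurred copies of `volume` at the given scales."""
        return np.stack([gaussian_filter(volume, s) for s in sigmas], axis=0)

    def sample_scale_space(stack, sigmas, sigma_query):
        """Interpolate the volume at an intermediate scale.
        A real system would interpolate per sample point, not per volume."""
        spline = CubicSpline(sigmas, stack, axis=0)
        return spline(sigma_query)

    # Example usage on a synthetic volume:
    # vol = np.random.rand(32, 32, 32)
    # sigmas = np.array([0.5, 1.0, 2.0, 4.0])
    # stack = build_scale_stack(vol, sigmas)
    # blurred = sample_scale_space(stack, sigmas, 1.5)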
|
|
Volume Illustration of Muscle from Diffusion Tensor Images |
|
Wei Chen,
Zhicheng Yan,
Song Zhang,
John Allen Crow,
David S. Ebert,
Ronald M. McLaughlin,
Katie B. Mullins,
Robert Cooper,
Zi'ang Ding,
Jun Liao
|
|
Pages: 1425-1432 |
|
doi>10.1109/TVCG.2009.203 |
|
Full text available:
Publisher Site
|
|
Medical
illustration has demonstrated its effectiveness to depict salient
anatomical features while hiding the irrelevant details. Current
solutions are ineffective for visualizing fibrous structures such as
muscle, because typical datasets (CT or MRI) do not contain directional
details. In this paper, we introduce a new muscle illustration approach
that leverages diffusion tensor imaging (DTI) data and example-based
texture synthesis techniques. Beginning with a volumetric diffusion
tensor image, we reformulate it into a scalar field and an auxiliary
guidance vector field to represent the structure and orientation of a
muscle bundle. A muscle mask derived from the input diffusion tensor
image is used to classify the muscle structure. The guidance vector
field is further refined to remove noise and clarify structure. To
simulate the internal appearance of the muscle, we propose a new
two-dimensional example-based solid texture synthesis algorithm that
builds a solid texture constrained by the guidance vector field.
Illustrating the constructed scalar field and solid texture efficiently
highlights the global appearance of the muscle as well as the local
shape and structure of the muscle fibers in an illustrative fashion. We
have applied the proposed approach to five example datasets (four pig
hearts and a pig leg), demonstrating plausible illustration and
expressiveness.
|
|
A Novel Interface for Interactive Exploration of DTI Fibers |
|
Wei Chen,
Zi'ang Ding,
Song Zhang,
Anna MacKay-Brandt,
Stephen Correia,
Huamin Qu,
John Allen Crow,
David F. Tate,
Zhicheng Yan,
Qunsheng Peng
|
|
Pages: 1433-1440 |
|
doi>10.1109/TVCG.2009.112 |
|
Full text available:
Publisher Site
|
|
Visual
exploration is essential to the visualization and analysis of densely
sampled 3D DTI fibers in biological specimens, due to the high
geometric, spatial, and anatomical complexity of fiber tracts. Previous
methods for DTI fiber visualization use zooming, color-mapping,
selection, and abstraction to deliver the characteristics of the fibers.
However, these schemes mainly focus on the optimization of
visualization in the 3D space where cluttering and occlusion make
grasping even a few thousand fibers difficult. This paper introduces a
novel interaction method that augments the 3D visualization with a 2D
representation containing a low-dimensional embedding of the DTI fibers.
This embedding preserves the relationship between the fibers and
removes the visual clutter that is inherent in 3D renderings of the
fibers. This new interface allows the user to manipulate the DTI fibers
as both 3D curves and 2D embedded points and easily compare or validate
his or her results in both domains. The implementation of the framework
is GPU based to achieve real-time interaction. The framework was applied
to several tasks, and the results show that our method reduces the
user’s workload in recognizing 3D DTI fibers and permits quick and
accurate DTI fiber selection.
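To make the idea of a 2D fiber embedding concrete, the sketch below (illustrative only; the mean closest-point distance and classical MDS are assumptions, not necessarily the embedding the authors use) maps a set of fiber polylines to 2D points that could back a linked 2D view.

    import numpy as np

    def fiber_distance(f1, f2):
        """Symmetric mean closest-point distance between two polylines (N_i x 3)."""
        d = np.linalg.norm(f1[:, None, :] - f2[None, :, :], axis=-1)
        return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

    def classical_mds(D, dim=2):
        """Embed a pairwise distance matrix D into `dim` dimensions."""
        n = D.shape[0]
        J = np.eye(n) - np.ones((n, n)) / n
        B = -0.5 * J @ (D ** 2) @ J             # double centering
        w, V = np.linalg.eigh(B)
        idx = np.argsort(w)[::-1][:dim]
        return V[:, idx] * np.sqrt(np.maximum(w[idx], 0))

    # fibers: list of (N_i x 3) arrays of points along each tract
    # D = np.array([[fiber_distance(a, b) for b in fibers] for a in fibers])
    # xy = classical_mds(D, dim=2)   # 2D points linked to the 3D fiber curves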
|
|
Parameter Sensitivity Visualization for DTI Fiber Tracking |
|
Ralph Brecheisen,
Anna Vilanova,
Bram Platel,
Bart ter Haar Romeny
|
|
Pages: 1441-1448 |
|
doi>10.1109/TVCG.2009.170 |
|
Full text available:
Publisher Site
|
|
Fiber
tracking of Diffusion Tensor Imaging (DTI) data offers a unique insight
into the three-dimensional organisation of white matter structures in
the living brain. However, fiber tracking algorithms require a number of
user-defined input parameters that strongly affect the output results.
Usually the fiber tracking parameters are set once and are then re-used
for several patient datasets. However, the stability of the chosen
parameters is not evaluated and a small change in the parameter values
can give very different results. The user remains completely unaware of
such effects. Furthermore, it is difficult to reproduce output results
between different users. We propose a visualization tool that allows the
user to visually explore how small variations in parameter values affect
the output of fiber tracking. With this knowledge the user can not only
assess the stability of commonly used parameter values but also evaluate
in a more reliable way the output results between different patients.
Existing tools do not provide such information. A small user evaluation
of our tool has been done to show the potential of the technique.
|
|
Exploring 3D DTI Fiber Tracts with Linked 2D Representations |
|
Radu Jianu,
Cagatay Demiralp,
David Laidlaw
|
|
Pages: 1449-1456 |
|
doi>10.1109/TVCG.2009.141 |
|
Full text available:
Publisher Site
|
|
We
present a visual exploration paradigm that facilitates navigation
through complex fiber tracts by combining traditional 3D model viewing
with lower dimensional representations. To this end, we create standard
streamtube models along with two two-dimensional representations, an
embedding in the plane and a hierarchical clustering tree, for a given
set of fiber tracts. We then link these three representations using both
interaction and color obtained by embedding fiber tracts into a
perceptually uniform color space. We describe an anecdotal evaluation
with neuroscientists to assess the usefulness of our method in exploring
anatomical and functional structures in the brain. Expert feedback
indicates that, while a standalone clinical use of the proposed method
would require anatomical landmarks in the lower dimensional
representations, the approach would be particularly useful in
accelerating tract bundle selection. Results also suggest that combining
traditional 3D model viewing with lower dimensional representations can
ease navigation through the complex fiber tract models, improving
exploration of the connectivity in the brain.
|
|
Coloring 3D Line Fields Using Boy’s Real Projective Plane Immersion |
|
Çağatay Demiralp,
John F. Hughes,
David H. Laidlaw
|
|
Pages: 1457-1464 |
|
doi>10.1109/TVCG.2009.125 |
|
Full text available:
Publisher Site
|
|
We
introduce a new method for coloring 3D line fields and show results
from its application in visualizing orientation in DTI brain data sets.
The method uses Boy’s surface, an immersion of RP2 in 3D. This coloring
method is smooth and one-to-one except on a set of measure zero, the
double curve of Boy’s surface.
|
|
The Occlusion Spectrum for Volume Classification and Visualization |
|
Carlos Correa,
Kwan-Liu Ma
|
|
Pages: 1465-1472 |
|
doi>10.1109/TVCG.2009.189 |
|
Full text available:
Publisher Site
|
|
Despite
the ever-growing improvements in graphics processing units and
computational power, classifying 3D volume data remains a challenge. In
this paper, we present a new method for classifying volume data based on
the ambient occlusion of voxels. This information stems from the
observation that most volumes of a certain type, e.g., CT, MRI or flow
simulation, contain occlusion patterns that reveal the spatial structure
of their materials or features. Furthermore, these patterns appear to
emerge consistently for different data sets of the same type. We call
this collection of patterns the occlusion spectrum of a dataset.
We show that using this occlusion spectrum leads to better
two-dimensional transfer functions that can help classify complex data
sets in terms of the spatial relationships among features. In general,
the ambient occlusion of a voxel can be interpreted as a weighted
average of the intensities in a spherical neighborhood around the voxel.
Different weighting schemes determine the ability to separate
structures of interest in the occlusion spectrum. We present a general
methodology for finding such a weighting. We show results of our approach
in 3D imaging for different applications, including brain and breast
tumor detection and the visualization of turbulent flow.
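As a minimal sketch (not the paper's implementation) of the observation above, a voxel's ambient occlusion can be approximated as a weighted average of the intensities in a spherical neighborhood; the Gaussian weighting and radius below are illustrative assumptions, and the resulting 2D histogram over (intensity, occlusion) pairs is one way to look at an occlusion-spectrum-style feature space.

    import numpy as np
    from scipy.ndimage import convolve

    def occlusion_field(volume, radius=4):
        """Weighted average of intensities over a spherical neighborhood."""
        r = int(radius)
        z, y, x = np.mgrid[-r:r + 1, -r:r + 1, -r:r + 1]
        dist2 = x**2 + y**2 + z**2
        kernel = np.exp(-dist2 / (2.0 * (radius / 2.0) ** 2))
        kernel[dist2 > radius**2] = 0.0        # restrict weights to the sphere
        kernel /= kernel.sum()
        return convolve(volume.astype(np.float32), kernel, mode='nearest')

    # 2D histogram over (intensity, occlusion) pairs for transfer function design:
    # occ = occlusion_field(vol)
    # hist, xedges, yedges = np.histogram2d(vol.ravel(), occ.ravel(), bins=256)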
|
|
Structuring Feature Space: A Non-Parametric Method for Volumetric Transfer Function Generation |
|
Ross Maciejewski,
Insoo Woo,
Wei Chen,
David Ebert
|
|
Pages: 1473-1480 |
|
doi>10.1109/TVCG.2009.185 |
|
Full text available:
Publisher Site
|
|
The
use of multi-dimensional transfer functions for direct volume rendering
has been shown to be an effective means of extracting materials and
their boundaries for both scalar and multivariate data. The most common
multi-dimensional transfer function consists of a two-dimensional (2D)
histogram with axes representing a subset of the feature space (e.g.,
value vs. value gradient magnitude), with each entry in the 2D histogram
being the number of voxels at a given feature space pair. Users then
assign color and opacity to the voxel distributions within the given
feature space through the use of interactive widgets (e.g., box,
circular, triangular selection). Unfortunately, such tools lead users
through a trial-and-error approach as they assess which data values
within the feature space map to a given area of interest within the
volumetric space. In this work, we propose the addition of
non-parametric clustering within the transfer function feature space in
order to extract patterns and guide transfer function generation. We
apply a non-parametric kernel density estimation to group voxels of
similar features within the 2D histogram. These groups are then binned
and colored based on their estimated density, and the user may
interactively grow and shrink the binned regions to explore feature
boundaries and extract regions of interest. We also extend this scheme
to temporal volumetric data in which time steps of 2D histograms are
composited into a histogram volume. A three-dimensional (3D) density
estimation is then applied, and users can explore regions within the
feature space across time without adjusting the transfer function at
each time step. Our work enables users to effectively explore the
structures found within a feature space of the volume and provides a
context in which the user can understand how these structures relate to
their volumetric data. We provide tools for enhanced exploration and
manipulation of the transfer function, and we show that the initial
transfer function generation serves as a reasonable base for volumetric
rendering, reducing the trial-and-error overhead typically found in
transfer function design.
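A minimal sketch of the kind of non-parametric density estimation described above (the Gaussian kernel, the voxel subsampling, and the grid sizes are illustrative assumptions, not the authors' pipeline), applied to the (value, gradient-magnitude) feature space that seeds a 2D transfer function:

    import numpy as np
    from scipy.ndimage import gaussian_gradient_magnitude
    from scipy.stats import gaussian_kde

    def feature_space_density(volume, grid_size=128, sample=20000):
        values = volume.ravel()
        grads = gaussian_gradient_magnitude(volume, sigma=1.0).ravel()
        # Subsample voxels so the kernel density estimate stays tractable.
        idx = np.random.choice(values.size, size=min(sample, values.size), replace=False)
        kde = gaussian_kde(np.vstack([values[idx], grads[idx]]))
        # Evaluate the density on a regular grid covering the feature space.
        v = np.linspace(values.min(), values.max(), grid_size)
        g = np.linspace(grads.min(), grads.max(), grid_size)
        V, G = np.meshgrid(v, g, indexing='ij')
        density = kde(np.vstack([V.ravel(), G.ravel()])).reshape(grid_size, grid_size)
        return density   # bin/threshold this density to propose transfer-function regions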
|
|
Automatic Transfer Function Generation Using Contour Tree Controlled Residue Flow Model and Color Harmonics |
|
Jianlong Zhou,
Masahiro Takatsuka
|
|
Pages: 1481-1488 |
|
doi>10.1109/TVCG.2009.120 |
|
Full text available:
Publisher Site
|
|
Transfer
functions facilitate volumetric data visualization by assigning
optical properties to various data features and scalar values.
Automation of transfer function specifications still remains a challenge
in volume rendering. This paper presents an approach for automating
transfer function generations by utilizing topological attributes
derived from the contour tree of a volume. The contour tree acts as a
visual index to volume segments, and captures associated topological
attributes involved in volumetric data. A residue flow model based on
Darcy's Law is employed to control distributions of opacity between
branches of the contour tree. Topological attributes are also used to
control color selection in a perceptual color space and create harmonic
color transfer functions. The generated transfer functions can depict
inclusion relationships between structures and maximize opacity and color
differences between them. The proposed approach allows efficient
automation of transfer function generation, and lets exploration of the data
be carried out by controlling the opacity residue flow rate
instead of adjusting complex low-level transfer function parameters.
Experiments on various data sets demonstrate the practical use of our
approach in transfer function generation.
|
|
An interactive visualization tool for multi-channel confocal microscopy data in neurobiology research |
|
Yong Wan,
Hideo Otsuna,
Chi-Bin Chien,
Charles Hansen
|
|
Pages: 1489-1496 |
|
doi>10.1109/TVCG.2009.118 |
|
Full text available:
Publisher Site
|
|
Confocal
microscopy is widely used in neurobiology for studying the
three-dimensional structure of the nervous system. Confocal image data
are often multi-channel, with each channel resulting from a different
fluorescent dye or fluorescent protein; one channel may have dense data,
while another has sparse data; and there are often structures at several
spatial scales: subneuronal domains, neurons, and large groups of
neurons (brain regions). Even qualitative analysis can therefore require
visualization using techniques and parameters fine-tuned to a
particular dataset. Despite the plethora of volume rendering techniques
that have been available for many years, the techniques commonly used
in neurobiological research are somewhat rudimentary, such as looking at
image slices or maximal intensity projections. Thus there is a real
demand from neurobiologists, and biologists in general, for a flexible
visualization tool that allows interactive visualization of
multi-channel confocal data, with rapid fine-tuning of parameters to
reveal the three-dimensional relationships of structures of interest.
Together with neurobiologists, we have designed such a tool, choosing
visualization methods to suit the characteristics of confocal data and a
typical biologist's workflow. We use interactive volume rendering with
intuitive settings for multidimensional transfer functions, multiple
render modes and multi-views for multi-channel volume data, and
embedding of polygon data into volume data for rendering and editing. As
an example, we apply this tool to visualize confocal microscopy
datasets of the developing zebrafish visual system.
|
|
BrainGazer - Visual Queries for Neurobiology Research |
|
Stefan Bruckner,
Veronika Šolteszova,
Eduard Groller,
Jiří Hladůvka,
Katja Buhler,
Jai Y. Yu,
Barry J. Dickson
|
|
Pages: 1497-1504 |
|
doi>10.1109/TVCG.2009.121 |
|
Full text available:
Publisher Site
|
|
Neurobiology
investigates how anatomical and physiological relationships in the
nervous system mediate behavior. Molecular genetic techniques, applied
to species such as the common fruit fly Drosophila melanogaster, have
proven to be an important tool in this research. Large databases of
transgenic specimens are being built and need to be analyzed to
establish models of neural information processing. In this paper we
present an approach for the exploration and analysis of neural circuits
based on such a database. We have designed and implemented
BrainGazer, a system which integrates visualization techniques
for volume data acquired through confocal microscopy as well as
annotated anatomical structures with an intuitive approach for accessing
the available information. We focus on the ability to visually query
the data based on semantic as well as spatial relationships.
Additionally, we present visualization techniques for the concurrent
depiction of neurobiological volume data and geometric objects which aim
to reduce visual clutter. The described system is the result of an
ongoing interdisciplinary collaboration between neurobiologists and
visualization researchers.
|
|
Scalable and Interactive Segmentation and Visualization of Neural Processes in EM Datasets |
|
Won-Ki Jeong,
Johanna Beyer,
Markus Hadwiger,
Amelio Vazquez,
Hanspeter Pfister,
Ross T. Whitaker
|
|
Pages: 1505-1514 |
|
doi>10.1109/TVCG.2009.178 |
|
Full text available:
Publisher Site
|
|
Recent
advances in scanning technology provide high resolution EM (Electron
Microscopy) datasets that allow neuro-scientists to reconstruct complex
neural connections in a nervous system. However, due to the enormous
size and complexity of the resulting data, segmentation and
visualization of neural processes in EM data is usually a difficult and
very time-consuming task. In this paper, we present NeuroTrace, a novel
EM volume segmentation and visualization system that consists of two
parts: a semi-automatic multiphase level set segmentation with 3D
tracking for reconstruction of neural processes, and a specialized
volume rendering approach for visualization of EM volumes. It employs
view-dependent on-demand filtering and evaluation of a local histogram
edge metric, as well as on-the-fly interpolation and ray-casting of
implicit surfaces for segmented neural structures. Both methods are
implemented on the GPU for interactive performance. NeuroTrace is
designed to be scalable to large datasets and data-parallel hardware
architectures. A comparison of NeuroTrace with a commonly used manual EM
segmentation tool shows that our interactive workflow is faster and
easier to use for the reconstruction of complex neural processes.
|
|
Multimodal Vessel Visualization of Mouse Aorta PET/CT Scans |
|
Timo Ropinski,
Sven Hermann,
Rainer Reich,
Michael Schafers,
Klaus Hinrichs
|
|
Pages: 1515-1522 |
|
doi>10.1109/TVCG.2009.169 |
|
Full text available:
Publisher Site
|
|
In
this paper, we present a visualization system for the visual analysis
of PET/CT scans of aortic arches of mice. The system has been designed
in close collaboration between researchers from the areas of
visualization and molecular imaging with the objective to get deeper
insights into the structural and molecular processes which take place
during plaque development. Understanding the development of plaques
might lead to a better and earlier diagnosis of cardiovascular diseases,
which are still the main cause of death in the western world. After
motivating our approach, we will briefly describe the multimodal data
acquisition process before explaining the visualization techniques used.
The main goal is to develop a system which supports visual comparison
of the data of different species. Therefore, we have chosen a linked
multi-view approach, which amongst others integrates a specialized
straightened multipath curved planar reformation and a multimodal
vessel flattening technique. We have applied the visualization concepts
to multiple data sets, and we will present the results of this
investigation.
|
|
Quantitative Texton Sequences for Legible Bivariate Maps |
|
Colin Ware
|
|
Pages: 1523-1530 |
|
doi>10.1109/TVCG.2009.175 |
|
Full text available:
Publisher Site
|
|
Representing
bivariate scalar maps is a common but difficult visualization
problem. One solution has been to use two-dimensional color schemes, but
the results are often hard to interpret and inaccurately read. An
alternative is to use a color sequence for one variable and a texture
sequence for another. This has been used, for example, in geology, but
much less studied than the two-dimensional color scheme, although theory
suggests that it should lead to easier perceptual separation of
information relating to the two variables. To make a texture sequence
more clearly readable the concept of the quantitative texton sequence
(QTonS) is introduced. A QTonS is defined as a sequence of small graphical
elements, called textons, where each texton represents a different
numerical value and sets of textons can be densely displayed to produce
visually differentiable textures. An experiment was carried out to
compare two bivariate color coding schemes with two schemes using QTonS
for one bivariate map component and a color sequence for the other. Two
different key designs were investigated (a key being a sequence of
colors or textures used in obtaining quantitative values from a map).
The first design used two separate keys, one for each dimension, in
order to measure how accurately subjects could independently estimate
the underlying scalar variables. The second key design was two
dimensional and intended to measure the overall integral accuracy that
could be obtained. The results show that the accuracy is substantially
higher for the QTonS/color sequence schemes. A hypothesis that
texture/color sequence combinations are better for independent judgments
of mapped quantities was supported. A second experiment probed the
limits of spatial resolution for QTonSs.
|
|
Continuous Parallel Coordinates |
|
Julian Heinrich,
Daniel Weiskopf
|
|
Pages: 1531-1538 |
|
doi>10.1109/TVCG.2009.131 |
|
Full text available:
Publisher Site
|
|
Typical
scientific data is represented on a grid with appropriate interpolation
or approximation schemes, defined on a continuous domain. The
visualization of such data in parallel coordinates may reveal patterns
latently contained in the data and thus can improve the understanding of
multidimensional relations. In this paper, we adopt the concept of
continuous scatterplots for the visualization of spatially continuous
input data to derive a density model for parallel coordinates. Based on
the point-line duality between scatterplots and parallel coordinates, we
propose a mathematical model that maps density from a continuous
scatterplot to parallel coordinates and present different algorithms for
both numerical and analytical computation of the resulting density
field. In addition, we show how the 2-D model can be used to
successively construct continuous parallel coordinates with an arbitrary
number of dimensions. Since continuous parallel coordinates interpolate
data values within grid cells, a scalable and dense visualization is
achieved, which will be demonstrated for typical multi-variate
scientific data.
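For reference, the classical point-line duality that the density mapping builds on (standard parallel-coordinates geometry with the two axes placed at x = 0 and x = 1; this states only the duality, not the paper's density derivation):

    \[
      (a, b) \;\longmapsto\; \ell_{(a,b)}(t) = (1 - t)\, a + t\, b, \qquad t \in [0, 1],
    \]
    \[
      y = m x + c \;\longmapsto\; \left( \tfrac{1}{1 - m},\; \tfrac{c}{1 - m} \right), \qquad m \neq 1.
    \]

That is, each data point becomes a line segment between the two axes, and every line in the scatterplot collapses to a single dual point, which is what allows density to be transported between the two views.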
|
|
VisMashup: Streamlining the Creation of Custom Visualization Applications |
|
Emanuele Santos,
Lauro Lins,
James Ahrens,
Juliana Freire,
Claudio Silva
|
|
Pages: 1539-1546 |
|
doi>10.1109/TVCG.2009.195 |
|
Full text available:
Publisher Site
|
|
Visualization
is essential for understanding the increasing volumes of digital data.
However, the process required to create insightful visualizations is
involved and time consuming. Although several visualization tools are
available, including tools with sophisticated visual interfaces, they
are out of reach for users who have little or no knowledge of
visualization techniques and/or who do not have programming expertise.
In this paper, we propose VisMashup, a new framework for streamlining
the creation of customized visualization applications. Because these
applications can be customized for very specific tasks, they can hide
much of the complexity in a visualization specification and make it
easier for users to explore visualizations by manipulating a small set
of parameters. We describe the framework and how it supports the various
tasks a designer needs to carry out to develop an application, from
mining and exploring a set of visualization specifications (pipelines),
to the creation of simplified views of the pipelines, and the automatic
generation of the application and its interface. We also describe the
implementation of the system and demonstrate its use in two real
application scenarios.
|
|
Focus+Context Route Zooming and Information Overlay in 3D Urban Environments |
|
Huamin Qu,
Haomian Wang,
Weiwei Cui,
Yingcai Wu,
Ming-Yuen Chan
|
|
Pages: 1547-1554 |
|
doi>10.1109/TVCG.2009.144 |
|
Full text available:
Publisher Site
|
|
In
this paper we present a novel focus+context zooming technique, which
allows users to zoom into a route and its associated landmarks in a 3D
urban environment from a 45-degree bird’s-eye view. Through the creative
utilization of the empty space in an urban environment, our technique
can informatively reveal the focus region and minimize distortions to
the context buildings. We first create more empty space in the 2D map by
broadening the road with an adapted seam carving algorithm. A
grid-based zooming technique is then used to enlarge the landmarks to
reclaim the created empty space and thus reduce distortions to the other
parts. Finally, an occlusion-free route visualization scheme adaptively
scales the buildings occluding the route to make the route always
visible to users. Our method can be conveniently integrated into Google
Earth and Virtual Earth to provide seamless route zooming and help users
better explore a city and plan their tours. It can also be used in
other applications such as information overlay in a virtual city.
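For orientation, the sketch below shows standard seam-carving dynamic programming over an energy map (illustrative only; the paper adapts seam carving to create space along a route, so the energy definition and how seams are used to widen the road are not reproduced here).

    import numpy as np

    def min_vertical_seam(energy):
        """Return, per row, the column index of the minimum-energy vertical seam."""
        h, w = energy.shape
        cost = energy.astype(np.float64).copy()
        back = np.zeros((h, w), dtype=np.int64)
        for i in range(1, h):
            for j in range(w):
                lo, hi = max(j - 1, 0), min(j + 2, w)
                k = int(np.argmin(cost[i - 1, lo:hi])) + lo
                back[i, j] = k
                cost[i, j] += cost[i - 1, k]
        seam = np.empty(h, dtype=np.int64)
        seam[-1] = int(np.argmin(cost[-1]))
        for i in range(h - 2, -1, -1):
            seam[i] = back[i + 1, seam[i + 1]]
        return seam   # duplicating pixels along such seams widens the image locally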
|
|
Kd-Jump: a Path-Preserving Stackless Traversal for Faster Isosurface Raytracing on GPUs |
|
David M. Hughes,
Ik Soo Lim
|
|
Pages: 1555-1562 |
|
doi>10.1109/TVCG.2009.161 |
|
Full text available:
Publisher Site
|
|
Stackless
traversal techniques are often used to circumvent memory bottlenecks by
avoiding a stack and replacing return traversal with extra computation.
This paper addresses whether the stackless traversal approaches are
useful on newer hardware and technology (such as CUDA). To this end, we
present a novel stackless approach for implicit kd-trees, which exploits
the benefits of index-based node traversal, without incurring extra
node visitation. This approach, which we term Kd-Jump, enables the
traversal to immediately return to the next valid node, like a stack,
without incurring extra node visitation (kd-restart). Also, Kd-Jump does
not require global memor y (stack) at all and only requires a small
matrix in fast constant-memory. We report that Kd-Jump outperforms a
stack by 10 to 20% and kd-restar t by 100%. We also present a Hybrid
Kd-Jump, which utilizes a volume stepper for leaf testing and a run-time
depth threshold to define where kd-tree traversal stops and
volume-stepping occurs. By using both methods, we gain the benefits of
empty space removal, fast texture caching, and the real-time ability to
determine the best threshold for the current isosurface and view direction.
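The sketch below is only an illustration of the flavor of index-based, stackless return on an implicitly (heap-)indexed kd-tree; the bitmask bookkeeping, naming, and conventions are assumptions made for exposition and do not reproduce the paper's GPU algorithm.

    def jump(node, depth, pending):
        """node: 1-based heap index (children of p are 2p and 2p+1);
        depth: depth of `node` (root = 0);
        pending: bitmask, bit d set if the far child entered at depth d is unvisited."""
        if pending == 0:
            return None                        # traversal finished
        d = pending.bit_length() - 1           # deepest depth with a pending far child
        ancestor = node >> (depth - d - 1)     # the near child we descended into at depth d+1
        sibling = ancestor ^ 1                 # its far sibling (heap siblings differ in bit 0)
        return sibling, d + 1, pending & ~(1 << d)

The point of the index arithmetic is that the resume target can be computed from the current node index and a small bitmask, so no per-ray stack of node addresses is needed.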
|
|
Mapping High-Fidelity Volume Rendering for Medical Imaging to CPU, GPU and Many-Core Architectures |
|
Mikhail Smelyanskiy,
David Holmes,
Jatin Chhugani,
Alan Larson,
Douglas M. Carmean,
Dennis Hanson,
Pradeep Dubey,
Kurt Augustine,
Daehyun Kim,
Alan Kyker,
Victor W. Lee,
Anthony D. Nguyen,
Larry Seiler,
Richard Robb
|
|
Pages: 1563-1570 |
|
doi>10.1109/TVCG.2009.164 |
|
Full text available:
Publisher Site
|
|
Medical
volumetric imaging requires high fidelity, high performance rendering
algorithms. We motivate and analyze new volumetric rendering algorithms
that are suited to modern parallel processing architectures. First, we
describe the three major categories of volume rendering algorithms and
confirm through an imaging scientist-guided evaluation that ray-casting
is the most acceptable. We describe a thread- and data-parallel
implementation of ray-casting that makes it amenable to key
architectural trends of three modern commodity parallel architectures:
multi-core, GPU, and an upcoming many-core Intel® architecture code-named
Larrabee. We achieve more than an order of magnitude performance
improvement on a number of large 3D medical datasets. We further
describe a data compression scheme that significantly reduces
data-transfer overhead. This allows our approach to scale well to large
numbers of Larrabee cores.
|
|
Volume Ray Casting with Peak Finding and Differential Sampling |
|
Aaron Knoll,
Younis Hijazi,
Rolf Westerteiger,
Mathias Schott,
Charles Hansen,
Hans Hagen
|
|
Pages: 1571-1578 |
|
doi>10.1109/TVCG.2009.204 |
|
Full text available:
Publisher Site
|
|
Direct
volume rendering and isosurfacing are ubiquitous rendering techniques
in scientific visualization, commonly employed in imaging 3D data from
simulation and scan sources. Conventionally, these methods have been
treated as separate modalities, necessitating different sampling
strategies and rendering algorithms. In reality, an isosurface is a
special case of a transfer function, namely a Dirac impulse at a given
isovalue. However, artifact-free rendering of discrete isosurfaces in a
volume rendering framework is an elusive goal, requiring either
infinite sampling or smoothing of the transfer function. While
preintegration approaches solve the most obvious deficiencies in
handling sharp transfer functions, artifacts can still result,
limiting classification. In this paper, we introduce a method for
rendering such features by explicitly solving for isovalues within the
volume rendering integral. In addition, we present a sampling strategy
inspired by ray differentials that automatically matches the frequency
of the image plane, resulting in fewer artifacts near the eye and
better overall performance. These techniques exhibit clear advantages
over standard uniform ray casting with and without preintegration, and
allow for high-quality interactive volume rendering with sharp C0
transfer functions.
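A minimal sketch (illustrative, not the paper's renderer) of the idea of solving for isovalues along the ray: while sampling, detect where the scalar field crosses an isovalue between consecutive samples and refine the crossing before compositing; the sample count and bisection depth are assumptions.

    import numpy as np

    def find_isovalue_hits(sample_fn, t0, t1, isovalue, n=128):
        """sample_fn(t) returns the scalar value at distance t along the ray."""
        ts = np.linspace(t0, t1, n)
        vals = np.array([sample_fn(t) for t in ts])
        hits = []
        for i in range(n - 1):
            a, b = vals[i] - isovalue, vals[i + 1] - isovalue
            if a * b < 0:                           # sign change: a crossing exists
                lo, hi = ts[i], ts[i + 1]
                for _ in range(16):                 # bisection refinement
                    mid = 0.5 * (lo + hi)
                    if (sample_fn(mid) - isovalue) * a < 0:
                        hi = mid
                    else:
                        lo = mid
                hits.append(0.5 * (lo + hi))
        return hits   # composite the isosurface contribution exactly at these depths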
|
|
Interactive Volume Rendering of Functional Representations in Quantum Chemistry |
|
Yun Jang,
Ugo Varetto
|
|
Pages: 1579-1586 |
|
doi>10.1109/TVCG.2009.158 |
|
Full text available:
Publisher Site
|
|
Simulation
and computation in chemistry have improved as
computational power has increased over the decades. Many types of chemistry
simulation results are available, from atomic level bonding to
volumetric representations of electron density. However, tools for the
visualization of the results from quantum chemistry computations are
still limited to showing atomic bonds and isosurfaces or isocontours
corresponding to certain isovalues. In this work, we study the
volumetric representations of the results from quantum chemistry
computations, and evaluate and visualize the representations directly on
the GPU without resampling the result in grid structures. Our
visualization tool handles the direct evaluation of the approximated
wavefunctions described as a combination of Gaussian-like primitive
basis functions. For visualizations, we use a slice-based volume
rendering technique with a 2D transfer function, volume clipping, and
illustrative rendering in order to reveal and enhance the quantum
chemistry structure. Since there is no need to resample the volume
from the functional representations, two issues, data transfer and
resampling resolution, are avoided, making it possible to
interactively explore a large amount of information in the
computation results.
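A minimal sketch (s-type primitives only; real wavefunctions also carry angular factors and contraction coefficients, and the function names here are assumptions) of evaluating a Gaussian basis expansion directly at arbitrary sample points, with no grid resampling:

    import numpy as np

    def evaluate_wavefunction(points, centers, alphas, coeffs):
        """points: (N, 3) sample positions; centers: (M, 3) primitive centers;
        alphas: (M,) exponents; coeffs: (M,) expansion coefficients."""
        d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)  # (N, M)
        return (coeffs[None, :] * np.exp(-alphas[None, :] * d2)).sum(axis=-1)

    # Example: evaluate along a ray through the domain (origin, direction, ts assumed given).
    # ray = origin[None, :] + ts[:, None] * direction[None, :]
    # psi = evaluate_wavefunction(ray, centers, alphas, coeffs)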
|
|
GL4D: A GPU-based Architecture for Interactive 4D Visualization |
|
Alan Chu,
Chi-Wing Fu,
Andrew Hanson,
Pheng-Ann Heng
|
|
Pages: 1587-1594 |
|
doi>10.1109/TVCG.2009.147 |
|
Full text available:
Publisher Site
|
|
This
paper describes GL4D, an interactive system for visualizing 2-manifolds
and 3-manifolds embedded in four Euclidean dimensions and illuminated
by 4D light sources. It is a tetrahedron-based rendering pipeline that
projects geometry into volume images, an exact parallel to the
conventional triangle-based rendering pipeline for 3D graphics. Novel
features include GPU-based algorithms for real-time 4D occlusion
handling and transparency compositing; we thus enable a previously
impossible level of quality and interactivity for exploring lit 4D
objects. The 4D tetrahedrons are stored in GPU memory as vertex buffer
objects, and the vertex shader is used to perform per-vertex 4D
modelview transformations and 4D-to-3D projection. The geometry shader
extension is utilized to slice the projected tetrahedrons and rasterize
the slices into individual 2D layers of voxel fragments. Finally, the
fragment shader performs per-voxel operations such as lighting and alpha
blending with previously computed layers. We account for 4D voxel
occlusion along the 4D-to-3D projection ray by supporting a multi-pass
back-to-front fragment composition along the projection ray; to
accomplish this, we exploit a new adaptation of the dual depth peeling
technique to produce correct volume image data and to simultaneously
render the resulting volume data using 3D transfer functions into the
final 2D image. Previous CPU implementations of the rendering of
4D-embedded 3-manifolds could not perform either the 4D depth-buffered
projection or manipulation of the volume-rendered image in real-time; in
particular, the dual depth peeling algorithm is a novel GPU-based
solution to the real-time 4D depth-buffering problem. GL4D is implemented
as an integrated OpenGL-style API library, so that the underlying
shader operations are as transparent as possible to the user.
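As a simplified illustration of the per-vertex stage described above, the sketch below (conventions such as projecting along the w axis, the eye distance d, and omitting homogeneous 4D translation are assumptions for the example, not the GL4D API) applies a 4D rotation followed by a 4D-to-3D perspective projection, the direct analogue of the usual 3D-to-2D case.

    import numpy as np

    def project_4d_to_3d(vertices4, modelview4, d=4.0):
        """vertices4: (N, 4) points; modelview4: (4, 4) 4D rotation (translation omitted)."""
        v = vertices4 @ modelview4.T
        # Perspective divide along the 4th axis, with the 4D eye at w = d.
        scale = 1.0 / (d - v[:, 3])
        return v[:, :3] * scale[:, None]

    def rotation_xw(theta):
        """A 4D rotation in the x-w plane, one of the rotations with no 3D analogue."""
        c, s = np.cos(theta), np.sin(theta)
        R = np.eye(4)
        R[0, 0], R[0, 3], R[3, 0], R[3, 3] = c, -s, s, c
        return R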
|
|
Decoupling Illumination from Isosurface Generation Using 4D Light Transport |
|
David C. Banks,
Kevin Beason
|
|
Pages: 1595-1602 |
|
doi>10.1109/TVCG.2009.137 |
|
Full text available:
Publisher Site
|
|
One
way to provide global illumination for the scientist who performs an
interactive sweep through a 3D scalar dataset is to pre-compute global
illumination, resample the radiance onto a 3D grid, then use it as a 3D
texture. The basic approach of repeatedly extracting isosurfaces,
illuminating them, and then building a 3D illumination grid suffers from
the non-uniform sampling that arises from coupling the sampling of
radiance with the sampling of isosurfaces. We demonstrate how the
illumination step can be decoupled from the isosurface extraction step
by illuminating the entire 3D scalar function as a 3-manifold in
4-dimensional space. By reformulating light transport in a higher
dimension, one can sample a 3D volume without requiring the radiance
samples to aggregate along individual isosurfaces in the pre-computed
illumination grid.
|
|
Supercubes: A High-Level Primitive for Diamond Hierarchies |
|
Kenneth Weiss,
Leila De Floriani
|
|
Pages: 1603-1610 |
|
doi>10.1109/TVCG.2009.186 |
|
Full text available:
Publisher Site
|
|
Volumetric
datasets are often modeled using a multiresolution approach based on a
nested decomposition of the domain into a polyhedral mesh. Nested
tetrahedral meshes generated through the longest edge bisection rule are
commonly used to decompose regular volumetric datasets since they
produce highly adaptive crack-free representations. Efficient
representations for such models have been achieved by clustering the set
of tetrahedra sharing a common longest edge into a structure called a
diamond. The alignment and orientation of the longest edge can be used
to implicitly determine the geometry of a diamond and its relations to
the other diamonds within the hierarchy. We introduce the supercube as a
high-level primitive within such meshes that encompasses all unique
types of diamonds. A supercube is a coherent set of edges corresponding
to three consecutive levels of subdivision. Diamonds are uniquely
characterized by the longest edge of the tetrahedra forming them and are
clustered in supercubes through the association of the longest edge of a
diamond with a unique edge in a supercube. Supercubes are thus a compact
and highly efficient means of associating information with a subset of
the vertices, edges, and tetrahedra of the meshes generated through
longest edge bisection. We demonstrate the effectiveness of the supercube
representation when encoding multiresolution diamond hierarchies built
on a subset of the points of a regular grid. We also show how supercubes
can be used to efficiently extract meshes from diamond hierarchies and
to reduce the storage requirements of such variable-resolution meshes.
|
|
High-Quality, Semi-Analytical Volume Rendering for AMR Data |
|
Pages: 1611-1618 |
|
doi>10.1109/TVCG.2009.149 |
|
Full text available:
Publisher Site
|
|
This
paper presents a pipeline for high quality volume rendering of adaptive
mesh refinement (AMR) datasets. We introduce a new method allowing high
quality visualization of hexahedral cells in this context; this method
avoids artifacts like discontinuities in the isosurfaces. To achieve
this, we choose the number and placement of sampling points over the
cast rays according to the analytical properties of the reconstructed
signal inside each cell. We extend our method to handle volume shading
of such cells. We propose an interpolation scheme that guarantees
continuity between adjacent cells of different AMR levels. We introduce
an efficient hybrid CPU-GPU mesh traversal technique. We present an
implementation of our AMR visualization ...
|
|
TVCG Vis/InfoVis 2009 Author Index |
|
Pages: xxviv-xxvv |
|
doi>10.1109/TVCG.2009.192 |
|
Full text available:
Publisher Site
|
|
TVCG Vis/InfoVis 2009 Author Index
|