Vis/InfoVis 2006 pre-pages
Page: vispre
doi: 10.1109/TVCG.2006.191

These
pre-pages to the issue contain a table of contents, a list of
supporting organizations, a message from the Editor-in-Chief, the
preface, committee and reviewer listings, 2005 visualization awards, and
the keynote and capstone addresses for Vis and InfoVis.

ASK-GraphView: A Large Scale Graph Visualization System
James Abello, Frank van Ham, Neeraj Krishnan
Pages: 669-676
doi: 10.1109/TVCG.2006.120

We
describe ASK-GraphView, a node-link-based graph visualization system
that allows clustering and interactive navigation of large graphs,
ranging in size up to 16 million edges. The system uses a scalable
architecture and a series of increasingly sophisticated clustering
algorithms to construct a hierarchy on an arbitrary, weighted undirected
input graph. By lowering the interactivity requirements we can scale to
substantially bigger graphs. The user is allowed to navigate this
hierarchy in a top-down manner by interactively expanding individual
clusters. ASK-GraphView also provides facilities for filtering and
coloring, annotation and cluster labeling.

MatrixExplorer: a Dual-Representation System to Explore Social Networks
Nathalie Henry, Jean-Daniel Fekete
Pages: 677-684
doi: 10.1109/TVCG.2006.160

MatrixExplorer is a network visualization system that uses two representations: node-link diagrams and matrices. Its design comes from a list of requirements formalized after several interviews and a participatory design session conducted with social science researchers. Although matrices are commonly used in social network analysis, very few systems support matrix-based representations to visualize and analyze networks. MatrixExplorer provides several novel features to support the exploration of social networks with a matrix-based representation, in addition to the standard interactive filtering and clustering functions. It provides tools to reorder (layout) matrices, to annotate and compare findings across different layouts, and to find consensus among several clusterings. MatrixExplorer also supports node-link diagram views, which are familiar to most users and remain a convenient way to publish or communicate exploration results. Matrix and node-link representations are kept synchronized at all stages of the exploration process.

Visual Analysis of Multivariate State Transition Graphs
A. Johannes Pretorius, Jarke J. van Wijk
Pages: 685-692
doi: 10.1109/TVCG.2006.192

We
present a new approach for the visual analysis of state transition
graphs. We deal with multivariate graphs where a number of attributes
are associated with every node. Our method provides an interactive
attribute-based clustering facility. Clustering results in metric,
hierarchical and relational data, represented in a single visualization.
To visualize hierarchically structured quantitative data, we introduce a
novel technique: the bar tree. We combine this with a node-link diagram
to visualize the hierarchy and an arc diagram to visualize relational
data. Our method enables the user to gain significant insight into large
state transition graphs containing tens of thousands of nodes. We
illustrate the effectiveness of our approach by applying it to a
real-world use case. The graph we consider models the behavior of an
industrial wafer stepper and contains 55,043 nodes and 289,443 edges.

Balancing Systematic and Flexible Exploration of Social Networks
Adam Perer, Ben Shneiderman
Pages: 693-700
doi: 10.1109/TVCG.2006.122

Social network analysis (SNA) has emerged as a powerful method for understanding the importance of relationships in networks. However, interactive exploration of networks is currently challenging because: (1) it is difficult to find patterns and comprehend the structure of networks with many nodes and links, and (2) current systems are often a medley of statistical methods and overwhelming visual output, which leaves many analysts uncertain about how to explore in an orderly manner. This results in exploration that is largely opportunistic. Our contributions are techniques to help structural analysts understand social networks more effectively. We present SocialAction, a system that uses attribute ranking and coordinated views to help users systematically examine numerous SNA measures. Users can (1) flexibly iterate through visualizations of measures to gain an overview, filter nodes, and find outliers, (2) aggregate networks using link structure, find cohesive subgroups, and focus on communities of interest, and (3) untangle networks by viewing different link types separately, or find patterns across different link types using a matrix overview. For each operation, a stable node layout is maintained in the network visualization so users can make comparisons. SocialAction offers analysts a strategy beyond opportunism, as it provides systematic, yet flexible, techniques for exploring social networks.

Multi-Scale Banking to 45 Degrees
Jeffrey Heer, Maneesh Agrawala
Pages: 701-708
doi: 10.1109/TVCG.2006.163

In
his text Visualizing Data, William Cleveland demonstrates how the
aspect ratio of a line chart can affect an analyst's perception of
trends in the data. Cleveland proposes an optimization technique for
computing the aspect ratio such that the average absolute orientation of
line segments in the chart is equal to 45 degrees. This technique,
called banking to 45 degrees, is designed to maximize the
discriminability of the orientations of the line segments in the chart.
In this paper, we revisit this classic result and describe two new
extensions. First, we propose alternate optimization criteria designed
to further improve the visual perception of line segment orientations.
Second, we develop multi-scale banking, a technique that combines
spectral analysis with banking to 45 degrees. Our technique
automatically identifies trends at various frequency scales and then
generates a banked chart for each of these scales. We demonstrate the
utility of our techniques in a range of visualization tools and analysis
examples.
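
As a rough illustration of the banking criterion only (not the authors' implementation, and ignoring their multi-scale extension and alternate criteria), the sketch below searches for the aspect ratio at which the mean absolute segment orientation reaches 45 degrees; the input arrays and the bisection bracket are assumptions.

```python
import math

def bank_to_45(xs, ys, tol=1e-6):
    """Find the aspect ratio (height/width) at which the mean absolute
    orientation of the line segments is 45 degrees; assumes xs is strictly
    increasing. A sketch of the average-orientation criterion only."""
    x_range = max(xs) - min(xs)
    y_range = max(ys) - min(ys)
    slopes = [((y1 - y0) / y_range) / ((x1 - x0) / x_range)
              for x0, x1, y0, y1 in zip(xs, xs[1:], ys, ys[1:])]

    def mean_abs_orientation(alpha):
        return sum(abs(math.degrees(math.atan(alpha * s))) for s in slopes) / len(slopes)

    lo, hi = 1e-6, 1e6  # assumed bracket; orientation grows monotonically with alpha
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mean_abs_orientation(mid) < 45.0:
            lo = mid  # chart is too flat, stretch it vertically
        else:
            hi = mid
    return 0.5 * (lo + hi)
```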

Measuring Data Abstraction Quality in Multiresolution Visualizations
Qingguang Cui, Matthew Ward, Elke Rundensteiner, Jing Yang
Pages: 709-716
doi: 10.1109/TVCG.2006.161

Data
abstraction techniques are widely used in multiresolution visualization
systems to reduce visual clutter and facilitate analysis from overview
to detail. However, analysts are usually unaware of how well the
abstracted data represent the original dataset, which can impact the
reliability of results gleaned from the abstractions. In this paper, we
define two data abstraction quality measures for computing the degree to
which the abstraction conveys the original dataset: the Histogram
Difference Measure and the Nearest Neighbor Measure. They have been
integrated within XmdvTool, a public-domain multiresolution
visualization system for multivariate data analysis that supports
sampling as well as clustering to simplify data. Several interactive
operations are provided, including adjusting the data abstraction level,
changing selected regions, and setting the acceptable data abstraction
quality level. Conducting these operations, analysts can select an
optimal data abstraction level. Also, analysts can compare different
abstraction methods using the measures to see how well relative data
density and outliers are maintained, and then select an abstraction
method that meets the requirements of their analytic tasks.
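
The paper defines its two measures precisely; purely to convey the flavor of a histogram-based quality score (not the paper's Histogram Difference Measure), the sketch below compares per-column histograms of the original and abstracted data. The bin count and the scoring formula are assumptions.

```python
import numpy as np

def histogram_quality(original, abstracted, bins=20):
    """Toy histogram-comparison score in [0, 1]: 1.0 means the abstracted
    rows reproduce the per-column value distributions of the original."""
    diffs = []
    for col in range(original.shape[1]):
        lo, hi = original[:, col].min(), original[:, col].max()
        h_orig, _ = np.histogram(original[:, col], bins=bins, range=(lo, hi))
        h_abs, _ = np.histogram(abstracted[:, col], bins=bins, range=(lo, hi))
        h_orig = h_orig / h_orig.sum()
        h_abs = h_abs / max(h_abs.sum(), 1)
        diffs.append(0.5 * np.abs(h_orig - h_abs).sum())  # total variation distance
    return 1.0 - float(np.mean(diffs))
```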

Enabling Automatic Clutter Reduction in Parallel Coordinate Plots
Geoffrey Ellis, Alan Dix
Pages: 717-724
doi: 10.1109/TVCG.2006.138

We
have previously shown that random sampling is an effective clutter
reduction technique and that a sampling lens can facilitate
focus+context viewing of particular regions. This demands an efficient
method of estimating the overlap or occlusion of large numbers of
intersecting lines in order to automatically adjust the sampling rate
within the lens. This paper proposes several ways for measuring
occlusion in parallel coordinate plots. An empirical study into the
accuracy and efficiency of the occlusion measures shows that a probabilistic approach combined with a 'binning' technique is very fast and yet approaches the accuracy of the more expensive 'true' complete measurement.
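
As one way to picture a probabilistic, binned occlusion estimate (an occupancy-style approximation, not the paper's exact estimator), the sketch below treats each bin as a fixed number of distinguishable pixel positions and estimates how many of the lines falling into it are hidden by overplotting.

```python
def estimated_overplotting(lines_per_bin, positions_per_bin=32):
    """Occupancy-style estimate: if n lines fall uniformly into c
    distinguishable positions of a bin, roughly c*(1 - (1 - 1/c)**n) of them
    remain visible; the remainder counts as occluded."""
    c = positions_per_bin
    occluded = total = 0.0
    for n in lines_per_bin:
        visible = c * (1.0 - (1.0 - 1.0 / c) ** n)
        occluded += max(0.0, n - visible)
        total += n
    return occluded / total if total else 0.0
```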

Topographic Visualization of Prefix Propagation in the Internet
Pier Francesco Cortese, Giuseppe Di Battista, Antonello Moneta, Maurizio Patrignani, Maurizio Pizzonia
Pages: 725-732
doi: 10.1109/TVCG.2006.185

We propose a new metaphor for the visualization of prefix propagation in the Internet. The metaphor is based on the concept of a topographic map and highlights the relative importance of the Internet Service Providers (ISPs) involved in the routing of the prefix. Based on the new metaphor, we propose an algorithm for computing layouts and experiment with it on a test suite taken from the real Internet. The paper extends the visualization approach of the BGPlay service, an Internet routing monitoring tool widely used by ISP operators.

Network Visualization by Semantic Substrates
Ben Shneiderman, Aleks Aris
Pages: 733-740
doi: 10.1109/TVCG.2006.166

Networks have remained a challenge for information visualization designers because of the complex issues of node and link layout coupled with the rich set of tasks that users present. This paper offers a strategy based on two principles: (1) layouts are based on user-defined semantic substrates, which are non-overlapping regions in which node placement is based on node attributes, and (2) users interactively adjust sliders to control link visibility to limit clutter and thus ensure comprehensibility of source and destination. Scalability is further facilitated by user control of which nodes are visible. We illustrate our semantic substrates approach as implemented in NVSS 1.0 with legal precedent data for up to 1122 court cases in three regions with 7645 legal citations.

Hierarchical Edge Bundles: Visualization of Adjacency Relations in Hierarchical Data
Danny Holten
Pages: 741-748
doi: 10.1109/TVCG.2006.147

A compound
graph is a frequently encountered type of data set. Relations are given
between items, and a hierarchy is defined on the items as well. We
present a new method for visualizing such compound graphs. Our approach
is based on visually bundling the adjacency edges, i.e.,
non-hierarchical edges, together. We realize this as follows. We assume
that the hierarchy is shown via a standard tree visualization method.
Next, we bend each adjacency edge, modeled as a B-spline curve, toward
the polyline defined by the path via the inclusion edges from one node
to another. This hierarchical bundling reduces visual clutter and also
visualizes implicit adjacency edges between parent nodes that are the
result of explicit adjacency edges between their respective child nodes.
Furthermore, hierarchical edge bundling is a generic method which can
be used in conjunction with existing tree visualization techniques. We
illustrate our technique by providing example visualizations and discuss
the results based on an informal evaluation provided by potential users
of such visualizations.
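
A central step of this bundling idea is to take the node positions along the hierarchy path between the two endpoints as spline control points and straighten them toward the straight endpoint-to-endpoint line by a bundling strength beta. The sketch below shows that blending step only; it is a plausible reading of the method, not a verbatim transcription, and leaves layout and B-spline evaluation to other code.

```python
def bundled_control_points(path_positions, beta=0.85):
    """Blend the control polygon given by the hierarchy path (leaf, ancestors
    via the common ancestor, leaf) with the straight endpoint-to-endpoint
    line; beta = 1 bundles fully, beta = 0 gives straight edges."""
    (x0, y0), (xn, yn) = path_positions[0], path_positions[-1]
    n = len(path_positions) - 1
    blended = []
    for i, (x, y) in enumerate(path_positions):
        t = i / n
        sx, sy = x0 + t * (xn - x0), y0 + t * (yn - y0)  # point on the straight line
        blended.append((beta * x + (1 - beta) * sx,
                        beta * y + (1 - beta) * sy))
    return blended  # pass these to any B-spline evaluator
```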

Visualization of Geo-spatial Point Sets via Global Shape Transformation and Local Pixel Placement
Christian Panse, Mike Sips, Daniel Keim, Stephen North
Pages: 749-756
doi: 10.1109/TVCG.2006.198

In many applications, data is collected and indexed by geo-spatial location. Discovering interesting patterns through visualization is an important way of gaining insight about such data. A previously proposed approach is to apply local placement functions such as PixelMaps, which transform the input data set into a solution set that preserves certain constraints while making interesting patterns more obvious and avoiding data loss from overplotting. In our experience, this family of spatial transformations can reveal fine structures in large point sets, but it is sometimes difficult to relate those structures to basic geographic features such as cities and regional boundaries. Recent information visualization research has addressed other types of transformation functions that produce spatially transformed maps with recognizable shapes. These types of spatial transformations are called global shape functions; in particular, cartogram-based map distortion has been studied. On the other hand, cartogram-based distortion does not handle point sets readily. In this study, we present a framework that allows the user to specify a global shape function and a local placement function. We combine cartogram-based layout (global shape) with PixelMaps (local placement), obtaining some of the benefits of each toward improved exploration of dense geo-spatial data sets.

Worldmapper: The World as You've Never Seen it Before
Danny Dorling, Anna Barford, Mark Newman
Pages: 757-764
doi: 10.1109/TVCG.2006.202

This paper describes the Worldmapper Project, which makes use of novel visualization techniques to represent a broad variety of social and economic data about the countries of the world. The goal of the project is to use the map projections known as cartograms to depict comparisons and relations between different territories, and its execution raises many interesting design challenges that were not all apparent at the outset. We discuss the approaches taken towards these challenges, some of which may have considerably broader application. We conclude by commenting on the positive initial response to the Worldmapper images published on the web, which we believe is due, at least in part, to the particular effectiveness of the cartogram as a tool for communicating quantitative geographic data.

Spatial Analysis of News Sources
Andrew Mehler, Yunfan Bao, Xin Li, Yue Wang, Steven Skiena
Pages: 765-772
doi: 10.1109/TVCG.2006.179

People
in different places talk about different things. This interest
distribution is reflected by the newspaper articles circulated in a
particular area. We use data from our large-scale newspaper analysis
system (Lydia) to make entity datamaps, a spatial visualization of the
interest in a given named entity. Our goal is to identify entities which
display regional biases. We develop a model for estimating the frequency
of reference of an entity in any given city from the reference
frequency centered in surrounding cities, and techniques for evaluating
the spatial significance of this distribution.

Dynamic Map Labeling
Ken Been, Eli Daiches, Chee Yap
Pages: 773-780
doi: 10.1109/TVCG.2006.136

We address the problem of filtering, selecting and placing labels on a dynamic map, which is characterized by continuous zooming and panning capabilities. This consists of two interrelated issues. The first is to avoid label popping and other artifacts that cause confusion and interrupt navigation, and the second is to label at interactive speed. In most formulations the static map labeling problem is NP-hard, and a fast approximation might have O(n log n) complexity. Even this is too slow during interaction, when the number of labels shown can be several orders of magnitude less than the number in the map. In this paper we introduce a set of desiderata for "consistent" dynamic map labeling, which has qualities desirable for navigation. We develop a new framework for dynamic labeling that achieves the desiderata and allows for fast interactive display by moving all of the selection and placement decisions into the preprocessing phase. This framework is general enough to accommodate a variety of selection and placement algorithms. It does not appear possible to achieve our desiderata using previous frameworks. Prior to this paper, there were no formal models of dynamic maps or of dynamic labels; our paper introduces both. We formulate a general optimization problem for dynamic map labeling and give a solution to a simple version of the problem. The simple version is based on label priorities and a versatile and intuitive class of dynamic label placements we call "invariant point placements". Despite these restrictions, our approach gives a useful and practical solution. Our implementation is incorporated into the G-Vis system, which is a full-detail dynamic map of the continental USA. This demo is available through any browser.

Visualization of Barrier Tree Sequences
Christian Heine, Gerik Scheuermann, Christoph Flamm, Ivo L. Hofacker, Peter F. Stadler
Pages: 781-788
doi: 10.1109/TVCG.2006.196

Dynamical
models that explain the formation of spatial structures of RNA
molecules have reached a complexity that requires novel visualization
methods that help to analyze the validity of these models. Here, we
focus on the visualization of so-called folding landscapes of a growing
RNA molecule. Folding landscapes describe the energy of a molecule as a
function of its spatial configuration; thus they are huge and high
dimensional. Their most salient features, however, are encapsulated by
their so-called barrier tree that reflects the local minima and their
connecting saddle points. For each length of the growing RNA chain there
exists a folding landscape. We visualize the sequence of folding
landscapes by an animation of the corresponding barrier trees. To
generate the animation, we adapt the foresight layout with tolerance
algorithm for general dynamic graph layout problems. Since it is very
general, we give a detailed description of each phase: constructing a
supergraph for the trees, layout of that supergraph using a modified DOT
algorithm, and presentation techniques for the final animation.

Visualizing Business Data with Generalized Treemaps
Roel Vliegen, Jarke J. van Wijk, Erik-Jan van der Linden
Pages: 789-796
doi: 10.1109/TVCG.2006.200

Business data is often presented using simple business graphics. These familiar visualizations are effective for providing overviews, but fall short for the presentation of large amounts of detailed information. Treemaps can provide such detail, but are often not easy to understand. We show how standard treemap algorithms can be adapted such that the results mimic familiar business graphics. Specifically, we present the use of different layout algorithms per level, a number of variations of the squarified algorithm, the use of variable borders, and the use of non-rectangular shapes. The combined use of these leads to histograms, pie charts and a variety of other styles.

FacetMap: A Scalable Search and Browse Visualization
Greg Smith, Mary Czerwinski, Brian Meyers, Daniel Robbins, George Robertson, Desney S. Tan
Pages: 797-804
doi: 10.1109/TVCG.2006.142

The
dominant paradigm for searching and browsing large data stores is
text-based: presenting a scrollable list of search results in response
to textual search term input. While this works well for the Web, there
is opportunity for improvement in the domain of personal information
stores, which tend to have more heterogeneous data and richer metadata.
In this paper, we introduce FacetMap, an interactive, query-driven
visualization, generalizable to a wide range of metadata-rich data
stores. FacetMap uses a visual metaphor for both input (selection of
metadata facets as filters) and output. Results of a user study provide
insight into tradeoffs between FacetMap's graphical approach and the
traditional text-oriented approach.

Visual Exploration of Complex Time-Varying Graphs
Gautam Kumar, Michael Garland
Pages: 805-812
doi: 10.1109/TVCG.2006.193

Many
graph drawing and visualization algorithms, such as force-directed
layout and line-dot rendering, work very well on relatively small and
sparse graphs. However, they often produce extremely tangled results and
exhibit impractical running times for highly non-planar graphs with
large edge density. And very few graph layout algorithms support dynamic
time-varying graphs; applying them independently to each frame produces
distracting temporally incoherent visualizations. We have developed a
new visualization technique based on a novel approach to hierarchically
structuring dense graphs via stratification. Using this structure, we
formulate a hierarchical force-directed layout algorithm that is both
efficient and produces quality graph layouts. The stratification of the
graph also allows us to present views of the data that abstract away
many small details of its structure. Rather than displaying all edges
and nodes at once, resulting in a convoluted rendering, we present an
interactive tool that filters edges and nodes using the graph hierarchy
and allows users to drill down into the graph for details. Our layout
algorithm also accommodates time-varying graphs in a natural way,
producing a temporally coherent animation that can be used to analyze
and extract trends from dynamic graph data. For example, we demonstrate
the use of our method to explore financial correlation data for the U.S.
stock market in the period from 1990 to 2005. The user can easily
analyze the time-varying correlation graph of the market, uncovering
information such as market sector trends, representative stocks for
portfolio construction, and the interrelationship of stocks over time.

Smashing Peacocks Further: Drawing Quasi-Trees from Biconnected Components
Daniel Archambault, Tamara Munzner, David Auber
Pages: 813-820
doi: 10.1109/TVCG.2006.177

Quasi-trees,
namely graphs with tree-like structure, appear in many application
domains, including bioinformatics and computer networks. Our new SPF
approach exploits the structure of these graphs with a two-level
approach to drawing, where the graph is decomposed into a tree of
biconnected components. The low-level biconnected components are drawn
with a force-directed approach that uses a spanning tree skeleton as a
starting point for the layout. The higher-level structure of the graph
is a true tree with meta-nodes of variable size that contain each
biconnected component. That tree is drawn with a new area-aware variant
of a tree drawing algorithm that handles high-degree nodes gracefully,
at the cost of allowing edge-node overlaps. SPF performs an order of
magnitude faster than the best previous approaches, while producing
drawings of commensurate or improved quality.

IPSep-CoLa: An Incremental Procedure for Separation Constraint Layout of Graphs
Tim Dwyer, Yehuda Koren, Kim Marriott
Pages: 821-828
doi: 10.1109/TVCG.2006.156

We
extend the popular force-directed approach to network (or graph) layout
to allow separation constraints, which enforce a minimum horizontal or
vertical separation between selected pairs of nodes. This simple class
of linear constraints is expressive enough to satisfy a wide variety of
application-specific layout requirements, including: layout of directed
graphs to better show flow; layout with non-overlapping node labels; and
layout of graphs with grouped nodes (called clusters). In the stress
majorization force-directed layout process, separation constraints can
be treated as a quadratic programming problem. We give an incremental
algorithm based on gradient projection for efficiently solving this
problem. The algorithm is considerably faster than using generic
constraint optimization techniques and is comparable in speed to
unconstrained stress majorization. We demonstrate the utility of our
technique with sample data from a number of practical applications
including gene-activation networks, terrorist networks and visualization
of high-dimensional data.
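
The abstract frames constrained layout as a quadratic program solved by gradient projection; a minimal sketch of the projection idea for one axis is below. Equal node weights and a few Gauss-Seidel-style sweeps are simplifying assumptions, and the paper's incremental algorithm is more involved.

```python
def satisfy_separations(x, constraints, sweeps=10):
    """Project 1D positions onto separation constraints x[j] - x[i] >= gap by
    repeatedly fixing each violated constraint with the smallest symmetric
    move (equal node weights assumed). Intended to run between unconstrained
    stress-majorization descent steps."""
    for _ in range(sweeps):
        for i, j, gap in constraints:
            violation = gap - (x[j] - x[i])
            if violation > 0:
                x[i] -= 0.5 * violation
                x[j] += 0.5 * violation
    return x

# e.g. keep node 2 at least 40 units to the right of node 0:
# x = satisfy_separations([0.0, 10.0, 20.0], [(0, 2, 40.0)])
```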

User Interaction with Scatterplots on Small Screens - A Comparative Evaluation of Geometric-Semantic Zoom and Fisheye Distortion
Thorsten Buering, Jens Gerken, Harald Reiterer
Pages: 829-836
doi: 10.1109/TVCG.2006.187

Existing
information-visualization techniques that target small screens are
usually limited to exploring a few hundred items. In this article we
present a scatterplot tool for Personal Digital Assistants that allows
the handling of many thousands of items. The application's scalability
is achieved by incorporating two alternative interaction techniques: a
geometric-semantic zoom that provides smooth transition between overview
and detail, and a fisheye distortion that displays the focus and
context regions of the scatterplot in a single view. A user study with
24 participants was conducted to compare the usability and efficiency of
both techniques when searching a book database containing 7500 items.
The study was run on a pen-driven Wacom board simulating a PDA
interface. While the results showed no significant difference in
task-completion times, a clear majority of 20 users preferred the
fisheye view over the zoom interaction. In addition, other dependent
variables such as user satisfaction and subjective rating of orientation
and navigation support revealed a preference for the fisheye
distortion. These findings partly contradict related research and
indicate that, when using a small screen, users place higher value on
the ability to preserve navigational context than they do on the ease of
use of a simplistic, metaphor-based interaction style.

The Perceptual Scalability of Visualization
Beth Yost, Chris North
Pages: 837-844
doi: 10.1109/TVCG.2006.184

Larger,
higher resolution displays can be used to increase the scalability of
information visualizations. But just how much can scalability increase
using larger displays before hitting human perceptual or cognitive
limits? Are the same visualization techniques that are good on a single
monitor also the techniques that are best when they are scaled up using
large, high-resolution displays? To answer these questions we performed a
controlled experiment on user performance time, accuracy, and
subjective workload when scaling up data quantity with different
space-time-attribute visualizations using a large, tiled display. Twelve
college students used small multiples, embedded bar matrices, and
embedded time-series graphs either on a 2 megapixel (Mp) display or with
data scaled up using a 32 Mp tiled display. Participants performed
various overview and detail tasks on geospatially-referenced
multidimensional time-series data. Results showed that current designs
are perceptually scalable because they result in a decrease in task
completion time when normalized per number of data attributes along with
no decrease in accuracy. It appears that, for the visualizations
selected for this study, the relative comparison between designs is
generally consistent between display sizes. However, results also
suggest that encoding is more important on a smaller display while
spatial grouping is more important on a larger display. Some suggestions
for designers are provided based on our experience designing
visualizations for large displays.

Complex Logarithmic Views for Small Details in Large Contexts
Joachim Bottger, Michael Balzer, Oliver Deussen
Pages: 845-852
doi: 10.1109/TVCG.2006.126

Commonly
known detail-in-context techniques for the two-dimensional Euclidean
space enlarge details and shrink their context using mapping functions
that introduce geometrical compression. This makes it difficult or even
impossible to recognize shapes for large differences in magnification
factors. In this paper we propose to use the complex logarithm and the
complex root functions to show very small details even in very large
contexts. These mappings are conformal, which means they only locally
rotate and scale, thus keeping shapes intact and recognizable. They
allow showing details that are orders of magnitude smaller than their
surroundings in combination with their context in one seamless
visualization. We address the utilization of this universal technique
for the interaction with complex two-dimensional data considering the
exploration of large graphs and other examples.
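
The mapping itself is compact; a minimal sketch of sending a 2D point through the complex logarithm around a focus point is below. The focus argument and scaling conventions are assumptions, and the sketch leaves interaction and rendering aside.

```python
import cmath

def complex_log_view(x, y, focus=(0.0, 0.0)):
    """Map a point into the detail-in-context view via w = log(z - focus).
    Magnification falls off with distance from the focus, and because the map
    is conformal, shapes are only locally rotated and scaled."""
    z = complex(x - focus[0], y - focus[1])
    if z == 0:
        raise ValueError("the focus point itself has no image under the log map")
    w = cmath.log(z)  # real part: log of the distance, imaginary part: angle
    return w.real, w.imag
```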

Software Design Patterns for Information Visualization
Jeffrey Heer, Maneesh Agrawala
Pages: 853-860
doi: 10.1109/TVCG.2006.178

Despite
a diversity of software architectures supporting information
visualization, it is often difficult to identify, evaluate, and re-apply
the design solutions implemented within such frameworks. One popular
and effective approach for addressing such difficulties is to capture
successful solutions in design patterns, abstract descriptions of
interacting software components that can be customized to solve design
problems within a particular context. Based upon a review of existing
frameworks and our own experiences building visualization software, we
present a series of design patterns for the domain of information
visualization. We discuss the structure, context of use, and
interrelations of patterns spanning data representation, graphics, and
interaction. By representing design knowledge in a reusable form, these
patterns can be used to facilitate software design, implementation, and
evaluation, and improve developer education and communication.

A Pipeline for Computer Aided Polyp Detection
Wei Hong, Feng Qiu, Arie Kaufman
Pages: 861-868
doi: 10.1109/TVCG.2006.112

We
present a novel pipeline for computer-aided detection (CAD) of colonic
polyps by integrating texture and shape analysis with volume rendering
and conformal colon flattening. Using our automatic method, the 3D polyp
detection problem is converted into a 2D pattern recognition problem.
The colon surface is first segmented and extracted from the CT data set
of the patient's abdomen, which is then mapped to a 2D rectangle using
conformal mapping. This flattened image is rendered using a direct
volume rendering technique with a translucent electronic biopsy transfer
function. The polyps are detected by a 2D clustering method on the
flattened image. The false positives are further reduced by analyzing
the volumetric shape and texture features. Compared with shape based
methods, our method is much more efficient without the need of computing
curvature and other shape parameters for the whole colon surface. The
final detection results are stored in the 2D image, which can be easily
incorporated into a virtual colonoscopy (VC) system to highlight the
polyp locations. The extracted colon surface mesh can be used to
accelerate the volumetric ray casting algorithm used to generate the VC
endoscopic view. The proposed automatic CAD pipeline is incorporated
into an interactive VC system, with a goal of helping radiologists
detect polyps faster and with higher accuracy.

Full Body Virtual Autopsies using a State-of-the-art Volume Rendering Pipeline
Patric Ljung, Calle Winskog, Anders Persson, Claes Lundstrom, Anders Ynnerman
Pages: 869-876
doi: 10.1109/TVCG.2006.146

This
paper presents a procedure for virtual autopsies based on interactive
3D visualizations of large scale, high resolution data from CT-scans of
human cadavers. The procedure is described using examples from forensic
medicine and the added value and future potential of virtual autopsies
are shown from a medical and forensic perspective. Based on the technical demands of the procedure, state-of-the-art volume rendering techniques
are applied and refined to enable real-time, full body virtual autopsies
involving gigabyte sized data on standard GPUs. The techniques applied
include transfer function based data reduction using level-of-detail
selection and multi-resolution rendering techniques. The paper also
describes a data management component for large, out-of-core data sets
and an extension to the GPU-based raycaster for efficient dual TF
rendering. Detailed benchmarks of the pipeline are presented using data sets from forensic cases.

Real-Time Illustration of Vascular Structures
Felix Ritter, Christian Hansen, Volker Dicken, Olaf Konrad, Bernhard Preim, Heinz-Otto Peitgen
Pages: 877-884
doi: 10.1109/TVCG.2006.172

We
present real-time vascular visualization methods, which extend on
illustrative rendering techniques to particularly accentuate spatial
depth and to improve the perceptive separation of important vascular
properties such as branching level and supply area. The resulting
visualization can be, and has already been, used for direct projection on a patient's organ in the operating theater, where the varying absorption
and reflection characteristics of the surface limit the use of color.
The important contributions of our work are a GPU-based hatching
algorithm for complex tubular structures that emphasizes shape and depth
as well as GPU-accelerated shadow-like depth indicators, which enable
reliable comparisons of depth distances in a static monoscopic 3D
visualization. In addition, we verify the expressiveness of our
illustration methods in a large, quantitative study with 160 subjects.

Lines of Curvature for Polyp Detection in Virtual Colonoscopy
Lingxiao Zhao, Charl Botha, Javier Bescos, Roel Truyen, Frans Vos, Frits Post
Pages: 885-892
doi: 10.1109/TVCG.2006.158

Computer-aided
diagnosis (CAD) is a helpful addition to laborious visual inspection
for preselection of suspected colonic polyps in virtual colonoscopy.
Most of the previous work on automatic polyp detection makes use of
indicators based on the scalar curvature of the colon wall and can
result in many false-positive detections. Our work tries to reduce the
number of false-positive detections in the preselection of polyp
candidates. Polyp surface shape can be characterized and visualized using
lines of curvature. In this paper, we describe techniques for
generating and rendering lines of curvature on surfaces and we show that
these lines can be used as part of a polyp detection approach. We have
adapted existing approaches on explicit triangular surface meshes, and
developed a new algorithm on implicit surfaces embedded in 3D volume
data. The visualization of shaded colonic surfaces can be enhanced by
rendering the derived lines of curvature on these surfaces. Features
strongly correlated with true-positive detections were calculated on
lines of curvature and used for the polyp candidate selection. We
studied the performance of these features on 5 data sets that included
331 pre-detected candidates, of which 50 sites were true polyps. The
winding angle had a significant discriminating power for true-positive
detections, which was demonstrated by a Wilcoxon rank sum test with
p<0.001. The median winding angle and inter-quartile range (IQR) for
true polyps were 7.817 and 6.770-9.288 compared to 2.954 and 1.995-3.749
for false-positive detections.
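
The winding angle was the strongest discriminator in the study; as a generic illustration (the paper's exact definition may differ), the sketch below accumulates the unsigned turning angle along a polyline sampled from a line of curvature.

```python
import math

def winding_angle(points):
    """Total unsigned turning angle (radians) along a 3D polyline; large
    values flag curves that wrap around a protrusion such as a polyp."""
    total = 0.0
    for a, b, c in zip(points, points[1:], points[2:]):
        v1 = tuple(q - p for p, q in zip(a, b))
        v2 = tuple(q - p for p, q in zip(b, c))
        n1 = math.sqrt(sum(u * u for u in v1))
        n2 = math.sqrt(sum(u * u for u in v2))
        if n1 == 0 or n2 == 0:
            continue  # skip degenerate (repeated) samples
        cos_t = sum(u * v for u, v in zip(v1, v2)) / (n1 * n2)
        total += math.acos(max(-1.0, min(1.0, cos_t)))
    return total
```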

Outlier-Preserving Focus+Context Visualization in Parallel Coordinates
Matej Novotny, Helwig Hauser
Pages: 893-900
doi: 10.1109/TVCG.2006.170

Focus+context
visualization integrates a visually accentuated representation of
selected data items in focus (more details, more opacity, etc.) with a
visually deemphasized representation of the rest of the data, i.e., the
context. The role of context visualization is to provide an overview of
the data for improved user orientation and improved navigation. A good
overview comprises the representation of both outliers and trends. Up to
now, however, context visualization has not treated outliers
sufficiently. In this paper we present a new approach to focus+context
visualization in parallel coordinates which is truthful to outliers in
the sense that small-scale features are detected before visualization
and then treated specially during context visualization. Generally, we
present a solution which enables context visualization at several levels
of abstraction, both for the representation of outliers and trends. We
introduce outlier detection and context generation to parallel
coordinates on the basis of a binned data representation. This leads to
an output-oriented visualization approach which means that only those
parts of the visualization process are executed which actually affect
the final rendering. Accordingly, the performance of this solution is
much more dependent on the visualization size than on the data size
which makes it especially interesting for large datasets. Previous
approaches are outperformed; the new solution was successfully applied to datasets with up to 3 million data records and up to 50 dimensions.

Composite Rectilinear Deformation for Stretch and Squish Navigation
James Slack, Tamara Munzner
Pages: 901-908
doi: 10.1109/TVCG.2006.127

We
present the first scalable algorithm that supports the composition of
successive rectilinear deformations. Earlier systems that provided stretch and squish navigation could only handle small datasets. More
recent work featuring rubber sheet navigation for large datasets has
focused on rendering and on application-specific issues. However, no
algorithm has yet been presented for carrying out such navigation
methods; our paper addresses this problem. For maximum flexibility with
large datasets, a stretch and squish navigation algorithm should allow
for millions of potentially deformable regions. However, typical usage
only changes the extents of a small subset k of these n regions at a
time. The challenge is to avoid computations that are linear in n,
because a single deformation can affect the absolute screen-space
location of every deformable region. We provide an O(k log n) algorithm
that supports any application that can lay out a dataset on a generic
grid, and show an implementation that allows navigation of trees and
gene sequences with millions of items in sub-millisecond time.
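
The paper's contribution is the composite-deformation algorithm itself; purely as a generic illustration of why logarithmic-time updates over millions of regions are achievable at all (not the authors' data structure), the sketch below keeps per-region widths along one axis in a Fenwick tree, so changing one region's extent and querying any region's absolute screen offset both cost O(log n).

```python
class RegionWidths:
    """Fenwick (binary indexed) tree over per-region widths along one axis."""

    def __init__(self, widths):
        self.n = len(widths)
        self.tree = [0.0] * (self.n + 1)
        for i, w in enumerate(widths):
            self._add(i, w)

    def _add(self, i, delta):
        i += 1
        while i <= self.n:
            self.tree[i] += delta
            i += i & -i

    def offset(self, i):
        """Left screen edge of region i = sum of widths of regions 0..i-1."""
        total = 0.0
        while i > 0:
            total += self.tree[i]
            i -= i & -i
        return total

    def set_width(self, i, new_width):
        """Stretch or squish region i in O(log n)."""
        current = self.offset(i + 1) - self.offset(i)
        self._add(i, new_width - current)

# regions = RegionWidths([10.0] * 1_000_000); regions.set_width(42, 300.0)
# regions.offset(43) then reflects the stretched region immediately.
```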

Multi-variate, Time Varying, and Comparative Visualization with Contextual Cues
Jonathan Woodring, Han-Wei Shen
Pages: 909-916
doi: 10.1109/TVCG.2006.164

Time-varying,
multi-variate, and comparative data sets are not easily visualized due
to the amount of data that is presented to the user at once. By
combining several volumes together with different operators into one
visualized volume, the user is able to compare values from different
data sets in space over time, run, or field without having to mentally
switch between different renderings of individual data sets. In this
paper, we propose using a volume shader where the user is given the
ability to easily select and operate on many data volumes to create
comparison relationships. The user specifies an expression with set and
numerical operations and her data to see relationships between data
fields. Furthermore, we render the contextual information of the volume
shader by converting it to a volume tree. We visualize the different
levels and nodes of the volume tree so that the user can see the results
of suboperations. This gives the user a deeper understanding of the
final visualization, by seeing how the parts of the whole are
operationally constructed.
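
As a toy illustration of evaluating such a comparison expression as a tree over co-registered volumes (the paper's shader-based evaluation and operator set are richer), the sketch below combines NumPy arrays with element-wise operators; the names and operator list are assumptions.

```python
import numpy as np

OPS = {"add": np.add, "sub": np.subtract,
       "min": np.minimum, "max": np.maximum}

def eval_volume_tree(node, volumes):
    """Leaves are volume names; inner nodes are (op, left, right) tuples.
    Evaluating the root yields the composite volume to render, while
    evaluating inner nodes yields the intermediate results for context."""
    if isinstance(node, str):
        return volumes[node]
    op, left, right = node
    return OPS[op](eval_volume_tree(left, volumes),
                   eval_volume_tree(right, volumes))

# e.g. difference between two time steps, clamped against a reference field:
# tree = ("max", ("sub", "t1", "t0"), "reference")
# composite = eval_volume_tree(tree, {"t0": v0, "t1": v1, "reference": ref})
```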

Multifield-Graphs: An Approach to Visualizing Correlations in Multifield Scalar Data
Natascha Sauber, Holger Theisel, Hans-Peter Seidel
Pages: 917-924
doi: 10.1109/TVCG.2006.165

We present an approach to visualizing correlations in 3D multifield scalar data. The core of our approach is the computation of correlation fields, which are scalar fields containing the local correlations of subsets of the multiple fields. While the visualization of the correlation fields can be done using standard 3D volume visualization techniques, their huge number makes selection and handling a challenge. We introduce the Multifield-Graph to give an overview of which multiple fields correlate and to show the strength of their correlation. This information guides the selection of informative correlation fields for visualization. We use our approach to visually analyze a number of real and synthetic multifield datasets.
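
A correlation field for two fields can be sketched by sliding a small window over two co-registered scalar volumes and storing the local Pearson correlation at each voxel; the brute-force version below is only meant to convey the idea. The window size and boundary handling are assumptions, and the paper also treats subsets of more than two fields.

```python
import numpy as np

def correlation_field(field_a, field_b, radius=2):
    """Local Pearson correlation of two scalar volumes inside a cubic window
    of the given radius around every voxel (brute force, for clarity)."""
    out = np.zeros(field_a.shape, dtype=float)
    for idx in np.ndindex(field_a.shape):
        window = tuple(slice(max(0, i - radius), i + radius + 1) for i in idx)
        a = field_a[window].ravel()
        b = field_b[window].ravel()
        if a.std() > 0.0 and b.std() > 0.0:
            out[idx] = np.corrcoef(a, b)[0, 1]
    return out
```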

Saliency-guided Enhancement for Volume Visualization
Youngmin Kim, Amitabh Varshney
Pages: 925-932
doi: 10.1109/TVCG.2006.174

Recent
research in visual saliency has established a computational measure of
perceptual importance. In this paper we present a visual-saliency-based
operator to enhance selected regions of a volume. We show how we use
such an operator on a user-specified saliency field to compute an
emphasis field. We further discuss how the emphasis field can be
integrated into the visualization pipeline through its modifications of
regional luminance and chrominance. Finally, we validate our work using
an eye-tracking-based user study and show that our new saliency
enhancement operator is more effective at eliciting viewer attention
than the traditional Gaussian enhancement operator.

Importance-Driven Focus of Attention
Ivan Viola, Miquel Feixas, Mateu Sbert, Meister Eduard Groller
Pages: 933-940
doi: 10.1109/TVCG.2006.152

This
paper introduces a concept for automatic focusing on features within a
volumetric data set. The user selects a focus, i.e., object of interest,
from a set of pre-defined features. Our system automatically determines
the most expressive view on this feature. A characteristic viewpoint is
estimated by a novel information-theoretic framework which is based on
the mutual information measure. Viewpoints change smoothly by switching
the focus from one feature to another one. This mechanism is controlled
by changes in the importance distribution among features in the volume.
The highest importance is assigned to the feature in focus. Apart from
viewpoint selection, the focusing mechanism also steers visual emphasis
by assigning a visually more prominent representation. To allow a clear
view on features that are normally occluded by other parts of the
volume, the focusing, for example, incorporates cut-away views.

ClearView: An Interactive Context Preserving Hotspot Visualization Technique
Jens Kruger, Jens Schneider, Rudiger Westermann
Pages: 941-948
doi: 10.1109/TVCG.2006.124

Volume
rendered imagery often includes a barrage of 3D information like shape,
appearance and topology of complex structures, and it thus quickly
overwhelms the user. In particular, when focusing on a specific region a
user cannot observe the relationship between various structures unless
he has a mental picture of the entire data. In this paper we present
ClearView, a GPU-based, interactive framework for texture-based volume
ray-casting that allows users who do not have the visualization skills
for this mental exercise to quickly obtain a picture of the data in a
very intuitive and user-friendly way. ClearView is designed to enable
the user to focus on particular areas in the data while preserving
context information without visual clutter. ClearView does not require
additional feature volumes as it derives any features in the data from
image information only. A simple point-and-click interface enables the
user to interactively highlight structures in the data. ClearView
provides an easy to use interface to complex volumetric data as it only
uses transparency in combination with a few specific shaders to convey
focus and context information.

Visualization Tools for Vorticity Transport Analysis in Incompressible Flow
Filip Sadlo, Ronald Peikert, Mirjam Sick
Pages: 949-956
doi: 10.1109/TVCG.2006.199

Vortices
are undesirable in many applications while indispensable in others. It
is therefore of common interest to understand their mechanisms of
creation. This paper aims at analyzing the transport of vorticity inside
incompressible flow. The analysis is based on the vorticity equation
and is performed along pathlines, which are typically started in the upstream
direction from vortex regions. Different methods for the quantitative
and explorative analysis of vorticity transport are presented and
applied to CFD simulations of water turbines. Simulation quality is
accounted for by including the errors of meshing and convergence into
analysis and visualization. The obtained results are discussed and
interpretations with respect to engineering questions are given.

Vortex Visualization for Practical Engineering Applications
Monika Jankun-Kelly, Ming Jiang, David Thompson, Raghu Machiraju
Pages: 957-964
doi: 10.1109/TVCG.2006.201

In
order to understand complex vortical flows in large data sets, we must
be able to detect and visualize vortices in an automated fashion. In
this paper, we present a feature-based vortex detection and
visualization technique that is appropriate for large computational
fluid dynamics data sets computed on unstructured meshes. In particular,
we focus on the application of this technique to visualization of the
flow over a serrated wing and the flow field around a spinning missile
with dithering canards. We have developed a core line extraction
technique based on the observation that vortex cores coincide with local
extrema in certain scalar fields. We also have developed a novel
technique to handle complex vortex topology that is based on k-means
clustering. These techniques facilitate visualization of vortices in
simulation data that may not be optimally resolved or sampled. Results
are included that highlight the strengths and weaknesses of our
approach. We conclude by describing how our approach can be improved to
enhance robustness and expand its range of applicability.
|
|
An Advanced Evenly-Spaced Streamline Placement Algorithm |
|
Zhanping Liu,
Robert Moorhead,
Joe Groner
|
|
Pages: 965-972 |
|
doi>10.1109/TVCG.2006.116 |
|
Available formats:
Publisher Site
|
|
This
paper presents an advanced evenly-spaced streamline placement algorithm
for fast, high-quality, and robust layout of flow lines. A fourth-order
Runge-Kutta integrator with adaptive step size and error control is
employed for rapid accurate streamline advection. Cubic Hermite
polynomial interpolation with large sample-spacing is adopted to create
fewer evenly-spaced samples along each streamline to reduce the amount
of distance checking. We propose two methods to enhance placement
quality. Double queues are used to prioritize topological seeding and to
favor long streamlines to minimize discontinuities. Adaptive distance
control based on the local flow variance is explored to reduce cavities.
Furthermore, we propose a universal, effective, fast, and robust loop
detection strategy to address closed and spiraling streamlines. Our
algorithm is an order of magnitude faster than Jobard and Lefer's
algorithm [8] with better placement quality, and over 5 times faster than
Mebarki et al.'s algorithm [9] with comparable placement quality but a
more robust solution to loop detection.
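A minimal sketch of fourth-order Runge-Kutta advection with step-doubling error control, in the spirit of the integrator described above (the 2D field, tolerance, and step-size policy are assumptions, not the authors' code):

import numpy as np

def v(p):
    # Hypothetical steady 2D vector field (a circular flow).
    return np.array([-p[1], p[0]])

def rk4_step(p, h):
    k1 = v(p); k2 = v(p + 0.5 * h * k1)
    k3 = v(p + 0.5 * h * k2); k4 = v(p + h * k3)
    return p + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def advect(p0, h=0.1, n=200, tol=1e-5):
    """RK4 streamline advection with step-doubling error control:
    compare one full step against two half steps and adapt h."""
    p = np.asarray(p0, dtype=float)
    pts = [p.copy()]
    for _ in range(n):
        full = rk4_step(p, h)
        half = rk4_step(rk4_step(p, 0.5 * h), 0.5 * h)
        err = np.linalg.norm(full - half)
        if err > tol:
            h *= 0.5            # too inaccurate: shrink the step and retry
            continue
        p = half
        pts.append(p.copy())
        if err < tol / 32:      # very accurate: allow a larger step
            h *= 2.0
    return np.array(pts)

line = advect([1.0, 0.0])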
|
|
Fine-grained Visualization Pipelines and Lazy Functional Languages |
|
David Duke,
Malcolm Wallace,
Rita Borgo,
Colin Runciman
|
|
Pages: 973-980 |
|
doi>10.1109/TVCG.2006.145 |
|
Available formats:
Publisher Site
|
|
The
pipeline model in visualization has evolved from a conceptual model of
data processing into a widely used architecture for implementing
visualization systems. In the process, a number of capabilities have
been introduced, including streaming of data in chunks, distributed
pipelines, and demand-driven processing. Visualization systems have
invariably built on stateful programming technologies, and these
capabilities have had to be implemented explicitly within the lower
layers of a complex hierarchy of services. The good news for developers
is that applications built on top of this hierarchy can access these
capabilities without concern for how they are implemented. The bad news
is that by freezing capabilities into low-level services, expressive
power and flexibility are lost. In this paper we express visualization
systems in a programming language that more naturally supports this kind
of processing model. Lazy functional languages support fine-grained
demand-driven processing, a natural form of streaming, and pipeline-like
function composition for assembling applications. The technology thus
appears well suited to visualization applications. Using surface
extraction algorithms as illustrative examples, and the lazy functional
language Haskell, we argue the benefits of clear and concise expression
combined with fine-grained, demand-driven computation. Just as
visualization provides insight into data, functional abstraction
provides new insight into visualization.
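The demand-driven, streaming style the paper advocates can be mimicked, in spirit only, with Python generators rather than Haskell (a deliberate language swap for illustration; the stages below are hypothetical and are not the authors' pipeline):

import numpy as np

def read_slices(volume):
    """Yield 2D slices of a 3D array one at a time (demand-driven source)."""
    for k in range(volume.shape[2]):
        yield volume[:, :, k]

def threshold(slices, iso):
    """Filter stage: lazily mark voxels above the isovalue, slice by slice."""
    for s in slices:
        yield s >= iso

def count_active(slices):
    """Sink: nothing upstream runs until this stage pulls data through."""
    return sum(int(s.sum()) for s in slices)

vol = np.random.rand(64, 64, 64)
pipeline = threshold(read_slices(vol), iso=0.8)   # nothing computed yet
print(count_active(pipeline))                     # demand drives the computation

Composing the stages is just function composition, and only one slice is resident at a time, which is the fine-grained streaming behavior the paper obtains for free from laziness.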
|
|
A Novel Visualization Model for Web Search Results |
|
Tien Nguyen,
Jun Zhang
|
|
Pages: 981-988 |
|
doi>10.1109/TVCG.2006.111 |
|
Available formats:
Publisher Site
|
|
This
paper presents an interactive visualization system, named WebSearchViz,
for visualizing the Web search results and facilitating users'
navigation and exploration. The metaphor in our model is the solar system
with its planets and asteroids revolving around the sun. Location,
color, movement, and spatial distance of objects in the visual space are
used to represent the semantic relationships between a query and
relevant Web pages. In particular, the movement of objects and their speeds
add a new dimension to the visual space, illustrating the degree of
relevance between a query and the Web search results in the context of users'
subjects of interest. By interacting with the visual space, users are
able to observe the semantic relevance between a query and a resulting
Web page with respect to their subjects of interest, context
information, or concern. Users' subjects of interest can be dynamically
changed, redefined, added, or deleted from the visual space.
|
|
A Trajectory-Preserving Synchronization Method for Collaborative Visualization |
|
Lewis W. F. Li,
Frederick W. B. Li,
Rynson W. H. Lau
|
|
Pages: 989-996 |
|
doi>10.1109/TVCG.2006.114 |
|
Available formats:
Publisher Site
|
|
In the past decade, a lot of research work has been conducted to support
collaborative visualization among remote users over the networks,
allowing them to visualize and manipulate shared data for problem
solving. There are many applications of collaborative visualization, such
as oceanography, meteorology and medical science. To facilitate user
interaction, a critical system requirement for collaborative
visualization is to ensure that remote users will perceive a synchronized
view of the shared data. Failing this requirement, the user's ability
in performing the desirable collaborative tasks will be affected. In this
paper, we propose a synchronization method to support collaborative
visualization. It considers how interaction with dynamic objects is
perceived by application participants under the existence of network
latency, and remedies the motion trajectory of the dynamic objects. It
also handles the false positive and false negative collision detection
problems. The new method is particularly well designed for handling
content changes due to unpredictable user interventions or object
collisions. We demonstrate the effectiveness of our method through a
number of experiments.
|
|
Concurrent Visualization in a Production Supercomputing Environment |
|
David Ellsworth,
Bryan Green,
Chris Henze,
Patrick Moran,
Timothy Sandstrom
|
|
Pages: 997-1004 |
|
doi>10.1109/TVCG.2006.128 |
|
Available formats:
Publisher Site
|
|
We describe a concurrent visualization pipeline designed for operation in a
production supercomputing environment. The facility was initially
developed on the NASA Ames "Columbia" supercomputer for a massively
parallel forecast model (GEOS4). During the 2005 Atlantic hurricane
season, GEOS4 was run 4 times a day under tight time constraints so that
its output could be included in an ensemble prediction that was made
available to forecasters at the National Hurricane Center. Given this
time-critical context, we designed a configurable concurrent pipeline to
visualize multiple global fields without significantly affecting the
runtime model performance or reliability. We use MPEG compression of the
accruing images to facilitate live low-bandwidth distribution of
multiple visualization streams to remote sites. We also describe the use
of our concurrent visualization framework with a global ocean
circulation model, which provides an 864-fold increase in the temporal
resolution of practically achievable animations. In both the atmospheric
and oceanic circulation models, the application scientists gained new
insights into their model dynamics, due to the high temporal resolution
animations attainable.
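A minimal sketch of the concurrent idea, assuming a synthetic field and using downsampling as a stand-in for rendering and MPEG encoding: the simulation loop keeps producing time steps while a separate process consumes and "renders" them, so visualization does not block the model.

import numpy as np
from multiprocessing import Process, Queue

def simulate(queue, steps=10, n=256):
    """Stand-in for the forecast model: emit one field per time step."""
    for t in range(steps):
        field = np.sin(np.linspace(0, 2 * np.pi, n * n).reshape(n, n) + 0.1 * t)
        queue.put((t, field))
    queue.put(None)  # sentinel: simulation finished

def visualize(queue):
    """Concurrent consumer: processes frames as they arrive."""
    while True:
        item = queue.get()
        if item is None:
            break
        t, field = item
        thumb = field[::8, ::8]              # cheap proxy for rendering/encoding
        np.save(f"frame_{t:04d}.npy", thumb)

if __name__ == "__main__":
    q = Queue()
    viz = Process(target=visualize, args=(q,))
    viz.start()
    simulate(q)      # the model keeps running while frames are consumed
    viz.join()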
|
|
Scalable WIM: Effective Exploration in Large-scale Astrophysical Environments |
|
Yinggang Li,
Chi-Wing Fu,
Andrew Hanson
|
|
Pages: 1005-1012 |
|
doi>10.1109/TVCG.2006.176 |
|
Available formats:
Publisher Site
|
|
Navigating
through large-scale virtual environments such as simulations of the
astrophysical Universe is difficult. The huge spatial range of
astronomical models and the dominance of empty space make it hard for
users to travel across cosmological scales effectively, and the problem
of wayfinding further impedes the user's ability to acquire reliable
spatial knowledge of astronomical contexts. We introduce a new
technique, the scalable world-in-miniature (WIM) map, as a unifying
interface to facilitate travel and wayfinding in a virtual
environment spanning gigantic spatial scales: Scale controls enable
smooth, rapid transitions among widely separated regions;
logarithmically mapped miniature spaces offer a global overview mode
when the full context is too large; 3D landmarks represented in the WIM
are enhanced by scale, positional, and directional cues to augment
spatial context awareness; a series of navigation models are
incorporated into the scalable WIM to improve the performance of travel
tasks posed by the unique characteristics of virtual cosmic exploration.
The scalable WIM user interface supports an improved physical
navigation experience and assists pragmatic cognitive understanding of a
visualization context that incorporates the features of large-scale
astronomy.
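One plausible reading of the logarithmically mapped miniature space is a radial log compression of distances from the viewer, so that nearby and cosmologically distant objects fit in a single overview; the constants below are illustrative assumptions, not values from the paper.

import numpy as np

def to_log_miniature(points, eye, r_min=1.0, r_max=1e9, wim_radius=1.0):
    """Map world-space points into a miniature whose radial coordinate is
    logarithmic in the distance from the eye (a sketch of the idea only)."""
    d = points - eye
    r = np.linalg.norm(d, axis=1, keepdims=True)
    dirs = d / np.maximum(r, 1e-12)
    r_clamped = np.clip(r, r_min, r_max)
    # log-compress the radius into [0, wim_radius]
    s = np.log(r_clamped / r_min) / np.log(r_max / r_min) * wim_radius
    return eye + dirs * s

pts = np.array([[10.0, 0.0, 0.0], [1e6, 0.0, 0.0], [0.0, 1e8, 0.0]])
print(to_log_miniature(pts, eye=np.zeros(3)))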
|
|
Using
Visual Cues of Contact to Improve Interactive Manipulation of Virtual
Objects in Industrial Assembly/Maintenance Simulations |
|
Jean Sreng,
Anatole Lecuyer,
Christine Megard,
Claude Andriot
|
|
Pages: 1013-1020 |
|
doi>10.1109/TVCG.2006.189 |
|
Available formats:
Publisher Site
|
|
This paper describes a set of visual cues of contact designed to improve the
interactive manipulation of virtual objects in industrial
assembly/maintenance simulations. These visual cues display information
about proximity, contact and effort between virtual objects when the user
manipulates a part inside a digital mock-up. The set of visual cues
includes the appearance of glyphs (arrow, disk, or sphere) when the
manipulated object is close to or in contact with another part of the
virtual environment. Light sources can also be added at the level of
contact points. A filtering technique is proposed to decrease the number
of glyphs displayed at the same time. Various effects - such as change in
color, change in size, and deformation of shape - can be applied to the
glyphs as a function of proximity with other objects or amplitude of the
contact forces. A preliminary evaluation was conducted to gather the
subjective preferences of a group of participants during the simulation
of an automotive assembly operation. The collected questionnaires showed
that participants globally appreciated our visual cues of contact. The
changes in color appeared to be preferred for the display of
distances and proximity information. Size changes and deformation effects
appeared to be preferred in terms of perception of contact forces
between the parts. Lastly, light sources were selected to focus the
attention of the user on the contact areas.
|
|
High-Level User Interfaces for Transfer Function Design with Semantics |
|
Christof Rezk Salama,
Maik Keller,
Peter Kohlmann
|
|
Pages: 1021-1028 |
|
doi>10.1109/TVCG.2006.148 |
|
Available formats:
Publisher Site
|
|
Many
sophisticated techniques for the visualization of volumetric data such
as medical data have been published. While existing techniques are
mature from a technical point of view, managing the complexity of visual
parameters is still difficult for non-expert users. To this end, this
paper presents new ideas to facilitate the specification of optical
properties for direct volume rendering. We introduce an additional level
of abstraction for parametric models of transfer functions. The
proposed framework allows visualization experts to design high-level
transfer function models which can intuitively be used by non-expert
users. The results are user interfaces which provide semantic
information for specialized visualization problems. The proposed method
is based on principal component analysis as well as on concepts borrowed
from computer animation.
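A minimal sketch of how principal component analysis can turn a family of expert-designed transfer-function parameter vectors into one high-level slider; the sample vectors and scaling are synthetic assumptions and this is not the paper's actual model.

import numpy as np

# Rows: low-level transfer-function parameter vectors designed by an expert
# (e.g., opacity ramp position, width, peak opacity); entirely synthetic here.
samples = np.array([[0.20, 0.05, 0.8],
                    [0.25, 0.06, 0.7],
                    [0.30, 0.04, 0.9],
                    [0.22, 0.07, 0.6]])

mean = samples.mean(axis=0)
# PCA via SVD of the centered samples.
U, S, Vt = np.linalg.svd(samples - mean, full_matrices=False)

def semantic_slider(t, component=0, scale=1.0):
    """Map one high-level slider value t in [-1, 1] back to low-level
    transfer-function parameters along a principal direction."""
    return mean + t * scale * S[component] * Vt[component]

params = semantic_slider(0.5)   # parameters a non-expert user would obtain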
|
|
LOD Map - A Visual Interface for Navigating Multiresolution Volume Visualization |
|
Chaoli Wang,
Han-Wei Shen
|
|
Pages: 1029-1036 |
|
doi>10.1109/TVCG.2006.159 |
|
Available formats:
Publisher Site
|
|
In
multiresolution volume visualization, a visual representation of
level-of-detail (LOD) quality is important for us to examine, compare,
and validate different LOD selection algorithms. While traditional
methods rely on the final rendered images for quality measurement, we introduce
the LOD map - an alternative representation of LOD quality and a visual
interface for navigating multiresolution data exploration. Our measure
for LOD quality is based on the formulation of entropy from information
theory. The measure takes into account the distortion and contribution
of multiresolution data blocks. An LOD map is generated through the
mapping of key LOD ingredients to a treemap representation. The ordered
treemap layout is used for relatively stable updates of the LOD map when
the view or LOD changes. This visual interface not only indicates the
quality of LODs in an intuitive way, but also provides immediate
suggestions for possible LOD improvement through visually-striking
features. It also allows us to compare different views and perform
rendering budget control. A set of interactive techniques is proposed to
make the LOD adjustment a simple and easy task. We demonstrate the
effectiveness and efficiency of our approach on large scientific and
medical data sets.
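The entropy flavor of the quality measure can be illustrated in a few lines: per-block distortion and contribution define a distribution whose Shannon entropy summarizes how evenly an LOD spends its budget (one plausible reading, not the paper's exact formulation; the example arrays are synthetic).

import numpy as np

def lod_entropy(contribution, distortion, eps=1e-12):
    """Shannon entropy of a per-block significance distribution.
    The product of per-block contribution and distortion is normalized into
    probabilities; higher entropy means significance is spread more evenly."""
    w = np.asarray(contribution, dtype=float) * np.asarray(distortion, dtype=float)
    p = w / max(w.sum(), eps)
    return float(-(p * np.log2(p + eps)).sum())

# An LOD whose significance is spread evenly scores higher entropy:
print(lod_entropy([1, 1, 1, 1], [0.5, 0.5, 0.5, 0.5]))      # ~2 bits
print(lod_entropy([1, 1, 1, 1], [0.97, 0.01, 0.01, 0.01]))  # much lower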
|
|
Analyzing Complex FTMS Simulations: a Case Study in High-Level Visualization of Ion Motions |
|
Wojciech Burakiewicz,
Robert van Liere
|
|
Pages: 1037-1044 |
|
doi>10.1109/TVCG.2006.118 |
|
Available formats:
Publisher Site
|
|
Current
practice in particle visualization renders particle position data
directly onto the screen as points or glyphs. Using a camera placed at a
fixed position, particle motions can be visualized by rendering
trajectories or by animations. Applying such direct techniques to large,
time dependent particle data sets often results in cluttered images in
which the dynamic properties of the underlying system are difficult to
interpret. In this case study we take an alternative approach to the
visualization of ion motions. Instead of rendering ion position data
directly, we first extract meaningful motion information from the ion
position data and then map this information onto geometric primitives.
Our goal is to produce high-level visualizations that reflect the
physicists' way of thinking about ion dynamics. Parameterized geometric
icons are defined to encode motion information of clusters of related
ions. In addition, a parameterized camera control mechanism is used to
analyze relative instead of only absolute ion motions. We apply the
techniques to simulations of Fourier transform mass spectrometry (FTMS)
experiments. The data produced by such simulations can amount to
$5\cdot10^4$ ions and $10^5$ timesteps. This paper discusses the
requirements, design and informal evaluation of the implemented system.
|
|
Detection and Visualization of Defects in 3D Unstructured Models of Nematic Liquid Crystals |
|
Ketan Mehta,
T. J. Jankun-Kelly
|
|
Pages: 1045-1052 |
|
doi>10.1109/TVCG.2006.133 |
|
Available formats:
Publisher Site
|
|
A
method for the semi-automatic detection and visualization of defects in
models of nematic liquid crystals (NLCs) is introduced; this method is
suitable for unstructured models, a previously unsolved problem. The
detected defects---also known as disclinations---are regions where
the alignment of the liquid crystal rapidly changes over space; these
defects play a large role in the physical behavior of the NLC substrate.
Defect detection is based upon a measure of total angular change of
crystal orientation (the director) over a node neighborhood via
the use of a nearest-neighbor path. Visualizations based upon the
detection algorithm clearly identify complete defect regions as
opposed to incomplete visual descriptions provided by cutting-plane and
isosurface approaches. The introduced techniques are currently in use by
scientists studying the dynamics of defect change.
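The central quantity, the total angular change of the director over a node neighborhood, can be sketched directly; since a nematic director is sign-invariant (n and -n are equivalent), each angle is taken from the absolute dot product. The neighborhood path and threshold below are illustrative assumptions, not the paper's calibrated values.

import numpy as np

def total_angular_change(directors):
    """Sum of angles between consecutive directors along a closed
    nearest-neighbor path around a node (directors treated as sign-invariant)."""
    total = 0.0
    n = len(directors)
    for i in range(n):
        a, b = directors[i], directors[(i + 1) % n]
        c = abs(np.dot(a, b)) / (np.linalg.norm(a) * np.linalg.norm(b))
        total += np.arccos(np.clip(c, -1.0, 1.0))
    return total

def is_defect(directors, threshold=0.9 * np.pi):
    """Flag the neighborhood as containing a disclination if the accumulated
    angular change exceeds a threshold (threshold value is illustrative)."""
    return total_angular_change(directors) > threshold

# A ring of directors sweeping half a turn, as around a +1/2 disclination:
ring = [np.array([np.cos(t), np.sin(t), 0.0])
        for t in np.linspace(0, np.pi, 8, endpoint=False)]
print(is_defect(ring))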
|
|
Understanding the Structure of the Turbulent Mixing Layer in Hydrodynamic Instabilities |
|
D. Laney,
P. -T. Bremer,
A. Mascarenhas,
P. Miller,
V. Pascucci
|
|
Pages: 1053-1060 |
|
doi>10.1109/TVCG.2006.186 |
|
Available formats:
Publisher Site
|
|
When
a heavy fluid is placed above a light fluid, tiny vertical
perturbations in the interface create a characteristic structure of
rising bubbles and falling spikes known as Rayleigh-Taylor instability.
Rayleigh-Taylor instabilities have received much attention over the past
half-century because of their importance in understanding many natural
and man-made phenomena, ranging from the rate of formation of heavy
elements in supernovae to the design of capsules for Inertial
Confinement Fusion. We present a new approach to analyze Rayleigh-Taylor
instabilities in which we extract a hierarchical segmentation of the
mixing envelope surface to identify bubbles and analyze analogous
segmentations of fields on the original interface plane. We compute
meaningful statistical information that reveals the evolution of
topological features and corroborates the observations made by
scientists. We also use geometric tracking to follow the evolution of
single bubbles and highlight merge/split events leading to the formation
of the large and complex structures characteristic of the later stages.
In particular, we (i) provide a formal definition of a bubble; (ii)
segment the envelope surface to identify bubbles; (iii) provide a
multi-scale analysis technique to produce statistical measures of bubble
growth; (iv) correlate bubble measurements with analysis of fields on
the interface plane; and (v) track the evolution of individual bubbles over
time. Our approach is based on the rigorous mathematical foundations of
Morse theory and can be applied to a more general class of applications.
|
|
Hub-based Simulation and Graphics Hardware Accelerated Visualization for Nanotechnology Applications |
|
Wei Qiao,
Michael McLennan,
Rick Kennell,
David Ebert,
Gerhard Klimeck
|
|
Pages: 1061-1068 |
|
doi>10.1109/TVCG.2006.150 |
|
Available formats:
Publisher Site
|
|
The
Network for Computational Nanotechnology (NCN) has developed a science
gateway at nanoHUB.org for nanotechnology education and research. Remote
users can browse through online seminars and courses, and launch
sophisticated nanotechnology simulation tools, all within their web
browser. Simulations are supported by a middleware that can route
complex jobs to grid supercomputing resources. But what is truly unique
about the middleware is the way that it uses hardware accelerated
graphics to support both problem setup and result visualization. This
paper describes the design and integration of a remote visualization
framework into the nanoHUB for interactive visual analytics of
nanotechnology simulations. Our services flexibly handle a variety of
nanoscience simulations, render them utilizing graphics hardware
acceleration in a scalable manner, and deliver them seamlessly through
the middleware to the user. Rendering is done only on-demand, as needed,
so each graphics hardware unit can simultaneously support many user
sessions. Additionally, a novel node distribution scheme further
improves our system's scalability. Our approach is not only efficient
but also cost-effective. Only a half-dozen render nodes are
anticipated to support hundreds of active tool sessions on the nanoHUB.
Moreover, this architecture and visual analytics environment provides
capabilities that can serve many areas of scientific simulation and
analysis beyond nanotechnology with its ability to interactively analyze
and visualize multivariate scalar and vector fields.
|
|
Feature Aligned Volume Manipulation for Illustration and Visualization |
|
Carlos Correa,
Deborah Silver,
Min Chen
|
|
Pages: 1069-1076 |
|
doi>10.1109/TVCG.2006.144 |
|
Available formats:
Publisher Site
|
|
In
this paper we describe a GPU-based technique for creating illustrative
visualization through interactive manipulation of volumetric models. It
is partly inspired by medical illustrations, where it is common to
depict cuts and deformation in order to provide a better understanding
of anatomical and biological structures or surgical processes, and
partly motivated by the need for a real-time solution that supports the
specification and visualization of such illustrative manipulation. We
propose two new feature-aligned techniques, namely surface alignment and
segment alignment, and compare them with the axis-aligned techniques
which were reported in previous work on volume manipulation. We also
present a mechanism for defining features using texture volumes, and
methods for computing correct normals for the deformed volume with respect
to different alignments. We describe a GPU-based implementation to
achieve real-time performance of the techniques and a collection of
manipulation operators including peelers, retractors, pliers and
dilators which are adaptations of the metaphors and tools used in
surgical procedures and medical illustrations. Our approach is directly
applicable in medical and biological illustration, and we demonstrate
how it works as an interactive tool for focus+context visualization, as
well as a generic technique for volume graphics.
|
|
Exploded Views for Volume Data |
|
Stefan Bruckner,
M. Eduard Groller
|
|
Pages: 1077-1084 |
|
doi>10.1109/TVCG.2006.140 |
|
Available formats:
Publisher Site
|
|
Exploded
views are an illustration technique where an object is partitioned into
several segments. These segments are displaced to reveal otherwise
hidden detail. In this paper we apply the concept of exploded views to
volumetric data in order to solve the general problem of occlusion. In
many cases an object of interest is occluded by other structures. While
transparency or cutaways can be used to reveal a focus object, these
techniques remove parts of the context information. Exploded views, on
the other hand, do not suffer from this drawback. Our approach employs a
force-based model: the volume is divided into a part configuration
controlled by a number of forces and constraints. The focus object
exerts an explosion force causing the parts to arrange according to the
given constraints. We show that this novel and flexible approach allows
for a wide variety of explosion-based visualizations including
view-dependent explosions. Furthermore, we present a high-quality
GPU-based volume ray casting algorithm for exploded views which allows
rendering and interaction at several frames per second.
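A toy version of the force-based model: part centers are pushed away from a focus point by a repulsive explosion force, while a maximum-offset constraint keeps them near their rest configuration. The force law, constraint, and parts below are illustrative assumptions, far simpler than the paper's model.

import numpy as np

def explode(part_centers, focus, strength=1.0, max_offset=2.0, steps=50, dt=0.1):
    """Displace part centers away from a focus point under a simple repulsive
    'explosion force', clamped by a maximum-offset constraint."""
    centers = np.asarray(part_centers, dtype=float)
    rest = centers.copy()
    for _ in range(steps):
        d = centers - focus
        dist = np.linalg.norm(d, axis=1, keepdims=True) + 1e-9
        force = strength * d / dist**2          # falls off with distance
        centers = centers + dt * force
        # constraint: do not drift further than max_offset from the rest pose
        off = centers - rest
        off_len = np.linalg.norm(off, axis=1, keepdims=True) + 1e-9
        too_far = off_len > max_offset
        centers = np.where(too_far, rest + off / off_len * max_offset, centers)
    return centers

parts = [[1.0, 0.0, 0.0], [-1.0, 0.0, 0.0], [0.0, 1.5, 0.0]]
print(explode(parts, focus=np.zeros(3)))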
|
|
Caricaturistic Visualization |
|
Peter Rautek,
Ivan Viola,
M. Eduard Groller
|
|
Pages: 1085-1092 |
|
doi>10.1109/TVCG.2006.123 |
|
Available formats:
Publisher Site
|
|
|
|
Visual Signatures in Video Visualization |
|
Min Chen,
Ralf Botchen,
Rudy Hashim,
Daniel Weiskopf,
Thomas Ertl,
Ian Thornton
|
|
Pages: 1093-1100 |
|
doi>10.1109/TVCG.2006.194 |
|
Available formats:
Publisher Site
|
|
Video
visualization is a computation process that extracts meaningful
information from original video data sets and conveys the extracted
information to users in appropriate visual representations. This paper
presents a broad treatment of the subject, following a typical research
pipeline involving concept formulation, system development, a
path-finding user study, and a field trial with real application data.
In particular, we have conducted a fundamental study on the
visualization of motion events in videos. We have, for the first time,
deployed flow visualization techniques in video visualization. We have
compared the effectiveness of different abstract visual representations
of videos. We have conducted a user study to examine whether users are
able to learn to recognize visual signatures of motions, and to assist
in the evaluation of different visualization techniques. We have applied
our understanding and the developed techniques to a set of application
video clips. Our study has demonstrated that video visualization is both
technically feasible and cost-effective. It has provided the first set
of evidence confirming that ordinary users can be accustomed to the
visual features depicted in video visualizations, and can learn to
recognize visual signatures of a variety of motion events.
|
|
Asynchronous Distributed Calibration for Scalable and Reconfigurable Multi-Projector Displays |
|
Ezekiel S. Bhasker,
Pinaki Sinha,
Aditi Majumder
|
|
Pages: 1101-1108 |
|
doi>10.1109/TVCG.2006.121 |
|
Available formats:
Publisher Site
|
|
Centralized
techniques have been used until now when automatically calibrating
(both geometrically and photometrically) large high-resolution displays
created by tiling multiple projectors in a 2D array. A centralized
server managed all the projectors and also the camera(s) used to
calibrate the display. In this paper, we propose an asynchronous
distributed calibration methodology via a display unit called the
plug-and-play projector (PPP). The PPP consists of a projector, camera,
computation and communication unit, thus creating a self-sufficient
module that enables an asynchronous distributed architecture for
multi-projector displays. We present a single-program-multiple-data
(SPMD) calibration algorithm that runs on each PPP and achieves a truly
scalable and reconfigurable display without any input from the user. It
provides novel capabilities such as dynamically adding or removing PPPs
from the display, detecting faults, and reshaping the display to a
reasonable rectangular shape in response to additions, removals, or faults. To
the best of our knowledge, this is the first attempt to realize a
completely asynchronous and distributed calibration architecture and
methodology for multi-projector displays.
|
|
Dynamic View Selection for Time-Varying Volumes |
|
Guangfeng Ji,
Han-Wei Shen
|
|
Pages: 1109-1116 |
|
doi>10.1109/TVCG.2006.137 |
|
Available formats:
Publisher Site
|
|
Animation
is an effective way to show how time-varying phenomena evolve over
time. A key issue of generating a good animation is to select ideal
views through which the user can perceive the maximum amount of
information from the time-varying dataset. In this paper, we first
propose an improved view selection method for static data. The method
measures the quality of a static view by analyzing the opacity, color
and curvature distributions of the corresponding volume rendering images
from the given view. Our view selection metric prefers an even opacity
distribution with a larger projection area, a larger area of salient
features' colors with an even distribution among the salient features,
and more perceived curvatures. We use this static view selection method
and a dynamic programming approach to select time-varying views. The
time-varying view selection maximizes the information perceived from the
time-varying dataset based on the constraints that the time-varying
view should show smooth changes of direction and near-constant speed. We
also introduce a method that allows the user to generate a smooth
transition between any two views in a given time step, with the
perceived information maximized as well. By combining the static and
dynamic view selection methods, the users are able to generate a
time-varying view that shows the maximum amount of information from a
time-varying data set.
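The time-varying selection can be sketched as a standard dynamic program: given a static quality score per (time step, candidate view) and a penalty for changing views, recover the best smooth view sequence. The quality matrix and the distance-based penalty below are synthetic assumptions, not the paper's metric.

import numpy as np

def select_views(quality, change_cost=0.5):
    """quality[t, v]: static quality of candidate view v at time step t.
    Returns one view index per time step, maximizing total quality minus a
    smoothness penalty on view changes (a plain dynamic program)."""
    T, V = quality.shape
    score = quality[0].copy()            # best score ending in view v so far
    back = np.zeros((T, V), dtype=int)   # backpointers for path recovery
    views = np.arange(V)
    for t in range(1, T):
        new_score = np.empty(V)
        for v in range(V):
            cand = score - change_cost * np.abs(views - v)
            back[t, v] = int(np.argmax(cand))
            new_score[v] = cand[back[t, v]] + quality[t, v]
        score = new_score
    path = [int(np.argmax(score))]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

q = np.random.rand(10, 5)                # synthetic quality scores
print(select_views(q))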
|
|
Enhancing Depth Perception in Translucent Volumes |
|
Marta Kersten,
James Stewart,
Niko Troje,
Randy Ellis
|
|
Pages: 1117-1124 |
|
doi>10.1109/TVCG.2006.139 |
|
Available formats:
Publisher Site
|
|
We present empirical studies that consider the effects of stereopsis and
simulated aerial perspective on depth perception in translucent
volumes. We consider a purely absorptive lighting model, in which light
is not scattered or reflected, but is simply absorbed as it passes
through the volume. A purely absorptive lighting model is used, for
example, when rendering digitally reconstructed radiographs (DRRs),
which are synthetic X-ray images reconstructed from CT volumes. Surgeons
make use of DRRs in planning and performing operations, so an
improvement of depth perception in DRRs may help diagnosis and surgical
planning.
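A minimal sketch of the purely absorptive model behind a DRR: intensity along each ray decays according to the Beer-Lambert law, with no scattering or reflection. The attenuation volume and axis-aligned rays below are synthetic assumptions.

import numpy as np

def render_drr(attenuation, dx=1.0, i0=1.0):
    """Purely absorptive rendering: integrate attenuation along rays cast
    parallel to the z axis of the volume (Beer-Lambert law, no scattering)."""
    optical_depth = attenuation.sum(axis=2) * dx     # line integral per pixel
    return i0 * np.exp(-optical_depth)               # transmitted intensity

vol = np.zeros((64, 64, 64))
vol[24:40, 24:40, 10:50] = 0.02        # a denser block inside the volume
image = render_drr(vol)                # darker where more material is traversed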
|
|
Texturing of Layered Surfaces for Optimal Viewing |
|
Alethea Bair,
Donald H. House,
Colin Ware
|
|
Pages: 1125-1132 |
|
doi>10.1109/TVCG.2006.183 |
|
Available formats:
Publisher Site
|
|
This
paper is a contribution to the literature on perceptually optimal
visualizations of layered three-dimensional surfaces. Specifically, we
develop guidelines for generating texture patterns, which, when tiled on
two overlapped surfaces, minimize confusion in depth-discrimination and
maximize the ability to localize distinct features. We design a
parameterized texture space and explore this texture space using a
"human in the loop" experimental approach. Subjects are asked to rate
their ability to identify Gaussian bumps on both upper and lower
surfaces of noisy terrain fields. Their ratings direct a genetic
algorithm, which selectively searches the texture parameter space to
find fruitful areas. Data collected from these experiments are analyzed
to determine what combinations of parameters work well and to develop
texture generation guidelines. Data analysis methods include ANOVA,
linear discriminant analysis, decision trees, and parallel coordinates.
To confirm the guidelines, we conduct a post-analysis experiment, where
subjects rate textures following our guidelines against textures
violating the guidelines. Across all subjects, textures following the
guidelines consistently produce highly rated textures on an absolute
scale, and are rated higher than those that did not follow the
guidelines.
|
|
Subjective Quantification of Perceptual Interactions among some 2D Scientific Visualization Methods |
|
Daniel Acevedo,
David Laidlaw
|
|
Pages: 1133-1140 |
|
doi>10.1109/TVCG.2006.180 |
|
Available formats:
Publisher Site
|
|
We
present an evaluation of a parameterized set of 2D icon-based
visualization methods where we quantified how perceptual interactions
among visual elements affect efficient data exploration. During the
experiment, subjects quantified three different design factors for each
method: the spatial resolution it could represent, the number of data
values it could display at each point, and the degree to which it is
visually linear. The class of visualization methods includes
Poisson-disk distributed icons where icon size, icon spacing, and icon
brightness can be set to a constant or coupled to data values from a 2D
scalar field. By only coupling one of those visual components to data,
we measured filtering interference for all three design factors.
Filtering interference characterizes how different levels of the
constant visual elements affect the evaluation of the data-coupled
element. Our novel experimental methodology allowed us to generalize
this perceptual information, gathered using ad-hoc artificial datasets,
onto quantitative rules for visualizing real scientific datasets. This
work also provides a framework for evaluating visualizations of
multi-valued data that incorporate additional visual cues, such as icon
orientation or color.
|
|
Occlusion-Free Animation of Driving Routes for Car Navigation Systems |
|
Shigeo Takahashi,
Kenichi Yoshida,
Kenji Shimada,
Tomoyuki Nishita
|
|
Pages: 1141-1148 |
|
doi>10.1109/TVCG.2006.167 |
|
Available formats:
Publisher Site
|
|
This paper presents a method for occlusion-free animation of geographical
landmarks, and its application to a new type of car navigation system in
which driving routes of interest are always visible in mountain
areas. This is achieved by animating a nonperspective image where
geographical landmarks such as mountain tops and roads are rendered as
if they were seen from different viewpoints. The contribution of this
paper lies in formulating the nonperspective navigation as an inverse
problem of continuously deforming a 3D terrain surface from the 2D
screen arrangement of the geographical landmarks. The present approach
provides a perceptually reasonable compromise between navigation
clarity and visual realism, where the corresponding nonperspective view
is fully augmented by assigning appropriate textures and shading effects
to the terrain surface according to its geometry. An eye-tracking
experiment is conducted to show that the present approach
exhibits visually pleasing navigation frames in which users can clearly
recognize the shape of the driving route without occlusion, together
with the spatial configuration of geographical landmarks in its
neighborhood.
|
|
Interactive Visualization of Intercluster Galaxy Structures in the Horologium-Reticulum Supercluster |
|
Jameson Miller,
Cory Quammen,
Matthew Fleenor
|
|
Pages: 1149-1156 |
|
doi>10.1109/TVCG.2006.155 |
|
Available formats:
Publisher Site
|
|
We
present GyVe, an interactive visualization tool for understanding
structure in sparse three-dimensional (3D) point data. The scientific
goal driving the tool's development is to determine the presence of
filaments and voids as defined by inferred 3D galaxy positions within
the Horologium-Reticulum supercluster (HRS). GyVe provides visualization
techniques tailored to examine structures defined by the intercluster
galaxies. Specific techniques include: interactive user control to move
between a global overview and local viewpoints, labelled axes and curved
drop lines to indicate positions in the astronomical RA-DEC-cz
coordinate system, torsional rocking and stereo to enhance 3D
perception, and geometrically distinct glyphs to show potential
correlation between intercluster galaxies and known clusters. We discuss
the rationale for each design decision and review the success of the
techniques in accomplishing the scientific goals. In practice, GyVe has
been useful for gaining intuition about structures that were difficult
to perceive with 2D projection techniques alone. For example, during
their initial session with GyVe, our collaborators quickly confirmed
scientific conclusions regarding the large-scale structure of the HRS
previously obtained over months of study with 2D projections and
statistical techniques. Further use of GyVe revealed the spherical shape
of voids and showed that a presumed filament was actually two
disconnected structures.
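For context, inferred 3D galaxy positions in an RA-DEC-cz catalogue are conventionally obtained by treating cz/H0 as a distance (Hubble's law). The sketch below shows that standard conversion; the H0 value and the sample coordinates are illustrative assumptions, not GyVe's code or HRS data.

import numpy as np

def radeccz_to_xyz(ra_deg, dec_deg, cz_kms, H0=70.0):
    """Convert (RA, DEC, cz) to Cartesian coordinates in Mpc, using d = cz / H0
    as the redshift-inferred distance."""
    d = np.asarray(cz_kms, dtype=float) / H0        # distance in Mpc
    ra, dec = np.radians(ra_deg), np.radians(dec_deg)
    x = d * np.cos(dec) * np.cos(ra)
    y = d * np.cos(dec) * np.sin(ra)
    z = d * np.sin(dec)
    return np.stack([x, y, z], axis=-1)

print(radeccz_to_xyz([52.0, 55.0], [-52.0, -50.0], [17000.0, 18000.0]))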
|
|
An Atmospheric Visual Analysis and Exploration System |
|
Yuyan Song,
Jing Ye,
Nikolai Svakhine,
Sonia Lasher-Trapp, Mike Baldwin,
David Ebert
|
|
Pages: 1157-1164 |
|
doi>10.1109/TVCG.2006.117 |
|
Available formats:
Publisher Site
|
|
Meteorological
research involves the analysis of multi-field, multi-scale, and
multi-source data sets. In order to better understand these data sets,
models and measurements at different resolutions must be analyzed.
Unfortunately, traditional atmospheric visualization systems only
provide tools to view a limited number of variables and small segments
of the data. These tools are often restricted to two-dimensional contour
or vector plots or three-dimensional isosurfaces. The meteorologist
must mentally synthesize the data from multiple plots to glean the
information needed to produce a coherent picture of the weather
phenomenon of interest. In order to provide better tools to
meteorologists and reduce system limitations, we have designed an
integrated atmospheric visual analysis and exploration system for
interactive analysis of weather data sets. Our system allows for the
integrated visualization of 1D, 2D, and 3D atmospheric data sets in
common meteorological grid structures and utilizes a variety of
rendering techniques. These tools provide meteorologists with new
abilities to analyze their data and answer questions on regions of
interest, ranging from physics-based atmospheric rendering to
illustrative rendering containing particles and glyphs. In this paper, we
will discuss the use and performance of our visual analysis for two
important meteorological applications. The first application is warm
rain formation in small cumulus clouds. Here, our three-dimensional,
interactive visualization of modeled drop trajectories within spatially
correlated fields from a cloud simulation has provided researchers with
new insight. Our second application is improving and validating severe
storm models, specifically the Weather Research and Forecasting (WRF)
model. This is done through correlative visualization of WRF model and
experimental Doppler storm data.
|
|
Visualization of Fibrous and Thread-like Data |
|
Zeki Melek,
David Mayerich,
Cem Yuksel,
John Keyser
|
|
Pages: 1165-1172 |
|
doi>10.1109/TVCG.2006.197 |
|
Available formats:
Publisher Site
|
|
Thread-like
structures are becoming more common in modern volumetric data sets as
our ability to image vascular and neural tissue at higher resolutions
improves. The thread-like structures of neurons and micro-vessels pose a
unique problem in visualization since they tend to be densely packed in
small volumes of tissue. This makes it difficult for an observer to
interpret useful patterns from the data or trace individual fibers. In
this paper we describe several methods for dealing with large amounts of
thread-like data, such as data sets collected using Knife-Edge Scanning
Microscopy (KESM) and Serial Block-Face Scanning Electron Microscopy
(SBF-SEM). These methods allow us to collect volumetric data from
embedded samples of whole-brain tissue. The neuronal and microvascular
data that we acquire consists of thin, branching structures extending
over very large regions. Traditional visualization schemes are not
sufficient to make sense of the large, dense, complex structures
encountered. In this paper, we present three methods to allow a user to
explore a fiber network effectively. First, we describe interactive techniques
for rendering large sets of neurons using self-orienting surfaces
implemented on the GPU. Second, we present techniques for rendering fiber
networks in a way that provides useful information about flow and
orientation. Third, a global illumination framework is used to create
high-quality visualizations that emphasize the underlying fiber
structure. Implementation details, performance, and advantages and
disadvantages of each approach are discussed.
|
|
Comparative Visualization for Wave-based and Geometric Acoustics |
|
Eduard Deines,
Martin Bertram,
Jan Mohring,
Jevgenij Jegorovs,
Frank Michel,
Hans Hagen,
Gregory M. Nielson
|
|
Pages: 1173-1180 |
|
doi>10.1109/TVCG.2006.125 |
|
Available formats:
Publisher Site
|
|
We present a comparative visualization of the acoustic simulation results
obtained by two different approaches that were combined into a single
simulation algorithm. The first method solves the wave equation on a
volume grid based on finite elements. The second method, phonon tracing,
is a geometric approach that we have previously developed for
interactive simulation, visualization and modeling of room acoustics.
Geometric approaches of this kind are more efficient than FEM in the high
and medium frequency range. For low frequencies they fail to represent
diffraction, which on the other hand can be simulated properly by means
of FEM. When combining both methods we need to calibrate them properly
and estimate in which frequency range they provide comparable results.
For this purpose we use an acoustic metric called gain and display the
resulting error. Furthermore we visualize interference patterns, since
these depend not only on diffraction, but also exhibit phase-dependent
amplification and neutralization effects.
|
|
Hybrid Visualization for White Matter Tracts using Triangle Strips and Point Sprites |
|
Dorit Merhof,
Markus Sonntag,
Frank Enders,
Christopher Nimsky,
Peter Hastreiter,
Guenther Greiner
|
|
Pages: 1181-1188 |
|
doi>10.1109/TVCG.2006.151 |
|
Available formats:
Publisher Site
|
|
Diffusion
tensor imaging is of high value in neurosurgery, providing information
about the location of white matter tracts in the human brain. For their
reconstruction, streamline techniques commonly referred to as fiber
tracking model the underlying fiber structures and have therefore gained
interest. To meet the requirements of surgical planning and to overcome
the visual limitations of line representations, a new real-time
visualization approach of high visual quality is introduced. For this
purpose, textured triangle strips and point sprites are combined in a
hybrid strategy employing GPU programming. The triangle strips follow
the fiber streamlines and are textured to obtain a tube-like appearance.
A vertex program is used to orient the triangle strips towards the
camera. In order to avoid triangle flipping in case of fiber segments
where the viewing and segment direction are parallel, a correct visual
representation is achieved in these areas by chains of point sprites. As
a result, a high quality visualization similar to tubes is provided
allowing for interactive multimodal inspection. Overall, the presented
approach is faster than existing techniques of similar visualization
quality and at the same time allows for real-time rendering of dense
bundles encompassing a high number of fibers, which is of high
importance for diagnosis and surgical planning.
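The camera-oriented strip idea can be sketched on the CPU: each fiber vertex is offset along the cross product of the local tangent and the view direction so the flat ribbon always faces the camera; the degenerate case (tangent parallel to the view direction) is exactly where the paper switches to point sprites. The fiber, eye position, and width below are assumptions, and this is only a sketch of what the vertex program computes, not the authors' shader.

import numpy as np

def camera_facing_strip(polyline, eye, half_width=0.05):
    """For each fiber vertex, offset along cross(tangent, view) so the ribbon
    faces the camera; returns the two strip edges."""
    pts = np.asarray(polyline, dtype=float)
    tangents = np.gradient(pts, axis=0)
    left, right = [], []
    for p, t in zip(pts, tangents):
        view = eye - p
        side = np.cross(t, view)
        n = np.linalg.norm(side)
        # degenerate case: tangent parallel to the view direction
        side = side / n if n > 1e-9 else np.array([1.0, 0.0, 0.0])
        left.append(p + half_width * side)
        right.append(p - half_width * side)
    return np.array(left), np.array(right)

fiber = np.column_stack([np.linspace(0, 1, 20),
                         np.sin(np.linspace(0, 3, 20)),
                         np.zeros(20)])
L, R = camera_facing_strip(fiber, eye=np.array([0.0, 0.0, 5.0]))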
|
|
Analyzing Vortex Breakdown Flow Structures by Assignment of Colors to Tensor Invariants |
|
Markus Rutten,
Min S. Chong
|
|
Pages: 1189-1196 |
|
doi>10.1109/TVCG.2006.119 |
|
Available formats:
Publisher Site
|
|
Topological
methods are often used to describe flow structures in fluid dynamics
and topological flow field analysis usually relies on the invariants of
the associated tensor fields. A visual impression of the local
properties of tensor fields is often complex, and the search for a
suitable technique for achieving this is an ongoing topic in
visualization. This paper introduces and assesses a method of
representing the topological properties of tensor fields and their
respective flow patterns with the use of colors. First, a tensor norm is
introduced, which preserves the properties of the tensor and assigns
the tensor invariants to values of the RGB color space. Secondly, the
RGB colors of the tensor invariants are transferred to corresponding hue
values as an alternative color representation. The vectorial tensor
invariants field is reduced to a scalar hue field and visualization of
iso-surfaces of this hue value field allows us to identify locations
with equivalent flow topology. Additionally, highlighting by the maximum
of the eigenvalue difference field reflects the magnitude of the
structural change of the flow. The method is applied to a vortex
breakdown flow structure inside a cylinder with a rotating lid.
|
|
Superellipsoid-based, Real Symmetric Traceless Tensor Glyphs Motivated by Nematic Liquid Crystal Alignment Visualization |
|
T. J. Jankun-Kelly,
Ketan Mehta
|
|
Pages: 1197-1204 |
|
doi>10.1109/TVCG.2006.181 |
|
Available formats:
Publisher Site
|
|
A
glyph-based method for visualizing the nematic liquid crystal alignment
tensor is introduced. Unlike previous approaches, the glyph is based
upon physically-linked metrics, not offsets of the eigenvalues. These
metrics, combined with a set of superellipsoid shapes, communicate both
the strength of the crystal's uniaxial alignment and the amount of
biaxiality. With small modifications, our approach can visualize any
real symmetric traceless tensor.
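Superellipsoid glyph geometry follows the standard two-exponent parameterization; the sketch below only generates surface samples for one glyph, while the paper's contribution, mapping the physically-linked alignment metrics to the radii and roundness exponents, is not reproduced. The parameter values are illustrative.

import numpy as np

def superellipsoid(a, b, c, e1, e2, nu=32, nv=16):
    """Sample a superellipsoid surface with radii (a, b, c) and roundness
    exponents e1 (north-south) and e2 (east-west)."""
    def f(w, m):
        return np.sign(w) * np.abs(w) ** m
    u = np.linspace(-np.pi / 2, np.pi / 2, nv)[:, None]   # latitude
    v = np.linspace(-np.pi, np.pi, nu)[None, :]           # longitude
    x = a * f(np.cos(u), e1) * f(np.cos(v), e2)
    y = b * f(np.cos(u), e1) * f(np.sin(v), e2)
    z = c * f(np.sin(u), e1) * np.ones_like(v)
    return x, y, z

x, y, z = superellipsoid(1.0, 1.0, 0.4, e1=0.5, e2=1.5)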
|
|
High-Quality Extraction of Isosurfaces from Regular and Irregular Grids |
|
John Schreiner,
Carlos Scheidegger,
Claudio Silva
|
|
Pages: 1205-1212 |
|
doi>10.1109/TVCG.2006.149 |
|
Available formats:
Publisher Site
|
|
Isosurfaces
are ubiquitous in many fields, including visualization, graphics, and
vision. They are often the main computational component of important
processing pipelines (e.g. , surface reconstruction), and are heavily
used in practice. The classical approach to compute isosurfaces is to
apply the Marching Cubes algorithm, which although robust and simple to
implement, generates surfaces that require additional processing steps
to improve triangle quality and mesh size. An important issue is that in
some cases, the surfaces generated by Marching Cubes are irreparably
damaged, and important details are lost which cannot be recovered by
subsequent processing. The main motivation of this work is to develop a
technique capable of constructing high-quality and high-fidelity
isosurfaces. We propose a new advancing front technique that is capable
of creating high-quality isosurfaces from regular and irregular
volumetric datasets. Our work extends the guidance field framework of
Schreiner et al. to implicit surfaces, and improves it in significant
ways. In particular, we describe a set of sampling conditions that
guarantee that surface features will be captured by the algorithm. We
also describe an efficient technique to compute a minimal guidance
field, which greatly improves performance. Our experimental results show
that our technique can generate high-quality meshes from complex
datasets.
|
|
Mesh Layouts for Block-Based Caches |
|
Sung-Eui Yoon,
Peter Lindstrom
|
|
Pages: 1213-1220 |
|
doi>10.1109/TVCG.2006.162 |
|
Available formats:
Publisher Site
|
|
Current computer architectures employ caching to improve the performance of a
wide variety of applications. One of the main characteristics of such
cache schemes is the use of block fetching whenever an uncached data
element is accessed. To maximize the benefit of the block fetching
mechanism, we present novel cache-aware and cache-oblivious layouts of
surface and volume meshes that improve the performance of interactive
visualization and geometric processing algorithms. Based on a general
I/O model, we derive new cache-aware and cache-oblivious metrics that
have high correlations with the number of cache misses when accessing a
mesh. In addition to guiding the layout process, our metrics can be used
to quantify the quality of a layout, e.g. for comparing different
layouts of the same mesh and for determining whether a given layout is
amenable to significant improvement. We show that layouts of
unstructured meshes optimized for our metrics result in improvements
over conventional layouts in the performance of visualization
applications such as isosurface extraction and view-dependent rendering.
Moreover, we improve upon recent cache-oblivious mesh layouts in terms
of performance, applicability, and accuracy.
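The effect of block fetching on a layout can be illustrated with a tiny miss counter: given a vertex access sequence induced by a layout and a block size, count the accesses that touch a block not currently cached. This LRU toy model is an assumption for illustration, not the paper's I/O model or metric.

from collections import OrderedDict

def count_block_misses(access_sequence, block_size=16, cache_blocks=8):
    """Count cache misses for a sequence of vertex indices under block
    fetching with an LRU cache holding 'cache_blocks' blocks."""
    cache = OrderedDict()
    misses = 0
    for idx in access_sequence:
        block = idx // block_size
        if block in cache:
            cache.move_to_end(block)          # LRU update
        else:
            misses += 1
            cache[block] = True
            if len(cache) > cache_blocks:
                cache.popitem(last=False)     # evict least recently used
    return misses

# A layout that keeps neighboring vertices close produces far fewer misses:
sequential = list(range(1000))
scattered = [(i * 37) % 1000 for i in range(1000)]
print(count_block_misses(sequential), count_block_misses(scattered))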
|
|
Out-of-Core Remeshing of Large Polygonal Meshes |
|
Minsu Ahn,
Igor Guskov,
Seungyong Lee
|
|
Pages: 1221-1228 |
|
doi>10.1109/TVCG.2006.169 |
|
Available formats:
Publisher Site
|
|
We
propose an out-of-core method for creating semi-regular surface
representations from large input surface meshes. Our approach is based
on a streaming implementation of the MAPS remesher of Lee et al. Our
remeshing procedure consists of two stages. First, a simplification
process is used to obtain the base domain. During simplification, we
maintain the mapping information between the input and the simplified
meshes. The second stage of remeshing uses the mapping information to
produce samples of the output semi-regular mesh. The out-of-core
operation of our method is enabled by the synchronous streaming of a
simplified mesh and the mapping information stored at the original
vertices. The synchronicity of two streaming buffers is maintained using
a specially designed write strategy for each buffer. Experimental
results demonstrate the remeshing performance of the proposed method, as
well as other applications that use the created mapping between the
simplified and the original surface representations.
|
|
Interactive Point-Based Rendering of Higher-Order Tetrahedral Data |
|
Yuan Zhou,
Michael Garland
|
|
Pages: 1229-1236 |
|
doi>10.1109/TVCG.2006.154 |
|
Available formats:
Publisher Site
|
|
Computational
simulations frequently generate solutions defined over very large
tetrahedral volume meshes containing many millions of elements.
Furthermore, such solutions may often be expressed using non-linear
basis functions. Certain solution techniques, such as discontinuous
Galerkin methods, may even produce non-conforming meshes. Such data is
difficult to visualize interactively, as it is far too large to fit in
memory and many common data reduction techniques, such as mesh
simplification, cannot be applied to non-conforming meshes. We introduce
a point-based visualization system for interactive rendering of large,
potentially non-conforming, tetrahedral meshes. We propose methods for
adaptively sampling points from non-linear solution data and for
decimating points at run time to fit GPU memory limits. Because these
are streaming processes, memory consumption is independent of the input
size. We also present an order-independent point rendering method that
can efficiently render volumes on the order of 20 million tetrahedra at
interactive rates.
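
The decimation step above keeps memory bounded because it is streaming.
As a hedged stand-in for the authors' decimation strategy (which is not
reproduced here), reservoir sampling shows the basic pattern: a single
pass over a point stream that retains a fixed-size uniform subset, so
memory depends only on the budget, not on the input size.

# Minimal sketch, not the authors' algorithm: reservoir sampling keeps a
# fixed-size uniform random subset of a streamed point set, so memory stays
# bounded by the budget (e.g. available GPU memory) rather than the input size.
import random

def decimate_stream(points, budget, seed=0):
    rng = random.Random(seed)
    reservoir = []
    for i, p in enumerate(points):
        if i < budget:
            reservoir.append(p)
        else:
            j = rng.randint(0, i)        # uniform index in [0, i]
            if j < budget:
                reservoir[j] = p         # replace with decreasing probability
    return reservoir

if __name__ == "__main__":
    stream = ((x, x * x) for x in range(1_000_000))      # any iterable of points
    print(len(decimate_stream(stream, budget=10_000)))   # 10000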
|
|
Ambient Occlusion and Edge Cueing for Enhancing Real Time Molecular Visualization |
|
Marco Tarini,
Paolo Cignoni,
Claudio Montani
|
|
Pages: 1237-1244 |
|
doi>10.1109/TVCG.2006.115 |
|
Available formats:
Publisher Site
|
|
The
paper presents a set of combined techniques to enhance the real-time
visualization of simple or complex molecules (up to the order of $10^6$
atoms) in space-fill mode. The proposed approach includes an innovative
technique for efficient computation and storage of ambient occlusion
terms, a small set of GPU-accelerated procedural impostors for
space-fill and ball-and-stick rendering, and novel edge-cueing
techniques. As a result, the user's understanding of the
three-dimensional structure under inspection is strongly increased (even
for still images), while the rendering still occurs in real time.
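
A much slower CPU sketch of the underlying ambient occlusion idea (the
paper's contribution is an efficient GPU computation and storage scheme,
which is not shown): estimate, per atom, the fraction of directions on
the unit sphere that is not blocked by neighbouring atoms by casting
random sample rays.

# Rough CPU sketch only: per-atom ambient accessibility estimated by sampling
# random directions and testing whether each ray is blocked by another atom.
import math
import random

def ray_hits_sphere(origin, direction, center, radius):
    # direction is assumed unit length; origin assumed outside the sphere
    oc = [c - o for c, o in zip(center, origin)]
    t = sum(a * b for a, b in zip(oc, direction))
    if t < 0.0:
        return False                     # sphere is behind the ray
    closest_sq = sum(a * a for a in oc) - t * t
    return closest_sq <= radius * radius

def ambient_accessibility(atoms, index, samples=256, seed=0):
    # atoms: list of (center, radius); returns fraction of unoccluded directions
    rng = random.Random(seed)
    center, _ = atoms[index]
    open_dirs = 0
    for _ in range(samples):
        z = rng.uniform(-1.0, 1.0)                      # uniform direction on
        phi = rng.uniform(0.0, 2.0 * math.pi)           # the unit sphere
        s = math.sqrt(1.0 - z * z)
        d = (s * math.cos(phi), s * math.sin(phi), z)
        blocked = any(
            ray_hits_sphere(center, d, c, r)
            for i, (c, r) in enumerate(atoms)
            if i != index
        )
        open_dirs += not blocked
    return open_dirs / samples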
|
|
Fast and Efficient Compression of Floating-Point Data |
|
Peter Lindstrom,
Martin Isenburg
|
|
Pages: 1245-1250 |
|
doi>10.1109/TVCG.2006.143 |
|
Available formats:
Publisher Site
|
|
Large
scale scientific simulation codes typically run on a cluster of CPUs
that write/read time steps to/from a single file system. As data sets
are constantly growing in size, this increasingly leads to I/O
bottlenecks. When the rate at which data is produced exceeds the
available I/O bandwidth, the simulation stalls and the CPUs are idle.
Data compression can alleviate this problem by using some CPU cycles to
reduce the amount of data that needs to be transferred. Most compression
schemes, however, are designed to operate offline and seek to maximize
compression, not throughput. Furthermore, they often require quantizing
floating-point values onto a uniform integer grid, which disqualifies
their use in applications where exact values must be retained. We
propose a simple scheme for lossless, online compression of
floating-point data that transparently integrates into the I/O of many
applications. A plug-in scheme for data-dependent prediction makes our
scheme applicable to a wide variety of data used in visualization, such
as unstructured meshes, point sets, images, and voxel grids. We achieve
state-of-the-art compression rates and speeds, the latter in part due to
an improved entropy coder. We demonstrate that this significantly
accelerates I/O throughput in real simulation runs. Unlike previous
schemes, our method also adapts well to variable-precision
floating-point and integer data.
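
A heavily simplified sketch of predictive lossless floating-point coding
in this spirit (not the authors' coder; the predictor here is just the
previous value): predict each value, XOR the IEEE-754 bit patterns, and
observe the leading zero bits of the residual, which is the redundancy
an entropy coder would exploit. Decoding reverses the XOR, so the round
trip is exact.

# Much-simplified sketch (not the paper's coder): previous-value prediction,
# XOR of IEEE-754 bit patterns, and leading-zero counts of the residuals.
import struct

def float_to_bits(x):
    return struct.unpack("<Q", struct.pack("<d", x))[0]

def bits_to_float(b):
    return struct.unpack("<d", struct.pack("<Q", b))[0]

def encode(values):
    residuals, prev = [], 0
    for x in values:
        bits = float_to_bits(x)
        residuals.append(bits ^ prev)    # small when the prediction is close
        prev = bits
    return residuals

def decode(residuals):
    values, prev = [], 0
    for r in residuals:
        bits = r ^ prev
        values.append(bits_to_float(bits))
        prev = bits
    return values

if __name__ == "__main__":
    data = [1.0, 1.0000001, 1.0000002, 1.0000004]
    res = encode(data)
    assert decode(res) == data                     # lossless round trip
    print([64 - r.bit_length() for r in res])      # leading zeros per residual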
|
|
Visualization and Analysis of Large Data Collections: a Case Study Applied to Confocal Microscopy Data |
|
Wim de Leeuw,
Pernette Verschure,
Robert van Liere
|
|
Pages: 1251-1258 |
|
doi>10.1109/TVCG.2006.195 |
|
Available formats:
Publisher Site
|
|
In
this paper we propose an approach in which interactive visualization
and analysis are combined with batch tools for the processing of large
data collections. Large and heterogeneous data collections are difficult
to analyze and pose specific problems to interactive visualization.
Application of traditional interactive processing and visualization
approaches, as well as batch processing, encounters considerable
drawbacks for such large and heterogeneous data collections due to the
amount and type of data: computing resources are not sufficient for
interactive exploration of the data, and automated analysis has the
disadvantage that the user has only limited control over and feedback on
the analysis process. In our approach, an analysis procedure with
features and attributes of interest for the analysis is defined
interactively. This procedure is used for off-line processing of large
collections of data sets. The results of the batch process, along with
"visual summaries", are used for further analysis. Visualization is not
only used for the presentation of the result, but also as a tool to
monitor the validity and quality of the operations performed during the
batch process. Operations such as feature extraction and attribute
calculation of the collected data sets are validated by visual
inspection. This approach is illustrated by an extensive case study, in
which a collection of confocal microscopy data sets is analyzed.
|
|
On Histograms and Isosurface Statistics |
|
Hamish Carr,
Brian Duffy,
Brian Denby
|
|
Pages: 1259-1266 |
|
doi>10.1109/TVCG.2006.168 |
|
Available formats:
Publisher Site
|
|
In
this paper, we show that histograms represent spatial function
distributions with a nearest neighbour interpolation. We confirm that
this results in systematic underrepresentation of transitional features
of the data, and provide new insight into why this occurs. We further
show that isosurface statistics, which use higher quality interpolation,
give better representations of the function distribution. We also use
our experimentally collected isosurface statistics to resolve some
questions as to the formal complexity of isosurfaces.
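
A toy 1-D analogue of the effect described above (not the paper's 3-D
experiments): a plain histogram counts samples per value bin, while an
interval statistic counts, for each isovalue, the intervals between
adjacent samples whose value range spans it. Because the latter accounts
for the interpolation between samples, a sharp transition still
contributes to the intermediate values it crosses.

# Toy 1-D illustration only: histogram of sample values versus a count of
# "active" intervals between adjacent samples for each isovalue.

def histogram(samples, bin_starts, bin_width):
    counts = {v: 0 for v in bin_starts}
    for s in samples:
        for v in bin_starts:
            if v <= s < v + bin_width:
                counts[v] += 1
    return counts

def active_intervals(samples, isovalues):
    counts = {v: 0 for v in isovalues}
    for a, b in zip(samples, samples[1:]):
        lo, hi = min(a, b), max(a, b)
        for v in isovalues:
            if lo <= v <= hi:
                counts[v] += 1
    return counts

if __name__ == "__main__":
    samples = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]        # one sharp transition
    isovalues = [0.0, 0.5, 1.0]
    print(histogram(samples, isovalues, 0.4))       # 0.5 never appears
    print(active_intervals(samples, isovalues))     # the transition crosses 0.5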
|
|
Interactive Point-based Isosurface Exploration and High-quality Rendering |
|
Haitao Zhang,
Arie Kaufman
|
|
Pages: 1267-1274 |
|
doi>10.1109/TVCG.2006.153 |
|
Available formats:
Publisher Site
|
|
We
present an efficient point-based isosurface exploration system with
high quality rendering. Our system incorporates two point-based
isosurface extraction and visualization methods: edge splatting and the
edge kernel method. In a volume, two neighboring voxels define an edge.
The intersection points between the active edges and the isosurface are
used for exact isosurface representation. The point generation is
incorporated in the GPU-based hardware-accelerated rendering, thus
avoiding any overhead when changing the isovalue in the exploration. We
call this method edge splatting. In order to generate high quality
isosurface rendering regardless of the volume resolution and the view,
we introduce an edge kernel method. The edge kernel upsamples the
isosurface by subdividing every active cell of the volume data. Enough
sample points are generated to preserve the exact shape of the
isosurface defined by the trilinear interpolation of the volume data. By
employing these two methods, we can achieve interactive isosurface
exploration with high quality rendering.
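
A minimal CPU sketch of the intersection step only (the paper performs
this on the GPU and adds the splatting and edge-kernel upsampling, which
are not shown): for every grid edge whose endpoint values straddle the
isovalue, linear interpolation gives the point where the isosurface
crosses the edge.

# CPU sketch of active-edge intersection only: emit one point per grid edge
# whose endpoint values straddle the isovalue.
import numpy as np

def active_edge_points(volume, isovalue):
    points = []
    dims = volume.shape
    axes = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
    for idx in np.ndindex(*dims):
        a = volume[idx]
        for d in axes:
            nbr = tuple(i + o for i, o in zip(idx, d))
            if any(n >= s for n, s in zip(nbr, dims)):
                continue                                 # edge leaves the grid
            b = volume[nbr]
            if (a - isovalue) * (b - isovalue) < 0:      # edge straddles isovalue
                t = (isovalue - a) / (b - a)             # linear interpolation
                points.append(tuple(i + t * o for i, o in zip(idx, d)))
    return points

if __name__ == "__main__":
    x, y, z = np.mgrid[0:8, 0:8, 0:8]
    sphere = (x - 3.5) ** 2 + (y - 3.5) ** 2 + (z - 3.5) ** 2
    print(len(active_edge_points(sphere, isovalue=9.0)), "intersection points")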
|
|
Using Difference Intervals for Time-Varying Isosurface Visualization |
|
Kenneth W. Waters,
Christopher S. Co,
Kenneth I. Joy
|
|
Pages: 1275-1282 |
|
doi>10.1109/TVCG.2006.188 |
|
Available formats:
Publisher Site
|
|
We
present a novel approach to out-of-core time-varying isosurface
visualization. We attempt to interactively visualize time-varying
datasets which are too large to fit into main memory using a technique
which is dramatically different from existing algorithms. Inspired by
video encoding techniques, we examine the data differences between time
steps to extract isosurface information. We exploit span space
extraction techniques to retrieve operations necessary to update
isosurface geometry from neighboring time steps. Because only the
changes between time steps need to be retrieved from disk, I/O bandwidth
requirements are minimized. We apply temporal compression to further
reduce disk access and employ a point-based previewing technique that is
refined in idle interaction cycles. Our experiments on computational
simulation data indicate that this method is an extremely viable
solution to large time-varying isosurface visualization. Our work
advances the state-of-the-art by enabling all isosurfaces to be
represented by a compact set of operations.
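
A hedged sketch of the core idea only (the paper's span-space structures
and out-of-core encoding are not shown): for a fixed isovalue, compare
the active-cell sets of two consecutive time steps and emit just the
add/remove operations needed to update the isosurface geometry.

# Sketch of the difference idea: active cells per time step, then set
# differences as update operations.

def active_cells(step, isovalue):
    # step: dict mapping cell id -> (min value, max value) for that cell
    return {cid for cid, (lo, hi) in step.items() if lo <= isovalue <= hi}

def difference_ops(prev_step, next_step, isovalue):
    prev_active = active_cells(prev_step, isovalue)
    next_active = active_cells(next_step, isovalue)
    return {
        "add": sorted(next_active - prev_active),      # cells that became active
        "remove": sorted(prev_active - next_active),   # cells no longer active
    }

if __name__ == "__main__":
    t0 = {"c0": (0.0, 0.4), "c1": (0.3, 0.7), "c2": (0.6, 0.9)}
    t1 = {"c0": (0.0, 0.6), "c1": (0.7, 0.9), "c2": (0.6, 0.9)}
    print(difference_ops(t0, t1, isovalue=0.5))
    # {'add': ['c0'], 'remove': ['c1']}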
|
|
Isosurface Extraction and Spatial Filtering using Persistent Octree (POT) |
|
Qingmin Shi,
Joseph JaJa
|
|
Pages: 1283-1290 |
|
doi>10.1109/TVCG.2006.157 |
|
Available formats:
Publisher Site
|
|
We
propose a novel Persistent OcTree (POT) indexing structure for
accelerating isosurface extraction and spatial filtering from volumetric
data. This data structure efficiently handles a wide range of
visualization problems such as the generation of view-dependent
isosurfaces, ray tracing, and isocontour slicing for high dimensional
data. POT can be viewed as a hybrid data structure between the interval
tree and the Branch-On-Need Octree (BONO) in the sense that it achieves
the asymptotic bound of the interval tree for identifying the active
cells corresponding to an isosurface and is more efficient than BONO for
handling spatial queries. We encode a compact octree for each isovalue.
Each such octree contains only the corresponding active cells, in such a
way that the combined structure has linear space. The inherent
hierarchical structure associated with the active cells enables very
fast filtering of the active cells based on spatial constraints. We
demonstrate the effectiveness of our approach by performing
view-dependent isosurfacing on a wide variety of volumetric data sets
and 4D isocontour slicing on the time-varying Richtmyer-Meshkov
instability dataset.
|
|
Scalable Data Servers for Large Multivariate Volume Visualization |
|
Markus Glatter,
Jian Huang,
Jinzhu Gao,
Colin Mollenhour
|
|
Pages: 1291-1298 |
|
doi>10.1109/TVCG.2006.175 |
|
Available formats:
Publisher Site
|
|
Volumetric
datasets with multiple variables on each voxel over multiple time steps
are often complex, especially when considering the exponentially large
attribute space formed by the variables in combination with the spatial
and temporal dimensions. It is intuitive, practical, and thus often
desirable, to interactively select a subset of the data from within that
high-dimensional value space for efficient visualization. This approach
is straightforward to implement if the dataset is small enough to be
stored entirely in-core. However, to handle datasets sized at hundreds
of gigabytes and beyond, this simplistic approach becomes infeasible and
thus, more sophisticated solutions are needed. In this work, we
developed a system that supports efficient visualization of an arbitrary
subset, selected by range-queries, of a large multivariate time-varying
dataset. By employing specialized data structures and schemes of data
distribution, our system can leverage a large number of networked
computers as parallel data servers, and guarantees a near optimal
load-balance. We demonstrate our system of scalable data servers using
two large time-varying simulation datasets.
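
For illustration, the range-query selection itself is simple; the
paper's contribution lies in the data structures and load balancing that
make it scale across many data servers. A minimal single-machine sketch
(names invented here):

# Single-machine sketch of the selection only: keep the voxels whose attribute
# values all fall inside the requested ranges.
import numpy as np

def range_query(attributes, ranges):
    # attributes: dict name -> array of per-voxel values (all the same shape)
    # ranges:     dict name -> (low, high), inclusive
    mask = None
    for name, (low, high) in ranges.items():
        values = attributes[name]
        in_range = (values >= low) & (values <= high)
        mask = in_range if mask is None else (mask & in_range)
    return np.flatnonzero(mask)                  # indices of selected voxels

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    attrs = {"temperature": rng.random(1000), "pressure": rng.random(1000)}
    selected = range_query(attrs, {"temperature": (0.8, 1.0),
                                   "pressure": (0.0, 0.2)})
    print(selected.size, "voxels selected")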
|
|
Distributed Shared Memory for Roaming Large Volumes |
|
Laurent Castanie,
Christophe Mion,
Xavier Cavin,
Bruno Levy
|
|
Pages: 1299-1306 |
|
doi>10.1109/TVCG.2006.135 |
|
Available formats:
Publisher Site
|
|
We
present a cluster-based volume rendering system for roaming very large
volumes. This system allows the user to move a gigabyte-sized probe
inside a total volume of several tens or hundreds of gigabytes in real
time. While the size of the probe is limited by the total amount of
texture memory on the cluster, the size of the total data set has no
theoretical limit. The cluster is used as a distributed graphics
processing unit that aggregates both graphics power and graphics memory.
A hardware-accelerated volume renderer runs in parallel on the cluster
nodes and the final image compositing is implemented using a pipelined
sort-last rendering algorithm. Meanwhile, volume bricking and volume
paging allow efficient data caching. On each rendering node, a
distributed hierarchical cache system implements a global software-based
distributed shared memory on the cluster. In case of a cache miss, this
system first checks page residency on the other cluster nodes instead
of directly accessing local disks. Using two Gigabit Ethernet network
interfaces per node, we accelerate data fetching by a factor of 4
compared to directly accessing local disks. The system also implements
asynchronous disk access and texture loading, which makes it possible to
overlap data loading, volume slicing and rendering for optimal volume
roaming.
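
A hedged sketch of the caching policy described above (fetch_from_peer
and read_from_disk are hypothetical placeholders, not the paper's API):
on a brick miss, ask the other cluster nodes before falling back to the
much slower local disk.

# Sketch of the miss-handling order only: local cache, then peer memory,
# then local disk, with a naive eviction policy for brevity.

class BrickCache:
    def __init__(self, peers, fetch_from_peer, read_from_disk, capacity):
        self.peers = peers
        self.fetch_from_peer = fetch_from_peer   # (peer, brick_id) -> data or None
        self.read_from_disk = read_from_disk     # brick_id -> data
        self.capacity = capacity
        self.bricks = {}                         # brick_id -> data

    def get(self, brick_id):
        if brick_id in self.bricks:              # local hit
            return self.bricks[brick_id]
        data = None
        for peer in self.peers:                  # remote memory before disk
            data = self.fetch_from_peer(peer, brick_id)
            if data is not None:
                break
        if data is None:
            data = self.read_from_disk(brick_id)
        if len(self.bricks) >= self.capacity:    # naive eviction for the sketch
            self.bricks.pop(next(iter(self.bricks)))
        self.bricks[brick_id] = data
        return data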
|
|
Progressive Volume Rendering of Large Unstructured Grids |
|
Steven P. Callahan,
Louis Bavoil,
Valerio Pascucci,
Claudio T. Silva
|
|
Pages: 1307-1314 |
|
doi>10.1109/TVCG.2006.171 |
|
Available formats:
Publisher Site
|
|
We
describe a new progressive technique that allows real-time rendering of
extremely large tetrahedral meshes. Our approach uses a client-server
architecture to incrementally stream portions of the mesh from a server
to a client which refines the quality of the approximate rendering until
it converges to a full quality rendering. The results of previous steps
are re-used in each subsequent refinement, thus leading to an efficient
rendering. Our novel approach keeps very little geometry on the client
and works by refining a set of rendered images at each step. Our
interactive representation of the dataset is efficient, light-weight,
and high quality. We present a framework for the exploration of large
datasets stored on a remote server with a thin client that is capable of
rendering and managing full quality volume visualizations.
|
|
Representing Higher-Order Singularities in Vector Fields on Piecewise Linear Surfaces |
|
Wan-Chiu Li,
Bruno Vallet,
Nicolas Ray,
Bruno Levy
|
|
Pages: 1315-1322 |
|
doi>10.1109/TVCG.2006.173 |
|
Available formats:
Publisher Site
|
|
Accurately
representing higher-order singularities of vector fields defined on
piecewise linear surfaces is a non-trivial problem. In this work, we
introduce a concise yet complete interpolation scheme of vector fields
on arbitrary triangulated surfaces. The scheme enables arbitrary
singularities to be represented at vertices. The representation can be
considered as a facet-based "encoding" of vector fields on piecewise
linear surfaces. The vector field is described in polar coordinates over
each facet, with a facet edge being chosen as the reference to define
the angle. An integer called the period jump is associated to each edge
of the triangulation to remove the ambiguity when interpolating the
direction of the vector field between two facets that share an edge. To
interpolate the vector field, we first linearly interpolate the angle of
rotation of the vectors along the edges of the facet graph. Then, we use
a variant of Nielson's side-vertex scheme to interpolate the vector
field over the entire surface. With our representation, we remove the
bound that a vertex's connectivity imposes on the complexity of the
singularities it can represent, a limitation that generally exists in
vertex-based linear schemes. Furthermore, using our data structure, the
index of a vertex of a vector field can be determined combinatorially.
We show the simplicity of the interpolation scheme with a
GPU-accelerated algorithm for a LIC-based visualization of the
so-defined vector fields, operating in image space. We demonstrate the
algorithm applied to various vector fields on curved surfaces.
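
A heavily hedged reading of the interpolation rule (symbols and sign
conventions invented here, not the paper's exact formulas): an angle
stored in one facet is transported across the shared edge by adding the
rotation between the two reference edges plus 2*pi times the integer
period jump, and the transported angle is then interpolated linearly.

# Hedged sketch: transport a per-facet angle across a shared edge and
# interpolate linearly; the integer period jump resolves the 2*pi ambiguity.
import math

def transport_angle(theta_a, frame_rotation, period_jump):
    # express facet A's angle in facet B's reference frame
    return theta_a + frame_rotation + 2.0 * math.pi * period_jump

def interpolate_across_edge(theta_a, theta_b, frame_rotation, period_jump, t):
    # linear interpolation in facet B's frame, t in [0, 1]
    start = transport_angle(theta_a, frame_rotation, period_jump)
    return (1.0 - t) * start + t * theta_b

if __name__ == "__main__":
    # with period_jump = 1 the interpolated direction winds an extra full turn
    # between the two facets
    print(interpolate_across_edge(0.1, 0.2, frame_rotation=0.0,
                                  period_jump=0, t=0.5))
    print(interpolate_across_edge(0.1, 0.2, frame_rotation=0.0,
                                  period_jump=1, t=0.5))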
|
|
Techniques for the Visualization of Topological Defect Behavior in Nematic Liquid Crystals |
|
Vadim Slavin,
Robert Pelcovits,
George Loriot,
Andrew Callan-Jones,
David Laidlaw
|
|
Pages: 1323-1328 |
|
doi>10.1109/TVCG.2006.182 |
|
Available formats:
Publisher Site
|
|
We
present visualization tools for analyzing molecular simulations of
liquid crystal (LC) behavior. The simulation data consists of terabytes
of data describing the position and orientation of every molecule in the
simulated system over time. Condensed matter physicists study the
evolution of topological defects in these data, and our visualization
tools focus on that goal. We first convert the discrete simulation data
to a sampled version of a continuous second-order tensor field and then
use combinations of visualization methods to simultaneously display
combinations of contractions of the tensor data, providing an
interactive environment for exploring these complicated data. The
system, built using AVS, employs colored cutting planes, colored
isosurfaces, and colored integral curves to display fields of tensor
contractions including Westin's scalar cl, cp, and cs metrics and the
principal eigenvector. Our approach has been in active use in the
physics lab for over a year. It correctly displays structures already
known; it displays the data in a spatially and temporally smoother way
than earlier approaches, avoiding confusing grid effects and
facilitating the study of multiple time steps; it extends the use of
tools developed for visualizing diffusion tensor data, re-interpreting
them in the context of molecular simulations; and it has answered
long-standing questions regarding the orientation of molecules around
defects and the conformational changes of the defects.
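
A hedged sketch of the first step described above, the conversion from
discrete molecular orientations to a sampled second-order tensor field
(the visualization itself is not shown): average the outer products of
the orientation unit vectors falling into each grid cell; a traceless
variant of this average is the usual nematic order tensor.

# Sketch of the discrete-to-continuous conversion: per-cell average of the
# outer products n n^T of molecular orientation unit vectors.
import numpy as np

def orientation_tensor_field(positions, directions, grid_shape, bounds):
    # positions: (n, 3) molecule centers; directions: (n, 3) unit orientations
    # grid_shape: tuple like (16, 16, 16); bounds: (lower corner, upper corner)
    positions = np.asarray(positions, float)
    directions = np.asarray(directions, float)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    cells = ((positions - lo) / (hi - lo) * np.array(grid_shape)).astype(int)
    cells = np.clip(cells, 0, np.array(grid_shape) - 1)
    field = np.zeros(grid_shape + (3, 3))
    counts = np.zeros(grid_shape)
    for c, n in zip(cells, directions):
        field[tuple(c)] += np.outer(n, n)
        counts[tuple(c)] += 1
    nonempty = counts > 0
    field[nonempty] /= counts[nonempty][..., None, None]
    return field

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pos = rng.random((1000, 3))
    dirs = rng.normal(size=(1000, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    print(orientation_tensor_field(pos, dirs, (4, 4, 4),
                                   ([0, 0, 0], [1, 1, 1])).shape)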
|
|
Diffusion Tensor Visualization with Glyph Packing |
|
Gordon Kindlmann,
Carl-Fredrik Westin
|
|
Pages: 1329-1336 |
|
doi>10.1109/TVCG.2006.134 |
|
Available formats:
Publisher Site
|
|
A
common goal of multivariate visualization is to enable data inspection
at discrete points, while also illustrating the larger-scale continuous
structures. In diffusion tensor visualization, glyphs are typically used
to meet the first goal, and methods such as texture synthesis or fiber
tractography can address the second. We adapt particle systems
originally developed for surface modeling and anisotropic mesh
generation to enhance the utility of glyph-based tensor visualizations.
By carefully distributing glyphs throughout the field (either on a
slice, or in the volume) into a dense packing, using potential energy
profiles shaped by the local tensor value, we remove undue visual
emphasis of the regular sampling grid of the data, and the underlying
continuous features become more apparent. The method is demonstrated on
a DT-MRI scan of a patient with a brain tumor.
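
A heavily simplified sketch of the packing idea (not the paper's
potential energy profile): particles repel each other according to a
distance measured in a frame warped by the local tensor, so their
equilibrium spacing roughly follows the glyph shape.

# Sketch of one relaxation step with a tensor-warped repulsion, for intuition
# only; the paper's energy profile and optimization are not reproduced.
import numpy as np

def relax_step(positions, tensor_at, step=0.1, cutoff=1.5):
    # positions: (n, d) array; tensor_at(p) returns a d x d SPD glyph tensor
    new_positions = positions.copy()
    for i, p in enumerate(positions):
        warp = np.linalg.inv(tensor_at(p))        # maps glyph ellipse to unit ball
        force = np.zeros(p.shape)
        for j, q in enumerate(positions):
            if i == j:
                continue
            d = p - q
            r = np.linalg.norm(warp @ d)          # anisotropic distance
            if 0.0 < r < cutoff:
                force += (cutoff - r) * d / (np.linalg.norm(d) * r)
        new_positions[i] = p + step * force
    return new_positions

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    pts = rng.random((50, 2)) * 4.0
    tensor = lambda p: np.diag([1.0, 0.3])        # uniformly anisotropic field
    for _ in range(20):
        pts = relax_step(pts, tensor)
    print(pts.shape)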
|
|
Extensions of the Zwart-Powell Box Spline for Volumetric Data Reconstruction on the Cartesian Lattice |
|
Alireza Entezari,
Torsten Moller
|
|
Pages: 1337-1344 |
|
doi>10.1109/TVCG.2006.141 |
|
Available formats:
Publisher Site
|
|
In
this article we propose a box spline and its variants for
reconstructing volumetric data sampled on the Cartesian lattice. In
particular we present a tri-variate box spline reconstruction kernel
that is superior to tensor product reconstruction schemes in terms of
recovering the proper Cartesian spectrum of the underlying function.
This box spline produces a $C^2$ reconstruction that can be considered
as a three dimensional extension of the well known Zwart-Powell element
in 2D. While its smoothness and approximation power are equivalent to
those of the tri-cubic B-spline, we illustrate the superiority of this
reconstruction on functions sampled on the Cartesian lattice and
contrast it to tensor product B-splines. Our construction is validated
through a Fourier domain analysis of the reconstruction behavior of this
box spline. Moreover, we present a stable method for evaluation of this
box spline by means of a decomposition. Through a convolution, this
decomposition reduces the problem to evaluation of a four directional
box spline that we previously published in its explicit closed form [8].
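
For background only (standard box spline facts, not the paper's
trivariate construction, whose direction set is not reproduced here): a
centered box spline with direction set $\Xi$ has the Fourier transform

\hat{M}_{\Xi}(\omega) \;=\; \prod_{\xi \in \Xi}
\frac{\sin(\langle \xi, \omega \rangle / 2)}{\langle \xi, \omega \rangle / 2},

and the 2D Zwart-Powell element mentioned above is commonly given as the
box spline on the four directions $(1,0)$, $(0,1)$, $(1,1)$, $(-1,1)$.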
|
|
A Generic and Scalable Pipeline for GPU Tetrahedral Grid Rendering |
|
Joachim Georgii,
Rudiger Westermann
|
|
Pages: 1345-1352 |
|
doi>10.1109/TVCG.2006.110 |
|
Available formats:
Publisher Site
|
|
Recent
advances in algorithms and graphics hardware have opened the
possibility to render tetrahedral grids at interactive rates on
commodity PCs. This paper extends this work by presenting a direct
volume rendering method for such grids which supports both current and
upcoming graphics hardware architectures, large and deformable grids, as
well as different rendering options. At the core of our method is the
idea to perform the sampling of tetrahedral elements along the view rays
entirely in local barycentric coordinates. Sampling then requires
minimal GPU memory and texture access operations, and it maps
efficiently onto a feed-forward pipeline of multiple stages performing
computation and geometry construction. We propose to spawn rendered
elements from one single vertex. This makes the method amenable to
upcoming Direct3D 10 graphics hardware, which allows geometry to be
created on the GPU. By only modifying the algorithm slightly, it can be
used to render per-pixel iso-surfaces and to perform tetrahedral cell
projection. As our method requires neither pre-processing nor an
intermediate grid representation, it can efficiently deal with dynamic
and large 3D meshes.
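
A hedged CPU sketch of the basic per-sample operation (the paper does
this on the GPU, along view rays, within a multi-stage pipeline):
express a sample point in the barycentric coordinates of its tetrahedron
and interpolate the per-vertex scalars with those weights.

# Barycentric coordinates of a point inside a tetrahedron, then linear
# interpolation of the per-vertex scalar values.
import numpy as np

def barycentric(tet_vertices, point):
    # tet_vertices: 4x3 array, point: length-3 array
    v0 = tet_vertices[0]
    m = (tet_vertices[1:] - v0).T                  # 3x3 edge matrix
    b1, b2, b3 = np.linalg.solve(m, point - v0)
    return np.array([1.0 - b1 - b2 - b3, b1, b2, b3])

def sample_scalar(tet_vertices, vertex_values, point):
    w = barycentric(tet_vertices, point)
    return float(w @ vertex_values)                # linear interpolation

if __name__ == "__main__":
    tet = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
    values = np.array([0.0, 1.0, 2.0, 3.0])
    print(sample_scalar(tet, values, np.array([0.25, 0.25, 0.25])))  # 1.5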
|
|
A Spectral Analysis of Function Composition and its Implications for Sampling in Direct Volume Visualization |
|
Steven Bergner,
Torsten Moller,
Daniel Weiskopf,
David J. Muraki
|
|
Pages: 1353-1360 |
|
doi>10.1109/TVCG.2006.113 |
|
Available formats:
Publisher Site
|
|
In
this paper we investigate the effects of function composition in the
form g(f(x)) = h(x) by means of a spectral analysis of h. We decompose
the spectral description of h(x) into a scalar product of the spectral
description of g(x) and a term that solely depends on f(x) and that is
independent of g(x). We then use the method of stationary phase to
derive the essential maximum frequency of g(f(x)) bounding the main
portion of the energy of its spectrum. This limit is the product of the
maximum frequency of g(x) and the maximum derivative of f(x). This leads
to a proper sampling of the composition h of the two functions g and f.
We apply our theoretical results to a fundamental open problem in
volume rendering---the proper sampling of the rendering integral after
the application of a transfer function. In particular, we demonstrate
how the sampling criterion can be incorporated in adaptive ray
integration, visualization with multi-dimensional transfer functions,
and pre-integrated volume rendering.
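
Written out as a formula (a restatement of the bound described above;
$\nu_g$ denotes the essential maximum frequency of the transfer function
$g$ and $f$ is the scalar field along the ray, notation introduced
here):

\nu_{g \circ f} \;\le\; \nu_g \cdot \max_x |f'(x)|,
\qquad\text{so a sampling distance}\qquad
\Delta x \;\le\; \frac{1}{2\, \nu_g \max_x |f'(x)|}

suffices for Nyquist-style sampling of $h(x) = g(f(x))$.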
|
|
Vis/InfoVis 2006 back matter |
|
Page: visback |
|
doi>10.1109/TVCG.2006.190 |
|
Available formats:
Publisher Site
|
|
The back matter to this issue contains the cover image credits and the author index.
|