PrePages
Pages: i-xxv
doi: 10.1109/TVCG.2008.156

Prepages from Vis/InfoVis 2008

A Framework of Interaction Costs in Information Visualization
Heidi Lam
Pages: 1149-1156
doi: 10.1109/TVCG.2008.109

Interaction
cost is an important but poorly understood factor in visualization
design. We propose a framework of interaction costs inspired by Norman’s
Seven Stages of Action to facilitate study. From 484 papers, we
collected 61 interaction-related usability problems reported in 32 user
studies and placed them into our framework of seven costs: (1) Decision
costs to form goals; (2) System-power costs to form system operations;
(3) Multiple input mode costs to form physical sequences; (4)
Physical-motion costs to execute sequences; (5) Visual-cluttering costs
to perceive state; (6) View-change costs to interpret perception; (7)
State-change costs to evaluate interpretation. We also suggest ways to
narrow the gulfs of execution (2–4) and evaluation (5–7) based on
collected reports. Our framework suggests a need to consider decision
costs (1) as the gulf of goal formation.

Balloon Focus: a Seamless Multi-Focus+Context Method for Treemaps
Ying Tu, Han-Wei Shen
Pages: 1157-1164
doi: 10.1109/TVCG.2008.114

The
treemap is one of the most popular methods for visualizing hierarchical
data. When a treemap contains a large number of items, inspecting or
comparing a few selected items in a greater level of detail becomes very
challenging. In this paper, we present a seamless multi-focus and
context technique, called Balloon Focus, that allows the user to
smoothly enlarge multiple treemap items that serve as the foci, while
maintaining a stable treemap layout as the context. Our method has
several desirable features. First, this method is quite general and
hence can be used with different treemap layout algorithms. Second, as
the foci are enlarged, the relative positions among all items are
preserved. Third, the foci are placed in a way that the remaining space
is evenly distributed back to the non-focus treemap items. When Balloon
Focus maximizes the possible zoom factor for the focus items, these
features ensure that the treemap will maintain a consistent appearance
and avoid any abrupt layout changes. In our algorithm, a DAG (Directed
Acyclic Graph) is used to maintain the positional constraints, and an
elastic model is employed to govern the placement of the treemap items.
We demonstrate a treemap visualization system that integrates data
query, manual focus selection, and our novel multi-focus+context
technique, Balloon Focus, together. A user study was conducted. Results
show that with Balloon Focus, users can better perform the tasks of
comparing the values and the distribution of the foci.

Multi-Focused Geospatial Analysis Using Probes
Thomas Butkiewicz, Wenwen Dou, Zachary Wartell, William Ribarsky, Remco Chang
Pages: 1165-1172
doi: 10.1109/TVCG.2008.149

Traditional geospatial information visualizations often present views
that restrict the user to a single perspective. When zoomed out, local
trends and anomalies become suppressed and lost; when zoomed in for
local inspection, spatial awareness and comparison between regions
become limited. In our model, coordinated visualizations are integrated
within individual probe interfaces, which depict the local data in
user-defined regions-of-interest. Our probe concept can be incorporated
into a variety of geospatial visualizations to empower users with the
ability to observe, coordinate, and compare data across multiple local
regions. It is especially useful when dealing with complex simulations
or analyses where behavior in various localities differs from other
localities and from the system as a whole. We illustrate the
effectiveness of our technique over traditional interfaces by
incorporating it within three existing geospatial visualization
systems: an agent-based social simulation, a census data exploration
tool, and a 3D GIS environment for analyzing urban change over time. In
each case, the probe-based interaction enhances spatial awareness,
improves inspection and comparison capabilities, expands the range of
scopes, and facilitates collaboration among multiple users.

Distributed Cognition as a Theoretical Framework for Information Visualization
Zhicheng Liu, Nancy Nersessian, John Stasko
Pages: 1173-1180
doi: 10.1109/TVCG.2008.121

Even
though information visualization (InfoVis) research has matured in
recent years, it is generally acknowledged that the field still lacks
supporting, encompassing theories. In this paper, we argue that the
distributed cognition framework can be used to substantiate the
theoretical foundation of InfoVis. We highlight fundamental assumptions
and theoretical constructs of the distributed cognition approach, based
on the cognitive science literature and a real life scenario. We then
discuss how the distributed cognition framework can have an impact on
the research directions and methodologies we take as InfoVis
researchers. Our contributions are as follows. First, we highlight the
view that cognition is more an emergent property of interaction than a
property of the human mind. Second, we argue that a reductionist
approach to study the abstract properties of isolated human minds may
not be useful in informing InfoVis design. Finally, we propose to make
cognition an explicit research agenda, and discuss the implications for
how we perform evaluation and theory building.

EMDialog: Bringing Information Visualization into the Museum
Uta Hinrichs, Holly Schmidt, Sheelagh Carpendale
Pages: 1181-1188
doi: 10.1109/TVCG.2008.127

Digital
information displays are becoming more common in public spaces such as
museums, galleries, and libraries. However, the public nature of these
locations requires special considerations concerning the design of
information visualization in terms of visual representations and
interaction techniques. We discuss the potential for, and challenges of,
information visualization in the museum context based on our practical
experience with EMDialog, an interactive information presentation that
was part of the Emily Carr exhibition at the Glenbow Museum in Calgary.
EMDialog visualizes the diverse and multi-faceted discourse about this
Canadian artist with the goal to both inform and provoke discussion. It
provides a visual exploration environment that offers interplay
between two integrated visualizations, one for information access along
temporal, and the other along contextual dimensions. We describe the
results of an observational study we conducted at the museum that
revealed the different ways visitors approached and interacted with
EMDialog, as well as how they perceived this form of information
presentation in the museum context. Our results include the need to
present information in a manner sufficiently attractive to draw
attention, and the importance of rewarding passive observation as well
as both short- and longer-term information exploration.

Graphical Histories for Visualization: Supporting Analysis, Communication, and Evaluation
Jeffrey Heer, Jock Mackinlay, Chris Stolte, Maneesh Agrawala
Pages: 1189-1196
doi: 10.1109/TVCG.2008.137

Interactive
history tools, ranging from basic undo and redo to branching timelines
of user actions, facilitate iterative forms of interaction. In this
paper, we investigate the design of history mechanisms for information
visualization. We present a design space analysis of both architectural
and interface issues, identifying design decisions and associated
trade-offs. Based on this analysis, we contribute a design study of
graphical history tools for Tableau, a database visualization system.
These tools record and visualize interaction histories, support data
analysis and communication of findings, and contribute novel mechanisms
for presenting, managing, and exporting histories. Furthermore, we have
analyzed aggregated collections of history sessions to evaluate Tableau
usage. We describe additional tools for analyzing users’ history logs
and how they have been applied to study usage patterns in Tableau.

Who Votes For What? A Visual Query Language for Opinion Data
Geoffrey Draper, Richard Riesenfeld
Pages: 1197-1204
doi: 10.1109/TVCG.2008.187

Surveys
and opinion polls are extremely popular in the media, especially in the
months preceding a general election. However, the available tools for
analyzing poll results often require specialized training. Hence, data
analysis remains out of reach for many casual computer users. Moreover,
the visualizations used to communicate the results of surveys are
typically limited to traditional statistical graphics like bar graphs
and pie charts, both of which are fundamentally noninteractive. We
present a simple interactive visualization that allows users to
construct queries on large tabular data sets, and view the results in
real time. The results of two separate user studies suggest that our
interface lowers the learning curve for naive users, while still
providing enough analytical power to discover interesting correlations
in the data.

VisGets: Coordinated Visualizations for Web-based Information Exploration and Discovery
Marian Dörk, Sheelagh Carpendale, Christopher Collins, Carey Williamson
Pages: 1205-1212
doi: 10.1109/TVCG.2008.175

In
common Web-based search interfaces, it can be difficult to formulate
queries that simultaneously combine temporal, spatial, and topical data
filters. We investigate how coordinated visualizations can enhance
search and exploration of information on the World Wide Web by easing
the formulation of these types of queries. Drawing from visual
information seeking and exploratory search, we introduce VisGets:
interactive query visualizations of Web-based information that
operate with online information within a Web browser. VisGets provide
the information seeker with visual overviews of Web resources and offer a
way to visually filter the data. Our goal is to facilitate the
construction of dynamic search queries that combine filters from more
than one data dimension. We present a prototype information exploration
system featuring three linked VisGets (temporal, spatial, and topical),
and use it to visually explore news items from online RSS feeds.

Vispedia: Interactive Visual Exploration of Wikipedia Data via Search-Based Integration
Bryan Chan, Leslie Wu, Justin Talbot, Mike Cammarano, Pat Hanrahan
Pages: 1213-1220
doi: 10.1109/TVCG.2008.178

Wikipedia is an example of the collaborative, semi-structured data sets
emerging on the Web. These data sets have large, non-uniform schemas
that require costly data integration into structured tables before
visualization can begin. We present Vispedia, a Web-based visualization
system that reduces the cost of this data integration. Users can browse
Wikipedia, select an interesting data table, then use a search
interface to discover, integrate, and visualize additional columns of
data drawn from multiple Wikipedia articles. This interaction is
supported by a fast path search algorithm over DBpedia, a semantic
graph extracted from Wikipedia's hyperlink structure. Vispedia can also
export the augmented data tables produced for use in traditional
visualization systems. We believe that these techniques begin to
address the "long tail" of visualization by allowing a wider audience
to visualize a broader class of data. We evaluated this system in a
first-use formative lab study. Study participants were able to quickly
create effective visualizations for a diverse set of domains,
performing data integration as needed.

The Word Tree, an Interactive Visual Concordance
Martin Wattenberg, Fernanda B. Viégas
Pages: 1221-1228
doi: 10.1109/TVCG.2008.172

We
introduce the Word Tree, a new visualization and information-retrieval
technique aimed at text documents. A word tree is a graphical version of
the traditional "keyword-in-context" method, and enables rapid querying
and exploration of bodies of text. In this paper we describe the design
of the technique, along with some of the technical issues that arise in
its implementation. In addition, we discuss the results of several
months of public deployment of word trees on Many Eyes, which provides a
window onto the ways in which users obtain value from the
visualization.

HiPP: A Novel Hierarchical Point Placement Strategy and its Application to the Exploration of Document Collections
Fernando V. Paulovich, Rosane Minghim
Pages: 1229-1236
doi: 10.1109/TVCG.2008.138

Point placement strategies aim at mapping data points represented in
higher dimensions to bi-dimensional spaces and are frequently used to
visualize relationships amongst data instances. They have been valuable
tools for analysis and exploration of datasets of various kinds. Many
conventional techniques, however, do not behave well when the number of
dimensions is high, such as in the case of document collections. Later
approaches handle that shortcoming, but may cause too much clutter to
allow flexible exploration to take place. In this work we present a
novel hierarchical point placement technique that is capable of dealing
with these problems. While good grouping and separation of data with
high similarity is maintained without increasing computation cost, its
hierarchical structure lends itself both to exploration in various
levels of detail and to handling data in subsets, improving analysis
capability and also allowing manipulation of larger datasets.

Particle-based labeling: Fast point-feature labeling without obscuring other visual features
Martin Luboschik, Heidrun Schumann, Hilko Cords
Pages: 1237-1244
doi: 10.1109/TVCG.2008.152

In
many information visualization techniques, labels are an essential part
to communicate the visualized data. To preserve the expressiveness of
the visual representation, a placed label should neither occlude other
labels nor visual representatives (e.g., icons, lines) that communicate
crucial information. Optimal, non-overlapping labeling is an NP-hard
problem. Thus, only a few approaches achieve a fast non-overlapping
labeling in highly interactive scenarios like information visualization.
These approaches generally target the point-feature label placement
(PFLP) problem, solving only label-label conflicts.
This paper presents a new, fast, solid and flexible 2D labeling approach
for the PFLP problem that additionally respects other visual elements
and the visual extent of labeled features. The results (number of placed
labels, processing time) of our particle-based method compare favorably
to those of existing techniques. Although the esthetic quality of
non-real-time approaches may not be achieved with our method, it
complies with practical demands and thus supports the interactive
exploration of information spaces. In contrast to the known adjacent
techniques, the flexibility of our technique enables labeling of dense
point clouds by the use of non-occluding distant labels. Our approach
is independent of the underlying
visualization technique, which enables us to demonstrate the application
of our labeling method within different information visualization
scenarios.

Stacked Graphs – Geometry & Aesthetics
Lee Byron, Martin Wattenberg
Pages: 1245-1252
doi: 10.1109/TVCG.2008.166

In
February 2008, the New York Times published an unusual chart of box
office revenues for 7500 movies over 21 years. The chart was based on a
similar visualization, developed by the first author, that displayed
trends in music listening. This paper describes the design decisions and
algorithms behind these graphics, and discusses the reaction on the
Web. We suggest that this type of complex layered graph is effective for
displaying large data sets to a mass audience. We provide a
mathematical analysis of how this layered graph relates to traditional
stacked graphs and to techniques such as ThemeRiver, showing how each
method is optimizing a different “energy function”. Finally, we discuss
techniques for coloring and ordering the layers of such graphs.
Throughout the paper, we emphasize the interplay between considerations
of aesthetics and legibility.

Cerebral: Visualizing Multiple Experimental Conditions on a Graph with Biological Context
Aaron Barsky, Tamara Munzner, Jennifer Gardy, Robert Kincaid
Pages: 1253-1260
doi: 10.1109/TVCG.2008.117

Systems biologists use interaction graphs to model the behavior of
biological systems at the molecular level. In an iterative process,
such biologists observe the reactions of living cells under various
experimental conditions, view the results in the context of the
interaction graph, and then propose changes to the graph model. These
graphs serve as a form of dynamic knowledge representation of the
biological system being studied and evolve as new insight is gained
from the experimental data. While numerous graph layout and drawing
packages are available, these tools did not fully meet the needs of our
immunologist collaborators. In this paper, we describe the data display
needs of these immunologists and translate them into design decisions.
These decisions led us to create Cerebral, a system that uses a
biologically guided graph layout and incorporates experimental data
directly into the graph display. Small multiple views of different
experimental conditions and a data-driven parallel coordinates view
enable correlations between experimental conditions to be analyzed at
the same time that the data is viewed in the graph context. This
combination of coordinated views allows the biologist to view the data
from many different perspectives simultaneously. To illustrate the
typical analysis tasks performed, we analyze two datasets using
Cerebral. Based on feedback from our collaborators we conclude that
Cerebral is a valuable tool for analyzing experimental data in the
context of an interaction graph model.

The Shaping of Information by Visual Metaphors
Caroline Ziemkiewicz, Robert Kosara
Pages: 1269-1276
doi: 10.1109/TVCG.2008.171

The
nature of an information visualization can be considered to lie in the
visual metaphors it uses to structure information. The process of
understanding a visualization therefore involves an interaction between
these external visual metaphors and the user's internal knowledge
representations. To investigate this claim, we conducted an experiment
to test the effects of visual metaphor and verbal metaphor on the
understanding of tree visualizations. Participants answered simple data
comprehension questions while viewing either a treemap or a node-link
diagram. Questions were worded to reflect a verbal metaphor that was
either compatible or incompatible with the visualization a participant
was using. The results suggest that the visual metaphor indeed affects
how a user derives information from a visualization. Additionally, we
found that the degree to which a user is affected by the metaphor is
strongly correlated with the user's ability to answer task questions
correctly. These findings are a first step towards illuminating how
visual metaphors shape user understanding, and have significant
implications for the evaluation, application, and theory of
visualization.

Viz-A-Vis: Toward Visualizing Video through Computer Vision
Mario Romero, Jay Summet, John Stasko, Gregory Abowd
Pages: 1261-1268
doi: 10.1109/TVCG.2008.185

In
the established procedural model of information visualization, the
first operation is to transform raw data into data tables [1]. The
transforms typically include abstractions that aggregate and segment
relevant data and are usually defined by a human user or programmer.
The theme of this paper is that for video, data transforms should be
supported by low level computer vision. High level reasoning still
resides in the human analyst, while part of the low level perception is
handled by the computer. To illustrate this approach, we present
Viz-A-Vis, an overhead video capture and access system for activity
analysis in natural settings over variable periods of time. Overhead
video provides rich opportunities for long-term behavioral and occupancy
analysis, but it poses considerable challenges. We present initial
steps addressing two challenges. First, overhead video generates
overwhelmingly large volumes of video impractical to analyze manually.
Second, automatic video analysis remains an open problem for computer
vision.

Geometry-Based Edge Clustering for Graph Visualization
Weiwei Cui, Hong Zhou, Huamin Qu, Pak Chung Wong, Xiaoming Li
Pages: 1277-1284
doi: 10.1109/TVCG.2008.135

Graphs
have been widely used to model relationships among data. For large
graphs, excessive edge crossings make the display visually cluttered and
thus difficult to explore. In this paper, we propose a novel
geometry-based edge-clustering framework that can group edges into
bundles to reduce the overall edge crossings. Our method uses a control
mesh to guide the edge-clustering process; edge bundles can be formed by
forcing all edges to pass through some control points on the mesh. The
control mesh can be generated at different levels of detail either
manually or automatically based on underlying graph patterns. Users can
further interact with the edge-clustering results through several
advanced visualization techniques such as color and opacity enhancement.
Compared with other edge-clustering methods, our approach is intuitive,
flexible, and efficient. The experiments on some large graphs demonstrate
the effectiveness of our method.

On the Visualization of Social and other Scale-Free Networks
Yuntao Jia, Jared Hoberock, Michael Garland, John Hart
Pages: 1285-1292
doi: 10.1109/TVCG.2008.151

This
paper proposes novel methods for visualizing specifically the large
power-law graphs that arise in sociology and the sciences. In such cases
a large portion of edges can be shown to be less important and removed
while preserving component connectedness and other features (e.g.
cliques) to more clearly reveal the network’s underlying connection
pathways. This simplification approach deterministically filters
(instead of clustering) the graph to retain important node and edge
semantics, and works both automatically and interactively. The improved
graph filtering and layout is combined with a novel computer graphics
anisotropic shading of the dense crisscrossing array of edges to yield a
full social network and scale-free graph visualization system. Both
quantitative analysis and visual results demonstrate the effectiveness
of this approach.

Exploration of Networks using Overview+Detail with Constraint-based Cooperative Layout
Tim Dwyer, Kim Marriott, Falk Schreiber, Peter Stuckey, Michael Woodward, Michael Wybrow
Pages: 1293-1300
doi: 10.1109/TVCG.2008.130

A standard approach to large network visualization is to provide an
overview of the network and a detailed view of a small component of the
graph centred around a focal node. The user explores the network by
changing the focal node in the detailed view or by changing the level
of detail of a node or cluster. For scalability, fast force-based
layout algorithms are used for the overview and the detailed view.
However, using the same layout algorithm in both views is problematic
since layout for the detailed view has different requirements to that
in the overview. Here we present a model in which constrained graph
layout algorithms are used for layout in the detailed view. This means
the detailed view has high-quality layout including sophisticated edge
routing and is customisable by the user, who can add placement
constraints on the layout. Scalability is still ensured since the
slower layout techniques are only applied to the small subgraph shown
in the detailed view. The main technical innovations are techniques to
ensure that the overview and detailed view remain synchronized, and
modifications to constrained graph layout algorithms to support smooth,
stable layout. The key innovations supporting stability are new dynamic
graph layout algorithms that preserve the topology or structure of the
network when the user changes the focus node or the level of detail by
in situ semantic zooming. We have built a prototype tool and
demonstrate its use in two application domains, UML class diagrams and
biological networks.

Rapid Graph Layout Using Space Filling Curves
Chris Muelder, Kwan-Liu Ma
Pages: 1301-1308
doi: 10.1109/TVCG.2008.158

Network
data frequently arises in a wide variety of fields, and node-link
diagrams are a very natural and intuitive representation of such data.
In order for a node-link diagram to be effective, the nodes must be
arranged well on the screen. While many graph layout algorithms exist
for this purpose, they often have limitations such as high computational
complexity or node colocation. This paper proposes a new approach to
graph layout through the use of space filling curves which is very fast
and guarantees that there will be no nodes that are colocated. The
resulting layout is also aesthetic and satisfies several criteria for
graph layout effectiveness.

Evaluating the Use of Data Transformation for Information Visualization
Zhen Wen, Michelle Zhou
Pages: 1309-1316
doi: 10.1109/TVCG.2008.129

Data
transformation, the process of preparing raw data for effective
visualization, is one of the key challenges in information
visualization. Although researchers have developed many data
transformation techniques, there is little empirical study of the
general impact of data transformation on visualization. Without such
study, it is difficult to systematically decide when and which data
transformation techniques are needed. We thus have designed and
conducted a two-part empirical study that examines how the use of common
data transformation techniques impacts visualization quality, which in
turn affects user task performance. Our first experiment studies the
impact of data transformation on user performance in single-step,
typical visual analytic tasks. The second experiment assesses the impact
of data transformation in multi-step analytic tasks. Our results
quantify the benefits of data transformation in both experiments. More
importantly, our analyses reveal that (1) the benefits of data
transformation vary significantly by task and by visualization, and (2)
the use of data transformation depends on a user’s interaction context.
Based on our findings, we present a set of design recommendations that
help guide the development and use of data transformation techniques.
|
|
Improving the Readability of Clustered Social Networks using Node Duplication |
|
Nathalie Henry,
Anastasia Bezerianos,
Jean-Daniel Fekete
|
|
Pages: 1317-1324 |
|
doi>10.1109/TVCG.2008.141 |
|
Exploring
communities is an important task in social network analysis. Such
communities are currently identified using clustering methods to group
actors. This approach often leads to actors belonging to one and only
one cluster, whereas in real life a person can belong to several
communities. As a solution we propose duplicating actors in social
networks and discuss the potential impact of such a move. Several visual
duplication designs are discussed and a controlled experiment comparing
network visualization with and without duplication is performed, using 6
tasks that are important for graph readability and visual
interpretation of social networks. We show that in our experiment,
duplications significantly improve community-related tasks but sometimes
interfere with other graph readability tasks. Finally, we propose a set
of guidelines for deciding when to duplicate actors and choosing
candidates for duplication, and alternative ways to render them in
social network representations.
|
|
Effectiveness of Animation in Trend Visualization |
|
George Robertson,
Roland Fernandez,
Danyel Fisher,
Bongshin Lee,
John Stasko
|
|
Pages: 1325-1332 |
|
doi>10.1109/TVCG.2008.125 |
|
Animation
has been used to show trends in multi-dimensional data. This technique
has recently gained new prominence for presentations, most notably with
Gapminder Trendalyzer. In Trendalyzer, animation together with
interesting data and an engaging presenter helps the audience understand
the results of an analysis of the data. It is less clear whether trend
animation is effective for analysis. This paper proposes two alternative
trend visualizations that use static depictions of trends: one which
shows traces of all trends overlaid simultaneously in one display and a
second that uses a small multiples display to show the trend traces
side-by-side. The paper evaluates the three visualizations for both
analysis and presentation. Results indicate that trend animation can be
challenging to use even for presentations; while it is the fastest
technique for presentation and participants find it enjoyable and
exciting, it does lead to many participant errors. Animation is the
least effective form for analysis; both static depictions of trends are
significantly faster than animation, and the small multiples display is
more accurate.
|
|
Perceptual Organization in User-Generated Graph Layouts |
|
Frank van Ham,
Bernice Rogowitz
|
|
Pages: 1333-1339 |
|
doi>10.1109/TVCG.2008.155 |
|
Many
graph layout algorithms optimize visual characteristics to achieve
useful representations. Implicitly, their goal is to create visual
representations that are more intuitive to human observers. In this
paper, we asked users to explicitly manipulate nodes in a network
diagram to create layouts that they felt best captured the relationships
in the data. This allowed us to measure organizational behavior
directly and to evaluate the perceptual importance of particular
visual features, such as edge crossings and edge-length
uniformity. We also manipulated the interior structure of the node
relationships by designing data sets that contained clusters, that is,
sets of nodes that are strongly interconnected. By varying the degree to
which these clusters were “masked” by extraneous edges we were able to
measure observers’ sensitivity to the existence of clusters and how they
revealed them in the network diagram. Based on these measurements we
found that observers are able to recover cluster structure, that the
distance between clusters is inversely related to the strength of the
clustering, and that users exhibit the tendency to use edges to visually
delineate perceptual groups. These results demonstrate the role of
perceptual organization in representing graph data and provide concrete
recommendations for graph layout algorithms.
|
|
Interactive Visual Analysis of Set-Typed Data |
|
Wolfgang Freiler,
Kresimir Matkovic,
Helwig Hauser
|
|
Pages: 1340-1347 |
|
doi>10.1109/TVCG.2008.144 |
|
While
it is quite typical to deal with attributes of different data types in
the visualization of heterogeneous and multivariate datasets, most
existing techniques still focus on the most usual data types such as
numerical attributes or strings. In this paper we present a new approach
to the interactive visual exploration and analysis of data that
contains attributes which are of set type. A set-typed attribute of a
data item – like one cell in a table – has a list of n>=0 elements as
its value. We present the set’o’gram as a new visualization approach to
represent data of set type and to enable interactive visual exploration
and analysis. We also demonstrate how this approach is capable of
helping with datasets that have a larger number of dimensions (a dozen
or more), especially in the context of categorical data. To illustrate
the effectiveness of our approach, we present the interactive visual
analysis of a CRM dataset with data from a questionnaire on the
education and shopping habits of about 90000 people.
|
|
Spatially Ordered Treemaps |
|
Jo Wood,
Jason Dykes
|
|
Pages: 1348-1355 |
|
doi>10.1109/TVCG.2008.165 |
|
Existing
treemap layout algorithms suffer to some extent from poor or
inconsistent mappings between data order and visual ordering in their
representation, reducing their cognitive plausibility. While attempts
have been made to quantify this mismatch, and algorithms proposed to
minimize inconsistency, solutions provided tend to concentrate on
one-dimensional ordering. We propose extensions to the existing
squarified layout algorithm that exploit the two-dimensional arrangement
of treemap nodes more effectively. Our proposed spatial squarified
layout algorithm provides a more consistent arrangement of nodes while
maintaining low aspect ratios. It is suitable for the arrangement of
data with a geographic component and can be used to create tessellated
cartograms for geovisualization. Locational consistency is measured and
visualized and a number of layout algorithms are compared. CIELab color
space and displacement vector overlays are used to assess and emphasize
the spatial layout of treemap nodes. A case study involving locations
of tagged photographs in the Flickr database is described.
|
|
Visualizing Incomplete and Partially Ranked Data |
|
Paul Kidwell,
Guy Lebanon,
William Cleveland
|
|
Pages: 1356-1363 |
|
doi>10.1109/TVCG.2008.181 |
|
Ranking
data, which result from m raters ranking n items, are difficult to
visualize due to their discrete algebraic structure, and the
computational difficulties associated with them when n is large. This
problem becomes worse when raters provide tied rankings or not all items
are ranked. We develop an approach for the visualization of ranking data
for large n which is intuitive, easy to use, and computationally
efficient. The approach overcomes the structural and computational
difficulties by utilizing a natural measure of dissimilarity for raters,
and projecting the raters into a low-dimensional vector space where they
are viewed. The visualization techniques are demonstrated using voting
data, jokes, and movie preferences.
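For fully ranked data, one such natural dissimilarity is Kendall's tau distance: the number of item pairs that two raters order differently (the paper's measure also accommodates ties and unranked items, which this sketch does not). The rater-by-rater distance matrix can then be projected to a low-dimensional view with, e.g., multidimensional scaling. A minimal sketch of the distance alone:

```python
from itertools import combinations

def kendall_distance(r1, r2):
    """Count item pairs that the two raters order differently.
    r1, r2 map item -> rank (1 = most preferred), over the same items."""
    return sum(1 for a, b in combinations(sorted(r1), 2)
               if (r1[a] - r1[b]) * (r2[a] - r2[b]) < 0)
```

For n items the distance ranges from 0 (identical rankings) to n(n-1)/2 (fully reversed), which makes it easy to normalize before projection.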
|
|
Texture-based Transfer Functions for Direct Volume Rendering |
|
Jesus J. Caban,
Penny Rheingans
|
|
Pages: 1364-1371 |
|
doi>10.1109/TVCG.2008.169 |
|
Visualization
of volumetric data faces the difficult task of finding effective
parameters for the transfer functions. Those parameters can determine
the effectiveness and accuracy of the visualization. Frequently,
volumetric data includes multiple structures and features that need to
be differentiated. However, if those features have the same intensity
and gradient values, existing transfer functions are limited in
effectively illustrating those similar features with different rendering
properties. We introduce texture-based transfer functions for direct
volume rendering. In our approach, the voxel’s resulting opacity and
color are based on local textural properties rather than individual
intensity values. For example, if the intensity values of the vessels
are similar to those on the boundary of the lungs, our texture-based
transfer function will analyze the textural properties in those regions
and color them differently even though they have the same intensity
values in the volume. The use of texture-based transfer functions has
several benefits. First, structures and features with the same intensity
and gradient values can be automatically visualized with different
rendering properties. Second, segmentation or prior knowledge of the
specific features within the volume is not required for classifying
these features differently. Third, textural metrics can be combined
and/or maximized to capture and better differentiate similar structures.
We demonstrate our texture-based transfer function for direct volume
rendering with synthetic and real-world medical data to show the
strength of our technique.
|
|
Volume MLS Ray Casting |
|
Christian Ledergerber,
Gaël Guennebaud,
Miriah Meyer,
Moritz Bächer,
Hanspeter Pfister
|
|
Pages: 1372-1379 |
|
doi>10.1109/TVCG.2008.186 |
|
The
method of Moving Least Squares (MLS) is a popular framework for
reconstructing continuous functions from scattered data due to its rich
mathematical properties and well-understood theoretical foundations.
This paper applies MLS to volume rendering, providing a unified
mathematical framework for ray casting of scalar data stored over
regular as well as irregular grids. We use the MLS reconstruction to
render smooth isosurfaces and to compute accurate derivatives for
high-quality shading effects. We also present a novel, adaptive
preintegration scheme to improve the efficiency of the ray casting
algorithm by reducing the overall number of function evaluations, and an
efficient implementation of our framework exploiting modern graphics
hardware. The resulting system enables high-quality volume integration
and shaded isosurface rendering for regular and irregular volume data.
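As a 1-D illustration of the MLS reconstruction the paper builds on (our simplification; the paper works in 3-D with richer bases and compactly supported weights), the value at a point x comes from a linear polynomial fitted by least squares with weights that re-center on every evaluation point:

```python
import math

def mls_eval(x, pts, vals, h=1.0):
    """Moving least squares in 1-D: fit a local linear polynomial
    a + b*t to the scattered samples (pts, vals), weighted by a
    Gaussian centered at x, and return the polynomial's value at x."""
    # accumulate the weighted normal equations for the basis [1, t]
    s0 = s1 = s2 = r0 = r1 = 0.0
    for t, f in zip(pts, vals):
        w = math.exp(-((t - x) / h) ** 2)
        s0 += w
        s1 += w * t
        s2 += w * t * t
        r0 += w * f
        r1 += w * t * f
    det = s0 * s2 - s1 * s1  # nonzero when sample points are distinct
    a = (s2 * r0 - s1 * r1) / det
    b = (s0 * r1 - s1 * r0) / det
    return a + b * x
```

A useful sanity check is linear reproduction: if the samples lie exactly on a line, the MLS reconstruction returns that line regardless of the weights, which is what makes the derivatives well behaved for shading.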
|
|
Size-based Transfer Functions: A New Volume Exploration Technique |
|
Carlos Correa,
Kwan-Liu Ma
|
|
Pages: 1380-1387 |
|
doi>10.1109/TVCG.2008.162 |
|
The
visualization of complex 3D images remains a challenge, a fact that is
magnified by the difficulty of classifying or segmenting volume data. In this
paper, we introduce size-based transfer functions, which map the local
scale of features to color and opacity. Features in a data set with
similar or identical scalar values can be classified based on their
relative size. We achieve this with the use of scale fields, which are
3D fields that represent the relative size of the local feature at each
voxel. We present a mechanism for obtaining these scale fields at
interactive rates, through a continuous scale-space analysis and a set
of detection filters. Through a number of examples, we show that
size-based transfer functions can improve classification and enhance
volume rendering techniques, such as maximum intensity projection. The
ability to classify objects based on local size at interactive rates
proves to be a powerful method for complex data exploration.
|
|
Direct Volume Editing |
|
Kai Bürger,
Jens Krüger,
Rüdiger Westermann
|
|
Pages: 1388-1395 |
|
doi>10.1109/TVCG.2008.120 |
|
In
this work we present basic methodology for interactive volume editing
on GPUs, and we demonstrate the use of these methods to achieve a number
of different effects. We present fast techniques to modify the
appearance and structure of volumetric scalar fields given on Cartesian
grids. Similar to 2D circular brushes as used in surface painting,
present 3D spherical brushes for intuitive coloring of particular
structures in such fields. This paint metaphor is extended to allow the
user to change the data itself, and the use of this functionality for
interactive structure isolation, hole filling, and artefact removal is
demonstrated. Building on previous work in the field we introduce
high-resolution selection volumes, which can be seen as a
resolution-based focus+context metaphor. By utilizing such volumes we
present a novel approach to interactive volume editing at sub-voxel
accuracy. Finally, we introduce a fast technique to paste textures onto
iso-surfaces in a 3D scalar field. Since the texture resolution is
independent of the volume resolution, this technique allows
structure-aligned textures containing appearance properties or textual
information to be used for volume augmentation and annotation.
|
|
Smoke Surfaces: An Interactive Flow Visualization Technique Inspired by Real-World Flow Experiments |
|
Wolfram von Funck,
Tino Weinkauf,
Holger Theisel,
Hans-Peter Seidel
|
|
Pages: 1396-1403 |
|
doi>10.1109/TVCG.2008.163 |
|
Smoke
rendering is a standard technique for flow visualization. Most
approaches are based on a volumetric, particle-based, or image-based
representation of the smoke. This paper introduces an alternative
representation of smoke structures: as semi-transparent streak surfaces.
In order to make streak surface integration fast enough for interactive
applications, we avoid expensive adaptive retriangulations by coupling
the opacity of the triangles to their shapes. This way, the surface
shows a smoke-like look even in rather turbulent areas. Furthermore, we
show modifications of the approach to mimic smoke nozzles, wool tufts,
and time surfaces. The technique is applied to a number of test data
sets.
|
|
Generation of Accurate Integral Surfaces in Time-Dependent Vector Fields |
|
Christoph Garth,
Han Krishnan,
Xavier Tricoche,
Tom Tricoche,
Kenneth I. Joy
|
|
Pages: 1404-1411 |
|
doi>10.1109/TVCG.2008.133 |
|
We
present a novel approach for the direct computation of integral
surfaces in time-dependent vector fields. As opposed to previous work,
which we analyze in detail, our approach is based on a separation of
integral surface computation into two stages: surface approximation and
generation of a graphical representation. This allows us to overcome
several limitations of existing techniques. We first describe an
algorithm for surface integration that approximates a series of time
lines using iterative refinement and computes a skeleton of the integral
surface. In a second step, we generate a well-conditioned
triangulation. Our approach allows a highly accurate treatment of very
large time-varying vector fields in an efficient, streaming fashion. We
examine the properties of the presented methods on several example
datasets and perform a numerical study of their correctness and accuracy.
Finally, we investigate some visualization aspects of integral surfaces.
|
|
Visualizing Particle/Flow Structure Interactions in the Small Bronchial Tubes |
|
Bela Soni,
David Thompson,
Raghu Machiraju
|
|
Pages: 1412-1419 |
|
doi>10.1109/TVCG.2008.183 |
|
Particle
deposition in the small bronchial tubes (generations six through
twelve) is strongly influenced by the vortex-dominated secondary flows
that are induced by axial curvature of the tubes. In this paper, we
employ particle destination maps in conjunction with two-dimensional,
finite-time Lyapunov exponent maps to illustrate how the trajectories of
finite-mass particles are influenced by the presence of vortices. We
consider two three-generation bronchial tube models: a planar,
asymmetric geometry and a non-planar, asymmetric geometry. Our
visualizations demonstrate that these techniques, coupled with
judiciously seeded particle trajectories, are effective tools for
studying particle/flow structure interactions.
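The finite-time Lyapunov exponent map used here measures, at each seed point, the exponential rate at which nearby particles separate under the flow map Φ over time T: FTLE = ln(σ_max(∇Φ)) / |T|, with σ_max the largest singular value. A self-contained 2-D sketch on an analytic saddle flow (illustrative only; the paper evaluates flow maps from simulated bronchial flow):

```python
import math

def ftle(flow_map, x, y, T, eps=1e-4):
    """Finite-time Lyapunov exponent at (x, y): the largest stretching
    rate of the flow map over time T, from a finite-difference Jacobian."""
    # Jacobian F of the flow map by central differences
    x1, y1 = flow_map(x + eps, y); x0, y0 = flow_map(x - eps, y)
    x3, y3 = flow_map(x, y + eps); x2, y2 = flow_map(x, y - eps)
    a = (x1 - x0) / (2 * eps); b = (x3 - x2) / (2 * eps)
    c = (y1 - y0) / (2 * eps); d = (y3 - y2) / (2 * eps)
    # Cauchy-Green tensor C = F^T F (symmetric 2x2)
    c11 = a * a + c * c
    c12 = a * b + c * d
    c22 = b * b + d * d
    # largest eigenvalue of C via the quadratic formula
    tr, det = c11 + c22, c11 * c22 - c12 * c12
    lam = tr / 2 + math.sqrt(max(tr * tr / 4 - det, 0.0))
    return math.log(math.sqrt(lam)) / abs(T)

# analytic saddle flow x' = x, y' = -y, whose flow map over time T is
# (x, y) -> (x e^T, y e^{-T}); its FTLE is exactly 1 everywhere
T = 2.0
phi = lambda x, y: (x * math.exp(T), y * math.exp(-T))
```

Evaluating `ftle` on a grid of seed points yields the FTLE map; ridges of high FTLE mark the transport barriers that organize where particles can and cannot go.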
|
|
Interactive Visualization and Analysis of Transitional Flow |
|
Gregory P. Johnson,
Victor M. Calo,
Kelly P. Gaither
|
|
Pages: 1420-1427 |
|
doi>10.1109/TVCG.2008.146 |
|
A
stand-alone visualization application has been developed by a
multi-disciplinary, collaborative team with the sole purpose of creating
an interactive exploration environment allowing turbulent flow
researchers to experiment and validate hypotheses using visualization.
This system has specific optimizations made in data management, caching
computations, and visualization allowing for the interactive exploration
of datasets on the order of 1TB in size. Using this application, the
user (co-author Calo) is able to interactively visualize and analyze all
regions of a transitional flow volume, including the laminar,
transitional and fully turbulent regions. The underlying goal of the
visualizations produced from these transitional flow simulations is to
localize turbulent spots in the laminar region of the boundary layer,
determine under which conditions they form, and follow their evolution.
The initiation of turbulent spots, which ultimately lead to full
turbulence, was located via a proposed feature detection condition and
verified by experimental results. The conditions under which these
turbulent spots form and coalesce are validated and presented.
|
|
Continuous Scatterplots |
|
Sven Bachthaler,
Daniel Weiskopf
|
|
Pages: 1428-1435 |
|
doi>10.1109/TVCG.2008.119 |
|
Scatterplots
are a well-established means of visualizing discrete data values with two
data variables as a collection of discrete points. We aim at
generalizing the concept of scatterplots to the visualization of
spatially continuous input data by a continuous and dense plot. An
example of a continuous input field is data defined on an n-D spatial
grid with respective interpolation or reconstruction of in-between
values. We propose a rigorous, accurate, and generic mathematical model
of continuous scatterplots that considers an arbitrary density defined
on an input field on an n-D domain and that maps this density to m-D
scatterplots. Special cases are derived from this generic model and
discussed in detail: scatterplots where the n-D spatial domain and the
m-D data attribute domain have identical dimension, 1-D scatterplots as a
way to define continuous histograms, and 2-D scatterplots of data on
3-D spatial grids. We show how continuous histograms are related to
traditional discrete histograms and to the histograms of isosurface
statistics. Based on the mathematical model of continuous scatterplots,
respective visualization algorithms are derived, in particular for 2-D
scatterplots of data from 3-D tetrahedral grids. For several
visualization tasks, we show the applicability of continuous
scatterplots. Since continuous scatterplots not only sample data at
grid points but also interpolate data values within cells, a dense
and complete visualization of the data set is achieved that scales well
with increasing data set size. Especially for irregular grids with
varying cell size, improved results are obtained when compared to
conventional scatterplots. Therefore, continuous scatterplots are a
suitable extension of a statistics visualization technique to be applied
to typical data from scientific computation.
|
|
Extensions of Parallel Coordinates for Interactive Exploration of Large Multi-Timepoint Data Sets |
|
Jorik Blaas,
Charl Botha,
Frits Post
|
|
Pages: 1436-1443 |
|
doi>10.1109/TVCG.2008.131 |
|
Parallel
coordinate plots (PCPs) are commonly used in information visualization
to provide insight into multi-variate data. These plots help to spot
correlations between variables. PCPs have been successfully applied to
unstructured datasets up to a few millions of points. In this paper, we
present techniques to enhance the usability of PCPs for the exploration
of large, multi-timepoint volumetric data sets, containing tens of
millions of points per timestep. The main difficulties that arise when
applying PCPs to large numbers of data points are visual clutter and
slow performance, making interactive exploration infeasible. Moreover,
the spatial context of the volumetric data is usually lost. We describe
techniques for preprocessing using data quantization and compression,
and for fast GPU-based rendering of PCPs using joint density
distributions for each pair of consecutive variables, resulting in a
smooth, continuous visualization. Also, fast brushing techniques are
proposed for interactive data selection in multiple linked views,
including a 3D spatial volume view. These techniques have been
successfully applied to three large data sets: Hurricane Isabel (Vis'04
contest), the ionization front instability data set (Vis'08 design
contest), and data from a large-eddy simulation of cumulus clouds. With
these data, we show how PCPs can be extended to successfully visualize
and interactively explore multi-timepoint volumetric datasets with an
order of magnitude more data points.
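The joint density distributions that drive the GPU rendering can be illustrated with a plain normalized 2-D histogram for one pair of adjacent axes (a sketch with names of our choosing; it assumes each variable has a nonzero value range):

```python
def joint_density(u, v, bins=4):
    """Binned joint density of two variables; in a PCP such a histogram
    for each pair of adjacent axes determines the density of the line
    bundle drawn between them, instead of one polyline per data point."""
    lo_u, hi_u = min(u), max(u)
    lo_v, hi_v = min(v), max(v)
    hist = [[0] * bins for _ in range(bins)]
    for a, b in zip(u, v):
        i = min(int((a - lo_u) / (hi_u - lo_u) * bins), bins - 1)
        j = min(int((b - lo_v) / (hi_v - lo_v) * bins), bins - 1)
        hist[i][j] += 1
    n = len(u)
    return [[c / n for c in row] for row in hist]
```

The histogram's size depends only on the bin count, not on the number of data points, which is why density-based rendering scales to tens of millions of points per timestep.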
|
|
Vectorized Radviz and Its Application to Multiple Cluster Datasets |
|
John Sharko,
Georges Grinstein,
Kenneth A. Marx
|
|
Pages: 1444-1451 |
|
doi>10.1109/TVCG.2008.173 |
|
Radviz
is a radial visualization with dimensions assigned to points called
dimensional anchors (DAs) placed on the circumference of a circle.
Records are assigned locations within the circle as a function of their
relative attraction to each of the DAs. The DAs can be moved either
interactively or algorithmically to reveal different meaningful patterns
in the dataset. In this paper we describe Vectorized Radviz (VRV) which
extends the number of dimensions through data flattening. We show how
VRV increases the power of Radviz through these extra dimensions by
enhancing the flexibility in the layout of the DAs. We apply VRV to the
problem of analyzing the results of multiple clusterings of the same
data set, called multiple cluster sets or cluster ensembles. We show how
features of VRV help discern patterns across the multiple cluster sets.
We use the Iris data set to explain VRV and a newt gene microarray data
set used in studying limb regeneration to show its utility. We then
discuss further applications of VRV.
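The standard Radviz placement rule that VRV extends can be sketched as follows (dimension values assumed pre-normalized to [0, 1]; function and parameter names are ours):

```python
import math

def radviz_point(record, n_anchors):
    """Radviz placement: the record is pulled toward each dimensional
    anchor (DA) on the unit circle in proportion to its value in that
    dimension, so its position is the weighted average of DA positions."""
    anchors = [(math.cos(2 * math.pi * k / n_anchors),
                math.sin(2 * math.pi * k / n_anchors))
               for k in range(n_anchors)]
    total = sum(record)
    x = sum(w * ax for w, (ax, ay) in zip(record, anchors)) / total
    y = sum(w * ay for w, (ax, ay) in zip(record, anchors)) / total
    return x, y
```

A record with all its weight in one dimension lands exactly on that DA, while a uniform record lands at the center; this is why rearranging the DAs, or adding more of them through VRV's data flattening, changes which patterns the plot reveals.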
|
|
Effective Visualization of Short Routes |
|
Patrick Degener,
Ruwen Schnabel,
Christopher Schwartz,
Reinhard Klein
|
|
Pages: 1452-1458 |
|
doi>10.1109/TVCG.2008.124 |
|
In this work we develop a new alternative to conventional maps for
visualization of relatively short paths as they are frequently encountered in
hotels, resorts or museums. Our approach is based on a warped rendering of a 3D
model of the environment such that the visualized path appears to be straight
even though it may contain several junctions. This has the advantage that the
beholder of the image gains a realistic impression of the surroundings along
the way which makes it easy to retrace the route in practice. We give an
intuitive method for generation of such images and present results from user
studies undertaken to evaluate the benefit of the warped images for orientation
in unknown environments.
|
|
Brushing of Attribute Clouds for the Visualization of Multivariate Data |
|
Heike Jänicke,
Michael Böttinger,
Gerik Scheuermann
|
|
Pages: 1459-1466 |
|
doi>10.1109/TVCG.2008.116 |
|
The
visualization and exploration of multivariate data is still a
challenging task. Methods either try to visualize all variables
simultaneously at each position using glyph-based approaches or use
linked views for the interaction between attribute space and
physical domain such as brushing of scatterplots. Most visualizations of
the attribute space are either difficult to understand or suffer
from visual clutter. We propose a transformation of the high-dimensional
data in attribute space to 2D that results in a point cloud,
called attribute cloud, such that points with similar multivariate
attributes are located close to each other. The transformation is
based on ideas from multivariate density estimation and manifold
learning. The resulting attribute cloud is an easy-to-understand
visualization of multivariate data in two dimensions. We explain several
techniques to incorporate additional information into the
attribute cloud that help the user get a better understanding of
multivariate data. Using different examples from fluid dynamics and
climate simulation, we show how brushing can be used to explore the
attribute cloud and find interesting structures in physical space.
|
|
Visualizing Temporal Patterns in Large Multivariate Data using Modified Globbing |
|
Markus Glatter,
Jian Huang,
Sean Ahern,
Jamison Daniel,
Aidong Lu
|
|
Pages: 1467-1474 |
|
doi>10.1109/TVCG.2008.184 |
|
Extracting
and visualizing temporal patterns in large scientific data is an open
problem in visualization research. First, there are few proven methods
to flexibly and concisely define general temporal patterns for
visualization. Second, with large time-dependent data sets, as is typical
with today’s large-scale simulations, scalable and general solutions for
handling the data are still not widely available. In this work, we have
developed a textual pattern matching approach for specifying and
identifying general temporal patterns. Besides defining the formalism of
the language, we also provide a working implementation with sufficient
efficiency and scalability to handle large data sets. Using recent
large-scale simulation data from multiple application domains, we
demonstrate that our visualization approach is one of the first to
empower a concept-driven exploration of large-scale time-varying
multivariate data.
|
|
Interactive Comparison of Scalar Fields Based on Largest Contours with Applications to Flow Visualization |
|
Dominic Schneider,
Alexander Wiebel,
Hamish Carr,
Mario Hlawitschka,
Gerik Scheuermann
|
|
Pages: 1475-1482 |
|
doi>10.1109/TVCG.2008.143 |
|
Understanding
fluid flow data, especially vortices, is still a challenging task.
Sophisticated visualization tools help to gain insight. In this paper,
we present a novel approach for the interactive comparison of scalar
fields using isosurfaces, and its application to fluid flow datasets.
Features in two scalar fields are defined by largest contour
segmentation after topological simplification. These features are
matched using a volumetric similarity measure based on spatial overlap
of individual features. The relationships defined by this similarity
measure are ranked and presented in a thumbnail gallery of feature pairs
and a graph representation showing all relationships between individual
contours. Additionally, linked views of the contour trees are provided
to ease navigation. The main render view shows the selected features
overlapping each other. Thus, by displaying individual features and
their relationships in a structured fashion, we enable exploratory
visualization of correlations between similar structures in two scalar
fields. We demonstrate the utility of our approach by applying it to a
number of complex fluid flow datasets, where the emphasis is put on the
comparison of vortex-related scalar quantities.
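A similarity measure based on the spatial overlap of two features can be made concrete with the Jaccard index over boolean voxel masks. This is one simple instance of such a measure, not necessarily the exact formulation used in the paper:

```python
import numpy as np

def overlap_similarity(mask_a, mask_b):
    """Volumetric similarity of two features given as boolean voxel
    masks: shared volume over combined volume (Jaccard index)."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 0.0
    return np.logical_and(a, b).sum() / union

# two overlapping slab-shaped features in a small voxel grid
a = np.zeros((4, 4, 4), dtype=bool); a[:2] = True
b = np.zeros((4, 4, 4), dtype=bool); b[1:3] = True
```

Ranking all feature pairs by this score yields the ordering shown in the thumbnail gallery.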
|
|
Surface Extraction from Multi-field Particle Volume Data Using Multi-dimensional Cluster Visualization |
|
Lars Linsen,
Tran Van Long,
Paul Rosenthal,
Stephan Rosswog
|
|
Pages: 1483-1490 |
|
doi>10.1109/TVCG.2008.167 |
|
Data
sets resulting from physical simulations typically contain a multitude
of physical variables. It is, therefore, desirable that visualization
methods take into account the entire multi-field volume data rather than
concentrating on one variable. We present a visualization approach
based on surface extraction from multi-field particle volume data. The
surfaces segment the data with respect to the underlying multi-variate
function. Decisions on segmentation properties are based on the analysis
of the multi-dimensional feature space. The feature space exploration
is performed by an automated multi-dimensional hierarchical clustering
method, whose resulting density clusters are shown in the form of
density level sets in a 3D star coordinate layout. In the star
coordinate layout, the user can select clusters of interest. A selected
cluster in feature space corresponds to a segmenting surface in object
space. Based on the segmentation property induced by the cluster
membership, we extract a surface from the volume data. Our driving
applications are Smoothed Particle Hydrodynamics (SPH) simulations,
where each particle carries multiple properties. The data sets are given
in the form of unstructured point-based volume data. We directly
extract our surfaces from such data without prior resampling or grid
generation. The surface extraction computes individual points on the
surface, which is supported by an efficient neighborhood computation.
The extracted surface points are rendered using point-based rendering
operations. Our approach combines methods in scientific visualization
for object-space operations with methods in information visualization
for feature-space operations.
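The 3D star coordinate layout mentioned above can be sketched as follows: each data dimension is assigned an axis vector in 3D, and a sample is placed at the weighted sum of these axes. The random axis choice below is purely illustrative, not the paper's layout:

```python
import numpy as np

def star_coordinates_3d(points, seed=0):
    """Map d-dimensional feature vectors into 3D by summing each
    coordinate along a fixed unit axis vector per dimension -- a
    minimal sketch of a 3D star coordinate layout."""
    X = np.asarray(points, dtype=float)
    d = X.shape[1]
    rng = np.random.default_rng(seed)
    axes = rng.normal(size=(d, 3))            # one 3D axis per dimension
    axes /= np.linalg.norm(axes, axis=1, keepdims=True)
    return X @ axes                           # n x 3 layout

P = star_coordinates_3d(np.array([[1.0, 2.0, 3.0, 4.0],
                                  [1.0, 2.0, 3.0, 4.0]]))
```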
|
|
Sinus Endoscopy - Application of Advanced GPU Volume Rendering for Virtual Endoscopy |
|
Arno Krueger,
Christoph Kubisch,
Bernhard Preim,
Gero Strauss
|
|
Pages: 1491-1498 |
|
doi>10.1109/TVCG.2008.161 |
|
For
difficult cases in endoscopic sinus surgery, a careful planning of the
intervention is necessary. Due to the reduced field of view during the
intervention, the surgeons have less information about the surrounding
structures in the working area compared to open surgery. Virtual
endoscopy enables the visualization of the operating field and
additional information, such as risk structures (e.g., optical nerve and
skull base) and target structures to be removed (e.g., mucosal
swelling). The Sinus Endoscopy system provides the functional range of a
virtual endoscopic system with special focus on a realistic
representation. Furthermore, by using direct volume rendering, we avoid
time-consuming segmentation steps for the use of individual patient
datasets. However, the image quality of the endoscopic view can be
adjusted in a way that a standard computer with a modern standard
graphics card achieves interactive frame rates with low CPU utilization.
Thereby, characteristics of the endoscopic view are systematically used
for the optimization of the volume rendering speed. The system design
was based on a careful analysis of the endoscopic sinus surgery and the
resulting needs for computer support. As a small standalone application
it can be instantly used for surgical planning and patient education.
First results of a clinical evaluation with ENT surgeons were employed
to fine-tune the user interface, in particular to reduce the number of
controls by using appropriate default values wherever possible. The
system was used for preoperative planning in 102 cases, provides useful
information for intervention planning (e.g., anatomic variations of the
Rec. Frontalis), and closely resembles the intraoperative situation.
|
|
Glyph-Based SPECT Visualization for the Diagnosis of Coronary Artery Disease |
|
Jennis Meyer-Spradow,
Lars Stegger,
Christian Döring,
Timo Ropinski,
Klaus Hinrichs
|
|
Pages: 1499-1506 |
|
doi>10.1109/TVCG.2008.136 |
|
Myocardial
perfusion imaging with single photon emission computed tomography
(SPECT) is an established method for the detection and evaluation of
coronary artery disease (CAD). State-of-the-art SPECT scanners yield a
large number of regional parameters of the left-ventricular myocardium
(e.g., blood supply at rest and during stress, wall thickness, and wall
thickening during heart contraction) that all need to be assessed by the
physician. Today, the individual parameters of this multivariate data
set are displayed as stacks of 2D slices, bull's eye plots, or, more
recently, surfaces in 3D, which depict the left-ventricular wall. In all
these visualizations, the data sets are displayed side-by-side rather
than in an integrated manner, such that the multivariate data have to be
examined sequentially and need to be fused mentally. This is time
consuming and error-prone. In this paper we present an interactive 3D
glyph visualization, which enables an effective integrated visualization
of the multivariate data. Results from semiotic theory are used to
optimize the mapping of different variables to glyph properties. This
facilitates an improved perception of important information and thus an
accelerated diagnosis. The 3D glyphs are linked to the established 2D
views, which permit a more detailed inspection, and to relevant
meta-information such as known stenoses of coronary vessels supplying
the myocardial region. Our method has demonstrated its potential for
clinical routine use in real application scenarios assessed by nuclear
physicians.
|
|
Interactive Volume Exploration for Feature Detection and Quantification in Industrial CT Data |
|
Markus Hadwiger,
Laura Fritz,
Christof Rezk-Salama,
Thomas Höllt,
Georg Geier,
Thomas Pabel
|
|
Pages: 1507-1514 |
|
doi>10.1109/TVCG.2008.147 |
|
This
paper presents a novel method for interactive exploration of industrial
CT volumes such as cast metal parts, with the goal of interactively
detecting, classifying, and quantifying features using a
visualization-driven approach. The standard approach for defect
detection builds on region growing, which requires manually tuning
parameters such as target ranges for density and size, variance, as well
as the specification of seed points. If the results are not
satisfactory, region growing must be performed again with different
parameters. In contrast, our method allows interactive exploration of
the parameter space, completely separated from region growing in an
unattended pre-processing stage. The pre-computed feature volume tracks a
feature size curve for each voxel
over time, which is identified with the main region growing parameter
such as variance. A novel 3D transfer function domain over (density,
feature size, time) allows for interactive exploration of feature
classes. Features and feature size curves can also be explored
individually, which helps with transfer function specification and
allows coloring individual features and disabling features resulting
from CT artifacts. Based on the classification obtained through
exploration, the classified features can be quantified immediately.
|
|
Interactive Blood Damage Analysis for Ventricular Assist Devices |
|
Bernd Hentschel,
Irene Tedjo,
Markus Probst,
Marc Wolter,
Marek Behr,
Christian Bischof,
Torsten Kuhlen
|
|
Pages: 1515-1522 |
|
doi>10.1109/TVCG.2008.142 |
|
Ventricular
Assist Devices (VADs) support the heart in its vital task of
maintaining circulation in the human body when the heart alone is not
able to maintain a sufficient flow rate due to illness or degenerative
diseases.
However, the engineering of these devices is a highly demanding task.
Advanced modeling methods and computer simulations allow the
investigation of the fluid flow inside such a device and in particular
of potential blood damage.
In this paper we present a set of visualization methods which have been
designed to specifically support the analysis of a tensor-based blood
damage prediction model. This model is based on the tracing of particles
through the VAD, for each of which the cumulative blood damage can be
computed. The model's tensor output approximates a single blood cell's
deformation in the flow field. The tensor and derived scalar data are
subsequently visualized using techniques based on icons, particle
visualization, and function plotting. All these techniques are
accessible through a Virtual Reality-based user interface, which
features not only stereoscopic rendering but also natural interaction
with the complex three-dimensional data. To illustrate the effectiveness
of these visualization methods, we present the results of an analysis
session that was performed by domain experts for a specific data set for
the MicroMed DeBakey VAD.
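The idea of accumulating blood damage along a traced particle can be sketched with a simple scalar power-law accumulation. The functional form and all constants below are illustrative placeholders only, not the paper's tensor-based model:

```python
def cumulative_damage(stress, dt, c=1e-5, alpha=2.0):
    """Accumulate a scalar damage value for one traced particle from
    the shear stress it experiences at each time step. Power-law form
    and constants are hypothetical, not the paper's model."""
    damage = 0.0
    for tau in stress:
        damage += c * (tau ** alpha) * dt   # higher stress counts superlinearly
    return damage
```

Plotting this running sum along each pathline is the kind of derived scalar the paper visualizes with function plots.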
|
|
Box Spline Reconstruction On The Face-Centered Cubic Lattice |
|
Minho Kim,
Alireza Entezari,
Jörg Peters
|
|
Pages: 1523-1530 |
|
doi>10.1109/TVCG.2008.115 |
|
We
introduce and analyze an efficient reconstruction algorithm for
FCC-sampled data. The reconstruction is based on the 6-direction box
spline that is naturally associated with the FCC lattice and shares the
continuity and approximation order of the triquadratic B-spline. We
observe less aliasing for generic level sets and derive special
techniques to attain the higher evaluation efficiency promised by the
lower degree and smaller stencil-size of the $C^1$ 6-direction box
spline over the triquadratic B-spline.
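For readers unfamiliar with the sampling lattice: in one common convention the FCC lattice consists of the integer points whose coordinate sum is even. A tiny sketch of that convention (illustrative only, not the paper's reconstruction code):

```python
def is_fcc_site(x, y, z):
    """FCC lattice modeled as integer points with even coordinate sum
    (one common convention)."""
    return (x + y + z) % 2 == 0

def fcc_sites(n):
    """All FCC sites in the cube [0, n)^3."""
    return [(x, y, z)
            for x in range(n) for y in range(n) for z in range(n)
            if is_fcc_site(x, y, z)]
```

Half the points of the cubic grid survive, which is why FCC sampling stores fewer samples per unit volume than a Cartesian grid of the same spacing.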
|
|
Smooth Surface Extraction from Unstructured Point-based Volume Data Using PDEs |
|
Paul Rosenthal,
Lars Linsen
|
|
Pages: 1531-1538 |
|
doi>10.1109/TVCG.2008.164 |
|
Smooth
surface extraction using PDEs is a well-known and widely used technique
for visualizing volume data. Existing approaches operate on gridded
data and mainly on regular structured grids. When considering
unstructured point-based volume data, where sample points neither form
regular patterns nor are connected in any form, one would typically
resample the data over a grid prior to applying the known PDE-based
methods. As resampling inserts interpolation inaccuracies, data
providers would rather have segmentation methods operate on the actual
unstructured data. We propose an approach that directly extracts smooth
surfaces from unstructured point-based volume data without prior
resampling or mesh generation. When operating on unstructured data one
needs to quickly derive neighborhood information. The respective
information is retrieved by partitioning the 3D domain into cells using a
kd-tree and operating on its cells. We exploit neighborhood information
to estimate gradients and mean curvature at every sample point using a
four-dimensional least-squares fitting approach. Gradients and mean
curvature are required for applying the chosen PDE-based method that
combines hyperbolic advection to an isovalue of a given scalar field and
mean curvature flow. Since we are using an explicit time-integration
scheme, time steps are bounded by the Courant-Friedrichs-Lewy condition.
To avoid small global time steps, we use asynchronous local
integration. We extract the surface by successively fitting a smooth
function to the data set. This function is initialized with a signed
distance function. For each sample and for every time step we compute
the respective gradient, the mean curvature, and a stable time step.
With this information the function is manipulated using an explicit
Euler time integration. The process continues with the next sample point
in time. If the norm of the function gradient in a sample exceeds a
given threshold at some time the function is reinitialized to a signed
distance function. The resulting smooth surface is obtained by
extracting the zero isosurface from the function using isosurface
extraction from unstructured data and rendering the surface using
point-based methods.
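The interplay of explicit Euler integration and the Courant-Friedrichs-Lewy bound can be sketched in one dimension: the time step is capped at a fraction of the grid spacing divided by the fastest speed. This is a much-reduced stand-in for the paper's advection/curvature-flow update, with hypothetical names:

```python
import numpy as np

def advect_step(phi, speed, dx, cfl=0.5):
    """One explicit Euler step of d(phi)/dt = -speed * |grad phi| on a
    1D grid, with dt bounded by the CFL condition
    dt <= cfl * dx / max|speed|."""
    dt = cfl * dx / max(abs(speed), 1e-12)   # CFL-stable step
    grad = np.gradient(phi, dx)              # central differences
    return phi - dt * speed * np.abs(grad), dt
```

With asynchronous local integration, each sample would advance with its own locally stable `dt` instead of the global minimum.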
|
|
Particle-based Sampling and Meshing of Surfaces in Multimaterial Volumes |
|
Miriah Meyer,
Ross Whitaker,
Robert M. Kirby,
Christian Ledergerber,
Hanspeter Pfister
|
|
Pages: 1539-1546 |
|
doi>10.1109/TVCG.2008.154 |
|
Methods
that faithfully and robustly capture the geometry of complex material
interfaces in labeled volume data are important for generating realistic
and accurate visualizations and simulations of real-world objects. The
generation of such multimaterial models from measured data poses two
unique challenges: first, the surfaces must be well-sampled with
regular, efficient tessellations that are consistent across material
boundaries; and second, the resulting meshes must respect the
nonmanifold geometry of the multimaterial interfaces. This paper
proposes a strategy for sampling and meshing multimaterial volumes using
dynamic particle systems, including a novel, differentiable
representation of the material junctions that allows the particle system
to explicitly sample corners, edges, and surfaces of material
intersections. The distributions of particles are controlled by
fundamental sampling constraints, allowing Delaunay-based meshing
algorithms to reliably extract watertight meshes of consistently
high quality.
|
|
Importance-Driven Time-Varying Data Visualization |
|
Chaoli Wang,
Hongfeng Yu,
Kwan-Liu Ma
|
|
Pages: 1547-1554 |
|
doi>10.1109/TVCG.2008.140 |
|
The
ability to identify and present the most essential aspects of
time-varying data is critically important in many areas of science and
engineering. This paper introduces an importance-driven approach to
time-varying volume data visualization for enhancing that ability. By
conducting a block-wise analysis of the data in the joint
feature-temporal space, we derive an importance curve for each data
block based on the formulation of conditional entropy from information
theory. Each curve characterizes the local temporal behavior of the
respective block, and clustering the importance curves of all the volume
blocks effectively classifies the underlying data. Based on different
temporal trends exhibited by importance curves and their clustering
results, we suggest several interesting and effective visualization
techniques to reveal the important aspects of time-varying data.
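The conditional-entropy building block behind the importance curves can be illustrated with a joint histogram of a block's values at two time steps: H(Y|X) = H(X,Y) - H(X), which is near zero when the block barely changes. A simplified sketch, not the paper's exact formulation:

```python
import numpy as np

def conditional_entropy(x, y, bins=8):
    """H(Y|X) = H(X,Y) - H(X) in bits, estimated from a joint histogram
    of a data block's values at two time steps."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    p_xy = joint / joint.sum()
    p_x = p_xy.sum(axis=1)
    nz = p_xy > 0
    h_xy = -(p_xy[nz] * np.log2(p_xy[nz])).sum()
    nzx = p_x > 0
    h_x = -(p_x[nzx] * np.log2(p_x[nzx])).sum()
    return h_xy - h_x
```

Evaluating this for every consecutive pair of time steps gives one importance curve per block; clustering those curves groups blocks with similar temporal behavior.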
|
|
Visualizing Multiwavelength Astrophysical Data |
|
Hongwei Li,
Chi-Wing Fu,
Andrew Hanson
|
|
Pages: 1555-1562 |
|
doi>10.1109/TVCG.2008.182 |
|
With
recent advances in the measurement technology for allsky astrophysical
imaging, our view of the sky is no longer limited to the tiny visible
spectral range over the 2D Celestial sphere. We now can access a third
dimension corresponding to a broad electromagnetic spectrum with a wide
range of allsky surveys; these surveys span frequency bands including
long wavelength radio, microwaves, very short X-rays, and gamma
rays. These advances motivate us to study and examine multiwavelength
visualization techniques to maximize our capabilities to visualize and
exploit these informative image data sets. In this work, we begin with
the processing of the data themselves, uniformizing the representations
and units of raw data obtained from varied detector sources. Then we
apply tools to map, convert, color-code, and format the multiwavelength
data in forms useful for applications. We explore different visual
representations for displaying the data, including such methods as
textured image stacks, the horseshoe representation, and GPU-based
volume visualization. A family of visual tools and analysis methods is
introduced to explore the data, including interactive data mapping on
the graphics processing unit (GPU), the mini-map explorer, and GPU-based
interactive feature analysis.
|
|
Visiting the Gödel Universe |
|
Frank Grave,
Michael Buser
|
|
Pages: 1563-1570 |
|
doi>10.1109/TVCG.2008.177 |
|
Visualization
of general relativity illustrates aspects of Einstein's insights into
the curved nature of space and time to the expert as well as the
layperson. One of the most interesting models which came up with
Einstein's theory was developed by Kurt Gödel in 1949. The Gödel
universe is a valid solution of Einstein's field equations, making it a
possible physical description of our universe. It offers remarkable
features like the existence of an optical horizon beyond which time
travel is possible. Although we know that our universe is not a Gödel
universe, it is interesting to visualize physical aspects of a world
model resulting from a theory which is highly confirmed in scientific
history.
Standard techniques to adopt an egocentric point of view in a
relativistic world model have shortcomings with respect to the
time needed to render an image as well as difficulties in applying a
direct illumination model. In this paper we want to face both issues to
reduce the gap between common visualization standards and relativistic
visualization. We will introduce two techniques to speed up
recalculation of images by means of preprocessing and lookup tables and
to increase image quality through a special optimization applicable to
the Gödel universe.
The first technique allows the physicist to understand the different
effects of general relativity faster and better by generating images
from existing datasets interactively. By using the intrinsic symmetries
of Gödel's spacetime which are expressed by the Killing vector field, we
are able to reduce the necessary calculations to simple cases using the
second technique. This even makes it feasible to account for a direct
illumination model during the rendering process. Although the presented
methods are applied to Gödel's universe, they can also be extended to
other manifolds, for example light propagation in moving dielectric
media. Therefore, other areas of research can benefit from these generic
improvements.
|
|
The Seismic Analyzer: Interpreting and Illustrating 2D Seismic Data |
|
Daniel Patel,
Christopher Giertsen,
John Thurmond,
John Gjelberg,
Eduard Grøller
|
|
Pages: 1571-1578 |
|
doi>10.1109/TVCG.2008.170 |
|
We
present a toolbox for quickly interpreting and illustrating 2D slices
of seismic volumetric reflection data. Searching for oil and gas
involves creating a structural overview of seismic reflection data to
identify hydrocarbon reservoirs. We improve the search of seismic
structures by precalculating the horizon structures of the seismic data
prior to interpretation. We improve the annotation of seismic structures
by applying novel illustrative rendering algorithms tailored to seismic
data, such as deformed texturing and line and texture transfer
functions. The illustrative rendering results in multi-attribute and
scale invariant visualizations where features are represented clearly in
both highly zoomed in and zoomed out views. Thumbnail views in
combination with interactive appearance control allow for a quick
overview of the data before detailed interpretation takes place. These
techniques help reduce the work of seismic illustrators and
interpreters.
|
|
Hypothesis Generation in Climate Research with Interactive Visual Data Exploration |
|
Johannes Kehrer,
Florian Ladstädter,
Philipp Muigg,
Helmut Doleisch,
Andrea Steiner,
Helwig Hauser
|
|
Pages: 1579-1586 |
|
doi>10.1109/TVCG.2008.139 |
|
One
of the most prominent topics in climate research is the investigation,
detection, and allocation of climate change. In this paper, we aim at
identifying regions in the atmosphere (e.g., certain height layers)
which can act as sensitive and robust indicators for climate change. We
demonstrate how interactive visual data exploration of large amounts of
multi-variate and time-dependent climate data enables the steered
generation of promising hypotheses for subsequent statistical
evaluation. The use of new visualization and interaction technology—in
the context of a coordinated multiple views framework—allows not only to
identify these promising hypotheses, but also to efficiently narrow
down parameters that are required in the process of computational data
analysis. Two datasets, namely an ECHAM5 climate model run and the
ERA-40 reanalysis incorporating observational data, are investigated.
Higher-order information such as linear trends or signal-to-noise ratio
is derived and interactively explored in order to detect and explore
those regions which react most sensitively to climate change. As one
conclusion from this study, we identify excellent potential for
generalizing our approach to other, similar application cases.
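The higher-order fields mentioned above, such as a linear trend and a signal-to-noise ratio per grid point, can be sketched for a single time series. The exact definitions below are illustrative, not necessarily those used in the study:

```python
import numpy as np

def trend_and_snr(series):
    """Least-squares linear trend of a time series, and a simple
    signal-to-noise ratio: total trend change over the standard
    deviation of the detrended residual (illustrative definitions)."""
    series = np.asarray(series, dtype=float)
    t = np.arange(len(series), dtype=float)
    slope, intercept = np.polyfit(t, series, 1)
    residual = series - (slope * t + intercept)
    noise = residual.std()
    snr = abs(slope) * len(series) / noise if noise > 0 else np.inf
    return slope, snr
```

Mapping both quantities over every grid point and height layer yields exactly the kind of derived field explored interactively in the coordinated views.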
|
|
Novel interaction techniques for neurosurgical planning and stereotactic navigation |
|
Alark Joshi,
Dustin Scheinost,
Kenneth Vives,
Dennis Spencer,
Lawrence Staib,
Xenophon Papademetris
|
|
Pages: 1587-1594 |
|
doi>10.1109/TVCG.2008.150 |
|
Neurosurgical
planning and image-guided neurosurgery require the visualization of
multimodal data obtained from various functional and structural image
modalities, such as Magnetic Resonance Imaging (MRI), Computed
Tomography (CT), functional MRI, Single Photon Emission Computed
Tomography (SPECT), and so on. In the case of epilepsy neurosurgery, for
example, these images are used to identify brain regions to guide
intracranial electrode implantation and resection. Generally, such data
is visualized using 2D slices and in some cases using a 3D volume
rendering along with the functional imaging results. Visualizing the
activation region effectively while still preserving sufficient
surrounding brain regions for context is exceedingly important to
neurologists and surgeons.
We present novel interaction techniques for visualization of multimodal
data to facilitate improved exploration and planning for neurosurgery.
We extended the line widget from VTK to allow surgeons to control the
shape of the region of the brain that they can visually crop away during
exploration and surgery. We allow simple spherical, cubical, ellipsoidal,
and cylindrical (probe-aligned) cuts for exploration purposes. In
addition, we integrate the cropping tool with the image-guided navigation
system used for epilepsy neurosurgery. We are currently investigating the
use of these new tools in surgical planning, and based on further
feedback from our neurosurgeons we will integrate them into the setup
used for image-guided neurosurgery.
|
|
Visualization of Myocardial Perfusion Derived from Coronary Anatomy |
|
Maurice Termeer,
Javier Oliván Bescós,
Marcel Breeuwer,
Anna Vilanova,
Frans Gerritsen,
M. Eduard Gröller,
Eike Nagel
|
|
Pages: 1595-1602 |
|
doi>10.1109/TVCG.2008.180 |
|
Visually
assessing the effect of the coronary artery anatomy on the perfusion of
the heart muscle in patients with coronary artery disease remains a
challenging task. We explore the feasibility of visualizing this effect
on perfusion using a numerical approach. We perform a computational
simulation of the way blood is perfused throughout the myocardium purely
based on information from a three-dimensional anatomical tomographic
scan. The results are subsequently visualized using both
three-dimensional visualizations and bull’s eye plots, partially
inspired by approaches currently common in medical practice. Our
approach results in a comprehensive visualization of the coronary
anatomy that compares well to visualizations commonly used for other
scanning technologies. We demonstrate techniques giving detailed insight
in blood supply, coronary territories and feeding coronary arteries of a
selected region. We demonstrate the advantages of our approach through
visualizations that show information which commonly cannot be directly
observed in scanning data, such as a separate visualization of the
supply from each coronary artery. We thus show that the results of a
computational simulation can be effectively visualized and facilitate
visually correlating these results to, for example, perfusion data.
|
|
Effective visualization of complex vascular structures using a non-parametric vessel detection method |
|
Alark Joshi,
Xiaoning Qian,
Donald Dione,
Ketan Bulsara,
Christopher Breuer,
Albert Sinusas,
Xenophon Papademetris
|
|
Pages: 1603-1610 |
|
doi>10.1109/TVCG.2008.123 |
|
The
effective visualization of vascular structures is critical for
diagnosis, surgical planning as well as treatment evaluation. In recent
work, we have developed an algorithm for vessel detection that examines
the intensity profile around each voxel in an angiographic image and
determines the likelihood that any given voxel belongs to a vessel; we
term this the "vesselness coefficient" of the voxel. Our results show
that our algorithm works particularly well for visualizing branch points
in vessels. Compared to standard Hessian based techniques, which are
fine-tuned to identify long cylindrical structures, our technique
identifies branches and connections with other vessels.
Using our computed vesselness coefficient, we explore a set of
techniquesfor visualizing vasculature. Visualizing vessels is
particularly challenging because not only is their position in space
important for
clinicians, but it is also important to resolve their spatial
relationships. We applied visualization techniques that provide shape
cues as well as depth cues to allow the viewer to distinguish
vessels that are closer from those that are farther away. We use our computed
vesselness coefficient to effectively visualize vasculature in both
clinical neurovascular x-ray computed tomography based angiography
images,
as well as images from three different animal studies. We conducted a
formal user evaluation of our visualization techniques with the help of
radiologists, surgeons, and other expert users. Results indicate that
experts preferred distance color blending and tone shading for conveying
depth over standard visualization techniques.
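The depth cue the experts preferred, distance color blending, can be sketched in a few lines: a structure's color is interpolated toward the background color in proportion to its normalized depth. This is an illustrative reading of the idea, not the authors' implementation; the function name and the linear blend are our own assumptions.

```python
def distance_color_blend(color, depth, background=(0.0, 0.0, 0.0)):
    """Blend an RGB color toward the background as depth increases.

    color, background: RGB tuples with components in [0, 1].
    depth: normalized distance from the viewer, 0 = nearest, 1 = farthest.
    """
    t = max(0.0, min(1.0, depth))  # clamp depth to [0, 1]
    return tuple((1.0 - t) * c + t * b for c, b in zip(color, background))

# A near vessel keeps most of its color; a far one fades toward black.
near = distance_color_blend((1.0, 0.2, 0.2), 0.1)
far = distance_color_blend((1.0, 0.2, 0.2), 0.9)
```

Farther vessels thus render darker, giving the viewer an ordinal depth cue without occluding any geometry.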
|
|
Visualization of Cellular and Microvascular Relationships |
|
David Mayerich,
Louise Abbott,
John Keyser
|
|
Pages: 1611-1618 |
|
doi>10.1109/TVCG.2008.179 |
|
|
|
Understanding
the structure of microvascular networks and their relationship to
cells in biological tissue is an important and complex problem. Brain
microvasculature in particular is known to play an important role in
chronic diseases. However, these networks are only visible at the
microscopic level and can span large volumes of tissue. Due to recent
advances in microscopy, large volumes of data can be imaged at the
resolution necessary to reconstruct these structures. Due to the dense
and complex nature of microscopy data sets, it is important to limit the
amount of information displayed. In this paper, we describe methods for
encoding the unique structure of microvascular data, allowing
researchers to selectively explore microvascular anatomy. We also
identify the queries most useful to researchers studying microvascular
and cellular relationships. By associating cellular structures with our
microvascular framework, we allow researchers to explore interesting
anatomical relationships in dense and complex data sets.
|
|
A Practical Approach to Morse-Smale Complex Computation: Scalability and Generality |
|
Attila Gyulassy,
Peer-Timo Bremer,
Bernd Hamann,
Valerio Pascucci
|
|
Pages: 1619-1626 |
|
doi>10.1109/TVCG.2008.110 |
|
|
|
The
Morse-Smale (MS) complex has proven to be a useful tool in extracting
and visualizing features from scalar-valued data. However, efficient
computation of the MS complex for large scale data remains a challenging
problem. We describe a new algorithm and easily extensible framework
for computing MS complexes for large scale data of any dimension where
scalar values are given at the vertices of a closure-finite and weak
topology (CW) complex, therefore enabling computation on a wide variety
of meshes such as regular grids, simplicial meshes, and adaptive
multiresolution (AMR) meshes. A new divide-and-conquer strategy allows
for memory-efficient computation of the MS complex and on-the-fly
simplification to control the size of the output.
In addition to being able to handle various data formats, the framework
supports implementation-specific optimizations, for example, for regular
data. We present the complete characterization of critical point
cancellations in all dimensions. This technique enables the topology
based analysis of large data on off-the-shelf computers. In particular,
we demonstrate the first full computation of the MS complex for a
one-billion-node ($1024^3$) grid on a laptop computer with 2 GB of memory.
|
|
Invariant Crease Lines for Topological and Structural Analysis of Tensor Fields |
|
Xavier Tricoche,
Gordon Kindlmann,
Carl-Fredrik Westin
|
|
Pages: 1627-1634 |
|
doi>10.1109/TVCG.2008.148 |
|
|
|
We
introduce a versatile framework for characterizing and extracting
salient structures in three-dimensional symmetric second-order tensor
fields. The key insight is that degenerate lines in tensor fields, as
defined by the standard topological approach, are exactly crease (ridge
and valley) lines of a particular tensor invariant called mode. This
reformulation allows us to apply well-studied approaches from scientific
visualization or computer vision to the extraction of topological lines
in tensor fields. More generally, this main result suggests that other
tensor invariants, such as anisotropy measures like fractional
anisotropy (FA), can be used in the same framework in lieu of mode to
identify important structural properties in tensor fields. Our
implementation addresses the specific challenges posed by the
non-linearity of the considered scalar measures and by the smoothness
requirement of the crease manifold computation. We use a combination of
smooth reconstruction kernels and an adaptive refinement strategy that
automatically adjusts the resolution of the analysis to the spatial
variation of the considered quantities. Together, these improvements
allow for the robust application of existing ridge line extraction
algorithms in the tensor context of our problem. Results are presented
for a diffusion tensor MRI dataset and for a benchmark stress tensor
field used in engineering research.
|
|
Estimating Crossing Fibers: A Tensor Decomposition Approach |
|
Thomas Schultz,
Hans-Peter Seidel
|
|
Pages: 1635-1642 |
|
doi>10.1109/TVCG.2008.128 |
|
|
|
Diffusion
weighted magnetic resonance imaging is a unique tool for non-invasive
investigation of major nerve fiber tracts. Since the popular diffusion
tensor (DT-MRI) model is limited to voxels with a single fiber
direction, a number of high angular resolution techniques have been
proposed to provide information about more diverse fiber distributions.
Two such approaches are Q-Ball imaging and spherical deconvolution,
which produce orientation distribution functions (ODFs) on the sphere.
For analysis and visualization, the maxima of these functions have been
used as principal directions, even though the results are known to be
biased in case of crossing fiber tracts. In this paper, we present a
more reliable technique for extracting discrete orientations from
continuous ODFs, which is based on decomposing their higher-order tensor
representation into an isotropic component, several rank-1 terms, and a
small residual. Comparison with ground truth in synthetic data shows that
the novel method reduces bias and reliably reconstructs crossing fibers
which are not resolved as individual maxima in the ODF. We present
results on both Q-Ball and spherical deconvolution data and demonstrate
that the estimated directions allow for plausible fiber tracking in a
real data set.
|
|
Geodesic Distance-weighted Shape Vector Image Diffusion |
|
Jing Hua,
Zhaoqiang Lai,
Ming Dong,
Xianfeng Gu,
Hong Qin
|
|
Pages: 1643-1650 |
|
doi>10.1109/TVCG.2008.134 |
|
|
|
This
paper presents a novel and efficient surface matching and visualization
framework through the geodesic distance-weighted shape vector image
diffusion. Based on conformal geometry, our approach can uniquely map a
3D surface to a canonical rectangular domain and encode the shape
characteristics (e.g., mean curvatures and conformal factors) of the
surface in the 2D domain to construct a geodesic distance-weighted shape
vector image, where the distances between sampling pixels are not
uniform but are the actual geodesic distances on the manifold. Through the
novel geodesic distance-weighted shape vector image diffusion presented
in this paper, we can create a multiscale diffusion space, in which the
cross-scale extrema can be detected as the robust geometric features for
the matching and registration of surfaces. Therefore, statistical
analysis and visualization of surface properties across subjects become
readily available. The experiments on scanned surface models show that
our method is very robust for feature extraction and surface matching
even under noise and resolution change. We have also applied the
framework on the real 3D human neocortical surfaces, and demonstrated
the excellent performance of our approach in statistical analysis and
integrated visualization of the multimodality volumetric data over the
shape vector image.
|
|
Edge Groups: An Approach to Understanding the Mesh Quality of Marching Methods |
|
Carlos A. Dietrich,
Carlos Scheidegger,
João Comba,
Luciana Nedel,
Cláudio Silva
|
|
Pages: 1651-1658 |
|
doi>10.1109/TVCG.2008.122 |
|
|
|
Marching
Cubes is the most popular isosurface extraction algorithm due to its
simplicity, efficiency and robustness. It has been widely studied,
improved, and extended. While much early work was concerned with
efficiency and correctness issues, lately there has been a push to
improve the quality of Marching Cubes meshes so that they can be used in
computational codes. In this work we present a new classification of MC
cases that we call Edge Groups, which helps elucidate the issues that
impact the triangle quality of the meshes that the method generates.
This formulation allows a more systematic way to bound the triangle
quality, and is general enough to extend to other polyhedral cell shapes
used in other polygonization algorithms. Using this analysis, we also
discuss ways to improve the quality of the resulting triangle mesh,
including some that require only minor modifications of the original
algorithm.
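A common way to quantify the triangle quality such analyses bound is the radius ratio, the scaled quotient of inradius and circumradius. The sketch below is a generic illustration of such a metric, our own choice rather than the paper's specific measure.

```python
import math

def triangle_quality(p0, p1, p2):
    """Radius-ratio quality 2r/R: 1.0 for an equilateral triangle,
    approaching 0.0 for degenerate slivers."""
    a = math.dist(p1, p2)
    b = math.dist(p0, p2)
    c = math.dist(p0, p1)
    s = 0.5 * (a + b + c)                      # semi-perimeter
    area_sq = s * (s - a) * (s - b) * (s - c)  # Heron's formula
    if area_sq <= 0.0:
        return 0.0                             # degenerate triangle
    area = math.sqrt(area_sq)
    inradius = area / s
    circumradius = (a * b * c) / (4.0 * area)
    return 2.0 * inradius / circumradius

equilateral = triangle_quality((0, 0), (1, 0), (0.5, math.sqrt(3) / 2))
sliver = triangle_quality((0, 0), (1, 0), (0.5, 0.01))
```

Marching Cubes meshes intended for computational codes are typically screened with a metric of this kind, since a few near-zero-quality slivers can dominate numerical error.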
|
|
Revisiting Histograms and Isosurface Statistics |
|
Carlos E. Scheidegger,
John M. Schreiner,
Brian Duffy,
Hamish Carr,
Cláudio T. Silva
|
|
Pages: 1659-1666 |
|
doi>10.1109/TVCG.2008.160 |
|
|
|
Recent
results have shown a link between geometric properties of isosurfaces
and statistical properties of the underlying sampled data. However, these
results have two defects: not all of the properties described converge to the
same solution, and the statistics computed are not always invariant
under isosurface-preserving transformations. We apply Federer’s Coarea
Formula from geometric measure theory to explain these discrepancies.
We describe an improved substitute for histograms based on weighting
with the inverse gradient magnitude, develop a statistical model that is
invariant under isosurface-preserving transformations, and argue that
this provides a consistent method for algorithm evaluation across
multiple datasets based on histogram equalization. We use our corrected
formulation to reevaluate recent results on average isosurface
complexity, and show evidence that noise is one cause of the discrepancy
between the expected figure and the observed one.
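The corrected statistic follows directly from the coarea formula: each sample should contribute the reciprocal of its gradient magnitude rather than a unit count, so the histogram approximates isosurface area per isovalue. A schematic 1D sketch under that reading (the variable names and test function are our own, not the paper's code):

```python
import numpy as np

def coarea_weighted_histogram(values, grad_mag, bins=16, eps=1e-6):
    """Histogram in which each sample is weighted by 1/|grad f|,
    approximating the distribution of isosurface area over isovalues."""
    weights = 1.0 / np.maximum(grad_mag, eps)  # guard near-zero gradients
    return np.histogram(values, bins=bins, weights=weights)

# For f(x) = x^2 on [0.1, 1], |f'(x)| = 2x: low isovalues, where the
# function is flat, receive more weight than an unweighted count gives.
x = np.linspace(0.1, 1.0, 1000)
hist, edges = coarea_weighted_histogram(x ** 2, 2.0 * x, bins=4)
```

Because the weighting cancels the gradient-dependent bias, the resulting statistic is invariant under isosurface-preserving transformations of the data, which is the property the unweighted histogram lacks.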
|
|
Visibility-driven Mesh Analysis and Visualization through Graph Cuts |
|
Kaichi Zhou,
Eugene Zhang,
Jiří Bittner,
Peter Wonka
|
|
Pages: 1667-1674 |
|
doi>10.1109/TVCG.2008.176 |
|
|
|
In
this paper we present an algorithm that operates on a triangular mesh
and classifies each face as either inside or outside. We
present three example applications of this core algorithm: normal
orientation, inside removal, and layer-based visualization. The
distinguishing feature of our algorithm is its robustness even for
difficult input models that include holes, coplanar triangles,
intersecting triangles, and lost connectivity. Our algorithm
works with the original triangles of the input model and uses sampling
to construct a visibility graph that is then segmented using graph cuts.
|
|
Text Scaffolds for Effective Surface Labeling |
|
Gregory Cipriano,
Michael Gleicher
|
|
Pages: 1675-1682 |
|
doi>10.1109/TVCG.2008.168 |
|
|
|
In
this paper we introduce a technique for applying textual labels to 3D
surfaces. An effective labeling must balance the conflicting goals of
conveying the shape of the surface while being legible from a range of
viewing directions. Shape can be conveyed by placing the text as a
texture directly on the surface, providing shape cues, meaningful
landmarks and minimally obstructing the rest of the model. But rendering
such surface text is problematic both in regions of high curvature,
where text would be warped, and in highly occluded regions, where it
would be hidden. Our approach achieves both labeling goals by applying
surface labels to a "text scaffold", a surface explicitly constructed to
hold the labels. Text scaffolds conform to the underlying surface
whenever possible, but can also float above problem regions, allowing
them to be smooth while still conveying the overall shape. This paper
provides methods for constructing scaffolds from a variety of input
sources, including meshes, constructive solid geometry, and scalar
fields. These sources are first mapped into a distance transform, which
is then filtered and used to construct a new mesh on which labels are
either manually or automatically placed. In the latter case, annotated
regions of the input surface are associated with proximal regions on the
new mesh, and labels are placed using cartographic principles.
|
|
Relation-Aware Volume Exploration Pipeline |
|
Ming-Yuen Chan,
Huamin Qu,
Ka-Kei Chung,
Wai-Ho Mak,
Yingcai Wu
|
|
Pages: 1683-1690 |
|
doi>10.1109/TVCG.2008.159 |
|
|
|
Volume
exploration is an important issue in scientific visualization. Research
on volume exploration has been focused on revealing hidden structures
in volumetric data. While the information of individual structures or
features is useful in practice, spatial relations between structures are
also important in many applications and can provide further insights
into the data. In this paper, we systematically study the extraction,
representation, exploration, and visualization of spatial relations in
volumetric data and propose a novel relation-aware visualization
pipeline for volume exploration. In our pipeline, various relations in
the volume are first defined and measured using region connection
calculus (RCC) and then represented using a graph interface called
relation graph. With RCC and the relation graph, relation query and
interactive exploration can be conducted in a comprehensive and
intuitive way. The visualization process is further assisted with
relation-revealing viewpoint selection and color and opacity
enhancement. We also introduce a quality assessment scheme which
evaluates the perception of spatial relations in the rendered images.
Experiments on various datasets demonstrate the practical use of our
system in exploratory visualization.
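The region connection calculus relations underlying the pipeline can be illustrated on discrete voxel sets. The sketch below is a coarse, set-based approximation of a few RCC-style relations; the function and relation names are our own simplification, not the paper's RCC machinery:

```python
def rcc_relation(a, b):
    """Classify the relation between two regions given as sets of voxel
    coordinates: a discrete caricature of RCC relations."""
    if not a & b:
        return "disconnected"
    if a == b:
        return "equal"
    if a < b:
        return "inside"      # a is a proper part of b
    if a > b:
        return "contains"    # b is a proper part of a
    return "overlapping"     # partial overlap

region_a = {(0, 0), (0, 1)}
region_b = {(0, 1), (1, 1)}
```

In a relation graph of the kind the paper describes, each structure becomes a node and these classified relations become labeled edges, so a query such as "which features touch this structure?" reduces to a graph traversal.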
|
|
VisComplete: Automating Suggestions for Visualization Pipelines |
|
David Koop
|
|
Pages: 1691-1698 |
|
doi>10.1109/TVCG.2008.174 |
|
|
|
Building
visualization and analysis pipelines is a large hurdle in the adoption
of visualization and workflow systems by domain scientists. In this
paper, we propose techniques to help users construct pipelines by
consensus—automatically suggesting completions based on a database of
previously created pipelines. In particular, we compute correspondences
between existing pipeline subgraphs from the database, and use these to
predict sets of likely pipeline additions to a given partial pipeline.
By presenting these predictions in a carefully designed interface,
users can create visualizations and other data products more efficiently
because they can augment their normal work patterns with the suggested
completions. We present an implementation of our technique in a
publicly-available, open-source scientific workflow system and
demonstrate efficiency gains in real-world situations.
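The suggestion mechanism can be caricatured with a much simpler frequency model: given the last module of a partial pipeline, count which modules followed it in past pipelines and propose the most common ones. This is a deliberately naive stand-in for the paper's subgraph-correspondence method; the module names and toy database are invented.

```python
from collections import Counter

def suggest_next(last_module, pipeline_db, top_k=3):
    """Rank candidate next modules by how often they follow
    last_module in a database of previously created pipelines."""
    successors = Counter()
    for pipeline in pipeline_db:
        for current, following in zip(pipeline, pipeline[1:]):
            if current == last_module:
                successors[following] += 1
    return [module for module, _ in successors.most_common(top_k)]

db = [
    ["reader", "contour", "mapper", "renderer"],
    ["reader", "contour", "smoother", "mapper"],
    ["reader", "slice", "mapper", "renderer"],
]
```

The paper's actual method matches whole subgraphs rather than single predecessor modules, which lets it suggest multi-module completions instead of one step at a time.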
|
|
Interactive Visual Steering - Rapid Visual Prototyping of a Common Rail Injection System |
|
Kresimir Matkovic,
Denis Gracanin,
Mario Jelovic,
Helwig Hauser
|
|
Pages: 1699-1706 |
|
doi>10.1109/TVCG.2008.145 |
|
|
|
Interactive
steering with visualization has been a common goal of the visualization
research community for twenty years, but it is rarely ever realized in
practice. In this paper we describe a successful realization of a
tightly coupled steering loop, integrating new simulation technology and
interactive visual analysis in a prototyping environment for automotive
industry system design. Due to increasing pressure on car manufacturers
to meet new emission regulations, to improve efficiency, and to reduce
noise, both simulation and visualization are pushed to their limits.
Automotive system components, such as the powertrain system or the
injection system have an increasing number of parameters, and new design
approaches are required. It is no longer possible to optimize such a
system solely based on experience or forward optimization. By coupling
interactive visualization with the simulation back-end (computational
steering), it is now possible to quickly prototype a new system,
starting from a non-optimized initial prototype and the corresponding
simulation model. The prototyping continues through the refinement of
the simulation model, of the simulation parameters and through
trial-and-error attempts to an optimized solution. The ability to see
early results from a multidimensional simulation space —
thousands of simulations are run for a multidimensional variety of input
parameters — and to quickly go back into the simulation and request
more runs in particular parameter regions of interest significantly
improves the prototyping process and provides a deeper understanding of
the system behavior. The excellent results we achieved for the
common rail injection system strongly suggest that our approach has
great potential to be generalized to other, similar scenarios.
|
|
AD-Frustum: Adaptive Frustum Tracing for Interactive Sound Propagation |
|
Anish Chandak,
Christian Lauterbach,
Micah Taylor,
Zhimin Ren,
Dinesh Manocha
|
|
Pages: 1707-1714 |
|
doi>10.1109/TVCG.2008.111 |
|
|
|
We
present an interactive algorithm to compute sound propagation paths for
transmission, specular reflection and edge diffraction in complex
scenes. Our formulation uses an adaptive frustum representation that is
automatically sub-divided to accurately compute intersections with the
scene primitives. We describe a simple and fast algorithm to approximate
the visible surface for each frustum and generate new frusta based on
specular reflection and edge diffraction. Our approach is applicable to
all triangulated models and we demonstrate its performance on
architectural and outdoor models with tens or hundreds of thousands of
triangles and moving objects. In practice, our algorithm can perform
geometric sound propagation in complex scenes at 4-20 frames per second
on a multi-core PC.
|
|
Query-Driven Visualization of Time-Varying Adaptive Mesh Refinement Data |
|
Luke J. Gosink,
John C. Anderson,
E. Wes Bethel,
Kenneth I. Joy
|
|
Pages: 1715-1722 |
|
doi>10.1109/TVCG.2008.157 |
|
|
|
The
visualization and analysis of AMR-based simulations is integral to the
process of obtaining new insight in scientific research. We present a
new method for performing query-driven visualization and analysis on AMR
data, with specific emphasis on time-varying AMR data. Our work
introduces a new method that directly addresses the dynamic spatial and
temporal properties of AMR grids that challenge many existing
visualization techniques. Further, we present the first implementation
of query-driven visualization on the GPU that uses a GPU-based indexing
structure to both answer queries and efficiently utilize GPU memory. We
apply our method to two different science domains to demonstrate its
broad applicability.
|
|
A
Comparison of the Perceptual Benefits of Linear Perspective and
Physically-Based Illumination for Display of Dense 3D Streamtubes |
|
Chris Weigle,
David Banks
|
|
Pages: 1723-1730 |
|
doi>10.1109/TVCG.2008.108 |
|
|
|
Large
datasets typically contain coarse features comprised of finer
sub-features. Even if the shapes of the small structures are evident in a
3D display, the aggregate shapes they suggest may not be easily
inferred. From previous studies in shape perception, the evidence has
not been clear whether physically-based illumination confers any
advantage over local illumination for understanding scenes that arise in
visualization of large data sets that contain features at two distinct
scales. In this paper we show that physically-based illumination can
improve the perception of some static scenes of complex 3D geometry
from flow fields. We perform human-subjects experiments to quantify the
effect of physically-based illumination on participant performance for
two tasks: selecting the closer of two streamtubes from a field of
tubes, and identifying the shape of the domain of a flow field over
different densities of tubes. We find that physically-based illumination
influences participant performance as strongly as perspective
projection, suggesting that physically-based illumination is indeed a
strong cue to the layout of complex scenes. We also find that increasing
the density of tubes for the shape identification task improved
participant performance under physically-based illumination but not
under the traditional hardware-accelerated illumination model.
|
|
Focus+Context Visualization with Distortion Minimization |
|
Yu-Shuen Wang,
Tong-Yee Lee,
Chiew-Lan Tai
|
|
Pages: 1731-1738 |
|
doi>10.1109/TVCG.2008.132 |
|
|
|
The
need to examine and manipulate large surface models is commonly found
in many science, engineering, and medical applications. On a desktop
monitor, however, seeing the whole model in detail is not possible. In
this paper, we present a new, interactive Focus+Context method for
visualizing large surface models. Our method, based on an energy
optimization model, allows the user to magnify an area of interest to
see it in detail while deforming the rest of the area without
perceivable distortion. The rest of the surface area is essentially
shrunk to use as little of the screen space as possible in order to keep
the entire model displayed on screen. We demonstrate the efficacy and
robustness of our method with a variety of models.
|
|
Color Design for Illustrative Visualization |
|
Lujin Wang,
Joachim Giesen,
Kevin T. McDonnell,
Peter Zolliker,
Klaus Mueller
|
|
Pages: 1739-1746 |
|
doi>10.1109/TVCG.2008.118 |
|
|
|
Professional
designers and artists are quite cognizant of the rules that guide the
design of effective color palettes, from both aesthetic and
attention-guiding points of view. In the field of visualization,
however, the use of systematic rules embracing these aspects has
received less attention. The situation is further complicated by the
fact that visualization often uses semi-transparencies to reveal
occluded objects, in which case the resulting color mixing effects add
additional constraints to the choice of the color palette. Color design
forms a crucial part in visual aesthetics. Thus, the consideration of
these issues can be of great value in the emerging field of illustrative
visualization. We describe a knowledge-based system that captures
established color design rules into a comprehensive interactive
framework, aimed at aiding users in the selection of colors for scene
objects while incorporating individual preferences, importance functions,
and overall scene composition. Our framework also offers new knowledge
and solutions for the mixing, ordering and choice of colors in the
rendering of semi-transparent layers and surfaces. All design rules are
evaluated via user studies, for which we extend the method of conjoint
analysis to task-based testing scenarios. Our framework’s use of
principles rooted in color design with application for the illustration
of features in pre-classified data distinguishes it from existing
systems which target the exploration of continuous-range density data
via perceptual color maps.
|
|
An Efficient Naturalness-Preserving Image-Recoloring Method for Dichromats |
|
Giovane R. Kuhn,
Manuel M. Oliveira,
Leandro A. F. Fernandes
|
|
Pages: 1747-1754 |
|
doi>10.1109/TVCG.2008.112 |
|
|
|
We
present an efficient and automatic image-recoloring technique for
dichromats that highlights important visual details that would otherwise
be unnoticed by these individuals. While previous techniques approach
this problem by potentially changing all colors of the original image,
causing their results to look unnatural to color vision deficients, our
approach preserves, as much as possible, the image’s original colors.
Our approach is about three orders of magnitude faster than previous
ones. The results of a paired-comparison evaluation carried out with
fourteen color-vision deficients (CVDs) indicated the preference of our
technique over the state-of-the-art automatic recoloring technique for
dichromats. When considering information visualization examples, the
subjects tended to prefer our results over the original images. An
extension of our technique that exaggerates color contrast tends to be
preferred when CVDs compared pairs of scientific visualization images.
These results provide valuable information for guiding the design of
visualizations for color-vision deficients.
|
|
Effects of Video Placement and Spatial Context Presentation on Path Reconstruction Tasks with Contextualized Videos |
|
Yi Wang,
Doug Bowman,
David Krum,
Enylton Coelho,
Tonya Smith-Jackson,
David Bailey,
Sarah Peck,
Swethan Anand,
Trevor Kennedy,
Yernar Abdrazakov
|
|
Pages: 1755-1762 |
|
doi>10.1109/TVCG.2008.126 |
|
|
|
Many
interesting and promising prototypes for visualizing video data have
been proposed, including those that combine videos with their spatial
context (contextualized videos). However, relatively little work has
investigated the fundamental design factors behind these prototypes in
order to provide general design guidance. Focusing on real-time video
data visualization, we evaluated two important design factors — video
placement method and spatial context presentation method — through a
user study. In addition, we evaluated the effect of spatial knowledge of
the environment. Participants’ performance was measured through path
reconstruction tasks, where the participants followed a target through
simulated surveillance videos and marked the target paths on the
environment model. We found that embedding videos inside the model
enabled real-time strategies and led to faster performance. With the help
of contextualized videos, participants not familiar with the real
environment achieved similar task performance to participants that
worked in that environment. We discuss design implications and provide
general design recommendations for traffic and security surveillance
system interfaces.
|
|
Back matter |
|
Pages: xxvii-xxviii |
|
doi>10.1109/TVCG.2008.113 |
|
|
|
Back matter from Vis/InfoVis 2008
|