IEEE Visualization Conference and IEEE Information Visualization Conference Proceedings 2007 pre-pages

Pages: ii-xxvii
Visual Analysis of Network Traffic for Resource Planning, Interactive Monitoring, and Interpretation of Security Threats

Florian Mansmann, Daniel A. Keim, Stephen C. North, Brian Rexroad, Daniel Sheleheda

Pages: 1105-1112
doi: 10.1109/TVCG.2007.70522
The Internet has become a wild place: malicious code is spread on personal computers across the world, deploying botnets ready to attack the network infrastructure. The vast number of security incidents and other anomalies overwhelms attempts at manual analysis, especially when monitoring service provider backbone links. We present an approach to interactive visualization with a case study indicating that interactive visualization can be applied to gain more insight into these large data sets. We superimpose a hierarchy on IP address space, and study the suitability of Treemap variants for each hierarchy level. Because viewing the whole IP hierarchy at once is not practical for most tasks, we evaluate layout stability when eliding large parts of the hierarchy, while maintaining the visibility and ordering of the data of interest.
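The abstract describes superimposing a hierarchy on IP address space. As a purely illustrative sketch (not the paper's implementation), IPv4 addresses can be grouped by their natural /8, /16, and /24 prefixes; the `ip_hierarchy` helper below is hypothetical:

```python
from collections import defaultdict

def ip_hierarchy(addresses):
    """Group dotted-quad IPv4 strings by /8, then /16, then /24 prefix."""
    tree = defaultdict(lambda: defaultdict(lambda: defaultdict(list)))
    for addr in addresses:
        o = addr.split(".")
        # Keys are the first one, two, and three octets of each address.
        tree[o[0]][".".join(o[:2])][".".join(o[:3])].append(addr)
    return tree

hosts = ["10.1.1.5", "10.1.1.9", "10.1.2.3", "192.168.0.1"]
tree = ip_hierarchy(hosts)
# tree["10"]["10.1"]["10.1.1"] groups the two hosts sharing the 10.1.1.0/24 prefix.
```

Each level of such a hierarchy could then be handed to a treemap layout, which is the kind of structure the paper's Treemap-variant study operates on.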
AdaptiviTree: Adaptive Tree Visualization for Tournament-Style Brackets

Desney Tan, Greg Smith, Bongshin Lee, George Robertson

Pages: 1113-1120
doi: 10.1109/TVCG.2007.70537
Online pick'em games, such as the recent NCAA college basketball March Madness tournament, form a large and rapidly growing industry. In these games, players make predictions on a tournament bracket that defines which competitors play each other and how they proceed toward a single champion. Throughout the course of the tournament, players monitor the brackets to track progress and to compare predictions made by multiple players. This is often a complex sensemaking task. The classic bracket visualization was designed for use on paper and utilizes an incrementally additive system in which the winner of each match-up is rewritten in the next round as the tournament progresses. Unfortunately, this representation requires a significant amount of space and makes it relatively difficult to get a quick overview of the tournament state since competitors take arbitrary paths through the static bracket. In this paper, we present AdaptiviTree, a novel visualization that adaptively deforms the representation of the tree and uses its shape to convey outcome information. AdaptiviTree not only provides a more compact and understandable representation, but also allows overlays that display predictions as well as other statistics. We describe results from a lab study we conducted to explore the efficacy of AdaptiviTree, as well as from a deployment of the system in a recent real-world sports tournament.
ManyEyes: a Site for Visualization at Internet Scale

Fernanda B. Viegas, Martin Wattenberg, Frank van Ham, Jesse Kriss, Matt McKeon

Pages: 1121-1128
doi: 10.1109/TVCG.2007.70577
We describe the design and deployment of Many Eyes, a public web site where users may upload data, create interactive visualizations, and carry on discussions. The goal of the site is to support collaboration around visualizations at a large scale by fostering a social style of data analysis in which visualizations not only serve as a discovery tool for individuals but also as a medium to spur discussion among users. To support this goal, the site includes novel mechanisms for end-user creation of visualizations and asynchronous collaboration around those visualizations. In addition to describing these technologies, we provide a preliminary report on the activity of our users.
Scented Widgets: Improving Navigation Cues with Embedded Visualizations

Wesley Willett, Jeffrey Heer, Maneesh Agrawala

Pages: 1129-1136
doi: 10.1109/TVCG.2007.70589
This paper presents scented widgets, graphical user interface controls enhanced with embedded visualizations that facilitate navigation in information spaces. We describe design guidelines for adding visual cues to common user interface widgets such as radio buttons, sliders, and combo boxes and contribute a general software framework for applying scented widgets within applications with minimal modifications to existing source code. We provide a number of example applications and describe a controlled experiment which finds that users exploring unfamiliar data make up to twice as many unique discoveries using widgets imbued with social navigation data. However, these differences equalize as familiarity with the data increases.
Show Me: Automatic Presentation for Visual Analysis

Jock Mackinlay, Pat Hanrahan, Chris Stolte

Pages: 1137-1144
doi: 10.1109/TVCG.2007.70594
This paper describes Show Me, an integrated set of user interface commands and defaults that incorporate automatic presentation into a commercial visual analysis system called Tableau. A key aspect of Tableau is VizQL, a language for specifying views, which is used by Show Me to extend automatic presentation to the generation of tables of views (commonly called small multiple displays). A key research issue for the commercial application of automatic presentation is the user experience, which must support the flow of visual analysis. User experience has not been the focus of previous research on automatic presentation. The Show Me user experience includes the automatic selection of mark types, a command to add a single field to a view, and a pair of commands to build views for multiple fields. Although the use of these defaults and commands is optional, user interface logs indicate that Show Me is used by commercial users.
Casual Information Visualization: Depictions of Data in Everyday Life

Zachary Pousman, John Stasko, Michael Mateas

Pages: 1145-1152
doi: 10.1109/TVCG.2007.70541
Information visualization has often focused on providing deep insight for expert user populations and on techniques for amplifying cognition through complicated interactive visual models. This paper proposes a new subdomain for infovis research that complements the focus on analytic tasks and expert use. Instead of work-related and analytically driven infovis, we propose Casual Information Visualization (or Casual Infovis) as a complement to more traditional infovis domains. Traditional infovis systems, techniques, and methods do not easily lend themselves to the broad range of user populations, from experts to novices, or from work tasks to more everyday situations. We propose definitions, perspectives, and research directions for further investigations of this emerging subfield. These perspectives build from ambient information visualization [32], social visualization, and also from artistic work that visualizes information [41]. We seek to provide a perspective on infovis that integrates these research agendas under a coherent vocabulary and framework for design. We enumerate the following contributions. First, we demonstrate how blurry the boundary of infovis is by examining systems that exhibit many of the putative properties of infovis systems, but perhaps would not be considered so. Second, we explore the notion of insight and how, instead of a monolithic definition of insight, there may be multiple types, each with particular characteristics. Third, we discuss design challenges for systems intended for casual audiences. Finally, we conclude with challenges for system evaluation in this emerging subfield.
Geographically Weighted Visualization: Interactive Graphics for Scale-Varying Exploratory Analysis

Jason Dykes, Chris Brunsdon

Pages: 1161-1168
doi: 10.1109/TVCG.2007.70558
We introduce a series of geographically weighted (GW) interactive graphics, or geowigs, and use them to explore spatial relationships at a range of scales. We visually encode information about geographic and statistical proximity and variation in novel ways through gw-choropleth maps, multivariate gw-boxplots, gw-shading and scalograms. The new graphic types reveal information about GW statistics at several scales concurrently. We implement these views in prototype software containing dynamic links and GW interactions that encourage exploration and refine them to consider directional geographies. An informal evaluation uses interactive GW techniques to consider Guerry's dataset of 'moral statistics', casting doubt on correlations originally proposed through visual analysis, revealing new local anomalies and suggesting multivariate geographic relationships. Few attempts at visually synthesising geography with multivariate statistical values at multiple scales have been reported. The geowigs proposed here provide informative representations of multivariate local variation, particularly when combined with interactions that coordinate views and result in gw-shading. We argue that they are widely applicable to area and point-based geographic data and provide a set of methods to support visual analysis using GW statistics through which the effects of geography can be explored at multiple scales.
Visualizing the History of Living Spaces

Yuri Ivanov, Christopher Wren, Alexander Sorokin, Ishwinder Kaur

Pages: 1153-1160
doi: 10.1109/TVCG.2007.70621
The technology available to building designers now makes it possible to monitor buildings on a very large scale. Video cameras and motion sensors are commonplace in practically every office space, and are slowly making their way into living spaces. The application of such technologies, in particular video cameras, while improving security, also violates privacy. On the other hand, motion sensors, while being privacy-conscious, typically do not provide enough information for a human operator to maintain the same degree of awareness about the space that can be achieved by using video cameras. We propose a novel approach in which we use a large number of simple motion sensors and a small set of video cameras to monitor a large office space. In our system we deployed 215 motion sensors and six video cameras to monitor the 3,000-square-meter office space occupied by 80 people for a period of about one year. The main problem in operating such systems is finding a way to present this highly multidimensional data, which includes both spatial and temporal components, to a human operator to allow browsing and searching recorded data in an efficient and intuitive way. In this paper we present our experiences and the solutions that we have developed in the course of our work on the system. We consider this work to be the first step in helping designers and managers of building systems gain access to information about occupants' behavior in the context of an entire building in a way that is only minimally intrusive to the occupants' privacy.
Legible Cities: Focus-Dependent Multi-Resolution Visualization of Urban Relationships

Remco Chang, Ginette Wessel, Robert Kosara, Eric Sauda, William Ribarsky

Pages: 1169-1175
doi: 10.1109/TVCG.2007.70574
Numerous systems have been developed to display large collections of data for urban contexts; however, most have focused on layering of single dimensions of data and manual calculations to understand relationships within the urban environment. Furthermore, these systems often limit the user's perspectives on the data, thereby diminishing the user's spatial understanding of the viewing region. In this paper, we introduce a highly interactive urban visualization tool that provides intuitive understanding of the urban data. Our system utilizes an aggregation method that combines buildings and city blocks into legible clusters, thus providing continuous levels of abstraction while preserving the user's mental model of the city. In conjunction with a 3D view of the urban model, a separate but integrated information visualization view displays multiple disparate dimensions of the urban data, allowing the user to understand the urban environment both spatially and cognitively in one glance. For our evaluation, expert users from various backgrounds viewed a real city model with census data and confirmed that our system allowed them to gain more intuitive and deeper understanding of the urban model from different perspectives and levels of abstraction than existing commercial urban visualization systems.
Interactive Visual Exploration of a Large Spatio-temporal Dataset: Reflections on a Geovisualization Mashup

Jo Wood, Jason Dykes, Aidan Slingsby, Keith Clarke

Pages: 1176-1183
doi: 10.1109/TVCG.2007.70570
Exploratory visual analysis is useful for the preliminary investigation of large structured, multifaceted spatio-temporal datasets. This process requires the selection and aggregation of records by time, space and attribute, the ability to transform data and the flexibility to apply appropriate visual encodings and interactions. We propose an approach inspired by geographical 'mashups' in which freely-available functionality and data are loosely but flexibly combined using de facto exchange standards. Our case study combines MySQL, PHP and the LandSerf GIS to allow Google Earth to be used for visual synthesis and interaction with encodings described in KML. This approach is applied to the exploration of a log of 1.42 million requests made of a mobile directory service. Novel combinations of interaction and visual encoding are developed including spatial 'tag clouds', 'tag maps', 'data dials' and multi-scale density surfaces. Four aspects of the approach are informally evaluated: the visual encodings employed, their success in the visual exploration of the dataset, the specific tools used and the 'mashup' approach. Preliminary findings will be beneficial to others considering using mashups for visualization. The specific techniques developed may be more widely applied to offer insights into the structure of multifarious spatio-temporal data of the type explored here.
Hotmap: Looking at Geographic Attention

Danyel Fisher

Pages: 1184-1191
doi: 10.1109/TVCG.2007.70561
Understanding how people use online maps allows data acquisition teams to concentrate their efforts on the portions of the map that are most seen by users. Online maps represent vast databases, and so it is insufficient to simply look at a list of the most-accessed URLs. Hotmap takes advantage of the design of a mapping system's imagery pyramid to superpose a heatmap of the log files over the original maps. Users' behavior within the system can be observed and interpreted. This paper discusses the imagery acquisition task that motivated Hotmap, and presents several examples of information that Hotmap makes visible. We discuss the design choices behind Hotmap, including logarithmic color schemes; low-saturation background images; and tuning images to explore both infrequently-viewed and frequently-viewed spaces.
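The abstract mentions logarithmic color schemes as one of Hotmap's design choices. As a generic, hypothetical sketch of that general idea (not Hotmap's actual code), a tile's hit count can be mapped to display intensity on a log scale so that rarely-viewed regions remain distinguishable from unviewed ones:

```python
import math

def log_intensity(count, max_count):
    """Map a hit count to [0, 1] on a log scale; 0 stays 0, max_count maps to 1."""
    if count <= 0:
        return 0.0
    return math.log1p(count) / math.log1p(max_count)

# A tile viewed 10 times out of a maximum of 1000 still receives a clearly
# visible intensity, whereas a linear scale would render it at 0.01.
faint = log_intensity(10, 1000)
```

The resulting value would then index into a color ramp; the log transform is what lets a heatmap show both very hot and barely-warm regions in the same image.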
VisLink: Revealing Relationships Amongst Visualizations

Christopher Collins, Sheelagh Carpendale

Pages: 1192-1199
doi: 10.1109/TVCG.2007.70521
We present VisLink, a method by which visualizations and the relationships between them can be interactively explored. VisLink readily generalizes to support multiple visualizations, empowers inter-representational queries, and enables the reuse of the spatial variables, thus supporting efficient information encoding and providing for powerful visualization bridging. Our approach uses multiple 2D layouts, drawing each one in its own plane. These planes can then be placed and re-positioned in 3D space: side by side, in parallel, or in chosen placements that provide favoured views. Relationships, connections, and patterns between visualizations can be revealed and explored using a variety of interaction techniques including spreading activation and search filters.
Visualization of Heterogeneous Data

Mike Cammarano, Xin (Luna) Dong, Bryan Chan, Jeff Klingner, Justin Talbot, Alon Halevy, Pat Hanrahan

Pages: 1200-1207
doi: 10.1109/TVCG.2007.70617
Both the Resource Description Framework (RDF), used in the semantic web, and Maya Viz u-forms represent data as a graph of objects connected by labeled edges. Existing systems for flexible visualization of this kind of data require manual specification of the possible visualization roles for each data attribute. When the schema is large and unfamiliar, this requirement inhibits exploratory visualization by requiring a costly up-front data integration step. To eliminate this step, we propose an automatic technique for mapping data attributes to visualization attributes. We formulate this as a schema matching problem, finding appropriate paths in the data model for each required visualization attribute in a visualization template.
Sequential Document Visualization

Yi Mao, Joshua Dillon, Guy Lebanon

Pages: 1208-1215
doi: 10.1109/TVCG.2007.70592
Documents and other categorical valued time series are often characterized by the frequencies of short range sequential patterns such as n-grams. This representation converts sequential data of varying lengths to high dimensional histogram vectors which are easily modeled by standard statistical models. Unfortunately, the histogram representation ignores most of the medium and long range sequential dependencies making it unsuitable for visualizing sequential data. We present a novel framework for sequential visualization of discrete categorical time series based on the idea of local statistical modeling. The framework embeds categorical time series as smooth curves in the multinomial simplex summarizing the progression of sequential trends. We discuss several visualization techniques based on the above framework and demonstrate their usefulness for document visualization.
A Taxonomy of Clutter Reduction for Information Visualisation

Geoffrey Ellis, Alan Dix

Pages: 1216-1223
doi: 10.1109/TVCG.2007.70535
Information visualisation is about gaining insight into data through a visual representation. This data is often multivariate and increasingly, the datasets are very large. To help us explore all this data, numerous visualisation applications, both commercial and research prototypes, have been designed using a variety of techniques and algorithms. Whether they are dedicated to geo-spatial data or skewed hierarchical data, most of the visualisations need to adopt strategies for dealing with overcrowded displays, brought about by too much data to fit in too small a display space. This paper analyses a large number of these clutter reduction methods, classifying them both in terms of how they deal with clutter reduction and more importantly, in terms of the benefits and losses. The aim of the resulting taxonomy is to act as a guide to match techniques to problems where different criteria may have different importance, and more importantly as a means to critique and hence develop existing and new techniques.
Toward a Deeper Understanding of the Role of Interaction in Information Visualization

Ji Soo Yi, Youn ah Kang, John Stasko, Julie Jacko

Pages: 1224-1231
doi: 10.1109/TVCG.2007.70515
Even though interaction is an important part of information visualization (Infovis), it has garnered a relatively low level of attention from the Infovis community. A few frameworks and taxonomies of Infovis interaction techniques exist, but they typically focus on low-level operations and do not address the variety of benefits interaction provides. After conducting an extensive review of Infovis systems and their interactive capabilities, we propose seven general categories of interaction techniques widely used in Infovis: 1) Select, 2) Explore, 3) Reconfigure, 4) Encode, 5) Abstract/Elaborate, 6) Filter, and 7) Connect. These categories are organized around a user's intent while interacting with a system rather than the low-level interaction techniques provided by a system. The categories can act as a framework to help discuss and evaluate interaction techniques and hopefully lay an initial foundation toward a deeper understanding and a science of interaction.
Interactive Tree Comparison for Co-located Collaborative Information Visualization

Petra Isenberg, Sheelagh Carpendale

Pages: 1232-1239
doi: 10.1109/TVCG.2007.70568
In many domains increased collaboration has led to more innovation by fostering the sharing of knowledge, skills, and ideas. Shared analysis of information visualizations not only leads to increased information processing power, but team members can also share, negotiate, and discuss their views and interpretations on a dataset and contribute unique perspectives on a given problem. Designing technologies to support collaboration around information visualizations poses special challenges and relatively few systems have been designed. We focus on supporting small groups collaborating around information visualizations in a co-located setting, using a shared interactive tabletop display. We introduce an analysis of challenges and requirements for the design of co-located collaborative information visualization systems. We then present a new system that facilitates hierarchical data comparison tasks for this type of collaborative work. Our system supports multi-user input, shared and individual views on the hierarchical data visualization, flexible use of representations, and flexible workspace organization to facilitate group work around visualizations.
Animated Transitions in Statistical Data Graphics

Jeffrey Heer, George Robertson

Pages: 1240-1247
doi: 10.1109/TVCG.2007.70539
In this paper we investigate the effectiveness of animated transitions between common statistical data graphics such as bar charts, pie charts, and scatter plots. We extend theoretical models of data graphics to include such transitions, introducing a taxonomy of transition types. We then propose design principles for creating effective transitions and illustrate the application of these principles in DynaVis, a visualization system featuring animated data graphics. Two controlled experiments were conducted to assess the efficacy of various transition types, finding that animated transitions can significantly improve graphical perception.
Browsing Zoomable Treemaps: Structure-Aware Multi-Scale Navigation Techniques

Renaud Blanch, Eric Lecolinet

Pages: 1248-1253
doi: 10.1109/TVCG.2007.70540
Treemaps provide an interesting solution for representing hierarchical data. However, most studies have mainly focused on layout algorithms and paid limited attention to the interaction with treemaps. This makes it difficult to explore large data sets and to get access to details, especially to those related to the leaves of the trees. We propose the notion of zoomable treemaps (ZTMs), a hybridization between treemaps and zoomable user interfaces that facilitates the navigation in large hierarchical data sets. By providing a consistent set of interaction techniques, ZTMs make it possible for users to browse through very large data sets (e.g., 700,000 nodes dispatched amongst 13 levels). These techniques use the structure of the displayed data to guide the interaction and provide a way to improve interactive navigation in treemaps.
Visualizing Causal Semantics Using Animations

Nivedita Kadaba, Pourang Irani, Jason Leboe

Pages: 1254-1261
doi: 10.1109/TVCG.2007.70528
Michotte's theory of ampliation suggests that causal relationships are perceived by objects animated under appropriate spatiotemporal conditions. We extend the theory of ampliation and propose that the immediate perception of complex causal relations is also dependent on a set of structural and temporal rules. We designed animated representations, based on Michotte's rules, for showing complex causal relationships or causal semantics. In this paper we describe a set of animations for showing semantics such as causal amplification, causal strength, causal dampening, and causal multiplicity. In a two part study we compared the effectiveness of both the static and animated representations. The first study (N=44) asked participants to recall passages that were previously displayed using both types of representations. Participants were 8% more accurate in recalling causal semantics when they were presented using animations instead of static graphs. In the second study (N=112) we evaluated the intuitiveness of the representations. Our results showed that while users were as accurate with the static graphs as with the animations, they were 9% faster in matching the correct causal statements in the animated condition. Overall our results show that animated diagrams that are designed based on perceptual rules such as those proposed by Michotte have the potential to facilitate comprehension of complex causal relations.
Spatialization Design: Comparing Points and Landscapes

Melanie Tory, David Sprague, Fuqu Wu, Wing Yan So, Tamara Munzner

Pages: 1262-1269
doi: 10.1109/TVCG.2007.70596
Spatializations
represent non-spatial data using a spatial layout similar to a map. We
present an experiment comparing different visual representations of
spatialized data, to determine which representations are best for a
non-trivial search and point ...
Spatializations
represent non-spatial data using a spatial layout similar to a map. We
present an experiment comparing different visual representations of
spatialized data, to determine which representations are best for a
non-trivial search and point estimation task. Primarily, we compare
point-based displays to 2D and 3D information landscapes. We also
compare a colour (hue) scale to a grey (lightness) scale. For the task
we studied, point-based spatializations were far superior to landscapes,
and 2D landscapes were superior to 3D landscapes. Little or no benefit
was found for redundantly encoding data using colour or greyscale
combined with landscape height. 3D landscapes with no colour scale
(height-only) were particularly slow and inaccurate. A colour scale was
found to be better than a greyscale for all display types, but a
greyscale was helpful compared to height-only. These results suggest
that point-based spatializations should be chosen over landscape
representations, at least for tasks involving only point data itself
rather than derived information about the data space.
|
|
Weaving
Versus Blending: a quantitative assessment of the information carrying
capacities of two alternative methods for conveying multivariate data
with color. |
|
Haleh Hagh-Shenas,
Sunghee Kim,
Victoria Interrante,
Christopher Healey
|
|
Pages: 1270-1277 |
|
doi>10.1109/TVCG.2007.70623 |
|
|
|
In
many applications, it is important to understand the individual values
of, and relationships between, multiple related scalar variables defined
across a common domain. Several approaches have been proposed for
representing data in these situations. In this paper we focus on
strategies for the visualization of multivariate data that rely on color
mixing. In particular, through a series of controlled observer
experiments, we seek to establish a fundamental understanding of the
information-carrying capacities of two alternative methods for encoding
multivariate information using color: color blending and color weaving.
We begin with a baseline experiment in which we assess participants'
abilities to accurately read numerical data encoded in six different
basic color scales defined in the L*a*b* color space. We then assess
participants' abilities to read combinations of 2, 3, 4 and 6 different
data values represented in a common region of the domain, encoded using
either color blending or color weaving. In color blending a single mixed
color is formed via linear combination of the individual values in
L*a*b* space, and in color weaving the original individual colors are
displayed side-by-side in a high frequency texture that fills the
region. A third experiment was conducted to clarify some of the trends
regarding the color contrast and its effect on the magnitude of the
error that was observed in the second experiment. The results indicate
that when the component colors are represented side-by-side in a high
frequency texture, most participants' abilities to infer the values of
individual components are significantly improved, relative to when the
colors are blended. Participants' performance was significantly better
with color weaving particularly when more than 2 colors were used, and
even when the individual colors subtended only 3 minutes of visual angle
in the texture. However, the information-carrying capacity of the color
weaving approach has its limits. We found that participants' abilities
to accurately interpret each of the individual components in a
high-frequency color texture typically fall off as the number of
components increases from 4 to 6. We found no significant advantages, in
either color blending or color weaving, to using color scales based on
component hues that are more widely separated in the L*a*b* color space.
Furthermore, we found some indications that extra difficulties may
arise when opponent hues are employed.
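The two encodings compared in this abstract can be sketched directly: blending forms one mixed color as a linear combination of the component L*a*b* colors, while weaving keeps the original colors side by side in a high-frequency texture. A minimal illustration (helper names are hypothetical, not the authors' code):

```python
# Sketch of color blending vs. color weaving, assuming each data value has
# already been mapped to an L*a*b* color by its own color scale.
# Illustrative only; not the experimental stimuli from the paper.

def blend(colors):
    """Color blending: one mixed color as the equal-weight linear
    combination (mean) of the component L*a*b* colors."""
    n = len(colors)
    return tuple(sum(c[i] for c in colors) / n for i in range(3))

def weave(colors, width, height):
    """Color weaving: the original colors displayed side by side in a
    high-frequency texture filling the region (simple per-pixel interleave)."""
    n = len(colors)
    return [[colors[(x + y) % n] for x in range(width)] for y in range(height)]

# Two component colors as (L*, a*, b*) triples
red = (53.0, 80.0, 67.0)
blue = (32.0, 79.0, -108.0)

mixed = blend([red, blue])        # single color; components not recoverable
patch = weave([red, blue], 4, 4)  # both component colors survive in the texture
```

The sketch makes the experimental contrast concrete: from `mixed` alone, the individual component values cannot be read back, whereas every cell of `patch` still carries one of the original colors.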
|
|
Overview Use in Multiple Visual Information Resolution Interfaces |
|
Heidi Lam,
Tamara Munzner,
Robert Kincaid
|
|
Pages: 1278-1285 |
|
doi>10.1109/TVCG.2007.70583 |
|
|
|
In
interfaces that provide multiple visual information resolutions (VIR),
low-VIR overviews typically sacrifice visual details for display
capacity, with the assumption that users can select regions of interest
to examine at higher VIRs. Designers can create low-VIRs based on
multi-level structure inherent in the data, but have little guidance
with single-level data. To better guide design tradeoff between display
capacity and visual target perceivability, we looked at overview use in
two multiple-VIR interfaces with high-VIR displays either embedded
within, or separate from, the overviews. We studied two visual
requirements for effective overview use and found that participants would
reliably use the low-VIR overviews only when the visual targets were
simple and had small visual spans. Otherwise, at least 20% chose to use
the high-VIR view exclusively. Surprisingly, neither of the multiple-VIR
interfaces provided performance benefits when compared to using the
high-VIR view alone. However, we did observe benefits in providing
side-by-side comparisons for target matching. We conjecture that the
high cognitive load of multiple-VIR interface interactions, whether real
or perceived, is a greater barrier to their effective use than
previously thought.
|
|
Visualizing Changes of Hierarchical Data using Treemaps |
|
Ying Tu,
Han-Wei Shen
|
|
Pages: 1286-1293 |
|
doi>10.1109/TVCG.2007.70529 |
|
|
|
While
the treemap is a popular method for visualizing hierarchical data, it
is often difficult for users to track layout and attribute changes when
the data evolve over time. When viewing the treemaps side by side or
back and forth, there exist several problems that can prevent viewers
from performing effective comparisons. Those problems include abrupt
layout changes, a lack of prominent visual patterns to represent
layouts, and a lack of direct contrast to highlight differences. In this
paper, we present strategies to visualize changes of hierarchical data
using treemaps. A new treemap layout algorithm is presented to reduce
abrupt layout changes and produce consistent visual patterns. Techniques
are proposed to effectively visualize the difference and contrast
between two treemap snapshots in terms of the map items' colors,
sizes, and positions. Experimental data show that our algorithm can
achieve a good balance in maintaining a treemap's stability,
continuity, readability, and average aspect ratio. A software tool is
created to compare treemaps and generate the visualizations. User
studies show that the users can better understand the changes in the
hierarchy and layout, and more quickly notice the color and size
differences using our method.
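The evaluation criteria named above (stability, average aspect ratio) can be made concrete with simple metrics over two treemap snapshots. A minimal sketch with hypothetical metric definitions, not the authors' exact formulas:

```python
# Hypothetical metrics for comparing two treemap snapshots, where each
# layout maps item id -> (x, y, w, h) rectangle. Illustrative definitions
# only; the paper's algorithm and measures may differ.

def avg_aspect_ratio(layout):
    """Mean aspect ratio (>= 1) over all rectangles; closer to 1 is better."""
    ratios = [max(w / h, h / w) for (_, _, w, h) in layout.values()]
    return sum(ratios) / len(ratios)

def layout_distance(a, b):
    """Stability proxy: mean center displacement of items present in both
    snapshots; smaller means less abrupt layout change."""
    shared = a.keys() & b.keys()
    total = 0.0
    for k in shared:
        ax, ay, aw, ah = a[k]
        bx, by, bw, bh = b[k]
        total += ((ax + aw / 2 - bx - bw / 2) ** 2 +
                  (ay + ah / 2 - by - bh / 2) ** 2) ** 0.5
    return total / len(shared)

before = {"a": (0, 0, 2, 1), "b": (2, 0, 2, 2)}
after = {"a": (0, 0, 1, 1), "b": (1, 0, 2, 2)}
```

With such metrics, two candidate layout algorithms can be compared on the same evolving dataset by tracking both scores over successive snapshots.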
|
|
Exploring Multiple Trees through DAG Representations |
|
Martin Graham,
Jessie Kennedy
|
|
Pages: 1294-1301 |
|
doi>10.1109/TVCG.2007.70556 |
|
|
|
We
present a Directed Acyclic Graph visualisation designed to allow
interaction with a set of multiple classification trees, specifically to
find overlaps and differences between groups of trees and individual
trees. The work is motivated by the need to find a representation for
multiple trees that has the space-saving property of a general graph
representation and the intuitive parent-child direction cues present in
individual representation of trees. Using example taxonomic data sets,
we describe augmentations to the common barycenter DAG layout method
that reveal shared sets of child nodes between common parents in a
clearer manner. Other interactions such as displaying the multiple
ancestor paths of a node when it occurs in several trees, and revealing
intersecting sibling sets within the context of a single DAG
representation are also discussed.
|
|
NodeTrix: a Hybrid Visualization of Social Networks |
|
Nathalie Henry,
Jean-Daniel Fekete,
Michael J. McGuffin
|
|
Pages: 1302-1309 |
|
doi>10.1109/TVCG.2007.70582 |
|
|
|
The
need to visualize large social networks is growing as hardware
capabilities make analyzing large networks feasible and many new data
sets become available. Unfortunately, the visualizations in existing
systems do not satisfactorily resolve the basic dilemma of being
readable both for the global structure of the network and also for
detailed analysis of local communities. To address this problem, we
present NodeTrix, a hybrid representation for networks that combines the
advantages of two traditional representations: node-link diagrams are
used to show the global structure of a network, while arbitrary portions
of the network can be shown as adjacency matrices to better support the
analysis of communities. A key contribution is a set of interaction
techniques. These allow analysts to create a NodeTrix visualization by
dragging selections to and from node-link and matrix forms, and to
flexibly manipulate the NodeTrix representation to explore the dataset
and create meaningful summary visualizations of their findings. Finally,
we present a case study applying NodeTrix to the analysis of the InfoVis
2004 coauthorship dataset to illustrate the capabilities of NodeTrix as
both an exploration tool and an effective means of communicating
results.
|
|
Multi-Level Graph Layout on the GPU |
|
Yaniv Frishman,
Ayellet Tal
|
|
Pages: 1310-1319 |
|
doi>10.1109/TVCG.2007.70580 |
|
|
|
This
paper presents a new algorithm for force directed graph layout on the
GPU. The algorithm, whose goal is to compute layouts accurately and
quickly, has two contributions. The first contribution is proposing a
general multi-level scheme, which is based on spectral partitioning. The
second contribution is computing the layout on the GPU. Since the GPU
requires a data parallel programming model, the challenge is devising a
mapping of a naturally unstructured graph into a well-partitioned
structured one. This is done by computing a balanced partitioning of a
general graph. This algorithm provides a general multi-level scheme,
which has the potential to be used not only for computation on the GPU,
but also on emerging multi-core architectures. The algorithm manages to
compute high quality layouts of large graphs in a fraction of the time
required by existing algorithms of similar quality. An application for
visualization of the topologies of ISP (Internet Service Provider)
networks is presented.
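The paper's contributions are the GPU mapping and the spectral multi-level scheme; the force-directed model underneath is the standard spring/repulsion iteration. A minimal CPU sketch of that basic iteration (not the authors' GPU implementation):

```python
# One iteration of a basic force-directed layout: pairwise repulsion
# between all nodes plus spring attraction along edges. Parameter k is the
# ideal edge length; dt is a step size. Illustrative sketch only.
import math

def step(pos, edges, k=1.0, dt=0.05):
    """Advance node positions {id: (x, y)} by one force-directed step."""
    force = {v: [0.0, 0.0] for v in pos}
    nodes = list(pos)
    for i, u in enumerate(nodes):          # repulsion between all pairs
        for v in nodes[i + 1:]:
            dx, dy = pos[u][0] - pos[v][0], pos[u][1] - pos[v][1]
            d = math.hypot(dx, dy) or 1e-9
            f = k * k / d
            force[u][0] += f * dx / d; force[u][1] += f * dy / d
            force[v][0] -= f * dx / d; force[v][1] -= f * dy / d
    for u, v in edges:                     # spring attraction along edges
        dx, dy = pos[u][0] - pos[v][0], pos[u][1] - pos[v][1]
        d = math.hypot(dx, dy) or 1e-9
        f = d * d / k
        force[u][0] -= f * dx / d; force[u][1] -= f * dy / d
        force[v][0] += f * dx / d; force[v][1] += f * dy / d
    return {v: (pos[v][0] + dt * force[v][0], pos[v][1] + dt * force[v][1])
            for v in pos}

layout = {0: (0.0, 0.0), 1: (1.0, 0.0), 2: (0.5, 1.0)}
layout = step(layout, [(0, 1), (1, 2)])
```

The multi-level idea in the paper coarsens the graph (here, via spectral partitioning), lays out the coarse graph with such iterations, and then refines, which is what makes large graphs tractable.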
|
|
Illustrative Deformation for Data Exploration |
|
Carlos Correa,
Debora Silver,
Mi Chen
|
|
Pages: 1320-1327 |
|
doi>10.1109/TVCG.2007.70565 |
|
|
|
Much
of the visualization research has focused on improving the rendering
quality and speed, and enhancing the perceptibility of features in the
data. Recently, significant emphasis has been placed on focus+context
(F+C) techniques (e.g., fisheye views and magnification lens) for data
exploration in addition to viewing transformation and hierarchical
navigation. However, most of the existing data exploration techniques
rely on the manipulation of viewing attributes of the rendering system
or optical attributes of the data objects, with users being passive
viewers. In this paper, we propose a more active approach to data
exploration, which attempts to mimic how we would explore data if we
were able to hold it and interact with it in our hands. This involves
allowing the users to physically or actively manipulate the geometry of a
data object. While this approach has been traditionally used in
applications, such as surgical simulation, where the original geometry
of the data objects is well understood by the users, there are several
challenges when this approach is generalized for applications, such as
flow and information visualization, where there is no common perception
as to the normal or natural geometry of a data object. We introduce a
taxonomy and a set of transformations especially for illustrative
deformation of general data exploration. We present combined geometric
or optical illustration operators for focus+context visualization, and
examine the best means for preventing the deformed context from being
misperceived. We demonstrate the feasibility of this generalization
with examples of flow, information, and video visualization.
|
|
An Effective Illustrative Visualization Framework Based on Photic Extremum Lines (PELs) |
|
Xuexiang Xie,
Ying He,
Feng Tian,
Hock-Soon Seah,
Xianfeng Gu,
Hong Qin
|
|
Pages: 1328-1335 |
|
doi>10.1109/TVCG.2007.70538 |
|
|
|
Conveying
shape using feature lines is an important visualization tool in visual
computing. The existing feature lines (e.g., ridges, valleys,
silhouettes, suggestive contours, etc.) are solely determined by local
geometry properties (e.g., normals and curvatures) as well as the view
position. This paper is strongly inspired by the observation in human
vision and perception that a sudden change in the luminance plays a
critical role to faithfully represent and recover the 3D information. In
particular, we adopt the edge detection techniques in image processing
for 3D shape visualization and present Photic Extremum Lines (PELs)
which emphasize significant variations of illumination over 3D surfaces.
Compared with existing feature lines, PELs are more flexible and
offer users more freedom to achieve desirable visualization effects. In
addition, the user can easily control the shape visualization by
changing the light position, the number of light sources, and choosing
various light models. We compare PELs with existing approaches and
demonstrate that PELs are a flexible and effective tool for illustrating
3D surfaces and volumes in visual computing.
|
|
Semantic Layers for Illustrative Volume Rendering |
|
Peter Rautek,
Stefan Bruckner,
Eduard Groller
|
|
Pages: 1336-1343 |
|
doi>10.1109/TVCG.2007.70591 |
|
|
|
Direct
volume rendering techniques map volumetric attributes (e.g., density,
gradient magnitude, etc.) to visual styles. Commonly this mapping is
specified by a transfer function. The specification of transfer
functions is a complex task and requires expert knowledge about the
underlying rendering technique. In the case of multiple volumetric
attributes and multiple visual styles the specification of the
multi-dimensional transfer function becomes more challenging and
non-intuitive. We present a novel methodology for the specification of a
mapping from several volumetric attributes to multiple illustrative
visual styles. We introduce semantic layers that allow a domain expert
to specify the mapping in the natural language of the domain. A semantic
layer defines the mapping of volumetric attributes to one visual style.
Volumetric attributes and visual styles are represented as fuzzy sets.
The mapping is specified by rules that are evaluated with fuzzy logic
arithmetic. The user specifies the fuzzy sets and the rules without
special knowledge about the underlying rendering technique. Semantic
layers allow for a linguistic specification of the mapping from
attributes to visual styles, replacing the traditional transfer function
specification.
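The abstract's core mechanism, fuzzy sets over volumetric attributes combined by rules, can be sketched in a few lines. The membership functions and the rule below are hypothetical examples, not the authors' system:

```python
# Illustrative fuzzy rule evaluation in the spirit of semantic layers:
# attribute values get fuzzy memberships, and a linguistic rule such as
# "IF density IS high AND gradient IS high THEN contour style" is
# evaluated with fuzzy AND as the minimum of the memberships.

def trapezoid(x, a, b, c, d):
    """Trapezoidal membership function: 0 outside [a, d], 1 on [b, c],
    linear ramps in between."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

def high_density(x):
    return trapezoid(x, 0.4, 0.7, 1.0, 1.01)   # hypothetical fuzzy set

def high_gradient(x):
    return trapezoid(x, 0.3, 0.6, 1.0, 1.01)   # hypothetical fuzzy set

def rule_strength(density, gradient):
    """'IF density IS high AND gradient IS high': min as fuzzy AND.
    The strength would then weight the layer's visual style."""
    return min(high_density(density), high_gradient(gradient))
```

A voxel with density 0.85 and gradient magnitude 0.9 fires the rule fully, while a low-density voxel contributes nothing, so the style fades in continuously rather than through a hand-tuned transfer function.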
|
|
Enhancing Depth-Perception with Flexible Volumetric Halos |
|
Stefan Bruckner,
Eduard Gröller
|
|
Pages: 1344-1351 |
|
doi>10.1109/TVCG.2007.70555 |
|
|
|
Volumetric
data commonly has high depth complexity which makes it difficult to
judge spatial relationships accurately. There are many different ways to
enhance depth perception, such as shading, contours, and shadows.
Artists and illustrators frequently employ halos for this purpose. In
this technique, regions surrounding the edges of certain structures are
darkened or brightened, which makes it easier to judge occlusion. Based on
this concept, we present a flexible method for enhancing and
highlighting structures of interest using GPU-based direct volume
rendering. Our approach uses an interactively defined halo transfer
function to classify structures of interest based on data value,
direction, and position. A feature-preserving spreading algorithm is
applied to distribute seed values to neighboring locations, generating a
controllably smooth field of halo intensities. These halo intensities
are then mapped to colors and opacities using a halo profile function.
Our method can be used to annotate features at interactive frame rates.
|
|
Registration Techniques for Using Imperfect and Partially Calibrated Devices in Planar Multi-Projector Displays |
|
Ezekiel Bhasker,
Ray Juang,
Aditi Majumder
|
|
Pages: 1352-1359 |
|
doi>10.1109/TVCG.2007.70587 |
|
|
|
Multi-projector
displays today are automatically registered, both geometrically and
photometrically, using cameras. Existing registration techniques assume
pre-calibrated projectors and cameras that are devoid of imperfections
such as lens distortion. In practice, however, these devices are usually
imperfect and uncalibrated. Registration of each of these devices is
often more challenging than the multi-projector display registration
itself. To make tiled projection-based displays accessible to a layman
user we should allow the use of uncalibrated inexpensive devices that
are prone to imperfections. In this paper, we make two important
advances in this direction. First, we present a new geometric
registration technique that can achieve geometric alignment in the
presence of severe projector lens distortion using a relatively
inexpensive low-resolution camera. This is achieved via a closed-form
model that relates the projectors to cameras, in planar multi-projector
displays, using rational Bezier patches. This enables us to
geometrically calibrate a 3000 x 2500 resolution planar multi-projector
display made of a 3 x 3 array of nine severely distorted projectors using a
low resolution (640 x 480) VGA camera. Second, we present a photometric
self-calibration technique for a projector-camera pair. This allows us
to photometrically calibrate the same display made of nine projectors
using a photometrically uncalibrated camera. To the best of our
knowledge, this is the first work that allows geometrically imperfect
projectors and photometrically uncalibrated cameras in calibrating
multi-projector displays.
|
|
A Unified Paradigm For Scalable Multi-Projector Displays |
|
Niranjan Damera-Venkata,
Nelson Chang,
Jeffrey Dicarlo
|
|
Pages: 1360-1367 |
|
doi>10.1109/TVCG.2007.70536 |
|
|
|
We
present a general framework for the modeling and optimization of
scalable multi-projector displays. Based on this framework, we derive
algorithms that can robustly optimize the visual quality of an arbitrary
combination of projectors without manual adjustment. When the
projectors are tiled, we show that our framework automatically produces
blending maps that outperform state-of-the-art projector blending
methods. When all the projectors are superimposed, the framework can
produce high-resolution images beyond the Nyquist resolution limits of
component projectors. When a combination of tiled and superimposed
projectors are deployed, the same framework harnesses the best features
of both tiled and superimposed multi-projector projection paradigms. The
framework creates for the first time a new unified paradigm that is
agnostic to a particular configuration of projectors yet robustly
optimizes for the brightness, contrast, and resolution of that
configuration. In addition, we demonstrate that our algorithms support
high-resolution video at real-time interactive frame rates achieved on
commodity graphics platforms. This work allows inexpensive,
compelling, flexible, and robust large-scale visualization systems to be
built and deployed very efficiently.
|
|
Registration Techniques for Using Imperfect and Partially Calibrated Devices in Planar Multi-Projector Displays |
|
Ezekiel Bhasker,
Ray Juang,
Aditi Majumder
|
|
Pages: 1368-1375 |
|
doi>10.1109/TVCG.2007.70586 |
|
|
|
Multi-projector
displays today are automatically registered, both geometrically and
photometrically, using cameras. Existing registration techniques assume
pre-calibrated projectors and cameras that are devoid of imperfections
such as lens distortion. In practice, however, these devices are
usually imperfect and uncalibrated. Registration of each of these
devices is often more challenging than the multi-projector display
registration itself. To make tiled projection-based displays accessible
to a layman user we should allow the use of uncalibrated inexpensive
devices that are prone to imperfections. In this paper, we make two
important advances in this direction. First, we present a new geometric
registration technique that can achieve geometric alignment in the
presence of severe projector lens distortion using a relatively
inexpensive low-resolution camera. This is achieved via a closed-form
model that relates the projectors to cameras, in planar multi-projector
displays, using rational Bezier patches. This enables us to
geometrically calibrate a 3000 × 2500 resolution planar multi-projector
display made of a 3 × 3 array of nine severely distorted projectors using
a low resolution (640 × 480) VGA camera. Second, we present a
photometric self-calibration technique for a projector-camera pair. This
allows us to photometrically calibrate the same display made of nine
projectors using a photometrically uncalibrated camera. To the best of
our knowledge, this is the first work that allows geometrically
imperfect projectors and photometrically uncalibrated cameras in
calibrating multi-projector displays.
|
|
Time Dependent Processing in a Parallel Pipeline Architecture |
|
John Biddiscombe,
Berk Geveci,
Ken Martin,
Kenneth Moreland,
David Thompson
|
|
Pages: 1376-1383 |
|
doi>10.1109/TVCG.2007.70600 |
|
|
|
Pipeline
architectures provide a versatile and efficient mechanism for
constructing visualizations, and they have been implemented in numerous
libraries and applications over the past two decades. In addition to
allowing developers and users to freely combine algorithms,
visualization pipelines have proven to work well when streaming data and
scale well on parallel distributed-memory computers. However, current
pipeline visualization frameworks have a critical flaw: they are unable
to manage time-varying data. As data flows through the pipeline, each
algorithm has access to only a single snapshot in time of the data. This
prevents the implementation of algorithms that do any temporal
processing such as particle tracing; plotting over time; or
interpolation, fitting, or smoothing of time series data. As data
acquisition technology improves, as simulation time-integration
techniques become more complex, and as simulations save less frequently
and regularly, the ability to analyze the time-behavior of data becomes
more important. This paper describes a modification to the traditional
pipeline architecture that allows it to accommodate temporal algorithms.
Furthermore, the architecture allows temporal algorithms to be used in
conjunction with algorithms expecting a single time snapshot, thus
simplifying software design and allowing adoption into existing pipeline
frameworks. Our architecture also continues to work well in parallel
distributed-memory environments. We demonstrate our architecture by
modifying the popular VTK framework and exposing the functionality to
the ParaView application. We use this framework to apply time-dependent
algorithms on large data with a parallel cluster computer and thereby
exercise a functionality that previously did not exist.
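The architectural idea, a temporal filter declaring which timesteps it needs so the executive can fetch that window upstream before executing it, can be sketched in a toy pipeline. Class and method names below are hypothetical, not the VTK/ParaView API:

```python
# Toy demand-driven pipeline with temporal support: the filter announces
# its required timesteps, the executive gathers those snapshots from the
# source, and only then runs the filter. Illustrative sketch only.

class TimeSource:
    """Produces one data value per timestep (here, a simple function of t)."""
    def request(self, t):
        return float(t * t)

class MovingAverage:
    """Temporal filter: needs a window of timesteps, not a single snapshot."""
    def __init__(self, window):
        self.window = window

    def required_timesteps(self, t):
        return range(t - self.window + 1, t + 1)

    def execute(self, snapshots):
        return sum(snapshots) / len(snapshots)

def run(source, filt, t):
    """Toy executive: satisfy the filter's time request, then execute it."""
    snaps = [source.request(s) for s in filt.required_timesteps(t)]
    return filt.execute(snaps)

avg = run(TimeSource(), MovingAverage(window=3), t=4)
```

A snapshot-only filter is simply one whose `required_timesteps` returns a single step, which is how temporal and non-temporal algorithms can coexist in the same pipeline.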
|
|
Visual Verification and Analysis of Cluster Detection for Molecular Dynamics |
|
Sebastian Grottel,
Guido Reina,
Jadran Vrabec,
Thomas Ertl
|
|
Pages: 1384-1391 |
|
doi>10.1109/TVCG.2007.70615 |
|
|
|
A
current research topic in molecular thermodynamics is the condensation
of vapor to liquid and the investigation of this process at the
molecular level. Condensation is found in many physical phenomena, e.g.
the formation of atmospheric clouds or the processes inside steam
turbines, where a detailed knowledge of the dynamics of condensation
processes will help to optimize energy efficiency and avoid problems
with droplets of macroscopic size. The key properties of these processes
are the nucleation rate and the critical cluster size. For the
calculation of these properties it is essential to make use of a
meaningful definition of molecular clusters, which currently is a not
completely resolved issue. In this paper a framework capable of
interactively visualizing molecular datasets of such nucleation
simulations is presented, with an emphasis on the detected molecular
clusters. To check the quality of the results of the cluster detection,
our framework introduces the concept of flow groups to highlight
potential cluster evolution over time which is not detected by the
employed algorithm. To confirm the findings of the visual analysis, we
coupled the rendering view with a schematic view of the clusters'
evolution. This allows researchers to rapidly assess the quality of the
molecular cluster detection algorithm and to identify locations in the
simulation data, in space as well as in time, where the cluster
detection fails. Thus, thermodynamics researchers can eliminate
weaknesses in their cluster detection algorithms. Several examples of
the effective and efficient usage of our tool are presented.
|
|
Interactive Visual Analysis of Perfusion Data |
|
Steffen Oeltze,
Helmut Doleisch,
Helwig Hauser,
Philipp Muigg,
Bernhard Preim
|
|
Pages: 1392-1399 |
|
doi>10.1109/TVCG.2007.70569 |
|
|
|
Perfusion
data are dynamic medical image data which characterize the regional
blood flow in human tissue. These data bear a great potential in medical
diagnosis, since diseases can be better distinguished and detected at
an earlier stage compared to static image data. The wide-spread use of
perfusion data is hampered by the lack of efficient evaluation methods.
For each voxel, a time-intensity curve characterizes the enhancement of a
contrast agent. Parameters derived from these curves characterize the
perfusion and have to be integrated for diagnosis. The diagnostic
evaluation of this multi-field data is challenging and time-consuming
due to its complexity. For the visual analysis of such datasets,
feature-based approaches make it possible to reduce the amount of data and direct
the user to suspicious areas. We present an interactive visual analysis
approach for the evaluation of perfusion data. For this purpose, we
integrate statistical methods and interactive feature specification.
Correlation analysis and Principal Component Analysis (PCA) are applied
for dimension reduction and to achieve a better understanding of the
inter-parameter relations. Multiple, linked views facilitate the
definition of features by brushing multiple dimensions. The
specification result is linked to all views establishing a focus+context
style of visualization in 3D. We discuss our approach with respect to
clinical datasets from the three major application areas: ischemic
stroke diagnosis, breast tumor diagnosis, and the diagnosis of
coronary heart disease (CHD). It turns out that the significance of
perfusion parameters strongly depends on the individual patient,
scanning parameters, and data pre-processing.
|
|
Variable Interactions in Query-Driven Visualization |
|
Luke Gosink,
John Anderson,
Wes Bethel,
Kenneth Joy
|
|
Pages: 1400-1407 |
|
doi>10.1109/TVCG.2007.70519 |
|
|
|
Our
ability to generate ever-larger, increasingly complex data has
established the need for scalable methods that identify, and provide
insight into, important variable trends and interactions. Query-driven
methods are among the small subset of techniques that are able to
address both large and highly complex datasets. This paper presents a new
method that increases the utility of query-driven techniques by
visually conveying statistical information about the trends that exist
between variables in a query. In this method, correlation fields,
created between pairs of variables, are used with the cumulative
distribution functions of variables expressed in a user's query. This
integrated use of cumulative distribution functions and correlation
fields visually reveals, with respect to the solution space of the
query, statistically important interactions between any three variables,
and allows for trends between these variables to be readily identified.
We demonstrate our method by analyzing interactions between variables
in two flame-front simulations. expand
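The two statistical ingredients named above can be sketched in a few lines: a coarse "correlation field" between a pair of variables, computed blockwise, and the empirical cumulative distribution function of a variable restricted to a query. The grid size, block size, and synthetic fields are illustrative assumptions; the paper's actual construction is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical 2D fields for two simulation variables on a 64x64 grid.
a = rng.normal(size=(64, 64))
b = 0.7 * a + 0.3 * rng.normal(size=(64, 64))

def local_correlation(x, y, block=8):
    """Pearson correlation of x and y inside non-overlapping blocks,
    giving a coarse correlation field between the two variables."""
    h, w = x.shape
    out = np.empty((h // block, w // block))
    for i in range(0, h, block):
        for j in range(0, w, block):
            xs = x[i:i+block, j:j+block].ravel()
            ys = y[i:i+block, j:j+block].ravel()
            out[i // block, j // block] = np.corrcoef(xs, ys)[0, 1]
    return out

corr_field = local_correlation(a, b)

# Empirical CDF of variable 'a' restricted to a query (here: a > 0).
sel = np.sort(a[a > 0].ravel())
cdf = np.arange(1, sel.size + 1) / sel.size

print(corr_field.shape)   # (8, 8)
```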
Visual Analysis of the Air Pollution Problem in Hong Kong
Huamin Qu, Wing-Yi Chan, Anbang Xu, Kai-Lun Chung, Kai-Hon Lau, Ping Guo
Pages: 1408-1415
doi>10.1109/TVCG.2007.70523

We present a comprehensive system for weather data visualization. Weather data are multivariate and contain vector fields formed by wind speed and direction. Several well-established visualization techniques such as parallel coordinates and polar systems are integrated into our system. We also develop various novel methods, including circular pixel bar charts embedded into polar systems, enhanced parallel coordinates with an S-shaped axis, and weighted complete graphs. Our system was used to analyze the air pollution problem in Hong Kong, and some interesting patterns have been found.
Topological Landscapes: A Terrain Metaphor for Scientific Data
Gunther Weber, Peer-Timo Bremer, Valerio Pascucci
Pages: 1416-1423
doi>10.1109/TVCG.2007.70601

Scientific visualization and illustration tools are designed to help people understand the structure and complexity of scientific data with images that are as informative and intuitive as possible. In this context the use of metaphors plays an important role, since they make complex information easily accessible by using commonly known concepts. In this paper we propose a new metaphor, called “Topological Landscapes,” which facilitates understanding the topological structure of scalar functions. The basic idea is to construct a terrain with the same topology as a given dataset and to display the terrain as an easily understood representation of the actual input data. In this projection from an $n$-dimensional scalar function to a two-dimensional (2D) model we preserve function values of critical points, the persistence (function span) of topological features, and one possible additional metric property (in our examples, volume). By displaying this topologically equivalent landscape together with the original data we harness the natural human proficiency in understanding terrain topography and make complex topological information easily accessible.
IStar: A Raster Representation for Scalable Image and Volume Data
Joe Kniss, Warren Hunt, Kristin Potter, Pradeep Sen
Pages: 1424-1431
doi>10.1109/TVCG.2007.70572

Topology has been an important tool for analyzing scalar data and flow fields in visualization. In this work, we analyze the topology of multivariate image and volume data sets with discontinuities in order to create an efficient, raster-based representation we call IStar. Specifically, the topology information is used to create a dual structure that contains nodes and connectivity information for every segmentable region in the original data set. This graph structure, along with a sampled representation of the segmented data set, is embedded into a standard raster image which can then be substantially downsampled and compressed. During rendering, the raster image is upsampled and the dual graph is used to reconstruct the original function. Unlike traditional raster approaches, our representation can preserve sharp discontinuities at any level of magnification, much like scalable vector graphics. However, because our representation is raster-based, it is well suited to the real-time rendering pipeline. We demonstrate this by reconstructing our data sets on graphics hardware at real-time rates.
Topologically Clean Distance Fields
Attila Gyulassy, Mark Duchaineau, Vijay Natarajan, Valerio Pascucci, Eduardo Bringa, Andrew Higginbotham, Bernd Hamann
Pages: 1432-1439
doi>10.1109/TVCG.2007.70603

Analysis of the results obtained from material simulations is important in the physical sciences. Our research was motivated by the need to investigate the properties of a simulated porous solid as it is hit by a projectile. This paper describes two techniques for the generation of distance fields containing a minimal number of topological features, and we use them to identify features of the material. We focus on distance fields defined on a volumetric domain, considering the distance to a given surface embedded within the domain. Topological features of the field are characterized by its critical points. Our first method begins with a distance field that is computed using a standard approach, and simplifies this field using ideas from Morse theory. We present a procedure for identifying and extracting a feature set through analysis of the Morse-Smale (MS) complex, and apply it to find the invariants in the clean distance field. Our second method proceeds by advancing a front, beginning at the surface, and locally controlling the creation of new critical points. We demonstrate the value of topologically clean distance fields for the analysis of filament structures in porous solids. Our methods produce a curved-skeleton representation of the filaments that helps material scientists perform a detailed qualitative and quantitative analysis of pores, and hence infer important material properties. Furthermore, we provide a set of criteria for finding the “difference” between two skeletal structures, and use this to examine how the structure of the porous solid changes over several timesteps in the simulation of the particle impact.
Efficient Computation of Morse-Smale Complexes for Three-dimensional Scalar Functions
Attila Gyulassy, Vijay Natarajan, Valerio Pascucci, Bernd Hamann
Pages: 1440-1447
doi>10.1109/TVCG.2007.70552

The Morse-Smale complex is an efficient representation of the gradient behavior of a scalar function, and critical points paired by the complex identify topological features and their importance. We present an algorithm that constructs the Morse-Smale complex in a series of sweeps through the data, identifying various components of the complex in a consistent manner. All components of the complex, both geometric and topological, are computed, providing a complete decomposition of the domain. Efficiency is maintained by representing the geometry of the complex in terms of point sets.
Similarity-Guided Streamline Placement with Error Evaluation
Yuan Chen, Jonathan Cohen, Julian Krolik
Pages: 1448-1455
doi>10.1109/TVCG.2007.70595

Most streamline generation algorithms either provide a particular density of streamlines across the domain or explicitly detect features, such as critical points, and follow customized rules to emphasize those features. However, the former generally includes many redundant streamlines, and the latter requires Boolean decisions on which points are features (and may thus suffer from robustness problems for real-world data). We take a new approach to adaptive streamline placement for steady vector fields in 2D and 3D. We define a metric for local similarity among streamlines and use this metric to grow streamlines from a dense set of candidate seed points. The metric considers not only Euclidean distance, but also a simple statistical measure of shape and directional similarity. Without explicit feature detection, our method produces streamlines that naturally accentuate regions of geometric interest. In conjunction with this method, we also propose a quantitative error metric for evaluating a streamline representation based on how well it preserves the information from the original vector field. This error metric reconstructs a vector field from points on the streamline representation and computes a difference of the reconstruction from the original vector field.
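A rough sketch of this kind of similarity-guided placement loop: trace candidate streamlines in a simple analytic 2D field and reject any candidate whose mean pointwise distance to an already accepted streamline falls below a threshold. The circular field, Euler integrator, threshold, and distance-only similarity measure are illustrative stand-ins for the paper's richer shape and direction metric.

```python
import numpy as np

def velocity(p):
    # Illustrative circular 2D field; not taken from the paper.
    x, y = p
    return np.array([-y, x])

def trace(seed, steps=60, h=0.05):
    """Integrate a streamline with the explicit Euler method."""
    pts = [np.asarray(seed, dtype=float)]
    for _ in range(steps):
        pts.append(pts[-1] + h * velocity(pts[-1]))
    return np.array(pts)

def mean_distance(a, b):
    """Mean pointwise distance between two equally sampled streamlines."""
    return np.linalg.norm(a - b, axis=1).mean()

accepted = []
for seed in [(1.0, 0.0), (1.02, 0.0), (2.0, 0.0)]:
    cand = trace(seed)
    # Accept only candidates sufficiently dissimilar from all accepted lines.
    if all(mean_distance(cand, s) > 0.5 for s in accepted):
        accepted.append(cand)

print(len(accepted))   # → 2: the near-duplicate seed is rejected
```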
Efficient Visualization of Lagrangian Coherent Structures by Filtered AMR Ridge Extraction
Filip Sadlo, Ronald Peikert
Pages: 1456-1463
doi>10.1109/TVCG.2007.70554

This paper presents a method for filtered ridge extraction based on adaptive mesh refinement. It is applicable in situations where the underlying scalar field can be refined during ridge extraction. This requirement is met by the concept of Lagrangian coherent structures, which is based on trajectories started at arbitrary sampling grids that are independent of the underlying vector field. The Lagrangian coherent structures are extracted as ridges in finite Lyapunov exponent fields computed from these grids of trajectories. The method is applied to several variants of finite Lyapunov exponents, one of which is newly introduced. High computation time due to the large number of required trajectories is a main drawback when computing Lyapunov exponents of 3-dimensional vector fields. The presented method allows a substantial speed-up by avoiding the seeding of trajectories in regions where ridges are absent or do not satisfy the prescribed filter criteria, such as a minimum finite Lyapunov exponent.
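The finite-time Lyapunov exponent fields mentioned above can be illustrated with a minimal computation: advect a particle to obtain the flow map, differentiate the flow map numerically, and take the largest stretching eigenvalue of the Cauchy-Green tensor. The 2D velocity field, integration time, and step sizes here are invented for illustration only.

```python
import numpy as np

def velocity(p):
    # Hypothetical steady 2D field used purely for illustration.
    x, y = p
    return np.array([y, np.sin(x)])

def flow_map(p, T=2.0, steps=100):
    """Advect a particle with RK4 and return its final position."""
    h = T / steps
    p = np.asarray(p, dtype=float)
    for _ in range(steps):
        k1 = velocity(p)
        k2 = velocity(p + 0.5 * h * k1)
        k3 = velocity(p + 0.5 * h * k2)
        k4 = velocity(p + h * k3)
        p = p + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return p

def ftle(p, T=2.0, eps=1e-4):
    """Finite-time Lyapunov exponent from finite differences of the flow map."""
    p = np.asarray(p, dtype=float)
    ex, ey = np.array([eps, 0.0]), np.array([0.0, eps])
    dx = (flow_map(p + ex, T) - flow_map(p - ex, T)) / (2 * eps)
    dy = (flow_map(p + ey, T) - flow_map(p - ey, T)) / (2 * eps)
    F = np.column_stack([dx, dy])       # flow-map gradient
    C = F.T @ F                         # Cauchy-Green deformation tensor
    lam = np.linalg.eigvalsh(C).max()   # largest stretching eigenvalue
    return np.log(np.sqrt(lam)) / abs(T)

print(np.isfinite(ftle([0.5, 0.5])))
```

Ridges of this scalar field, sampled over a grid of seed points, are where Lagrangian coherent structures would be extracted; the adaptive refinement and filtering described in the abstract are what make this affordable in 3D.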
Efficient Computation and Visualization of Coherent Structures in Fluid Flow Applications
Christoph Garth, Florian Gerhardt, Xavier Tricoche, Hans Hagen
Pages: 1464-1471
doi>10.1109/TVCG.2007.70551

The recently introduced notion of the Finite-Time Lyapunov Exponent to characterize Coherent Lagrangian Structures provides a powerful framework for the visualization and analysis of complex technical flows. Its definition is simple and intuitive, and it has a deep theoretical foundation. While the application of this approach seems straightforward in theory, the associated computational cost is essentially prohibitive. Due to the Lagrangian nature of this technique, a huge number of particle paths must be computed to fill the space-time flow domain. In this paper, we propose a novel scheme for the adaptive computation of FTLE fields in two and three dimensions that significantly reduces the number of required particle paths. Furthermore, for three-dimensional flows, we show on several examples that meaningful results can be obtained by restricting the analysis to a well-chosen plane intersecting the flow domain. Finally, we examine some of the visualization aspects of FTLE-based methods and introduce several new variations that help in the analysis of specific aspects of a flow.
Texture-based feature tracking for effective time-varying data visualization
Jesus Caban, Alark Joshi, Penny Rheingans
Pages: 1472-1479
doi>10.1109/TVCG.2007.70599

Analyzing, visualizing, and illustrating changes within time-varying volumetric data is challenging due to the dynamic changes occurring between timesteps. The changes and variations in computational fluid dynamics volumes and atmospheric 3D datasets do not follow any particular transformation. Features within the data move at different speeds and directions, making the tracking and visualization of these features a difficult task. We introduce a texture-based feature tracking technique to overcome some of the current limitations found in the illustration and visualization of dynamic changes within time-varying volumetric data. Our texture-based technique tracks various features individually and then uses the tracked objects to better visualize structural changes. We show the effectiveness of our texture-based tracking technique with both synthetic and real-world time-varying data. Furthermore, we highlight the specific visualization, annotation, registration, and feature isolation benefits of our technique. For instance, we show how our texture-based tracking can lead to insightful visualizations of time-varying data. Such visualizations, more than traditional visualization techniques, can assist domain scientists to explore and understand dynamic changes.
Interactive Visualization of Volumetric White Matter Connectivity in DT-MRI Using a Parallel-Hardware Hamilton-Jacobi Solver
Won-Ki Jeong, P. Thomas Fletcher, Ran Tao, Ross Whitaker
Pages: 1480-1487
doi>10.1109/TVCG.2007.70571

In this paper we present a method to compute and visualize volumetric white matter connectivity in diffusion tensor magnetic resonance imaging (DT-MRI) using a Hamilton-Jacobi (H-J) solver on the GPU (graphics processing unit). Paths through the volume are assigned costs that are lower if they are consistent with the preferred diffusion directions. The proposed method finds a set of voxels in the DTI volume that contain paths between two regions whose costs are within a threshold of the optimal path. The result is a volumetric optimal path analysis, which is driven by clinical and scientific questions relating to the connectivity between various known anatomical regions of the brain. To solve the minimal path problem quickly, we introduce a novel numerical algorithm for solving H-J equations, which we call the Fast Iterative Method (FIM). This algorithm is well-adapted to parallel architectures, and we present a GPU-based implementation which runs roughly 50-100 times faster than traditional CPU-based solvers for anisotropic H-J equations. The proposed system allows users to freely change the endpoints of interesting pathways and to visualize the optimal volumetric path between them at an interactive rate. We demonstrate the proposed method on synthetic and real DT-MRI datasets and compare its performance with existing methods.
Visualizing Whole-Brain DTI Tractography with GPU-based Tuboids and LoD Management
Vid Petrovic, James Fallon, Falko Kuester
Pages: 1488-1495
doi>10.1109/TVCG.2007.70532

Diffusion Tensor Imaging (DTI) of the human brain, coupled with tractography techniques, enables the extraction of large collections of three-dimensional tract pathways per subject. These pathways and pathway bundles represent the connectivity between different brain regions and are critical for the understanding of brain-related diseases. A flexible and efficient GPU-based rendering technique for DTI tractography data is presented that addresses common performance bottlenecks and image-quality issues, allowing interactive render rates to be achieved on commodity hardware. An occlusion query-based pathway LoD management system for streamlines/streamtubes/tuboids is introduced that optimizes input geometry, vertex processing, and fragment processing loads, and helps reduce overdraw. The tuboid, a fully-shaded streamtube impostor constructed entirely on the GPU from streamline vertices, is also introduced. Unlike full streamtubes and other impostor constructs, tuboids require little to no preprocessing or extra space over the original streamline data. The supported fragment processing levels of detail range from texture-based draft shading to full raycast normal computation, Phong shading, environment mapping, and curvature-correct text labeling. The presented text labeling technique for tuboids provides adaptive, aesthetically pleasing labels that appear attached to the surface of the tubes. Furthermore, an occlusion query aggregating and scheduling scheme for tuboids is described that reduces the query overhead. Results for a tractography dataset are presented, and demonstrate that LoD-managed tuboids offer benefits over traditional streamtubes both in performance and appearance.
Topological Visualization of Brain Diffusion MRI Data
Thomas Schultz, Holger Theisel, Hans-Peter Seidel
Pages: 1496-1503
doi>10.1109/TVCG.2007.70602

Topological methods give concise and expressive visual representations of flow fields. The present work suggests a comparable method for the visualization of human brain diffusion MRI data. We explore existing techniques for the topological analysis of generic tensor fields, but find them inappropriate for diffusion MRI data. Thus, we propose a novel approach that considers the asymptotic behavior of a probabilistic fiber tracking method and defines analogs of the basic concepts of flow topology, like critical points, basins, and faces, with interpretations in terms of brain anatomy. The resulting features are fuzzy, reflecting the uncertainty inherent in any connectivity estimate from diffusion imaging. We describe an algorithm to extract the new type of features, demonstrate its robustness under noise, and present results for two regions in a diffusion MRI dataset to illustrate that the method allows a meaningful visual analysis of probabilistic fiber tracking results.
Stochastic DT-MRI Connectivity Mapping on the GPU
Tim McGraw, Mariappan Nadar
Pages: 1504-1511
doi>10.1109/TVCG.2007.70597

We present a method for stochastic fiber tract mapping from diffusion tensor MRI (DT-MRI) implemented on graphics hardware. From the simulated fibers we compute a connectivity map that gives an indication of the probability that two points in the dataset are connected by a neuronal fiber path. A Bayesian formulation of the fiber model is given, and it is shown that the inversion method can be used to construct plausible connectivity. An implementation of this fiber model on the graphics processing unit (GPU) is presented. Since the fiber paths can be stochastically generated independently of one another, the algorithm is highly parallelizable. This allows us to exploit the data-parallel nature of the GPU fragment processors. We also present a framework for the connectivity computation on the GPU. Our implementation allows the user to interactively select regions of interest and observe the evolving connectivity results during computation. Results are presented from the stochastic generation of over 250,000 fiber steps per iteration at interactive frame rates on consumer-grade graphics hardware.
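A toy serial version of this kind of stochastic fiber mapping can look like the following: each step advances along a nominal principal diffusion direction perturbed by Gaussian noise, and a Monte Carlo count of fibers reaching a target region yields a connectivity estimate. The 2D direction field, noise level, and target geometry are invented for illustration; the paper's Bayesian fiber model and GPU mapping are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(42)

def principal_direction(p):
    # Invented stand-in for the per-voxel principal diffusion direction.
    return np.array([1.0, 0.0])

def simulate_fiber(start, steps=50, step_len=0.1, noise=0.2):
    """One stochastic fiber path: principal direction plus Gaussian jitter."""
    p = np.asarray(start, dtype=float)
    path = [p.copy()]
    for _ in range(steps):
        d = principal_direction(p) + noise * rng.normal(size=2)
        d /= np.linalg.norm(d)
        p = p + step_len * d
        path.append(p.copy())
    return np.array(path)

# Monte Carlo connectivity estimate: fraction of fibers from the seed
# that pass within a radius of the target point.
seed, target, radius = np.array([0.0, 0.0]), np.array([4.0, 0.0]), 0.5
n_fibers, hits = 200, 0
for _ in range(n_fibers):
    path = simulate_fiber(seed)
    if np.min(np.linalg.norm(path - target, axis=1)) < radius:
        hits += 1

connectivity = hits / n_fibers
print(f"connectivity estimate: {connectivity:.2f}")
```

Because each fiber is generated independently, the loop parallelizes trivially — the property the abstract exploits on the GPU.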
Efficient Surface Reconstruction using Generalized Coulomb Potentials
Andrei C. Jalba, Jos B. T. M. Roerdink
Pages: 1512-1519
doi>10.1109/TVCG.2007.70553

We propose a novel, geometrically adaptive method for surface reconstruction from noisy and sparse point clouds, without orientation information. The method employs a fast convection algorithm to attract the evolving surface towards the data points. The force field in which the surface is convected is based on generalized Coulomb potentials evaluated on an adaptive grid (i.e., an octree) using a fast, hierarchical algorithm. Formulating reconstruction as a convection problem in a velocity field generated by Coulomb potentials offers a number of advantages. Unlike methods that compute the distance from the data set to the implicit surface, which are sensitive to noise due to their reliance on the distance transform, our method is highly resilient to shot noise, since global, generalized Coulomb potentials can be used to disregard the presence of outliers due to noise. Coulomb potentials represent long-range interactions that consider all data points at once, and thus they convey global information which is crucial in the fitting process. Both the spatial and temporal complexities of our spatially adaptive method are proportional to the size of the reconstructed object, which makes our method compare favorably with previous approaches in terms of speed and flexibility. Experiments with sparse as well as noisy data sets show that the method is capable of delivering crisp and detailed yet smooth surfaces.
Surface Extraction from Multi-Material Components for Metrology using Dual Energy CT
Christoph Heinzl, Johann Kastner, Eduard Gröller
Pages: 1520-1527
doi>10.1109/TVCG.2007.70598

This paper describes a novel method for creating surface models of multi-material components using dual energy computed tomography (DECT). The application scenario is metrology and dimensional measurement in industrial high-resolution 3D x-ray computed tomography (3DCT). Based on dual source / dual exposure technology, this method employs 3DCT scans from a high-precision micro-focus and a high-energy macro-focus x-ray source. The presented work makes use of the advantages of dual x-ray exposure technology in order to facilitate dimensional measurements of multi-material components with high-density material within low-density material. We propose a workflow which uses image fusion and local surface extraction techniques: a prefiltering step reduces noise inherent in the data. For image fusion, the datasets have to be registered. In the fusion step, the benefits of both scans are combined: the structure of the specimen is taken from the low-precision, blurry, high-energy dataset, while the sharp edges are adopted and fused into the resulting image from the high-precision, crisp, low-energy dataset. In the final step, a reliable surface model is extracted from the fused dataset using a locally adaptive technique. The major contribution of this paper is the development of a specific workflow for dimensional measurements of multi-material industrial components, which takes two x-ray CT datasets with complementary strengths and weaknesses into account. The performance of the workflow is discussed using a test specimen as well as two real-world industrial parts. As a result, a significant improvement in overall measurement precision, surface geometry, and mean deviation from reference measurements was achieved compared to single-exposure scans.
Construction of Simplified Boundary Surfaces from Serial-sectioned Metal Micrographs
Scott Dillard, John Bingert, Dan Thoma, Bernd Hamann
Pages: 1528-1535
doi>10.1109/TVCG.2007.70543

We present a method for extracting boundary surfaces from segmented cross-section image data. We use a constrained Potts model to interpolate an arbitrary number of region boundaries between segmented images. This produces a segmented volume from which we extract a triangulated boundary surface using well-known marching tetrahedra methods. This surface contains staircase-like artifacts and an abundance of unnecessary triangles. We describe an approach that addresses these problems with a voxel-accurate simplification algorithm that reduces surface complexity by an order of magnitude. Our boundary interpolation and simplification methods are novel contributions to the study of surface extraction from segmented cross-sections. We have applied our method to construct polycrystal grain boundary surfaces from micrographs of a sample of the metal tantalum.
Random-Accessible Compressed Triangle Meshes
Sung-eui Yoon, Peter Lindstrom
Pages: 1536-1543
doi>10.1109/TVCG.2007.70585

With the exponential growth in the size of geometric data, it is becoming increasingly important to make effective use of multilevel caches, limited disk storage, and bandwidth. As a result, recent work in the visualization community has focused either on designing sequential-access compression schemes or on producing cache-coherent layouts of (uncompressed) meshes for random access. Unfortunately, combining these two strategies is challenging, as they fundamentally assume conflicting modes of data access. In this paper, we propose a novel order-preserving compression method that supports transparent random access to compressed triangle meshes. Our decompression method selectively fetches from disk, decodes, and caches in memory requested parts of a mesh. We also provide a general mesh access API for seamless mesh traversal and incidence queries. While the method imposes no particular mesh layout, it is especially suitable for cache-oblivious layouts, which minimize the number of decompression I/O requests and provide high cache utilization during access to decompressed, in-memory portions of the mesh. Moreover, the transparency of our scheme enables improved performance without the need for application code changes. We achieve compression rates on the order of 20:1 and significantly improved I/O performance due to reduced data transfer. To demonstrate the benefits of our method, we implement two common applications as benchmarks. By using cache-oblivious layouts for the input models, we observe a 2 to 6 times overall speedup compared to using uncompressed meshes.
LiveSync: Deformed Viewing Spheres for Knowledge-Based Navigation
Peter Kohlmann, Stefan Bruckner, Armin Kanitsar, Eduard Gröller
Pages: 1544-1551
doi>10.1109/TVCG.2007.70576

Although real-time interactive volume rendering is available even for very large data sets, this visualization method is used quite rarely in clinical practice. We suspect this is because it is very complicated and time-consuming to adjust the parameters to achieve meaningful results. The clinician has to take care of the appropriate viewpoint, zooming, transfer function setup, clipping planes, and other parameters. Because of this, most often only 2D slices of the data set are examined. Our work introduces LiveSync, a new concept to synchronize 2D slice views and volumetric views of medical data sets. Through intuitive picking actions on the slice, the users define the anatomical structures they are interested in. The 3D volumetric view is updated automatically, with the goal that the users are provided with expressive result images. To achieve this live synchronization we use a minimal set of derived information without the need for segmented data sets or data-specific pre-computations. The components we consider are the picked point, slice view zoom, patient orientation, viewpoint history, local object shape, and visibility. We introduce deformed viewing spheres, which encode the viewpoint quality for these components. A combination of the deformed viewing spheres is used to estimate a good viewpoint. Our system provides the physician with synchronized views which help to gain deeper insight into the medical data with minimal user interaction.
|
|
Navigating in a Shape Space of Registered Models |
|
Randall Smith,
Richard Pawlicki,
István Kókai,
Jörg Finger,
Thomas Vetter
|
|
Pages: 1552-1559 |
|
doi>10.1109/TVCG.2007.70581 |
|
New
product development involves people with different backgrounds.
Designers, engineers, and consumers all have different
criteria, and these criteria interact. Early concepts evolve in this kind
of collaborative context, and there is a need for dynamic visualization
of the interaction between design shape and other shape-related design
criteria. In this paper, a Morphable Model is defined from simplified
representations of suitably chosen real cars, providing a continuous
shape space to navigate, manipulate, and visualize. Physical properties
and consumer-provided scores for the real cars (such as 'weight' and
'sportiness') are estimated for new designs across the shape space. This
coupling allows one to manipulate the shape directly while reviewing the
impact on estimated criteria, or conversely, to manipulate the
criterial values of the current design to produce a new shape with more
desirable attributes.
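The Morphable Model construction described above — a continuous shape space spanned by registered examples, with attribute scores estimated across it — is commonly realized with PCA plus a regression on the shape-space coordinates. A minimal sketch under that assumption (toy random data, not the paper's car models):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "registered models": 20 shapes, each flattened to a 30-D vertex vector.
shapes = rng.normal(size=(20, 30))
scores = shapes @ rng.normal(size=30)   # e.g. a consumer 'sportiness' score

# Morphable Model: mean shape plus principal components span the shape space.
mean = shapes.mean(axis=0)
_, _, Vt = np.linalg.svd(shapes - mean, full_matrices=False)
k = 5
basis = Vt[:k]                           # k principal shape modes
coeffs = (shapes - mean) @ basis.T       # shape-space coordinates of each model

# Couple shape and criteria: linear regression of the score on the coefficients.
w, *_ = np.linalg.lstsq(np.c_[coeffs, np.ones(len(coeffs))], scores, rcond=None)

def synthesize(c):
    """New design from shape-space coordinates c."""
    return mean + c @ basis

def predict_score(c):
    """Estimated criterion value for a point in shape space."""
    return float(np.append(c, 1.0) @ w)

new_design = synthesize(coeffs[0] * 0.5)  # halfway between the mean and model 0
```

Navigating the space is then moving `c`; the inverse direction (manipulating criteria to get a shape) amounts to solving for `c` under the regression model.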
|
|
Querying and Creating Visualizations by Analogy |
|
Carlos Scheidegger,
Huy Vo,
David Koop,
Juliana Freire,
Claudio Silva
|
|
Pages: 1560-1567 |
|
doi>10.1109/TVCG.2007.70584 |
|
While
there have been advances in visualization systems, particularly in
multi-view visualizations and visual exploration, the process of
building visualizations remains a major bottleneck in data exploration.
We show that provenance metadata collected during the creation of
pipelines can be reused to suggest similar content in related
visualizations and guide semi-automated changes. We introduce the idea
of query-by-example in the context of an ensemble of visualizations, and
the use of analogies as first-class operations in a system to guide
scalable interactions. We describe an implementation of these techniques
in VisTrails, a publicly available, open-source system.
|
|
Contextualized Videos: Combining Videos with Environment Models to Support Situational Understanding |
|
Yi Wang,
David M. Krum,
Enylton M. Coelho,
Doug A. Bowman
|
|
Pages: 1568-1575 |
|
doi>10.1109/TVCG.2007.70544 |
|
Multiple
spatially-related videos are increasingly used in security,
communication, and other applications. Since it can be difficult to
understand the spatial relationships between multiple videos in complex
environments (e.g. to predict a person’s path through a building), some
visualization techniques, such as video texture projection, have been
used to aid spatial understanding. In this paper, we identify and begin
to characterize an overall class of visualization techniques that
combine video with 3D spatial context. This set of techniques, which we
call contextualized videos, forms a design palette which must be well
understood so that designers can select and use appropriate techniques
that address the requirements of particular spatial video tasks. In this
paper, we first identify user tasks in video surveillance that are
likely to benefit from contextualized videos and discuss the video,
model, and navigation related dimensions of the contextualized video
design space. We then describe our contextualized video testbed which
allows us to explore this design space and compose various video
visualizations for evaluation. Finally, we describe the results of our
process to identify promising design patterns through user selection of
visualization features from the design space, followed by user
interviews.
|
|
Lattice-Based Volumetric Global Illumination |
|
Feng Qiu,
Fang Xu,
Zhe Fan,
Neophytou Neophytos,
Arie Kaufman,
Klaus Mueller
|
|
Pages: 1576-1583 |
|
doi>10.1109/TVCG.2007.70573 |
|
We
describe a novel volumetric global illumination framework based on the
Face-Centered Cubic (FCC) lattice. An FCC lattice has important
advantages over a Cartesian lattice. It has higher packing density in
the frequency domain, which translates to better sampling efficiency.
Furthermore, it has the maximal possible kissing number (equivalent to
the number of nearest neighbors of each site), which provides optimal 3D
angular discretization among all lattices. We employ a new two-pass
(illumination and rendering) global illumination scheme on an FCC
lattice. This scheme exploits the angular discretization to greatly
simplify the computation in multiple scattering and to minimize
illumination information storage. The GPU has been utilized to further
accelerate the rendering stage. We demonstrate our new framework with
participating media and volume rendering with multiple scattering, where
both are significantly faster than traditional techniques with
comparable quality.
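The kissing number of 12 claimed for the FCC lattice can be checked directly. One standard coordinatization of FCC is the set of integer points whose coordinate sum is even; enumerating the sites nearest the origin recovers the 12 neighbor directions used for the angular discretization:

```python
from itertools import product

# FCC lattice sites: integer points whose coordinate sum is even.
def is_fcc(p):
    return sum(p) % 2 == 0

# Enumerate FCC sites near the origin and find the nearest-neighbor shell.
sites = [p for p in product(range(-2, 3), repeat=3)
         if is_fcc(p) and p != (0, 0, 0)]
d2 = {p: p[0]**2 + p[1]**2 + p[2]**2 for p in sites}
d2min = min(d2.values())                    # squared nearest-neighbor distance
neighbors = [p for p, d in d2.items() if d == d2min]
# neighbors holds the permutations of (±1, ±1, 0): 12 sites at distance sqrt(2).
```

These 12 directions are what the illumination pass propagates along, which is why the FCC choice simplifies multiple scattering relative to the 6-neighbor Cartesian case.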
|
|
A Flexible Multi-Volume Shader Framework for Arbitrarily Intersecting Multi-Resolution Datasets |
|
John Plate,
Thorsten Holtkaemper,
Bernd Froehlich
|
|
Pages: 1584-1591 |
|
doi>10.1109/TVCG.2007.70534 |
|
We
present a powerful framework for 3D-texture-based rendering of multiple
arbitrarily intersecting volumetric datasets. Each volume is
represented by a multi-resolution octree-based structure and we use
out-of-core techniques to support extremely large volumes. Users define a
set of convex polyhedral volume lenses, which may be associated with
one or more volumetric datasets. The volumes or the lenses can be
interactively moved around while the region inside each lens is rendered
using interactively defined multi-volume shaders. Our rendering
pipeline splits each lens into multiple convex regions such that each
region is homogeneous and contains a fixed number of volumes. Each such
region is further split by the brick boundaries of the associated octree
representations. The resulting puzzle of lens fragments is sorted in
front-to-back or back-to-front order using a combination of a
view-dependent octree traversal and a GPU-based depth peeling technique.
Our current implementation uses slice-based volume rendering and allows
interactive roaming through multiple intersecting multi-gigabyte
volumes.
|
|
Scalable Hybrid Unstructured and Structured Grid Raycasting |
|
Philipp Muigg,
Markus Hadwiger,
Helmut Doleisch,
Helwig Hauser
|
|
Pages: 1592-1599 |
|
doi>10.1109/TVCG.2007.70588 |
|
This
paper presents a scalable framework for real-time raycasting of large
unstructured volumes that employs a hybrid bricking approach. It
adaptively combines original unstructured bricks in important (focus)
regions, with structured bricks that are resampled on demand in less
important (context) regions. The basis of this focus+context approach is
interactive specification of a scalar degree of interest (DOI)
function. Thus, rendering always considers two volumes simultaneously: a
scalar data volume, and the current DOI volume. The crucial problem of
visibility sorting is solved by raycasting individual bricks and
compositing in visibility order from front to back. In order to minimize
visual errors at the grid boundary, it is always rendered accurately,
even for resampled bricks. A variety of different rendering modes can be
combined, including contour enhancement. A very important property of
our approach is that it supports a variety of cell types natively, i.e.,
it is not constrained to tetrahedral grids, even when interpolation
within cells is used. Moreover, our framework can handle multi-variate
data, e.g., multiple scalar channels such as temperature or pressure, as
well as time-dependent data. The combination of unstructured and
structured bricks with different quality characteristics such as the
type of interpolation or resampling resolution in conjunction with
custom texture memory management yields a very scalable system.
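The front-to-back compositing used when accumulating brick results in visibility order is the standard "over" operator, which also enables early ray termination once a ray is opaque. A minimal per-ray sketch (sample colors and opacities are invented for illustration, not taken from the paper):

```python
import numpy as np

def composite_front_to_back(colors, alphas, threshold=0.99):
    """Front-to-back 'over' compositing of samples along one ray.

    colors: iterable of RGB samples, alphas: matching opacities.
    With bricks sorted front to back, the ray can terminate early
    once accumulated opacity reaches the threshold.
    """
    C = np.zeros(3)
    A = 0.0
    for c, a in zip(colors, alphas):
        C += (1.0 - A) * a * np.asarray(c, dtype=float)
        A += (1.0 - A) * a
        if A >= threshold:   # early ray termination
            break
    return C, A

# A fully opaque first sample hides everything behind it:
C, A = composite_front_to_back([(1, 0, 0), (0, 1, 0)], [1.0, 1.0])
```

In the paper's setting the "samples" are per-brick raycasting results rather than individual voxels, but the compositing algebra is the same.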
|
|
Transform Coding for Hardware-accelerated Volume Rendering |
|
Nathaniel Fout,
Kwan-Liu Ma
|
|
Pages: 1600-1607 |
|
doi>10.1109/TVCG.2007.70516 |
|
Hardware-accelerated
volume rendering using the GPU is now the standard approach for
real-time volume rendering, although limited graphics memory can present
a problem when rendering large volume data sets. Volumetric compression
in which the decompression is coupled to rendering has been shown to be
an effective solution to this problem; however, most existing
techniques were developed in the context of software volume rendering,
and all but the simplest approaches are prohibitive in a real-time
hardware-accelerated volume rendering context. In this paper we present a
novel block-based transform coding scheme designed specifically with
real-time volume rendering in mind, such that the decompression is fast
without sacrificing compression quality. This is made possible by
consolidating the inverse transform with dequantization in such a way as
to allow most of the reprojection to be precomputed. Furthermore, we
take advantage of the freedom afforded by off-line compression in order
to optimize the encoding as much as possible while hiding this
complexity from the decoder. In this context we develop a new block
classification scheme which allows us to preserve perceptually important
features in the compression. The result of this work is an asymmetric
transform coding scheme that allows very large volumes to be compressed
and then decompressed in real time while rendering on the GPU.
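The generic block transform-coding pipeline the paper builds on can be sketched as follows. This uses an orthonormal DCT-II with a uniform quantizer purely for illustration; the paper's codec, block classification, and consolidated reprojection are more elaborate.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis as an n x n matrix."""
    k = np.arange(n)[:, None]
    x = np.arange(n)[None, :]
    M = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * x + 1) * k / (2 * n))
    M[0] /= np.sqrt(2.0)
    return M

def encode_block(block, q):
    """Transform an n x n block and quantize the coefficients."""
    D = dct_matrix(block.shape[0])
    return np.round(D @ block @ D.T / q).astype(int)

def decode_block(coeffs, q):
    """Dequantize and inverse-transform. Because D is orthonormal, the
    dequantization D.T @ (q * c) @ D can be folded into precomputed
    matrices -- the kind of consolidation that makes decoding fast."""
    D = dct_matrix(coeffs.shape[0])
    return D.T @ (q * coeffs) @ D

block = np.outer(np.arange(8), np.ones(8)) * 4.0  # a smooth 8 x 8 block
rec = decode_block(encode_block(block, q=1.0), q=1.0)
```

The asymmetry the abstract describes means `encode_block` may be arbitrarily expensive offline, while `decode_block` must stay cheap enough to run per rendered sample.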
|
|
Molecular Surface Abstraction |
|
Gregory Cipriano,
Michael Gleicher
|
|
Pages: 1608-1615 |
|
doi>10.1109/TVCG.2007.70578 |
|
In
this paper we introduce a visualization technique that provides an
abstracted view of the shape and spatio-physico-chemical properties of
complex molecules. Unlike existing molecular viewing methods, our
approach suppresses small details to facilitate rapid comprehension, yet
marks the location of significant features so they remain visible. Our
approach uses a combination of filters and mesh restructuring to
generate a simplified representation that conveys the overall shape and
spatio-physico-chemical properties (e.g. electrostatic charge). Surface
markings are then used in the place of important removed details, as
well as to supply additional information. These simplified
representations are amenable to display using stylized rendering
algorithms to further enhance comprehension. Our initial experience
suggests that our approach is particularly useful in browsing
collections of large molecules and in readily making comparisons between
them.
|
|
Two-Level Approach to Efficient Visualization of Protein Dynamics |
|
Ove Daae Lampe,
Ivan Viola,
Nathalie Reuter,
Helwig Hauser
|
|
Pages: 1616-1623 |
|
doi>10.1109/TVCG.2007.70517 |
|
Proteins
are highly flexible and large amplitude deformations of their
structure, also called slow dynamics, are often decisive to their
function. We present a two-level rendering approach that enables
visualization of slow dynamics of large protein assemblies. Our approach
is aligned with a hierarchical model of large scale molecules. Instead
of constantly updating positions of large amounts of atoms, we update
the position and rotation of residues, i.e., higher-level building
blocks of a protein. Each residue is represented by a single vertex
indicating its position, together with additional information defining
its rotation. The atoms in the residues are generated on the fly on the
GPU, exploiting the geometry shader capabilities of recent graphics
hardware. Moreover, we represent the atoms by billboards instead of
tessellated spheres. Our representation is thus significantly faster and
pixel-precise. We demonstrate the usefulness of our new approach in the
context of our collaborative bioinformatics project.
|
|
Visual Verification and Analysis of Cluster Detection for Molecular Dynamics |
|
Sebastian Grottel,
Guido Reina,
Jadran Vrabec,
Thomas Ertl
|
|
Pages: 1624-1631 |
|
doi>10.1109/TVCG.2007.70614 |
|
A
current research topic in molecular thermodynamics is the condensation
of vapor to liquid and the investigation of this process at the
molecular level. Condensation is found in many physical phenomena, e.g.
the formation of atmospheric clouds or the processes inside steam
turbines, where a detailed knowledge of the dynamics of condensation
processes will help to optimize energy efficiency and avoid problems
with droplets of macroscopic size. The key properties of these processes
are the nucleation rate and the critical cluster size. For the
calculation of these properties it is essential to make use of a
meaningful definition of molecular clusters, an issue that is currently
not completely resolved. In this paper a framework capable of
interactively visualizing molecular datasets of such nucleation
simulations is presented, with an emphasis on the detected molecular
clusters. To check the quality of the results of the cluster detection,
our framework introduces the concept of flow groups to highlight
potential cluster evolution over time which is not detected by the
employed algorithm. To confirm the findings of the visual analysis, we
coupled the rendering view with a schematic view of the clusters'
evolution. This allows one to rapidly assess the quality of the molecular
cluster detection algorithm and to identify locations in the simulation
data in space as well as in time where the cluster detection fails.
Thus, thermodynamics researchers can eliminate weaknesses in their
cluster detection algorithms. Several examples for the effective and
efficient usage of our tool are presented.
|
|
CoViCAD: Comprehensive Visualization of Coronary Artery Disease |
|
Maurice Termeer,
Javier Oliván Bescós,
Marcel Breeuwer,
Anna Vilanova,
Frans Gerritsen,
Eduard Gröller
|
|
Pages: 1632-1639 |
|
doi>10.1109/TVCG.2007.70550 |
|
We
present novel, comprehensive visualization techniques for the diagnosis
of patients with Coronary Artery Disease using segmented cardiac MRI
data. We extend an accepted medical visualization technique called the
bull's eye plot by removing discontinuities, preserving the volumetric
nature of the left ventricular wall and adding anatomical context. The
resulting volumetric bull's eye plot can be used for the assessment of
transmurality. We link these visualizations to a 3D view that presents
viability information in a detailed anatomical context. We combine
multiple MRI scans (whole heart anatomical data, late enhancement data)
and multiple segmentations (polygonal heart model, late enhancement
contours, coronary artery tree). By selectively combining different
rendering techniques we obtain comprehensive yet intuitive
visualizations of the various data sources.
|
|
Visualizing Large-Scale Uncertainty in Astrophysical Data |
|
Hongwei Li,
Chi-Wing Fu,
Yinggang Li,
Andrew Hanson
|
|
Pages: 1640-1647 |
|
doi>10.1109/TVCG.2007.70530 |
|
Visualization
of uncertainty or error in astrophysical data is seldom available in
simulations of astronomical phenomena, and yet almost all rendered
attributes possess some degree of uncertainty due to observational
error. Uncertainties associated with spatial location typically vary
significantly with scale and thus introduce further complexity in the
interpretation of a given visualization. This paper introduces effective
techniques for visualizing uncertainty in large-scale virtual
astrophysical environments. Building upon our previous transparently
scalable visualization architecture, we develop tools that enhance the
perception and comprehension of uncertainty across wide scale ranges.
Our methods include a unified color-coding scheme for representing
log-scale distances and percentage errors, an ellipsoid model to
represent positional uncertainty, an ellipsoid envelope model to expose
trajectory uncertainty, and a magic-glass design supporting the
selection of ranges of log-scale distance and uncertainty parameters, as
well as an overview mode and a scalable WIM tool for exposing the
magnitudes of spatial context and uncertainty.
|
|
Uncertainty Visualization in Medical Volume Rendering Using Probabilistic Animation |
|
Claes Lundström,
Patric Ljung,
Anders Persson,
Anders Ynnerman
|
|
Pages: 1648-1655 |
|
doi>10.1109/TVCG.2007.70518 |
|
Direct
Volume Rendering has proved to be an effective visualization method for
medical data sets and has reached wide-spread clinical use. The
diagnostic exploration, in essence, corresponds to a tissue
classification task, which is often complex and time-consuming.
Moreover, a major problem is the lack of information on the uncertainty
of the classification, which can have dramatic consequences for the
diagnosis. In this paper this problem is addressed by proposing
animation methods to convey uncertainty in the rendering. The foundation
is a probabilistic Transfer Function model which allows for direct user
interaction with the classification. The rendering is animated by
sampling the probability domain over time, which results in varying
appearance for uncertain regions. A particularly promising application
of this technique is a “sensitivity lens” applied to focus regions in
the data set. The methods have been evaluated by radiologists in a study
simulating the clinical task of stenosis assessment, in which the
animation technique is shown to outperform traditional rendering in
terms of assessment accuracy.
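The animation idea above — sampling the probability domain of a probabilistic transfer function over time so that uncertain regions visibly vary — can be sketched minimally. The two-material setup and colors here are hypothetical, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Probabilistic transfer function: for each voxel, P(material | intensity).
# Hypothetical two-material example (0 = vessel, 1 = background).
probs = np.array([
    [1.0, 0.0],   # voxel A: certainly vessel
    [0.5, 0.5],   # voxel B: maximally uncertain
    [0.0, 1.0],   # voxel C: certainly background
])

material_color = np.array([[1.0, 0.2, 0.2],   # vessel: red
                           [0.2, 0.2, 1.0]])  # background: blue

def render_frame(probs):
    """One animation frame: sample one classification per voxel."""
    u = rng.random(len(probs))
    material = (u > probs[:, 0]).astype(int)  # inverse-CDF sample
    return material_color[material]

frames = np.stack([render_frame(probs) for _ in range(20)])
# Certain voxels keep a fixed color across frames; the uncertain voxel
# flickers between the two colors, which is what conveys uncertainty.
```

A "sensitivity lens" would apply this resampling only inside a user-defined focus region, leaving the rest of the image static.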
|
|
Grid With a View: Optimal Texturing for Perception of Layered Surface Shape |
|
Alethea Bair,
Donald House
|
|
Pages: 1656-1663 |
|
doi>10.1109/TVCG.2007.70559 |
|
We
present the results of two controlled studies comparing layered surface
visualizations under various texture conditions. The task was to
estimate surface normals, measured by accuracy of a hand-set surface
normal probe. A single surface visualization was compared with the
two-surfaces case under conditions of no texture and with projected grid
textures. Variations in relative texture spacing on top and bottom
surfaces were compared, as well as opacity of the top surface.
Significant improvements are found for the textured cases over
non-textured surfaces. Either larger or thinner top-surface textures, as
well as lower top-surface opacities, are shown to give lower bottom-surface
error. Top-surface error appears to be highly resilient to changes in
texture. Given the results we also present an example of how appropriate
textures might be useful in volume visualization.
|
|
Conjoint Analysis to Measure the Perceived Quality in Volume Rendering |
|
Joachim Giesen,
Klaus Mueller,
Eva Schuberth,
Lujin Wang,
Peter Zolliker
|
|
Pages: 1664-1671 |
|
doi>10.1109/TVCG.2007.70542 |
|
Visualization
algorithms can have a large number of parameters, making the space of
possible rendering results rather high-dimensional. Only a systematic
analysis of the perceived quality can truly reveal the optimal setting
for each such parameter. However, an exhaustive search in which all
possible parameter permutations are presented to each user within a
study group would be infeasible to conduct. Additional complications may
result from possible parameter co-dependencies. Here, we will introduce
an efficient user study design and analysis strategy that is geared to
cope with this problem. The user feedback is fast and easy to obtain and
does not require exhaustive parameter testing. To enable such a
framework we have modified a preference measuring methodology, conjoint
analysis, that originated in psychology and is now also widely used in
market research. We demonstrate our framework by a study that measures
the perceived quality in volume rendering within the context of large
parameter spaces.
|
|
Interactive sound rendering in complex and dynamic scenes using frustum tracing |
|
Christian Lauterbach,
Anish Chandak,
Dinesh Manocha
|
|
Pages: 1672-1679 |
|
doi>10.1109/TVCG.2007.70567 |
|
We
present a new approach for simulating real-time sound propagation in
complex, virtual scenes with dynamic sources and objects. Our approach
combines the efficiency of interactive ray tracing with the accuracy of
tracing a volumetric representation. We use a four-sided convex frustum
and perform clipping and intersection tests using ray packet tracing. A
simple and efficient formulation is used to compute secondary frusta and
perform hierarchical traversal. We demonstrate the performance of our
algorithm in an interactive system for complex environments and
architectural models with tens or hundreds of thousands of triangles.
Our algorithm can perform real-time simulation and rendering on a
high-end PC.
|
|
Listener-based Analysis of Surface Importance for Acoustic Metrics |
|
Frank Michel,
Eduard Deines,
Martin Hering-Bertram,
Christoph Garth,
Hans Hagen
|
|
Pages: 1680-1687 |
|
doi>10.1109/TVCG.2007.70575 |
|
Acoustic
quality in room acoustics is measured by well defined quantities, like
definition, which can be derived from simulated impulse response filters
or measured values. These take into account the intensity and phase
shift of multiple reflections due to a wave front emanating from a sound
source. Definition (D50), for example, is the fraction of the total
received energy that arrives within the first 50 ms at a certain listener
position, and clarity (C50) is the corresponding ratio of early to late
energy, expressed in decibels. Unfortunately, the
impulse response measured at a single point does not provide any
information about the direction of reflections, and about the reflection
surfaces which contribute to this measure. For the visualization of
room acoustics, however, this information is very useful since it allows
one to discover regions with high contribution and provides insight into
the influence of all reflecting surfaces to the quality measure. We use
the phonon tracing method to calculate the contribution of the
reflection surfaces to the impulse response for different listener
positions. This data is used to compute importance values for the
geometry taking a certain acoustic metric into account. To get a visual
insight into the directional aspect, we map the importance to the
reflecting surfaces of the geometry. This visualization indicates which
parts of the surfaces need to be changed to enhance the chosen acoustic
quality measure. We apply our method to the acoustic improvement of a
lecture hall by means of enhancing the overall speech comprehensibility
(clarity) and evaluate the results using glyphs to visualize the clarity
(C50) values at listener positions throughout the room.
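The D50 and C50 metrics discussed above have standard room-acoustics definitions over an impulse response: D50 is the early-to-total energy fraction and C50 the early-to-late energy ratio in decibels. A small sketch of those definitions (not the paper's code), with a synthetic impulse response:

```python
import numpy as np

def d50_c50(h, fs):
    """Definition D50 and clarity C50 from impulse response h sampled at fs.

    Standard definitions:
      D50 = E(0..50 ms) / E(0..inf)                    (fraction, 0..1)
      C50 = 10 * log10(E(0..50 ms) / E(50 ms..inf))    (in dB)
    """
    energy = np.asarray(h, dtype=float) ** 2
    n50 = int(round(0.050 * fs))
    early = energy[:n50].sum()
    late = energy[n50:].sum()
    d50 = early / (early + late)
    c50 = 10.0 * np.log10(early / late)
    return d50, c50

# Synthetic impulse response: direct sound plus one late reflection.
fs = 1000
h = np.zeros(fs)
h[10] = 1.0    # direct sound at 10 ms (early)
h[100] = 1.0   # reflection at 100 ms (late)
d50, c50 = d50_c50(h, fs)
```

The paper's contribution is attributing these per-listener measures back to the reflecting surfaces via phonon tracing; the metric itself is computed as above.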
|
|
Shadow-Driven 4D Haptic Visualization |
|
Hui Zhang,
Andrew Hanson
|
|
Pages: 1688-1695 |
|
doi>10.1109/TVCG.2007.70593 |
|
Just
as we can work with two-dimensional floor plans to communicate 3D
architectural design, we can exploit reduced-dimension shadows to
manipulate the higher-dimensional objects generating the shadows. In
particular, by taking advantage of physically reactive 3D shadow-space
controllers, we can transform the task of interacting with 4D objects to
a new level of physical reality. We begin with a teaching tool that uses
2D knot diagrams to manipulate the geometry of 3D mathematical knots
via their projections; our unique 2D haptic interface allows the user to
become familiar with sketching, editing, exploration, and manipulation
of 3D knots rendered as projected images on a 2D shadow space. By
combining graphics and collision-sensing haptics, we can enhance the 2D
shadow-driven editing protocol to successfully leverage 2D pen-and-paper
or blackboard skills. Building on the reduced-dimension 2D editing tool
for manipulating 3D shapes, we develop the natural analogy to produce a
reduced-dimension 3D tool for manipulating 4D shapes. By physically
modeling the correct properties of 4D surfaces, their bending forces,
and their collisions in the 3D haptic controller interface, we can
support full-featured physical exploration of 4D mathematical objects in
a manner that is otherwise far beyond the experience accessible to
human beings. As far as we are aware, this paper reports the first
interactive system with force-feedback that provides "4D haptic
visualization" permitting the user to model and interact with 4D
cloth-like objects.
|
|
High-Quality Multimodal Volume Rendering for Preoperative Planning of Neurosurgical Interventions |
|
Johanna Beyer,
Markus Hadwiger,
Stefan Wolfsberger,
Katja Bühler
|
|
Pages: 1696-1703 |
|
doi>10.1109/TVCG.2007.70560 |
|
Surgical
approaches tailored to an individual patient's anatomy and pathology
have become standard in neurosurgery. Precise preoperative planning of
these procedures, however, is necessary to achieve an optimal
therapeutic effect. Therefore, multiple radiological imaging modalities
are used prior to surgery to delineate the patient's anatomy,
neurological function, and metabolic processes. Developing a
three-dimensional perception of the surgical approach, however, is
traditionally still done by mentally fusing multiple modalities.
Concurrent 3D visualization of these datasets can, therefore, improve
the planning process significantly. In this paper we introduce an
application for planning of individual neurosurgical approaches with
high-quality interactive multimodal volume rendering. The application
consists of three main modules which allow one to (1) plan the optimal skin
incision and opening of the skull tailored to the underlying pathology;
(2) visualize superficial brain anatomy, function and metabolism; and
(3) plan the patient-specific approach for surgery of deep-seated
lesions. The visualization is based on direct multi-volume raycasting on
graphics hardware, where multiple volumes from different modalities can
be displayed concurrently at interactive frame rates. Graphics memory
limitations are avoided by performing raycasting on bricked volumes. For
preprocessing tasks such as registration or segmentation, the
visualization modules are integrated into a larger framework, thus
supporting the entire workflow of preoperative planning.
|
|
Topology, Accuracy, and Quality of Isosurface Meshes Using Dynamic Particles |
|
Miriah Meyer,
Robert M. Kirby,
Ross Whitaker
|
|
Pages: 1704-1711 |
|
doi>10.1109/TVCG.2007.70604 |
|
This
paper describes a method for constructing isosurface triangulations of
sampled, volumetric, three-dimensional scalar fields. The resulting
meshes consist of triangles that are of consistently high quality,
making them well suited for accurate interpolation of scalar and
vector-valued quantities, as required for numerous applications in
visualization and numerical simulation. The proposed method does not rely
on a local construction or adjustment of triangles as is done, for
instance, in advancing wavefront or adaptive refinement methods.
Instead, a system of dynamic particles optimally samples an implicit
function such that the particles' relative positions can produce a
topologically correct Delaunay triangulation. Thus, the proposed method
relies on a global placement of triangle vertices. The main
contributions of the paper are the integration of dynamic particle
systems with surface sampling theory and PDE-based methods for
controlling the local variability of particle densities, as well as
detailing a practical method that accommodates Delaunay sampling
requirements to generate sparse sets of points for the production of
high-quality tessellations.
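A toy version of the dynamic-particle idea — particles repel one another while being constrained to an implicit surface, yielding a globally well-spaced vertex set — might look like this in 2D. The inverse-square repulsion and the implicit circle f(x, y) = x² + y² − 1 are assumptions for this sketch; the paper instead controls particle density with PDE-based energies and works on 3D isosurfaces.

```python
import numpy as np

rng = np.random.default_rng(0)
p = rng.normal(size=(32, 2))
p /= np.linalg.norm(p, axis=1, keepdims=True)       # start on the circle

for _ in range(200):
    d = p[:, None, :] - p[None, :, :]               # pairwise offsets
    r2 = (d ** 2).sum(-1) + np.eye(len(p))          # avoid division by 0
    force = (d / r2[..., None] ** 1.5).sum(axis=1)  # inverse-square repulsion
    p += 0.001 * force
    p /= np.linalg.norm(p, axis=1, keepdims=True)   # reproject onto f = 0
```

After relaxation the particles are spread out along the curve; the resulting point set is the kind of input from which a Delaunay triangulation can recover the surface.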
|
|
Visualization of Cosmological Particle-Based Datasets |
|
Paul Navratil,
Jarrett Johnson,
Volker Bromm
|
|
Pages: 1712-1718 |
|
doi>10.1109/TVCG.2007.70526 |
|
|
|
We
describe our visualization process for a particle-based simulation of
the formation of the first stars and their impact on cosmic history. The
dataset consists of several hundred time-steps of point simulation
data, with each time-step containing approximately two million point
particles. For each time-step, we interpolate the point data onto a
regular grid using a method taken from the radiance estimate of photon
mapping. We import the resulting regular grid representation into
ParaView, with which we extract isosurfaces across multiple variables.
Our images provide insights into the evolution of the early universe,
tracing the cosmic transition from an initially homogeneous state to one
of increasing complexity. Specifically, our visualizations capture the
build-up of regions of ionized gas around the first stars, their
evolution, and their complex interactions with the surrounding matter.
These observations will guide the upcoming James Webb Space Telescope,
the key astronomy mission of the next decade.
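The resampling step borrowed from photon mapping's radiance estimate can be illustrated as a k-nearest-neighbor density estimate: at each grid node, the total mass of the k nearest particles is divided by the volume of the sphere that just encloses them. This brute-force sketch uses made-up particle counts, k, and grid; a production version would use a spatial search structure instead of sorting all distances.

```python
import numpy as np

rng = np.random.default_rng(1)
particles = rng.random((2000, 3))        # positions in the unit cube
mass = np.full(2000, 1.0 / 2000)         # equal-mass particles
k = 32

axes = [np.linspace(0.25, 0.75, 4)] * 3  # interior regular grid nodes
grid = np.stack(np.meshgrid(*axes, indexing="ij"), axis=-1).reshape(-1, 3)

density = np.empty(len(grid))
for i, g in enumerate(grid):
    d = np.linalg.norm(particles - g, axis=1)
    nearest = np.argsort(d)[:k]
    r = d[nearest].max()                 # radius of the enclosing sphere
    density[i] = mass[nearest].sum() / (4.0 / 3.0 * np.pi * r ** 3)
```

With uniform particles and unit total mass, the estimates hover near the true density of 1; the gridded field can then be handed to an isosurfacing tool such as ParaView.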
|
|
Segmentation of Three-dimensional Retinal Image Data |
|
Alfred Fuller,
Robert Zawadzki,
Stacey Choi,
David Wiley,
John Werner,
Bernd Hamann
|
|
Pages: 1719-1726 |
|
doi>10.1109/TVCG.2007.70590 |
|
|
|
We
have combined methods from volume visualization and data analysis to
support better diagnosis and treatment of human retinal diseases. Many
diseases can be identified by abnormalities in the thicknesses of
various retinal layers captured using optical coherence tomography
(OCT). We used a support vector machine (SVM) to perform semi-automatic
segmentation of retinal layers for subsequent analysis including a
comparison of layer thicknesses to known healthy parameters. We have
extended and generalized an older SVM approach to support better
performance in a clinical setting through performance enhancements and
graceful handling of inherent noise in OCT data by considering
statistical characteristics at multiple levels of resolution. The
addition of the multi-resolution hierarchy extends the SVM to have
“global awareness.” A feature, such as a retinal layer, can therefore be
modeled …
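As a hedged, toy stand-in for the SVM classification step: a classifier is trained on a few labeled samples and then labels the rest. The one-dimensional intensity features below are fabricated for this sketch; the paper's actual feature vectors include multi-resolution statistics of the OCT data.

```python
import numpy as np
from sklearn import svm

rng = np.random.default_rng(2)
layer_a = rng.normal(0.3, 0.05, (100, 1))   # intensities of one layer
layer_b = rng.normal(0.7, 0.05, (100, 1))   # intensities of another
X = np.vstack([layer_a, layer_b])
y = np.array([0] * 100 + [1] * 100)         # user-provided labels

clf = svm.SVC(kernel="rbf").fit(X, y)       # train on labeled voxels
labels = clf.predict([[0.25], [0.72]])      # classify unlabeled voxels
```

In a clinical pipeline the per-voxel labels would then be aggregated into layer surfaces whose thicknesses can be compared against healthy parameters.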
|
|
Interactive Isosurface Ray Tracing of Time-Varying Tetrahedral Volumes |
|
Ingo Wald,
Heiko Friedrich,
Aaron Knoll,
Charles D. Hansen
|
|
Pages: 1727-1734 |
|
doi>10.1109/TVCG.2007.70566 |
|
|
|
We
describe a system for interactively rendering isosurfaces of
tetrahedral finite-element scalar fields using coherent ray tracing
techniques on the CPU. By employing state-of-the-art methods in
polygonal ray tracing, namely aggressive packet/frustum traversal of a
bounding volume hierarchy, we can accommodate large and time-varying
unstructured data. In conjunction with this efficiency structure, we
introduce a novel technique for intersecting ray packets with tetrahedral
primitives. Ray tracing is flexible, allowing for dynamic changes in
isovalue and time step, visualization of multiple isosurfaces, shadows,
and depth-peeling transparency effects. The resulting system offers the
intuitive simplicity of isosurfacing, guaranteed-correct visual results,
and ultimately a scalable, dynamic and consistently interactive
solution for visualizing unstructured volumes.
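Within a single linear tetrahedral element the scalar field is affine, s(x) = a·x + b, so an isosurface s = iso is a plane and the ray hit can be solved for in closed form. A minimal single-ray sketch of that observation follows (the vertex values, ray, and isovalue are made up; the actual system traverses a bounding volume hierarchy and intersects whole ray packets, which this omits):

```python
import numpy as np

verts = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
scal = np.array([0.0, 1.0, 0.0, 0.0])     # scalar value at each vertex

M = np.c_[verts, np.ones(4)]              # recover (a, b) from the samples
coef = np.linalg.solve(M, scal)
a, b = coef[:3], coef[3]

orig = np.array([-1.0, 0.2, 0.2])         # ray origin
dirn = np.array([1.0, 0.0, 0.0])          # ray direction
iso = 0.5

t = (iso - b - a @ orig) / (a @ dirn)     # ray-plane intersection parameter
hit = orig + t * dirn                     # lies inside this tetrahedron
```

Because the per-element intersection is exact, the rendered isosurface is guaranteed correct wherever the ray actually enters the element.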
|
|
Generalized Streak Lines: Analysis and Visualization of Boundary Induced Vortices |
|
Alexander Wiebel,
Xavier Tricoche,
Dominic Schneider,
Heike Jaenicke,
Gerik Scheuermann
|
|
Pages: 1735-1742 |
|
doi>10.1109/TVCG.2007.70557 |
|
|
|
We
present a method to extract and visualize vortices that originate from
bounding walls of three-dimensional time-dependent flows. These vortices
can be detected using their footprint on the boundary, which consists
of critical points in the wall shear stress vector field. In order to
follow these critical points and detect their transformations, affected
regions of the surface are parameterized. Thus, an existing singularity
tracking algorithm devised for planar settings can be applied. The
trajectories of the singularities are used as a basis for seeding
particles. This leads to a new type of streak line visualization, in
which particles are released from a moving source. These generalized
streak lines visualize the particles that are ejected from the wall. We
demonstrate the usefulness of our method on several transient fluid flow
datasets from computational fluid dynamics simulations.
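The generalized streak line construction — particles released from a source that itself moves, then advected through a time-dependent flow — can be sketched with forward Euler steps. The flow field, source trajectory, and step sizes below are toy assumptions, not the paper's singularity-trajectory seeding:

```python
import numpy as np

def velocity(p, t):
    """Toy time-dependent 2D flow: uniform in space, oscillating in time."""
    return np.array([0.0, np.sin(t)])

dt, steps = 0.05, 100
particles = []                           # the generalized streak line
for n in range(steps):
    t = n * dt
    particles.append(np.array([t, 0.0])) # release from the moving source
    for p in particles:
        p += velocity(p, t) * dt         # advect every released particle

streak = np.array(particles)             # polyline to draw for this frame
```

Connecting the current particle positions in release order gives the curve that is redrawn each frame; in the paper the moving source is the tracked wall-shear singularity.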
|
|
Moment Invariants for the Analysis of 2D Flow Fields |
|
Michael Schlemmer,
Manuel Heringer,
Florian Morr,
Ingrid Hotz,
Martin Hering-Bertram,
Christoph Garth,
Wolfgang Kollmann,
Bernd Hamann,
Hans Hagen
|
|
Pages: 1743-1750 |
|
doi>10.1109/TVCG.2007.70579 |
|
|
|
We
present a novel approach for analyzing two-dimensional (2D) flow field
data based on the idea of invariant moments. Moment invariants have
traditionally been used in computer vision applications, and we have
adapted them for the purpose of interactive exploration of flow field
data. The new class of moment invariants we have developed allows us to
extract and visualize 2D flow patterns, invariant under translation,
scaling, and rotation. With our approach one can study arbitrary flow
patterns by searching a given 2D flow data set for any type of pattern
as specified by a user. Further, our approach supports the computation
of moments at multiple scales, facilitating fast pattern extraction and
recognition. This can be done for critical point classification, but
also for patterns with greater complexity. This multi-scale moment
representation is also valuable for the comparative visualization of
flow field data. The specific novel contributions of the work presented
are the mathematical derivation of the new class of moment invariants,
their analysis regarding critical point features, the efficient
computation of a novel feature space representation, and, based upon this,
the development of a fast pattern recognition algorithm for complex
flow structures.
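The flavor of moment invariance can be illustrated on a scalar 2D pattern with Hu's first invariant, φ₁ = η₂₀ + η₀₂, built from normalized central moments — the same value regardless of where the pattern sits, how large it is, or how it is rotated. This classical scalar-image invariant is a stand-in; the paper derives a new class of invariants for vector-valued flow data.

```python
import numpy as np

def phi1(img):
    """Hu's first invariant, eta20 + eta02, from normalized central
    moments of a 2D scalar pattern."""
    ys, xs = np.mgrid[:img.shape[0], :img.shape[1]]
    m00 = img.sum()
    cx, cy = (xs * img).sum() / m00, (ys * img).sum() / m00
    mu20 = ((xs - cx) ** 2 * img).sum()
    mu02 = ((ys - cy) ** 2 * img).sum()
    return (mu20 + mu02) / m00 ** 2      # invariant under shift and rotation

img = np.zeros((64, 64))
img[20:40, 25:35] = 1.0                  # a simple bright pattern
```

Because the invariant is unchanged under these transformations, pattern search reduces to comparing invariant vectors computed over a sliding window at multiple scales.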
|
|
Virtual Rheoscopic Fluids for Flow Visualization |
|
William Barth,
Christopher Burns
|
|
Pages: 1751-1758 |
|
doi>10.1109/TVCG.2007.70610 |
|
|
|
Physics-based
flow visualization techniques seek to mimic laboratory flow
visualization methods with virtual analogues. In this work we describe
the rendering of a virtual rheoscopic fluid to produce images with
results strikingly similar to laboratory experiments with real-world
rheoscopic fluids using products such as Kalliroscope. These fluid
additives consist of microscopic, anisotropic particles which, when
suspended in the flow, align with both the flow velocity and the local
shear to produce high-quality depictions of complex flow structures. Our
virtual rheoscopic fluid is produced by defining a closed-form formula
for the orientation of shear layers in the flow and using this
orientation to volume render the flow as a material with anisotropic
reflectance and transparency. Examples are presented for natural
convection, thermocapillary convection, and Taylor-Couette flow
simulations. The latter agree well with photographs of experimental
results of Taylor-Couette flows from the literature.
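The particle-alignment idea can be caricatured via the rate-of-strain tensor S = (J + Jᵀ)/2 of a simple shear flow u = (y, 0, 0): the eigenvector with the largest eigenvalue gives the dominant stretching direction along which anisotropic flakes tend to align. This eigen-decomposition stand-in is our assumption for illustration; the paper instead derives a closed-form formula for the shear-layer orientation used during volume rendering.

```python
import numpy as np

J = np.array([[0., 1., 0.],
              [0., 0., 0.],
              [0., 0., 0.]])          # velocity Jacobian of u = (y, 0, 0)
S = 0.5 * (J + J.T)                   # rate-of-strain tensor
w, V = np.linalg.eigh(S)              # eigenvalues in ascending order
n = V[:, -1]                          # dominant stretching direction
# For simple shear, the principal axes lie at 45 degrees in the x-y plane.
```

The orientation field obtained this way per voxel is what drives the anisotropic reflectance and transparency during rendering.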
|
|
Cores of Swirling Particle Motion in Unsteady Flows |
|
Tino Weinkauf,
Jan Sahner,
Holger Theisel,
Hans-Christian Hege
|
|
Pages: 1759-1766 |
|
doi>10.1109/TVCG.2007.70545 |
|
|
|
In
nature and in flow experiments, particles form patterns of swirling
motion in certain locations. Existing approaches identify these
structures by considering the behavior of stream lines. However, in
unsteady flows, particle motion is described by path lines, which
generally give different swirling patterns than stream lines. We
introduce a novel mathematical characterization of swirling motion cores
in unsteady flows by generalizing the approach of Sujudi/Haimes to path
lines. The cores of swirling particle motion are lines sweeping over
time, i.e., surfaces in the space-time domain. They occur at locations
where three derived 4D vectors become coplanar. To extract them, we show
how to re-formulate the problem using the Parallel Vectors operator. We
apply our method to a number of unsteady flow fields.
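The steady Sujudi/Haimes criterion that this paper generalizes can be sketched directly: candidate core points are where the velocity v is parallel to Jv (J the velocity Jacobian), i.e. where v × Jv vanishes. A toy check on the analytic swirl v = (−y, x, 1) follows; the paper's contribution extends this to path lines, making the condition a coplanarity test of three 4D vectors in space-time.

```python
import numpy as np

def v(x, y, z):
    return np.array([-y, x, 1.0])        # analytic swirl about the z-axis

J = np.array([[0., -1., 0.],
              [1.,  0., 0.],
              [0.,  0., 0.]])            # Jacobian of this linear field

pts = [(x, y, 0.0) for x in np.linspace(-1, 1, 5)
                   for y in np.linspace(-1, 1, 5)]
core = [p for p in pts
        if np.linalg.norm(np.cross(v(*p), J @ v(*p))) < 1e-9]
# Only the swirl axis x = y = 0 satisfies the parallel-vectors test.
```

In practice the Parallel Vectors operator is evaluated per cell on the simulation grid, and the resulting point set is connected into core lines (surfaces, in the unsteady space-time setting).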
|
|
IEEE Visualization Conference and IEEE Information Visualization Conference Proceedings 2007 back matter |
|
Page: backmatter |
|
|
|
|