We introduce semantic pointing, a novel interaction technique that improves target acquisition in graphical user interfaces (GUIs). Semantic pointing uses two independent sizes for each potential target presented to the user: one size in motor space adapted to its importance for the manipulation, and one size in visual space adapted to the amount of information it conveys. This decoupling between visual and motor size is achieved by changing the control-to-display ratio according to cursor distance to nearby targets. We present a controlled experiment supporting our hypothesis that the performance of semantic pointing is given by Fitts' index of difficulty in motor rather than visual space. We apply semantic pointing to the redesign of traditional GUI widgets by taking advantage of the independent manipulation of motor and visual widget sizes.
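The core mechanism described above can be sketched in a few lines: cursor motion is scaled by a control-to-display gain that drops as the cursor nears an important target, so the target becomes larger in motor space while its visual size is unchanged. This is a minimal illustrative sketch, not the authors' implementation; the function names, the linear gain profile, and parameters such as `radius` and `min_gain` are assumptions made for this example.

```python
# Illustrative sketch of control-to-display (CD) ratio adaptation for
# semantic pointing (1D case). All names and the gain profile are
# assumptions, not the paper's actual formulas.

def cd_gain(cursor_x, targets, base_gain=1.0, min_gain=0.25, radius=40.0):
    """Return the CD gain at the current cursor position.

    targets: list of (center_x, importance) pairs, importance in (0, 1].
    Near a target, the gain is reduced, which slows the cursor in
    display space and enlarges the target in motor space.
    """
    gain = base_gain
    for center, importance in targets:
        d = abs(cursor_x - center)
        if d < radius:
            # Linearly interpolate from min_gain (at the target center)
            # back to base_gain (at the edge of the influence radius).
            slow = min_gain + (base_gain - min_gain) * (d / radius)
            # More important targets pull the gain down further.
            gain = min(gain, base_gain - importance * (base_gain - slow))
    return gain

def move_cursor(cursor_x, mouse_dx, targets):
    """Apply one motion step: display motion = motor motion * CD gain."""
    return cursor_x + mouse_dx * cd_gain(cursor_x, targets)
```

With an important target at x = 100, a 10-pixel mouse motion far from the target moves the cursor the full 10 pixels, while the same motion near the target moves it less, effectively giving the target a larger motor-space size.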
The paper is available as [pdf file], [html file], and in the [ACM digital library].
Copyright ACM, 2004.
This is the author's version of the work.
It is posted here by permission of ACM for your personal use.
Not for redistribution.
The definitive version was published in the Proceedings of the 2004 Conference on Human Factors in Computing Systems (CHI 2004).