publications([{ "lang": "en", "type_publi": "icolcomlec", "doi": "https://doi.org/10.1007/978-3-031-42283-6_4", "title": "BiVis: Interactive and Progressive Visualization of Billions (and Counting) Items", "url": "http://iihm.imag.fr/blanch/projects/bivis/", "abstract": "Recent advances in information visualization have shown that building proper structures to allow efficient lookup in the data can significantly reduce the time needed to build graphical representations of very large data sets, when compared to the linear scanning of the data.\r\nWe present BiVis, a visualization technique that shows how such techniques can be further improved to reach a rendering time compatible with continuous interaction.\r\nTo do so, we turn the lookup into an anytime algorithm compatible with a progressive visualization: a visualization presenting an approximation of the data and an estimation of the error can be displayed almost instantaneously and refined in successive frames until the error converges to zero.\r\nWe also leverage the spatial coherency of the navigation: during the interaction, the state of the (possibly partial) lookup for the previous frames is reused to bootstrap the lookup for the next frame despite the view change.\r\nWe show that those techniques allow the interactive exploration of out-of-core time series consisting of billions of events on commodity computers.\r\n", "year": 2023, "uri": "http://iihm.imag.fr/publication/B23a/", "pages": "65-85", "bibtype": "inproceedings", "id": 954, "abbr": "B23a", "authors": { "1": { "first_name": "Renaud", "last_name": "Blanch" } }, "date": "2023-08-30", "type": "Conférences internationales de large diffusion avec comité de lecture sur texte complet", "booktitle": "proc. 
19th IFIP TC13 International Conference (Interact 2023)" }, { "lang": "en", "publisher": "IEEE", "doi": "https://doi.org/10.1109/ISMAR59233.2023.00095", "title": "3D Selection in Mixed Reality: Designing a Two-Phase Technique To Reduce Fatigue", "url": "https://hal.science/hal-04297966", "abstract": "Mid-air pointing is widely used for 3D selection in Mixed Reality but leads to arm fatigue. In a first exploratory experiment, we study a two-phase design and compare modalities for each phase: mid-air gestures, eye-gaze and microgestures. Results suggest that eye-gaze and microgestures are good candidates to reduce fatigue and improve interaction speed. We therefore propose two 3D selection techniques: Look&MidAir and Look&Micro. Both techniques include a first phase during which users control a cone directed along their eye-gaze. Using the flexion of their non-dominant hand index finger, users pre-select the objects intersecting this cone. If several objects are pre-selected, a disambiguation phase is performed using direct mid-air touch for Look&MidAir or thumb to finger microgestures for Look&Micro. In a second study, we compare both techniques to the standard raycasting technique. Results show that Look&MidAir and Look&Micro perform similarly. However, they are 55% faster, perceived as easier to use, and less tiring than the baseline. 
We discuss how the two techniques could be combined for greater flexibility and for object manipulation after selection.", "authors": { "1": { "first_name": "Adrien", "last_name": "Chaffangeon Caillet" }, "2": { "first_name": "Alix", "last_name": "Goguey" }, "3": { "first_name": "Laurence", "last_name": "Nigay" } }, "year": 2023, "uri": "http://iihm.imag.fr/publication/CGN23b/", "pages": "800-809", "bibtype": "inproceedings", "id": 955, "abbr": "CGN23b", "address": "Sydney, Australia", "date": "2023-10-16", "type": "Conférences internationales de large diffusion avec comité de lecture sur texte complet", "booktitle": "2023 IEEE International Symposium on Mixed and Augmented Reality (ISMAR)", "type_publi": "icolcomlec" }, { "lang": "en", "publisher": "ACM: Association for Computing Machinery, New York", "doi": "https://doi.org/10.1145/3577190.3614131", "title": "µGeT: Multimodal eyes-free text selection technique combining touch interaction and microgestures", "url": "https://hal.science/hal-04353214", "abstract": "We present μGeT, a novel multimodal eyes-free text selection technique. μGeT combines touch interaction with microgestures. μGeT is especially suited for People with Visual Impairments (PVI) by expanding the input bandwidth of touchscreen devices, thus shortening the interaction paths for routine tasks. To do so, μGeT extends touch interaction (left/right and up/down flicks) using two simple microgestures: thumb touching either the index or the middle finger. For text selection, the multimodal technique allows us to directly modify the positioning of the two selection handles and the granularity of text selection. Two user studies, one with 9 PVI and one with 8 blindfolded sighted people, compared μGeT with a common baseline technique (similar to VoiceOver on the iPhone). Despite a large variability in performance, the two user studies showed that μGeT is overall faster and yields fewer errors than VoiceOver. 
A detailed analysis of the interaction trajectories highlights the different strategies adopted by the participants. Beyond text selection, this research shows the potential of combining touch interaction and microgestures for improving the accessibility of touchscreen devices for PVI.", "authors": { "1": { "first_name": "Gauthier", "last_name": "Faisandaz" }, "2": { "first_name": "Alix", "last_name": "Goguey" }, "3": { "first_name": "Christophe", "last_name": "Jouffrais" }, "4": { "first_name": "Laurence", "last_name": "Nigay" } }, "year": 2023, "uri": "http://iihm.imag.fr/publication/FGJ+23a/", "pages": "594-603", "bibtype": "inproceedings", "id": 958, "abbr": "FGJ+23a", "address": "Paris, France", "date": "2023-10-09", "type": "Conférences internationales de large diffusion avec comité de lecture sur texte complet", "booktitle": "25th ACM International Conference on Multimodal Interaction Paris (ICMI 2023)", "type_publi": "icolcomlec" }, { "lang": "en", "type_publi": "icolcomlec", "doi": "https://doi.org/10.1145/3604272", "title": "Studying the Visual Representation of Microgestures", "url": "https://hal.science/hal-04193374", "abstract": "The representations of microgestures are essential for researchers presenting their results through academic papers and system designers proposing tutorials to novice users. However, those representations remain disparate and inconsistent. As a first attempt to investigate how to best graphically represent microgestures, we created 21 designs, each depicting static and dynamic versions of 4 commonly used microgestures (tap, swipe, flex and hold). We first studied these designs in a quantitative online experiment with 45 participants. We then conducted a qualitative laboratory experiment in Augmented Reality with 16 participants. Based on the results, we provide design guidelines on which elements of a microgesture should be represented and how. 
In particular, it is recommended to represent the actuator and the trajectory of a microgesture. Also, although preferred by users, dynamic representations are not considered better than their static counterparts for depicting a microgesture and do not necessarily result in better user recognition.", "authors": { "1": { "first_name": "Vincent", "last_name": "Lambert" }, "2": { "first_name": "Adrien", "last_name": "Chaffangeon Caillet" }, "3": { "first_name": "Alix", "last_name": "Goguey" }, "4": { "first_name": "Sylvain", "last_name": "Malacria" }, "5": { "first_name": "Laurence", "last_name": "Nigay" } }, "year": 2023, "uri": "http://iihm.imag.fr/publication/LCG+23a/", "id": 961, "bibtype": "inproceedings", "abbr": "LCG+23a", "address": "Athens, Greece", "date": "2023-09-25", "type": "Conférences internationales de large diffusion avec comité de lecture sur texte complet", "booktitle": "ACM International Conference on Mobile Human-Computer Interaction (MobileHCI 2023)" }, { "lang": "en", "type_publi": "icolcomlec", "doi": "https://doi.org/10.1145/3544548.3581179", "title": "Impact of softness on users' perception of curvature for future soft curvature-changing UIs", "url": "https://hal.science/hal-04045261", "abstract": "Soft (compliant) curvature-changing UIs provide haptic feedback through changes in softness and curvature. Different softness levels can impact the deformation of UIs when worn and touched, and thus the users' perception of the curvature. To investigate how softness impacts users’ perception of curvature, we measured participants’ curvature perception accuracy and precision in different softness conditions. We found that participants perceived the curviest surfaces with similar precision in all different softness conditions. Participants lost half the precision of the rigid material when touching the flattest surfaces with the softest material. Participants perceived all curvatures with similar accuracy in all softness conditions. 
The results of our experiment lay the foundation for soft curvature perception and provide guidelines for the future design of curvature- and softness-changing UIs.", "authors": { "1": { "first_name": "Zhuzhi", "last_name": "Fan" }, "2": { "first_name": "Céline", "last_name": "Coutrix" } }, "year": 2023, "uri": "http://iihm.imag.fr/publication/FC23a/", "pages": "747:1-19", "bibtype": "inproceedings", "id": 949, "abbr": "FC23a", "address": "Hamburg, Germany", "date": "2023-04-22", "type": "Conférences internationales de large diffusion avec comité de lecture sur texte complet", "booktitle": "2023 CHI Conference on Human Factors in Computing Systems (CHI ’23)" }, { "lang": "en", "type_publi": "icolcomlec", "doi": "https://doi.org/10.1145/3500866.3516371", "title": "µGlyph: a Microgesture Notation", "url": "https://hal.science/hal-04026125", "abstract": "In the active field of hand microgestures, microgesture descriptions are typically expressed informally and are accompanied by images, leading to ambiguities and contradictions. An important step in moving the field forward is a rigorous basis for precisely describing, comparing, and analyzing microgestures. Towards this goal, we propose µGlyph, a hybrid notation based on a vocabulary of events inspired by finger biomechanics. First, we investigate the expressiveness of µGlyph by building a database of 118 microgestures extracted from the literature. Second, we experimentally explore the usability of µGlyph. Participants correctly read and wrote µGlyph descriptions 90% of the time, as compared to 46% for conventional descriptions. Third, we present tools that promote µGlyph usage, including a visual editor with LaTeX export. We finally describe how µGlyph can guide research on designing, developing, and evaluating microgesture interaction. 
Results demonstrate the strong potential of µGlyph to establish a common ground for microgesture research.", "authors": { "1": { "first_name": "Adrien", "last_name": "Chaffangeon Caillet" }, "2": { "first_name": "Alix", "last_name": "Goguey" }, "3": { "first_name": "Laurence", "last_name": "Nigay" } }, "year": 2023, "uri": "http://iihm.imag.fr/publication/CGN23a/", "pages": "3:1-13", "bibtype": "inproceedings", "id": 948, "abbr": "CGN23a", "address": "Hamburg, Germany", "date": "2023-04-23", "type": "Conférences internationales de large diffusion avec comité de lecture sur texte complet", "booktitle": "Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems" }, { "bibtype": "article", "publisher": "Association pour la diffusion de la recherche francophone en intelligence artificielle", "doi": "https://dx.doi.org/10.5802/roia.50", "lang": "fr", "uri": "http://iihm.imag.fr/publication/DC23a/", "title": "Ordonnancement dans l'habitat intelligent", "url": "https://hal.science/hal-04520754", "journal": "Revue Ouverte d'Intelligence Artificielle", "year": 2023, "number": 1, "pages": "53-76", "volume": 4, "id": 963, "abbr": "DC23a", "authors": { "1": { "first_name": "Alexandre", "last_name": "Demeure" }, "2": { "first_name": "Sybille", "last_name": "Caffiau" } }, "date": "2023-05-30", "type": "Revues internationales avec comité de lecture", "abstract": "This text addresses the problem of scheduling the actions applied to the actuators of a smart home. These actions can be triggered either by inhabitants or by programs encoding automatisms. We show that this is a complex problem that cannot be solved a priori; on the contrary, it depends on the context. We defend the idea that this problem should be tackled from the angle of an operating system whose scheduling engine would be based on CCBL (Cascading Contexts Based Language). 
CCBL is an end-user programming language for the smart home that enables inhabitants to program automatisms based on devices and services. We provide several examples of scheduling strategies programmed with CCBL. We show that using CCBL to program such strategies is not fundamentally different from programming mere automatisms. Hence, the skills acquired in one of the tasks will be reusable in the other.", "type_publi": "irevcomlec" }, { "lang": "en", "publisher": "Institute of Electrical and Electronics Engineers", "type_publi": "irevcomlec", "bibtype": "article", "title": "A Hierarchical Framework for Collaborative Artificial Intelligence", "url": "https://hal.univ-grenoble-alpes.fr/hal-03895933", "abstract": "We propose a hierarchical framework for collaborative intelligent systems. This framework organizes research challenges based on the nature of the collaborative activity and the information that must be shared, with each level building on capabilities provided by lower levels. We review research paradigms at each level, with a description of classical engineering-based approaches and modern alternatives based on machine learning, illustrated with a running example using a hypothetical personal service robot. We discuss cross-cutting issues that occur at all levels, focusing on the problem of communicating and sharing comprehension, the role of explanation and the social nature of collaboration. 
We conclude with a summary of research challenges and a discussion of the potential for economic and societal impact provided by technologies that enhance human abilities and empower people and society through collaboration with Intelligent Systems.", "year": 2023, "number": 1, "uri": "http://iihm.imag.fr/publication/CCG+23a/", "volume": 22, "id": 950, "abbr": "CCG+23a", "authors": { "1": { "first_name": "James L.", "last_name": "Crowley" }, "2": { "first_name": "Joëlle", "last_name": "Coutaz" }, "3": { "first_name": "Jasmin", "last_name": "Grosinger" }, "4": { "first_name": "Javier", "last_name": "Vázquez-Salceda" }, "5": { "first_name": "Cecilio", "last_name": "Angulo" }, "6": { "first_name": "Alberto", "last_name": "Sanfeliu" }, "7": { "first_name": "Luca", "last_name": "Iocchi" }, "8": { "first_name": "Anthony", "last_name": "Cohn" } }, "date": "2023-03-01", "document": "http://iihm.imag.fr/publs/2023/IEEE-Pervasive-CollaborativeIntelligentSystems-VersionAuteur-2023.pdf", "type": "Revues internationales avec comité de lecture", "journal": "IEEE Pervasive Computing" }, { "lang": "en", "publisher": "Elsevier", "type_publi": "irevcomlec", "bibtype": "article", "title": "Studies and guidelines for two concurrent stroke gestures", "url": "https://hal.science/hal-04031673", "abstract": "This paper investigates thumb-index interaction on touch input devices, and more precisely the potential of two concurrent stroke gestures, i.e. gestures in which two fingers of the same hand concurrently draw one stroke each. We present two fundamental studies, one using such gestures for two-dimensional control, by precisely drawing figures, and the other for command activation, by roughly sketching figures. Results give a first analysis of user performance on 35 gestures with varying complexity based on the number of turns and symmetries. All 35 gestures were grouped into six families. 
From these results we classify these families and propose new guidelines for designing future mobile interfaces. For instance, we suggest favoring anchored gestures (the forefinger drawing while the thumb remains still on the surface) to increase input bandwidth when forefinger precision is required.", "year": 2023, "uri": "http://iihm.imag.fr/publication/GO23a/", "id": 951, "volume": 170, "abbr": "GO23a", "authors": { "1": { "first_name": "Alix", "last_name": "Goguey" }, "2": { "first_name": "Michael", "last_name": "Ortega" } }, "date": "2023-02-01", "type": "Revues internationales avec comité de lecture", "journal": "International Journal of Human-Computer Studies" }, { "lang": "fr", "type_publi": "these", "title": "Understanding and designing microgesture interaction", "url": "https://hal.science/tel-04359801", "abstract": "Over the last three decades, some of the objects we use in our daily life have gradually become computers. Our habits are changing with these transformations, and it is now not uncommon to interact with these computers while performing other tasks, e.g. checking our GPS position on our smartwatch while biking. Over the last ten years, a new interaction modality has emerged to meet these needs: hand microgestures. Hand microgestures, hereafter simply microgestures, are fast and subtle movements of the fingers. They enable interaction in parallel with a main task, as they are quick and can be performed while holding an object. However, as it is a recent modality, the field of research still lacks structure and sometimes coherence. For instance, there is no convention for naming or describing microgestures, which can lead to terminological inconsistencies between different studies. Moreover, the literature focuses mainly on how to build systems to sense and recognize microgestures. Thus, few studies examine the expected properties of microgestures, such as speed or low impact on physical fatigue in certain contexts of use. 
As a result, this thesis focuses on the study of microgestures, from their description to their application in a specific field, i.e. Augmented Reality (AR), as well as their sensing and recognition. Our scientific approach comprises three steps. In the first step, we focus on the space of possibilities. After a literature review to highlight the diversity of microgestures and terminological issues, we present μGlyph, a notation to describe microgestures. Next, we present a user study to understand the constraints that holding an object imposes on the feasibility of microgestures. The results of this study were used to create a set of three rules to determine the feasibility of microgestures in different contexts, i.e. different grasps. For ease of use, we reused μGlyph to provide a visual description of these rules. Finally, we study different ways of making a set of microgestures compatible with many contexts, i.e. such that each microgesture in the set is feasible in all contexts. With the space of possibilities defined, we focus on the design of systems for sensing and recognizing microgestures. After a review of such systems in the literature, we present our easily reproducible sensing systems that we implemented, resulting in two gloves. We then present a user study on the impact of wearing these gloves on the feasibility of microgestures. Our results suggest that our gloves have little impact on the feasibility of microgestures. Next, we present a more comprehensive system that recognizes both microgestures and contexts. Our studies on recognition rates suggest that our system is usable for microgesture detection, with a recognition rate of 94%, but needs to be improved for context recognition, with a rate of 80%. Finally, we present a proof-of-concept of a modular glove and a recognition system based on μGlyph to enable the unification of microgesture sensing systems. Our final step is then dedicated to interaction techniques based on microgestures. 
We focus on the properties of microgestures for 3D selection in AR. We have designed two 3D selection techniques based on eye-gaze and microgestures for interaction with low fatigue. Our results suggest that the combination of eye-gaze and microgesture enables fast interaction while minimizing fatigue, compared to the commonly used virtual pointer. We conclude with an extension of our techniques to integrate 3D object manipulation in AR.", "year": 2023, "uri": "http://iihm.imag.fr/publication/C23a/", "bibtype": "phdthesis", "abbr": "C23a", "authors": { "1": { "first_name": "Adrien", "last_name": "Chaffangeon Caillet" } }, "date": "2023-12-18", "type": "Thèses et habilitations", "id": 959 }, { "lang": "en", "type_publi": "autre", "title": "InSARViz: an open source interactive visualization tool for InSAR", "url": "https://journee-visu.github.io/2023/", "booktitle": "actes des Journées Visu 2023", "year": 2023, "uri": "http://iihm.imag.fr/publication/MBP+23a/", "id": 953, "bibtype": "unpublished", "abbr": "MBP+23a", "authors": { "1": { "first_name": "Margaux", "last_name": "Mouchené" }, "2": { "first_name": "Renaud", "last_name": "Blanch" }, "3": { "first_name": "Erwan", "last_name": "Pathier" }, "4": { "first_name": "Franck", "last_name": "Thollard" } }, "date": "2023-06-22", "document": "http://iihm.imag.fr/publs/2023/2023-06-15-insarviz-journee-visu.pdf", "type": "Autres publications", "pages": "2" }, { "lang": "en", "type_publi": "autre", "title": "Examining word writing in handwriting and smartphone-writing: orthographic processing affects movement production in different ways", "url": "https://hal.science/hal-04292356", "abstract": "New technological devices are changing the way we communicate. With the popularization of smartphones, some people spend more time writing on a phone than handwriting or typing on a keyboard. Does phone-writing change the way we process orthographic information? Does this affect movement production? 
In the present study, French participants had to write words in a spelling-to-dictation task. They wrote orthographically consistent and inconsistent short and long words. First, they had to write the words by hand in upper-case letters on a digitizer. One month later, they had to write the words on a smartphone. The results revealed that orthographic consistency affects the spelling processes in both handwriting and phone-writing. We observed more spelling errors for inconsistent words than consistent ones. When analyzing the movement production of the words that were spelled correctly, the data revealed that the timing of orthographic processing differs between the two ways of writing. Orthographic consistency seems to affect the time before movement initiation (latency data) in handwriting, especially in short words. In addition, once the participant starts to write, it also mediates movement production throughout the whole word, affecting the timing of the initial and final letters of the word. In phone-writing, orthographic consistency tends to modulate movement production at the end of the word. Inconsistent words require more processing time than consistent words, especially when they are long. These timing differences are not surprising, since the whole-word writing process is much longer in handwriting than in phone-writing. We are preparing another phone-writing experiment in which we examine the implementation of word suggestions. With word suggestions, the spelling processes are no longer a mere recall of information on the letter components of a word. While writing the first letters, smartphones suggest words on top of the virtual keyboard to complete the target word before we write the last letters. 
This back-and-forth mechanism of writing letters, reading word suggestions and selecting one of them, radically changes the way we process orthographic information during word writing.", "authors": { "1": { "first_name": "Anna", "last_name": "Anastaseni" }, "2": { "first_name": "Quentin", "last_name": "Roy" }, "3": { "first_name": "Cyril", "last_name": "Perret" }, "4": { "first_name": "Antonio", "last_name": "Romano" }, "5": { "first_name": "Sonia", "last_name": "Kandel" } }, "year": 2023, "uri": "http://iihm.imag.fr/publication/ARP+23a/", "id": 956, "bibtype": "unpublished", "abbr": "ARP+23a", "address": "Potsdam, Germany", "date": "2023-07-12", "type": "Autres publications", "booktitle": "Writing Word(s) Workshop" }, { "lang": "en", "type_publi": "autre", "title": "Writing words by hand and by phone: Differences in the timing of orthographic processing", "url": "https://hal.science/hal-04310488", "abstract": "Texting and email writing with smartphones are activities that are done regularly by a very important proportion of the population. Phonewriting (PW) differs from handwriting (HW) in many ways. Previous HW research revealed that the orthographic processes modulate movement production (see APOMI, Kandel, 2023). Do spelling processes also affect hand movements in PW? To answer this question, we focused on orthographic processing in HW and PW in a spelling-to-dictation task in French. We manipulated orthographic consistency and length. First, the participant had to write the words by hand in upper-case letters on a digitizer. One month later, they had to write the words on a smartphone. We collected data on latency, letter movement duration, errors and online corrections. The data revealed that the timing of orthographic processing differs between handwriting and phonewriting. Latencies, i.e. the time before starting to write, were longer in PW than HW. In contrast, once we start writing the word, the hand movements took longer in HW than PW. 
Although PW takes less writing time, errors and online corrections are far more frequent than in HW. Latencies for orthographically inconsistent words were longer than for consistent words, both in HW and PW. However, the mean letter duration of orthographically inconsistent words was longer than for consistent words only in HW but not for PW. Also, inconsistent words elicited a higher number of phonologically plausible errors than consistent words, in HW and PW. The impact of the technological progress due to the telephone is to decrease the time we spend writing but the cognitive cost is that we produce more errors and online corrections. Regarding orthographic processing, HW and PW are very different. In PW most of the central processing is done before starting to write. In HW, spelling processes start before movement initiation but are still active while we write. This modulates movement production.", "year": 2023, "uri": "http://iihm.imag.fr/publication/ARP+23b/", "bibtype": "unpublished", "abbr": "ARP+23b", "authors": { "1": { "first_name": "Anna", "last_name": "Anastaseni" }, "2": { "first_name": "Quentin", "last_name": "Roy" }, "3": { "first_name": "Cyril", "last_name": "Perret" }, "4": { "first_name": "Antonio", "last_name": "Romano" }, "5": { "first_name": "Sonia", "last_name": "Kandel" } }, "date": "2023-10-18", "type": "Autres publications", "id": 957 }, { "lang": "fr", "type_publi": "autre", "title": "Visualisation de données spatio-temporelles : Étude sismologique du glacier d’Argentière", "url": "https://journee-visu.github.io/2023/", "booktitle": "actes des Journées Visu 2023", "year": 2023, "uri": "http://iihm.imag.fr/publication/BLO+23a/", "id": 952, "bibtype": "unpublished", "abbr": "BLO+23a", "authors": { "1": { "first_name": "Renaud", "last_name": "Blanch" }, "2": { "first_name": "Albanne", "last_name": "Lecointre" }, "3": { "first_name": "Michael", "last_name": "Ortega" }, "4": { "first_name": "Philippe", "last_name": "Roux" } }, "date": "2023-06-22", 
"document": "http://iihm.imag.fr/publs/2023/2023-05-23-resolve-journee-visu.pdf", "type": "Autres publications", "pages": "2" }]);