Creating Natural Data Visualization and Analysis Environments
NSF Award IIS-1717111

Team Members: Arjun Srinivasan, Hayeong Song, John Stasko

     Talk about the project and related research

  Publications
     Interweaving Multimodal Interaction with Flexible Unit Visualizations for Data Exploration - TVCG 2021
     Post-WIMP Interaction for Information Visualization - Foundations & Trends in HCI
     Collecting and Characterizing Natural Language Utterances for Specifying Data Visualizations - CHI 2021
     NL4DV: A Toolkit for Generating Analytic Specifications for Data Visualization from Natural Language Queries - InfoVis 2020
     How to Ask What to Say?: Strategies for Evaluating Natural Language Interfaces for Data Visualization - CG&A 2020
     Touch? Speech? or Touch and Speech? Investigating Multimodal Interaction for Visual Network Exploration and Analysis - PacificVis 2020
     VisWall: Visual Data Exploration Using Direct Combination on Large Touch Displays - VIS 2019
     Augmenting Visualizations with Interactive Data Facts to Facilitate Interpretation and Communication - InfoVis 2018 (Talk video)
     Tangraphe: Interactive Exploration of Network Visualizations using Single Hand, Multi-touch Gestures - AVI 2018
     Facilitating Spreadsheet Manipulation on Mobile Devices Leveraging Speech - Data Vis on Mobile Devices workshop at CHI 2018
     Orko: Facilitating Multimodal Interaction for Visual Network Exploration and Analysis - InfoVis 2017 (Talk video and slides)
     Affordances of Input Modalities for Visual Data Exploration in Immersive Environments - Workshop on Immersive Analytics 2017
     Natural Language Interfaces for Data Analysis with Visualization: Considering What Has and Could Be Asked - EuroVis 2017
     NL4DV: Toolkit for Natural Language Driven Data Visualization (Poster) - VIS 2016

  Videos and Materials
     DATA COLLECTION: NL Vis Corpus GitHub site, CHI '21
     VIDEO: NL4DV overview, InfoVis '20
     TOOLKIT: NL4DV GitHub site, InfoVis '20
     VIDEO: DataBreeze overview, TVCG '20 (24 MB mp4)
     VIDEO: DataBreeze usage scenario, TVCG '20 (60 MB mp4)
     VIDEO: VisWall overview, VIS '19 (75 MB mp4)
     VIDEO: Voder overview, InfoVis '18 (45 MB mp4)
     VIDEO: Tangraphe overview, AVI '18 (33 MB mp4)
     VIDEO: Orko introduction, InfoVis '17 (37 MB mp4)
     VIDEO: Orko usage scenario, InfoVis '17 (52 MB mp4)

  Workshop
     Multimodal Interaction for Data Visualization - AVI 2018 (Summary and Website)

We live in a data-rich era. Data visualization and exploration capabilities are becoming more widely used across a variety of disciplines, including business, health, education, and public policy. Currently, people use visualization systems on desktop and laptop computers and typically interact via keyboard and mouse. Such interactions, while useful, pale in comparison to the natural, fluid interactions portrayed in futuristic feature films such as "Minority Report" and "Iron Man," where characters interact with large, projected wall displays through speech, gaze, and gesture. Moving toward such interfaces requires new forms of Natural User Interfaces (NUIs) that employ multimodal interactions such as speech, pen, touch, gestures, gaze, and head and body movements. While no single interaction modality provides all desired capabilities, combinations of modalities (e.g., speech, gaze, and pen) could provide a more natural, intuitive, and integrated interface experience. This project will explore, design, develop, and evaluate NUIs for data visualization and visual analytics. Developing techniques and systems that provide natural, expressive, multimodal input and interaction for multiple representations of data has the potential to benefit a wide range of disciplines and areas of society.

Drawing upon prior research on natural language and multi-touch interfaces for visualization, the research seeks to enable a next generation of powerful, expressive, and natural systems that facilitate fluid interaction with visual representations of data. While these interaction modalities hold great potential, many research challenges must be addressed. For instance, how should a system handle speech input that is ill-formed, incomplete, or ambiguous? What if an intention is misinterpreted or misunderstood? Regarding multi-touch gesture input, what are the "best" touch gestures to use in these interfaces? Do those gestures map well to different types of visualizations and to displays of different types and sizes? How can these gestures be made easier to discover and learn? Combining multiple input modalities may allow systems to counterbalance the weaknesses of one modality with the strengths of another (e.g., ambiguity in selection via speech can be balanced by the precision of touch), facilitating a more naturalistic user experience. Project objectives include the design, implementation, and evaluation of multimodal interfaces to data visualization systems. In particular, the research will investigate how different interaction methods affect and enable data analysis and exploration. The project will create applications for particular data domains as well as open-source toolkits that other researchers can use in their own work. User studies will identify whether and how different types of interaction (speech, touch, gesture, etc.) make analysis faster, easier to learn, and more powerful.
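
As a concrete illustration of the toolkit-oriented side of this work, the sketch below shows how a natural language query could be turned into visualization specifications with the project's NL4DV toolkit. It is a minimal example written against the toolkit's public documentation; the dataset path and query are placeholders, and the names used here (NL4DV, set_dependency_parser, analyze_query, attributeMap, taskMap, visList) may differ across toolkit versions, so treat it as an illustrative sketch rather than authoritative usage.

    # Illustrative NL4DV usage: turn an English query about a tabular dataset
    # into inferred attributes, analytic tasks, and Vega-Lite specifications.
    import os
    from nl4dv import NL4DV

    # Point the toolkit at a CSV dataset (path is a placeholder).
    nl4dv_instance = NL4DV(data_url=os.path.join("data", "movies.csv"))

    # Configure a dependency parser (spaCy here; Stanford CoreNLP is another option).
    nl4dv_instance.set_dependency_parser(
        config={"name": "spacy", "model": "en_core_web_sm", "parser": None})

    # Ask a question in plain English.
    output = nl4dv_instance.analyze_query(
        "create a barchart showing average gross across genres")

    # Inspect what the toolkit inferred and the candidate charts it generated.
    print(output["attributeMap"].keys())   # data attributes detected in the query
    print(output["taskMap"].keys())        # analytic tasks (e.g., derived value)
    for vis in output["visList"]:
        print(vis["visType"])              # e.g., barchart
        # vis["vlSpec"] holds a Vega-Lite specification a client can render

In a multimodal interface such as the ones studied in this project, the returned Vega-Lite specifications could then be rendered and refined further through touch or follow-up speech.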


This material is based upon work supported by the National Science Foundation under Grant No. IIS-1717111. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.

Project information:
Award number: IIS-1717111
Title: III: Small: Creating Natural Data Visualization and Analysis Environments
Duration: 11/15/17 - 10/31/20 (extended to 10/31/21)
PI: John Stasko
Students: Arjun Srinivasan, Hayeong Song
Last updated: July 18, 2021