We live in a data-rich era. Data visualization and exploration capabilities are becoming more widely used in a variety of disciplines, including business, health, education, and public policy. Currently, people use visualization systems on desktop and laptop computers and typically interact via keyboard and mouse. Such interactions, while useful, pale in comparison to the natural, fluid interactions presented in futuristic feature films such as "Minority Report" and "Iron Man," where characters interact with large, projected wall displays through speech, gaze, and gesture. To move towards such futuristic interfaces, we must develop new forms of Natural User Interfaces (NUIs) employing multimodal interactions such as speech, pen, touch, gestures, gaze, and head and body movements. While no single interaction modality may provide all desired capabilities, combinations of modalities (e.g., speech, gaze, and pen) could ideally provide a more natural, intuitive, and integrated interface experience. This project will explore, design, develop, and evaluate NUIs for data visualization and visual analytics. Developing techniques and systems that provide natural, expressive, multimodal input and interaction for multiple representations of data has the potential to broadly impact a wide range of disciplines and areas of society.
Drawing upon prior research on natural language and multi-touch interfaces for visualization, the research seeks to enable a next generation of powerful, expressive, and natural systems that facilitate fluid interaction with visual representations of data. While these interaction modalities harbor great potential, many research challenges exist and must be addressed. For instance, how should a system handle speech input that is ill-formed, incomplete, or ambiguous? What if an intention is misinterpreted or misunderstood? Regarding multi-touch gesture input, what are the "best" touch gestures to use in these interfaces? Do those gestures map well to different types of visualizations and to different display types and sizes? How do we make these gestures easier to discover and learn? Combining multiple input modalities may allow systems to counterbalance the weaknesses of one modality with the strengths of another (e.g., ambiguity in selection via speech can be balanced by the preciseness of touch), facilitating a naturalistic user experience. Project objectives include the design, implementation, and evaluation of multimodal interfaces to data visualization systems. In particular, the research will investigate how different interaction methods affect and enable data analysis and exploration. The project will create applications for particular data domains as well as open-source toolkits for other researchers to use in their own work. User studies will identify whether and how different types of interaction (speech, touch, gesture, etc.) make analysis faster, easier to learn, and more powerful.
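To make the counterbalancing idea concrete, the sketch below is a minimal, hypothetical illustration (not the project's actual system; all type and function names are assumptions) of how an ambiguous spoken command such as "filter these points" could be resolved using the precise location of an accompanying touch:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class DataPoint:
    id: str
    x: float  # screen-space position of the rendered mark
    y: float

@dataclass
class TouchEvent:
    x: float
    y: float

def resolve_deictic_selection(utterance: str,
                              touch: Optional[TouchEvent],
                              points: List[DataPoint],
                              radius: float = 30.0) -> List[DataPoint]:
    """Combine an ambiguous spoken reference ("these", "those") with a
    precise touch location: speech supplies the intent (e.g., filter),
    while touch supplies the referent (which marks are meant)."""
    deictic_terms = {"this", "these", "that", "those"}
    is_deictic = any(term in utterance.lower().split() for term in deictic_terms)
    if not (is_deictic and touch):
        return []  # nothing to disambiguate; fall back to speech-only parsing
    # Select all marks within a small radius of the touch point.
    return [p for p in points
            if (p.x - touch.x) ** 2 + (p.y - touch.y) ** 2 <= radius ** 2]

# Example: "filter these points" plus a tap near two nearby marks selects both.
marks = [DataPoint("a", 100, 100), DataPoint("b", 110, 105), DataPoint("c", 400, 300)]
selected = resolve_deictic_selection("filter these points", TouchEvent(105, 102), marks)
print([p.id for p in selected])  # ['a', 'b']
```

In this sketch, the precision of the touch location compensates for the deictic ambiguity of the spoken phrase, which is one way combined modalities can offset each other's weaknesses.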
This material is based upon work supported by the National Science Foundation under Grant No. IIS-1717111. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
Project information:
Award number: IIS-1717111
Title: III: Small: Creating Natural Data Visualization and Analysis Environments
Duration: 11/15/17 - 10/31/20 (extended to 10/31/21)
PI: John Stasko
Students: Arjun Srinivasan, Hayeong Song
Last updated: July 18, 2021