Large Language Models (LLMs) and LLM-powered chatbots and virtual assistants like ChatGPT have changed the way people complete tasks and interact with technology. Even individuals without a technical background can now easily use LLMs in their everyday lives to compose emails, make decisions, and summarize information. For visualization, the unique capabilities of LLMs promise to extend existing threads of research in new and exciting directions, leaving a large space of possibilities still to be explored and many open questions. For instance: What are the capabilities of LLMs on tasks relevant to visualization, and how can we feasibly assess these capabilities both now and moving forward? Can LLMs leverage domain knowledge about particular application areas to enable question-answering and guidance capabilities in visualization systems to a degree that was previously infeasible? How does incorporating LLMs into visualization systems affect users’ trust in those systems and in their data? As designers, how can we best support new LLM-powered modes of interaction and data analysis while mitigating problems stemming from LLM hallucination and inconsistency? This project seeks to answer these types of questions, producing fundamental knowledge about the capabilities of LLMs on visual data analysis tasks and guidelines for utilizing LLMs responsibly.