Data visualization and journalism are deeply interconnected. From early infographics to contemporary data-driven narratives, visualization has become an integral component of journalism. As data journalism leverages visualization to bridge the gap between vast datasets and society, visualization research has focused on supporting these journalistic endeavors. However, recent developments, particularly the rise of generative AI systems such as large language models (LLMs), have introduced new challenges and opportunities that extend beyond merely presenting data.
Generative AI tools such as ChatGPT have the potential to transform journalism by assisting with tasks like content creation. However, the same technology also poses risks, most notably information pollution, as these models can readily generate misinformation.
This project seeks to explore these challenges: How can we better equip journalists to navigate the complexities introduced by AI? How can we harness the strengths of LLMs to enhance journalistic tasks while mitigating their risks? What strategies can we develop to ensure the integrity of AI-assisted journalistic content, especially in data narratives? And how can we protect the public from the dangers of data-driven misinformation? Through this work, we aim to understand these intricate issues and develop effective solutions to address them.