INKE events at CFHSS Congress, Toronto [1 June]: Commons drop-in, Generative AI panel

Hi Everyone,

Just a quick note to pass along the details of two INKE sessions at Congress in the coming days, sponsored by the Federation.

HSS Commons Drop-in Session
Location: George Brown College Waterfront Campus, Fifth Floor, Room WF 527
Date and Time: Sunday 1 June, 9.30 – 10.15
https://www.federationhss.ca/en/congress/hss-commons-drop-session

Panel: Generative AI, LLMs, and Knowledge Structures
Location: George Brown College Waterfront Campus, Sixth Floor, Room WF 607
Date and Time: Sunday 1 June, 10.30 – Noon
https://www.federationhss.ca/en/congress/generative-ai-llms-and-knowledge-st...

* Amanda Lawrence (RMIT U), “Observations on Wikimedia, LLMs and Information Ecosystem Observability”
* Geoffrey Rockwell (U Alberta, Amii), “Forging Interpretations with Generative AI”
* Lai-Tze Fan (U Waterloo), “Ethical Data Collection for AI: Bridging Knowledge Platforms and Biometric Datasets”

Further details below. Hope you’ll consider joining us!

All best wishes,
Ray

HSS Commons Drop-in Session
Location: George Brown College Waterfront Campus, Fifth Floor, Room WF 527
Date and Time: Sunday 1 June, 9.30 – 10.15
https://www.federationhss.ca/en/congress/hss-commons-drop-session

Curious about the HSS Commons (https://hsscommons.ca/)? Whether you're brand new to it or already a member, drop by our informal session during the 2025 HSS Congress in Toronto! This is a chance to:

* Learn how the Commons supports open scholarship in the humanities and social sciences;
* Set up your profile and share your work;
* Explore tools for collaboration, teaching, and publishing;
* Chat with members of the team and fellow researchers;
* Ask questions or just have a coffee and say hello!

No registration needed—just stop in when you can. We’d love to meet you and help you get the most out of the HSS Commons.
Panel: Generative AI, LLMs, and Knowledge Structures
Location: George Brown College Waterfront Campus, Sixth Floor, Room WF 607
Date and Time: Sunday 1 June, 10.30 – Noon
https://www.federationhss.ca/en/congress/generative-ai-llms-and-knowledge-st...
Community pass registration via http://ow.ly/onwv50VrMNl

Sponsored by the Federation for the Humanities and Social Sciences and the Implementing New Knowledge Environments Partnership (inke.ca)

This panel explores the evolving relationship between commercial platforms, generative AI, and digital public knowledge infrastructures. As community-led initiatives like Wikimedia operate alongside increasingly closed and opaque commercial systems, the need for greater transparency, data access, and research tools becomes critical. Generative AI introduces new dynamics—shaping how knowledge is accessed, interpreted, and produced—while also raising ethical concerns around automation, transparency, and positionality in research. Additionally, the use of sensitive data, such as biometrics, in AI training prompts important questions about how classification systems are formed and how they impact representation and fairness. The panel addresses these challenges and emphasizes the importance of responsible, ethical approaches to studying and shaping AI-driven knowledge ecosystems.

This panel is followed by the Big Thinking panel “Technologies of togetherness: Shaping an equitable future with AI.” Some light refreshments will be available at the session and nearby.

Opening Remarks: Karine Morin (President, CFHSS)
Panel Chair: Ray Siemens (U Victoria)

“Observations on Wikimedia, LLMs and Information Ecosystem Observability”
Amanda Lawrence (RMIT U)

Dr. Amanda Lawrence (RMIT U) is the 2024–25 Honorary Resident Wikipedian, a residency co-sponsored by the Electronic Textual Cultures Lab, U Victoria Libraries, the Implementing New Knowledge Environments Partnership, and the Federation for the Humanities and Social Sciences. Dr. Lawrence is Director of the Australian Internet Observatory, former Director of the Analysis & Policy Observatory, and the outgoing President of Wikimedia Australia. She is an Affiliate of the ARC Centre of Excellence for Automated Decision-Making & Society, and her research focuses on open access, Wikipedia, public knowledge infrastructure, and digital platforms. She has led major research initiatives, developed numerous online platforms, and published widely.

Although the internet is dominated by large commercial platforms increasingly operated as walled gardens, community-led initiatives like Wikimedia, open access publishing, open source software, and open source LLMs continue to carve out space in the digital public sphere. Understanding the complex, often symbiotic relationship between these domains is difficult due to limited transparency and data access from commercial platforms. Generative AI systems have been extensively trained on public resources like Wikipedia, but may now be diverting traffic from them and raising concerns about feedback loops that could affect both content quality and future training. To address these challenges, we need new ways to access and observe data from platforms like Google Search and GenAI systems. Such observability is essential for analysing the impact of digital technologies on public knowledge infrastructure. Emerging research infrastructure is beginning to support this work, using LLMs as both tools and subjects of study—an essential step toward sustaining a healthy digital information ecosystem. This paper explores how these tools can offer new insights into the evolving relationship between LLMs and Wikimedia.
“Forging Interpretations with Generative AI”
Geoffrey Rockwell (U Alberta, Amii)

Dr. Geoffrey Rockwell, appointed as a Canada CIFAR AI Chair in 2024, is a professor in the Department of Philosophy and Media and Technology Studies at the U Alberta. His research encompasses video games, textual visualization, text analysis, and the ethics of technology and artificial intelligence. He co-authored Hermeneutica: Computer-Assisted Interpretation in the Humanities (MIT P, 2016) and co-developed Voyant Tools, an award-winning suite of text analysis and visualization tools. Dr. Rockwell is currently working on a book examining dialogues with AI, particularly focusing on interactions with chatbots like ChatGPT.

Using large language models, we can now generate fairly sophisticated interpretations of documents using natural language prompts. We can ask for classifications, summaries, visualizations, or specific content to be extracted. In short, we can automate content analysis of the sort we used to count as research. As we play with the forging of interpretations at scale, we need to consider the ethics of using generative AI in our research. We need to ask how we can use these models with respect for sources, care for transparency, and attention to positionality.

“Ethical Data Collection for AI: Bridging Knowledge Platforms and Biometric Datasets”
Lai-Tze Fan (U Waterloo)

Dr. Lai-Tze Fan is an Associate Professor at U Waterloo, holding a Canada Research Chair in Technology and Social Change. She also serves as an Associate Professor at U Bergen, Norway. Her research focuses on media studies, digital culture, and critical approaches to AI, including the development of tools and methods to explore storytelling, media materiality, and technological infrastructures. Dr. Fan directs the U&AI Lab, which examines inequities in AI systems. She is also involved in various academic and artistic communities, serving on advisory boards and as an editor for open-access journals.
This talk will examine ethical approaches to data collection within AI knowledge ecosystems, focusing specifically on biometric data that informs AI training datasets—for example, biometric datasets used to train AI facial recognition technologies. While these specialized collections differ from traditional knowledge structures like those found on scholarly platforms or educational systems, they play a crucial role in how we categorize people, which can in turn inform these broader platforms. The presentation will explore this symbiotic relationship, demonstrating how biometric datasets can enhance—or mislead—the representational accuracy and fairness of knowledge structures.