Information Literacy in the Age of GenAI
For the final session of our series on Machine Learning and Artificial Intelligence for Information Professionals, Library Futures hosted Dr. Andrea Baer, Associate Professor of Practice in the School of Information at the University of Texas at Austin; Dr. Damien Patrick Williams, Assistant Professor of Philosophy and Data Science at the University of North Carolina at Charlotte; and Mita Williams, Law Librarian for the Don & Gail Rodzik Library at the University of Windsor, for a lively conversation about what happens when generative AI enters the spaces where students research, learn, and work.
Below is a write-up of their conversation and links from both the panelists and our community.
Dr. Andrea Baer
Andrea started by noting that AI hype in universities runs contrary to librarianship’s core values of equity, privacy, and sustainability. AI technologies, she argued, cause tremendous harm: beyond their detrimental effect on information literacy, they damage the environment, exploit labor, promote inaccuracy and bias, devalue human thought, and violate intellectual property rights. We know these harms, yet we are surrounded by a narrative that says AI is inevitable and that we must become proficient in it. That inevitability narrative dictates how we think and feel, insisting that we can use AI ethically without ever explaining how. We can, however, consider alternative approaches to AI, to pedagogy, and to student and teacher agency.
Links from Andrea
- Investigating the “Feeling Rules” of Generative AI and Imagining Alternative Futures
- Unpacking Predominant Narratives about Generative AI and Education: A Starting Point for Teaching Critical AI Literacy and Imagining Better Futures
- Critical Perspectives on AI in Education
- Teaching Critical AI Literacies: “Explainer” and Resources for the New Semester
- Artificial Intelligence and Academic Professions
Dr. Damien Patrick Williams
Damien began by asking what happens when we over-rely on AI. The answer? Real, negative impacts on the development of critical thinking skills and on the retention of knowledge. People prefer an AI that tells them what they want to hear and makes them feel good, and all a large language model (LLM) is doing is telling you the thing you are statistically most likely to accept. Damien also raised the concern that people who use these tools are less likely to help, or be helped by, other human beings, which creates perfect grounds for misinformation and disinformation. He has hope, though, that we can build different systems more aligned with human values.
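Damien’s point about statistical likelihood can be made concrete with a toy sketch. Nothing below comes from a real model: the candidate words and scores are invented purely for illustration, but the mechanics (turning scores into probabilities and emitting the most probable continuation) mirror how an LLM selects its next token.

```python
import math

# Toy next-token scores for one context. Real LLMs learn such scores from
# vast corpora; these numbers are invented purely for illustration.
candidate_scores = {"agree": 2.1, "disagree": 0.3, "maybe": 1.0}

def softmax(scores):
    """Convert raw scores into a probability distribution."""
    exps = {tok: math.exp(s) for tok, s in scores.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

probs = softmax(candidate_scores)
# The model emits whichever continuation is most probable given its training
# data: "statistically most likely to be accepted", not checked for truth.
next_token = max(probs, key=probs.get)
print(probs)       # roughly {'agree': 0.67, 'disagree': 0.11, 'maybe': 0.22}
print(next_token)  # 'agree'
```

The point of the toy: nothing in this procedure evaluates whether “agree” is correct, only whether it is likely.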
Links from Damien
- Bias in the System
- Failures of “AI” Promise: Critical Thinking, Misinformation, Prosociality, & Trust
- Damien on Bluesky
Mita Williams
Mita began with two quick learning stories. First, after we learn how to tie shoelaces, we just do it; we no longer think in steps. Second, she recounted a 2019 University of Richmond study in which rats housed in an enriched environment were taught to drive small vehicles: the rats learned quickly, enjoyed learning, and experienced less stress. Unlike people and rats, generative AI systems are not intelligent and do not learn the way we do, although, she noted, they are not without good uses. Mita suggested several hands-on ways of creating similarly enriched learning environments, from moot courts to art critiques, to promote convivial, social learning.
Links from Mita
GenAI and Learning
- Transmission Versus Truth, Imitation Versus Innovation: What Children Can Do That Large Language and Language-and-Vision Models Cannot (Yet)
- “Very very good letter in the FT magazine this morning pinpointing what humans definitely do when we ‘learn’ and why machine LLM ‘learning’…”
- Why A.I. Isn’t Going to Make Art
GenAI and Teaching
- We go to school to better understand problems
- AI Killed the Take-Home Essay. COVID Killed Attendance. Now What?
- How ‘Art School’ teaching avoids a losing battle with technology
GenAI and Librarianship
Find much more from Mita around the internet.
Q&A
Moderator Michelle Reed posed several questions to the panelists, though we did not have time to address all the excellent questions from the audience!
People say it’s important to use AI in order to learn about it. How do you respond to this line of thought, and how do you approach your own learning on this topic?
Andrea and Damien noted that requiring people to use AI tools in order to understand them removes agency and consent, and that we must recognize resistance and refusal as legitimate responses to AI. If, as Damien asked, refusal isn’t an option, then how is that consent? Mita added that it is in fact possible to learn a lot about AI tools by observing visualized models of how an LLM takes text and breaks it into tokens, as in the sketch below. You can then see how powerful the tool is and ask, “Where should we actually use this power?”
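For readers who want to try this themselves, here is a minimal sketch of the kind of visualization Mita describes, using the open-source Hugging Face transformers library. The choice of the “gpt2” tokenizer is our assumption, just one freely available example, not anything the panel endorsed; The Tokenizer Playground (linked in the community section below) offers a no-code version of the same exercise.

```python
# Minimal sketch: how an LLM breaks text into subword tokens before any
# processing happens. Requires: pip install transformers
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # one freely available tokenizer
text = "Information literacy in the age of generative AI"
token_ids = tokenizer.encode(text)
tokens = tokenizer.convert_ids_to_tokens(token_ids)

print(tokens)     # subword pieces; in GPT-2's vocabulary, 'G' with a breve marks a leading space
print(token_ids)  # the integer IDs the model actually operates on
```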
Grad students are very often asked to use genAI in the classroom, occasionally with an opt-out but with the daunting responsibility of providing a very good justification for opting out. Considering the power dynamics of that situation, what is your advice for students who feel ethically opposed but also feel that they don’t have a choice?
Damien and Andrea talked about both student and librarian resistance to AI technologies and the difficulty of navigating that resistance when universities are pushing AI tools and policies are unclear. As Andrea noted, the whole system is set up in a way that makes meaningful consent impossible. Damien pointed to examples of UNC Charlotte students organizing to discuss how to resist AI mandates and encouraged students to see collective action as their best tool.
How can librarians resist when patrons are looking to them for assistance with this technology?
Andrea suggested keeping resources on hand that clarify basic facts about these technologies, so that you can illustrate their harms. People may assume these tools are the best technology available; encourage them to think about what they want to achieve and whether this really is the best tool for it.
Mita asked why people use these systems in the first place: sometimes a lack of time, a lack of energy, or a sense of shame about not knowing something. People can feel alienated from their own words; a meaningful educational experience comes from listening to learners and supporting what they want to do on their journey. There are already tools that do the work we want to do. How can we encourage their use?
Damien continued the theme of using existing tools, noting that disabled authors were already writing before AI. The “gloss” of AI is just that: a gloss. It’s not true. There are people trying to think about different ways of building AI outside of the GenAI model and framing, and the best thing any of us can do is find them. The Distributed AI Research Institute is one example; Hugging Face is doing interesting things; the AI Now Institute approaches this from a public policy stance. Once we find those people, we can bring them into the conversation at our universities, in our libraries, on social media, and beyond.
Is AI information literacy even possible given that the models obscure their sources? How do we evaluate the information these models give us?
Mita started by saying that before this hype cycle around ChatGPT, there were folks in the digital humanities using machine learning in novel, specific ways, training models on discrete sets of information (usually scanned materials), and that work is still there. Those projects aren’t comparable to today’s LLMs, but it will be interesting to see what small tools develop in the future.
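As a concrete, entirely hypothetical illustration of that smaller-scale approach, the sketch below trains a simple classifier on a tiny, fully known corpus using scikit-learn. The documents and labels are invented stand-ins for the kinds of scanned materials Mita mentions, not taken from any actual digital humanities project.

```python
# Small-scale machine learning on a discrete, inspectable corpus.
# Requires: pip install scikit-learn
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Invented stand-ins for digitized archival documents.
documents = [
    "minutes of the township council, roads and bridges",
    "ledger of household accounts, flour candles and soap",
    "council resolution on the public works levy",
    "accounts payable for dry goods and kitchen supplies",
]
labels = ["civic", "domestic", "civic", "domestic"]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(documents)
model = LogisticRegression().fit(X, labels)

# Because the entire training set is known, every prediction can be traced
# back to specific documents, unlike an LLM trained on undisclosed web data.
print(model.predict(vectorizer.transform(["resolution on the roads levy"])))  # ['civic']
```

The contrast with an LLM is the point: here the training data fits on one screen and can be audited line by line.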
Damien agreed that there are a number of people doing smaller-scale projects that let people know how tools are being built. It’s possible to understand what the systems are and what they do without knowing exactly how they were trained. You can infer a lot about what is likely present in the training data; you might not know it precisely without a court order or injunction, but you can understand the type of material going in, and that gives us the power to correct it.
Andrea said that these systems are not transparent, so you can’t dissect how a particular output occurred. But, she asked, does everyone need deep technical expertise in how the models work in order to critique the systems or to refuse to use them? When we are so focused on AI, what gets deprioritized by the assumption that “GenAI is everything”? What are we losing when we don’t pay attention to the rich ways we can teach that don’t involve these tools?
Links from the community
Webinar attendees held a lively chat alongside the discussion. Here’s a roundup of the links they shared.
- Against the Uncritical Adoption of ‘AI’ Technologies in Academia
- Engaging With AI Isn’t Adopting AI - by Marc Watkins
- Against Technoableism
- The Impact of Generative AI on Critical Thinking: Self-Reported Reductions in Cognitive Effort and Confidence Effects From a Survey of Knowledge Workers
- Discourse Depot: A Workshop in Operationalizing Discourse Analysis as Machine Instructions
- AI-Generated Citations or Hallucinations
- Thinking Smarter, not Harder? Google NotebookLM’s Misalignment Problem in Education
- Cataloging the World
- Empire of AI
- We Already Have an Ethics Framework for AI
- What Uses More
- The Tokenizer Playground
- Elon Musk’s Grok AI tells users he is fitter than LeBron James and smarter than Leonardo da Vinci
- Balancing the ethics, concerns and use of AI in the classroom
- Douglas Adams’ Technology Rules
- Deep Background: Using AI as a Co-Reasoning Partner with Mike Caulfield
- Attention Is All You Need
- Is It Safe to Talk about NaNoWriMo and AI Yet?
- Resistance, Rebellion, Refusal: 7 Tips from Librarians for Navigating “AI”
- Civics of Technology
