In designing and framing the day's activities, Boba and Michael took cues from the University of Calgary’s Sarah Eaton (2025), who advocates a “wraparound approach” to both the tensions and the possibilities of Generative AI—a model of inquiry that brings multiple stakeholders, with a range of perspectives, into the conversation. That diversity is both a source of vitality and, as Michael reminded us in his opening remarks, the reason GenAI workshops are often so difficult. Convening an interdisciplinary group means inviting diverse literacies, ethical stances, and learning contexts into the same room. For some, GenAI poses an existential threat: an “animal we can’t control,” as one attendee put it. For others, these tools are already making academic life richer, presenting opportunities to refine and streamline practice.
What also makes GenAI workshops challenging is that discussions are easily abstracted from the contexts of teaching and research practice. Our own growing data on students’ engagement with AI tools illuminates a range of uses: from understanding an assignment to brainstorming ideas, from summarizing readings to polishing a draft of written work. Eaton reminds us that the purpose of interrogating these practices is to “cultivat[e] ethical decision-making rather than pursuing an unwinnable academic integrity ‘arms race’ of detection and punishment.” Such was our intention, too, as we invited colleagues to discuss both ethical tensions and pedagogical possibilities grounded in real-life experience.
“Discussion,” for many participants, is precisely what made our event so generative—a sentiment that echoed across their reflections on the day. Interdisciplinary, small-group discussions animated the morning sessions, as new and familiar colleagues grappled with questions related to LLM-generated writing in high-stakes contexts like admissions, grant writing, and publication, as well as GenAI’s evolving implications for assessment, pedagogy, and policy. When we reconvened as a full group in the afternoon, our discussion was anchored by case studies that reflect some of the central tensions and paradoxes texturing the current landscape: an accomplished student’s article flagged for undisclosed AI use several months post-publication; an instructor who crafts clinical simulations using Copilot that, for better or worse, reflect discriminatory practices; a graduate seminar enlivened by students’ improved comprehension and fluency thanks to AI-generated roadmaps of challenging readings. There are no simple solutions to these puzzles. As one attendee said, “I think you can argue on both sides.”