Recap: Symposium on Generative AI in Health Sciences Writing & Assessment

On May 27th, 2025, the HSWC convened over thirty faculty members, instructors, program administrators, and leaders from across the University of Toronto’s health sciences faculties for its inaugural Symposium on Generative AI in Health Sciences Writing & Assessment. Generously supported by the Council of Health Sciences, and facilitated by Dr. Boba Samuels and Dr. Michael Cournoyea, the symposium aimed to:

  • Build cross-disciplinary networks to discuss the impact of Generative AI on academic programs, teaching, and learning.  

  • Consider guidelines, program adaptations, and policies for AI-assisted health sciences education.  

  • Explore transparency and accountability in AI decision-making.  

  • Assess changes in high-stakes writing (e.g., comprehensive exams, grants) and possible adaptation strategies. 

Our attendees’ overwhelmingly positive feedback suggests that our symposium was a success on these fronts. The Faculty of Information’s Learning Hub, our home for the day, was abuzz with rich discussion punctuated by provocations and insights drawn from across our respective fields of practice. 

Michael and Boba share their opening remarks

In designing and framing the day's activities, Boba and Michael took cues from the University of Calgary’s Sarah Eaton (2025), who advocates for a “wraparound approach” to tackling both the tensions and possibilities of Generative AI—a model of inquiry that involves multiple stakeholders who bring a range of perspectives. That diversity is both a source of vitality and, as Michael reminded us in his opening remarks, the reason GenAI workshops can often be so difficult. Convening an interdisciplinary group means inviting diverse literacies, ethical stances, and learning contexts into the same room. For some, GenAI poses an existential threat: an “animal we can’t control,” as one attendee put it. For others, these tools are already making academic life richer, presenting opportunities to refine and streamline practice. 

What also makes GenAI workshops challenging is that discussions are easily abstracted from the contexts of teaching and research practice. Our own growing data on students’ engagement with AI tools illuminates a range of uses: from understanding an assignment to brainstorming ideas, from summarizing readings to polishing a draft of written work. Eaton reminds us that the purpose of interrogating these practices is to “cultivat[e] ethical decision-making rather than pursuing an unwinnable academic integrity ‘arms race’ of detection and punishment.” Such was our intention, too, as we invited colleagues to discuss both ethical tensions and pedagogical possibilities grounded in real-life experience. 

“Discussion,” for many participants, is precisely what made our event so generative—the sentiment echoing across their reflections on the day. Interdisciplinary, small group discussions animated the morning sessions, as new and familiar colleagues grappled with questions related to LLM-generated writing in high-stakes contexts like admissions, grant writing, and publication, as well as GenAI’s evolving implications for assessment, pedagogy, and policy. Reconvening as a full group in the afternoon, our discussion was anchored by case studies that reflect some of the central tensions and paradoxes texturing the current landscape: an accomplished student’s article flagged for undisclosed AI use several months post-publication; an instructor who crafts clinical simulations using Copilot that, for better or worse, reflect discriminatory practices; a graduate seminar enlivened by students’ improved comprehension and fluency thanks to AI-generated roadmaps of challenging readings. There are no simple solutions to these puzzles. As one attendee said, “I think you can argue on both sides.” 

A small group engaged in discussion around a table

Colleagues engage in small group discussions to surface ethical tensions and pedagogical possibilities

Alongside these concrete provocations, we invited participants to get their feet wet and their hands dirty by making use of tools like Copilot and Otter.ai in their discussions. We did, too: the Symposium’s skeleton was partly drawn and fleshed out using similar tools. Asked at the close of the day whether they had played in the proverbial sandbox, nearly every participant raised their hand. “That’s frighteningly good,” someone said of Otter.ai’s rendering of their group’s conversation. “Maybe it’s not as big of a problem as we really thought.” 

If one refrain best captures the spirit of our collective deliberation, it may be this, echoed over and over: “I hadn’t even thought of that.” One attendee shared that they especially “enjoyed the camaraderie of mixed health sciences groups”—a rarity in an increasingly siloed and splintered ecosystem. For us, that means we accomplished what we hoped for by bringing different voices and experiences to the same table. If another phrase best summarizes the feedback we received afterwards, it’s that “future sessions would be welcome.” “This could be a multi-day event,” one colleague suggested. We think so, too, and we hope to reconvene sooner rather than later. 

Reference 
Eaton, S. E. (2025). A wraparound approach to academic integrity: Centering students in the postplagiarism era. SSRN. https://doi.org/10.2139/ssrn.5223911