AI-Powered Assessment in IPE: Using Large Language Models to Enhance Reflection, Rubrics, and Real-World Relevance
Abstract:
As Interprofessional Education (IPE) evolves to meet the complex needs of collaborative healthcare, educators are challenged to assess student learning in ways that are meaningful, consistent, and scalable. This interactive seminar explores how artificial intelligence (AI)—particularly large language models (LLMs) such as ChatGPT and Microsoft Copilot—can be applied to enhance IPE assessment processes. Applications include streamlining rubric development, supporting the evaluation of student reflections, and contributing to robust IPE program evaluation and more efficient workflows.
This session directly aligns with the Summit's theme of "Building the Evidence Base for Interprofessional Practice and Education" by demonstrating how data-informed AI tools can improve IPE assessment practices. Effective assessment is an essential component of program implementation and educational innovation, providing the necessary data to demonstrate impact and inform continuous improvement. By leveraging LLMs, educators can gain support in designing outcomes-based assessments that not only drive interprofessional collaboration and learning but also generate valuable evidence of educational effectiveness. This practical seminar assists educators in contributing to the evidence base through enhanced assessment methodologies.
Objectives:
After attending this session, the learner will be able to:
• Explain how LLMs can be applied to assessment and evaluation in IPE.
• Utilize AI tools to co-create or refine rubrics aligned with interprofessional competencies and facilitate consistent evaluation.
• Employ AI to analyze student reflections for themes, gaps in understanding, or other issues that could inform program improvement.
• Strategize ways to integrate data collected from AI-assisted assessment into comprehensive IPE program or course evaluation.
Immediately Actionable Skills and Knowledge:
Participants will gain hands-on experience with LLMs and AI tools to:
• Practice crafting effective prompts using clear instructions, key criteria, and specific language for authentic assessment and optimal results.
• Create or adapt rubrics for IPE reflection assignments, enhancing their alignment with learning objectives and interprofessional competencies.
• Evaluate sample student reflections using AI-assisted prompts to gauge learner progress toward assignment objectives and/or to inform overall course improvement.
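For participants who later want to script the prompt-crafting exercise above, the structure of a rubric-grounded evaluation prompt can be sketched in code. This is a minimal illustration in Python: the function name, the sample rubric criteria, and the prompt wording are the author's assumptions for demonstration, not materials distributed in the seminar, and the resulting string would still need to be sent to an LLM of the user's choice with human review of the output.

```python
def build_evaluation_prompt(rubric: dict[str, str], reflection: str) -> str:
    """Assemble an LLM prompt that asks for rubric-grounded, formative feedback.

    `rubric` maps criterion names to short descriptions. The criteria used
    below are illustrative IPEC-style examples, not a prescribed IPE rubric.
    """
    criteria = "\n".join(
        f"- {name}: {description}" for name, description in rubric.items()
    )
    return (
        "You are assisting an educator with formative assessment. "
        "Evaluate the student reflection below against each rubric criterion. "
        "For each criterion, cite evidence from the text, note any gaps in "
        "understanding, and suggest one improvement. Do not assign a grade.\n\n"
        f"Rubric criteria:\n{criteria}\n\n"
        f"Student reflection:\n{reflection}"
    )

# Illustrative usage with hypothetical criteria
sample_rubric = {
    "Roles and responsibilities": "Identifies the contributions of other professions.",
    "Teamwork": "Reflects on communication within the interprofessional team.",
}
prompt = build_evaluation_prompt(
    sample_rubric,
    "During the simulation, I relied on the pharmacy student to verify dosing...",
)
```

Keeping the instructions, criteria, and reflection in clearly labeled sections mirrors the prompt-crafting guidance above: clear instructions, key criteria, and specific language tend to produce more consistent, auditable LLM output.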
Active Learning Strategies:
This is an interactive, “bring-your-own-project” seminar. Participants are encouraged to bring a current or past IPE project—such as a reflection assignment, a draft rubric, or an assessment task. Following an introduction to the technology and its ethical implications, participants will use AI models in real time to refine or co-create materials aligned with their projects. Peer sharing and guided feedback will ensure collaborative learning and reinforce practical takeaways. Participants should be prepared to use an LLM of their choice during the workshop for hands-on activities.
Relevance to Priority Criteria and Theme:
This seminar directly supports evidence-informed assessment practices in IPE by integrating innovative tools (AI/LLMs) with interprofessional learning outcomes. It promotes scalable solutions for reflection assessment and rubric development, while maintaining human oversight and pedagogical integrity. It exemplifies the use of informatics and data tools in interprofessional education and supports faculty development in emerging digital competencies essential for future-ready healthcare teams. This directly addresses the priority criteria related to measurable learning outcomes, quality improvement initiatives, and the use of informatics for interprofessional innovation.