Associate Professor, University of Massachusetts Amherst, Research and Evaluation Methods Program, United States
I.C.E. Exchange Program Description: This interactive workshop explores practical applications of artificial intelligence (AI) in test item generation, emphasizing its potential to transform conventional approaches to assessment development. Through active audience participation, we aim to illustrate how AI-driven item-writing software can streamline the creation of assessment items in real time.
Participants will take part in a live demonstration of AI-driven exam development tools. Throughout the session, attendees will observe the step-by-step process of generating exam items, constructing a test, administering it, and analyzing the resulting data, all within the span of an hour.
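To give a concrete sense of what AI-driven item generation can look like, the following is a minimal sketch that asks a large language model to draft a single multiple-choice item. It assumes the OpenAI Python client with a valid API key; the model name, prompt wording, and helper function are illustrative assumptions, not the specific software demonstrated in the workshop.

```python
# Minimal sketch: prompting an LLM to draft one multiple-choice item.
# Assumes the `openai` package (v1+) is installed and OPENAI_API_KEY is set.
# The model name and prompt wording are illustrative, not the workshop's tool.
from openai import OpenAI

client = OpenAI()

def draft_mc_item(topic: str, difficulty: str = "moderate") -> str:
    """Ask the model for one four-option multiple-choice item on a topic."""
    prompt = (
        f"Write one {difficulty}-difficulty multiple-choice question about {topic}. "
        "Provide four options labeled A-D, indicate the correct answer, "
        "and include a one-sentence rationale."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
        temperature=0.7,
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(draft_mc_item("standard error of measurement"))
```

In practice, drafted items of this kind would still pass through subject-matter-expert review and editing before being assembled into a form; the speed gain lies in producing first drafts, not in bypassing quality control.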
This workshop is designed for both seasoned professionals exploring innovative methodologies in assessment design and newcomers eager to learn about emerging trends in testing. In an environment built around collaboration and active engagement, participants will witness firsthand the potential implications of AI technology for the future of assessment practice.
Join us for an intellectually stimulating session where theory meets practice and participants actively shape the exploration of AI-driven approaches to assessment development. Gain insight into the challenges and opportunities that AI technologies present for real-time assessment, and contribute to the ongoing discourse on integrating AI into educational and professional testing environments.
Learning Objectives:
Compare traditional test development methods with AI-driven approaches, highlighting the efficiencies gained and the challenges introduced by integrating AI technologies.
Identify the requirements for implementing AI-driven item-writing software within their own organizations, considering factors such as resource allocation, staff training, and potential implications for assessment validity and reliability.
Utilize AI tools to generate assessment items, construct a test, administer it, and analyze the resulting data within a constrained timeframe, as illustrated in the sketch that follows.
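To make the data-analysis step of the final objective concrete, the following is a minimal sketch of a post-administration item analysis in Python, computing item difficulty (proportion correct) and a corrected point-biserial discrimination index from a scored response matrix. The sample data, column names, and library choices are hypothetical assumptions, not the analysis pipeline used in the session.

```python
# Minimal sketch: classical item analysis on a scored (0/1) response matrix.
# The data here are randomly generated stand-ins for real examinee responses.
import numpy as np
import pandas as pd

# Rows = examinees, columns = items (1 = correct, 0 = incorrect).
scores = pd.DataFrame(
    np.random.default_rng(42).integers(0, 2, size=(50, 5)),
    columns=[f"item_{i + 1}" for i in range(5)],
)

total = scores.sum(axis=1)

# Item difficulty: proportion of examinees answering each item correctly.
difficulty = scores.mean()

# Corrected point-biserial: correlation of each item with the total score
# excluding that item, a rough index of item discrimination.
point_biserial = pd.Series(
    [scores[col].corr(total - scores[col]) for col in scores.columns],
    index=scores.columns,
)

summary = pd.DataFrame({"difficulty": difficulty, "point_biserial": point_biserial})
print(summary.round(3))
```

Statistics of this kind are typical of the post-administration analysis described above, giving a quick read on which generated items performed well and which need revision.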