Creating Guidelines for the Use of Gen AI Across Campus
The University of Kentucky has taken a transdisciplinary approach to developing guidelines and recommendations around generative AI, incorporating input from stakeholders across all areas of the institution. Here, the director of UK's Center for the Enhancement of Learning and Teaching breaks down the structure and thinking behind that process.
Last year, the University of Kentucky announced the formation of a new task force to study generative AI and make recommendations for its responsible use. Dubbed UK ADVANCE (Advancing Data utilization for Value in Academia for National and Campuswide Excellence), the committee brings together experts from all over campus to provide ongoing guidance on use of the technology in teaching and learning, research, and more. The group has published guidelines for faculty and researchers, with plans to update the recommendations as the technology evolves. We sat down with Trey Conatser, director of the Center for the Enhancement of Learning and Teaching and co-chair of UK ADVANCE, to find out more.
Campus Technology: How did the UK ADVANCE committee come about?
Trey Conatser: When ChatGPT was first publicly released in November 2022, one of the things we at the Center for the Enhancement of Learning and Teaching noticed over that winter break was a sudden rise in chatter about this new technology and what it might mean for teaching and learning. Some of those early concerns were of course around academic integrity, but we also saw in it something that could profoundly change the behaviors of writing and learning. So in January 2023 we started having town halls and workshops and different kinds of trainings around generative AI in the classroom space. We were thinking about it broadly, across all the different areas of study, the professions and disciplines, from the more liberal arts side of education to the professional schools and to the STEM classes, because this is a phenomenon that manifests in many different ways across our work life, our education life, even our personal life.
The president asked our provost to form a university-level task force to address the rise of generative AI, because it was clear that this wasn't going to be one of those passing fads or trends or flashes in the pan. The idea behind the task force was that it would be transdisciplinary in nature. We involved all stakeholders from across campus — including students, staff, faculty, and administrators from a wide range of areas of expertise — to address the problem at hand. We have folks from informatics and computer science but also philosophy, communication, writing, and leadership studies, as well as representation from IT, Legal, PR and Communications — all these areas that are touched by AI.
The charge was to make recommendations and provide guidance and advice around what we should be doing as an institution with respect to this technology, knowing that there would be some rapid developments that we might have to respond to with a sense of alacrity. Something new comes out, and then all of a sudden, we might have to rethink things and respond to those moments as well. There were lots of conversations around navigating the difficulties of making global statements at an institution as big and diverse as UK — you can't have a simple rule about generative AI that equitably serves all areas of the university. So we were navigating this need for flexibility, but also the need to give some concrete and actionable guidance around the technology.
That resulted in a set of instructional guidelines that we released in August of 2023 and updated in December of 2023. We're also looking at guidelines for researchers at UK, and we're currently in the process of working with our colleagues in the healthcare enterprise, UK Healthcare, to comb through the additional complexities of this technology in clinical care and to offer guidance and recommendations around those issues.