Creating Guidelines for the Use of Gen AI Across Campus
Because generative AI affects each person differently, and each field, discipline, and area of study differently, we found we needed to be in dialogue with people. The guidelines are meant to be used by people from a wide range of areas, and then, in dialogue, we can work with them on what constitutes a good policy for generative AI in their specific courses. How does this technology manifest in your area of study? What does learning look like in your class? What are your goals for student learning? How do we set you up to succeed in a world where this technology now exists?
CT: What's your approach to faculty training on generative AI?
Conatser: Our teaching center enjoys a unique level of connection and collaboration across our entire campus. Despite the large size of our university, there's not a single college or unit that we don't work with here at UK. And our teaching center is a voluntary unit: It's not mandatory for anyone to work with us, and that's a critical part of our success, our citizenship at the university, and our ability to be colleagues with faculty, both intellectually and organizationally. We make sure to be clear with people that our teaching center and the ADVANCE team are ongoing spaces where they can find community, advice, assistance, and camaraderie.
Our trainings on generative AI have by far been our most well-attended trainings and workshops over the last year. We've done them in person and online. Some sessions are more of an introduction to generative AI: What is it? How does it work? What does it mean for higher education? We'll have sessions that focus specifically on writing assignments and generative AI, or on assignments that aren't writing-based. We'll have sessions that focus more on course policy and academic honesty. And then we'll have other formats: We've had play sessions, for example, where the objective is not to learn a great deal of conceptual information but rather to play around with the technology with some guided help and start to build a sense of efficacy with the tools. We've found that once people use the tools and gain first-hand knowledge of how they work and what they can do, they feel a lot more comfortable addressing with students what these technologies are and what usage will be appropriate for the course.
One of our campus events, which involves students, community members, and others, is called the Curiosity Fair. There are stations on interesting topics across different disciplines, just to get people enthusiastic about learning, and we had a station on generative AI. We set up computers with big monitors and ran an activity where students, faculty, staff, and community members could play with image-based generative AI. We started with one image, and over the four hours of the event, the point was to iterate on that image as much as possible, to make it the best image we could by the end of the night. People would look at the image, type in a prompt to try to improve it, and reflect on the output: What changed and why? Was it surprising?
This got into some really deep conversations about prompting and the kinds of data the generator was trained on. And it got at that idea of developing critical AI literacies. Regardless of what discipline you're teaching in and regardless of your course policy around generative AI, the overriding goal for students, and for all of us really, is the development of critical AI literacies: an increased understanding of how the technology works, but also of how it's being deployed, who has developed it, and what issues, challenges, or concerns come with it. How does that shape our sense of this technology and the way we use it? Can responsible use mitigate those risks or not? The development of those critical AI literacies is the undercurrent of all our trainings, workshops, documents, and recommendations, because generative AI is akin to the rise of the internet in the '90s: It's a new technology that disrupts our notion of what knowledge is, where it is, and how we develop it. And our overarching goal, particularly as an institution of higher education, is to home in on those skills around critical literacies and uses, so that we can adapt as the technology changes and still engage with it as lifelong learners in ways that are responsible, appropriate, and effective.
About the Author
Rhea Kelly is editor in chief for Campus Technology, THE Journal, and Spaces4Learning. She can be reached at [email protected].