7 Questions on Generative AI in Learning Design

In terms of overall structure, the policy begins with a scope and some basic definitions of generative AI, along with definitions of what counts as proprietary information and what should and should not go into an AI model. We're also trying to provide a lot of guidance on responsible usage. If I am a staff member at a college or a university, I should not be putting confidential or FERPA-protected information into any generative AI tool, period. There are certainly some exceptions; for example, if you're working with a vendor, and you have protections in place, and you know how that data is going to be used. But otherwise, treat the AI like a human being who doesn't work there. Would you give this information to them? If not, don't give it to the AI either.

It's important to give clear examples of what appropriate applications of AI would be, and also what the inappropriate applications are. I like to include the idea of a transparency clause: When should you be telling people that you're using AI? And also enforcement: What are you going to do when AI is misused? What happens if I break the rules?


The policy should include information on how to obtain training and support. If you're going to let people use these tools, and if you're going to provide a policy for appropriate and inappropriate use, then I think you need to provide a baseline amount of training for folks so that they understand these technologies at a very basic level. How do they work? When you're using them, what does that look like? Alongside that, there are multiple hard conversations that need to be had around the drawbacks of these technologies. The ethics of AI is very complicated.

And finally, we have a review and update section, noting that the policy will be regularly reassessed to make sure that it's still accurate.

CT: With this technology evolving so rapidly, as soon as you lay out a policy it could become out of date within minutes. How can institutions design these policies to be future-ready?

Vaughn: It may be that some of these technology-related policies need to be reviewed more frequently. I know a lot of organizations will look at their policies on an annual basis, but I think an AI policy should probably be reviewed every six months, and it might not hurt to look at it every three months or so. I would also recommend that folks cast a broader view when developing a process like this. For example, a broader view that says, "Don't use these tools to harass or bully someone else" covers an awful lot of use cases, including some things that might come down the road that we just can't possibly think of yet. And then when you're updating along the way, it might just be the definitions that need addressing as the policy grows and evolves to meet changes in technology.


About the Author

Rhea Kelly is editor in chief for Campus Technology, THE Journal, and Spaces4Learning. She can be reached at [email protected].
