7 Questions on Generative AI in Learning Design

CT: What are some of the ethical considerations around using generative AI in learning?

Vaughn: With generative AI, the big question is: Did learners actually create this on their own? There are also some larger ethical considerations to take into account if we're going to choose to engage with generative AI tools. Are they accurate? Are they actually giving you responses that are factually correct? There have been many well-documented instances of chatbots — I think Google Bard got in some hot water over this — getting even basic math calculations wrong and then doubling down on being wrong. If you have a learner who doesn't know how to critically analyze a response from an AI, the way an instructor who is an expert in their field or discipline would, then you are in a spot where the AI is doing more harm than good.

These tools are also trained on very, very large data sets. And when you engage with these tools, you are training them to become better: you are feeding more data into the system, and very often you are doing it for free, or you may even be paying the service for access while at the same time providing your free labor to improve a product that will inevitably be turned around and sold somewhere else. That matters, because we do care a lot about intellectual property and copyright concerns, right? You would be really upset if someone took one of your articles and republished it with their name on the byline, with no attribution, and just pretended that they wrote it. The same concern applies to these generative AI tools.

One prominent example from last year is the mobile app Lensa. The way Lensa works is that you upload a bunch of photos of your face, and it generates these cool AI avatars. I got caught up in it — I thought it was really cool. My face is out there on social media, so I had no problem uploading 20 or 25 pictures of my face for this application to analyze and generate some avatars for me. A week or two after I did that, it came out that Lensa was being accused — very credibly, by multiple artists — of having stolen their artistic style. And Lensa was not clear about how they trained their AI model and what art they used to do that.

The last thing that I would mention about generative AI tools in particular is that you can usually access them for free, but if you want quality results, you have to pay for access. For example, I subscribe to ChatGPT Plus for $20 a month. In exchange for that $20 I get access to the newest model of GPT, GPT-4, which has passed the bar exam, two of the three GRE sections, multiple AP exams, the written exams to become a sommelier — which is a little ridiculous. It's even met the passing thresholds for most of the USMLE, which is the licensing exam for doctors in the United States. This is an incredibly powerful model I get access to for $20 a month. And not only that, if ChatGPT is overwhelmed by all the free users, the folks who have free accounts lose access to the service temporarily, but I still get access because I'm a Plus subscriber. When I look at this from an equity perspective, we are in a spot where folks who can afford access to high-quality tools are going to be capable of doing really impressive things — and folks who cannot afford that will not. Equity and access are going to be long-term issues with some of these generative AI tools as they become more and more popular.

CT: What kinds of policies do institutions need to put in place around the use of generative AI?

Vaughn: For the past two months or so I've been working on a generative AI use policy. The idea behind the policy is to clearly communicate to folks what is and is not appropriate, and what is and is not safe. These tools hold incredible promise, and you would be losing a huge competitive advantage if you were to just outright ban everyone from using a tool like ChatGPT. If you put some guidelines in place for your folks to follow, they'll have a much better idea of: How is this used? How can we use it responsibly? That's why I believe in having a policy like this, so that it clearly communicates to folks what is expected.

