Campus Technology Insider Podcast February 2024
Rhea Kelly 10:41
So it's really a matter of: if someone is going to be the author of a textbook, maybe part of the job is training the AI model so that it's optimized for learning that material.
David Wiley 10:56
Yeah, my graduate training is partly in instructional design. Those of us who come from that world have been championing the value of having clear learning objectives, whether it's for a lesson you're doing in class, a chapter you're writing, or whatever it is you're doing. If you don't have a really clear sense of what the learning goal is for the student, what they should be able to do at the end of this experience, then it's hard to create good assessment, and it's hard to create good activities and good content for students to engage with to get them there, because we're not really clear about where "there" is, right?

I'm teaching a class on generative AI in education this semester, and in this class we're looking at a couple of prompt engineering frameworks. It's been interesting to see the degree to which those frameworks mirror some of the frameworks that were developed for writing learning objectives back in the '60s and '70s. You should think about an objective as having multiple parts, and each part plays a specific role. In a learning objective you cover: who's the audience? What's the behavior they should engage in? What conditions should they have to perform it under? And to what degree do you want them to perform it? So, for example, maybe I want a third grader to be able to multiply two-digit numbers without a calculator with 90% accuracy. That's the ABCD framework for writing outcomes that Mager developed way back in the day.

It turns out that prompt engineering is a lot like that. There are frameworks that specify what the parts of an effective prompt are and what needs to go into one. And it seems like there's going to be a kind of transformation of these learning outcomes into prompts, in a way that can drive generative AI to create activities, to create static content, to create assessments, to create feedback.

Back in the day, when you were done creating your learning objectives, you took a deep breath, because you knew you had to go develop assessments for every single one of those objectives. And once you'd developed all the assessments, you had to develop the content and the activities that would prepare people to succeed on the assessments, so they could demonstrate to you that they had mastered the outcome you were hoping they had mastered. Whereas in the future, that handoff may happen much more quickly. When I have a good learning outcome written, and I've transformed it into a good prompt, it may be that the next 80% of the work is done by the model. You might still have peer review and some of those other quality assurance processes on the back end, but the big chunk of the work is going to be done by generative AI.
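(Editor's sketch, not from the conversation: a minimal Python illustration of the handoff Wiley describes, rendering the four ABCD parts of a learning objective as a prompt you could hand to a generative model. The function name and prompt wording are hypothetical.)

    # Illustrative sketch: turn Mager's ABCD objective parts
    # (Audience, Behavior, Condition, Degree) into a generation prompt.
    # Function name and prompt wording are hypothetical.
    def objective_to_prompt(audience, behavior, condition, degree):
        objective = (f"{audience} will be able to {behavior} "
                     f"{condition}, {degree}.")
        return (f"Learning objective: {objective}\n"
                "Write five practice questions with an answer key that "
                "assess exactly this objective, and nothing beyond it.")

    # Wiley's example objective from the conversation:
    print(objective_to_prompt(
        audience="a third grader",
        behavior="multiply two-digit numbers",
        condition="without a calculator",
        degree="with 90% accuracy",
    ))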
Rhea Kelly 14:06
Well, I have two questions now. But first, can we dive into an example of the prompt engineering? One of the things I've found is that it's helpful to ask the AI to roleplay, so that it's acting as, say, an instructional designer. Is that one of the steps you've found too? I'd like some more details.
David Wiley 14:27
Yeah, there are a couple of different frameworks; I'll just pull one up while we're talking here. OpenAI has posted its own guide to prompt engineering, the way they think it should be done, which has some great advice like that in it. But there was a really great article on Medium a couple of weeks ago, from a woman who won a prompt engineering competition. Here, I've got it pulled up: the article is called "How I Won Singapore's GPT-4 Prompt Engineering Competition." In it, she talks about what she calls the CO-STAR framework. CO-STAR stands for context, objective, style, tone, audience, and response, and in the article she goes through what each of those means for her. So the context is something like, "You are a teaching assistant in an undergraduate statistics course"; you're setting up that context in some way. The tone is that you're going to be very supportive and really encouraging. The audience is first-year undergraduates who haven't had a math course in several years, because they stopped out of school for a while.

The thing that's maybe most interesting about these prompt engineering models is the idea of the response at the end, because you can ask the model to respond in different ways. Maybe you want it to respond with just a paragraph of text that a human being could read. But maybe you want it to respond in JSON, or in XML, or in some format that a program could pick up and process and do something else with.
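(Editor's sketch, not from the article or the conversation: one hypothetical way to assemble the six CO-STAR parts into a single prompt, including a Response section that asks for JSON so a program, rather than a person, can consume the model's reply. The field values echo Wiley's statistics-course example; all wording is invented for illustration.)

    # Illustrative sketch: build a CO-STAR prompt from its six parts
    # (Context, Objective, Style, Tone, Audience, Response).
    co_star = {
        "Context": "You are a teaching assistant in an undergraduate "
                   "statistics course.",
        "Objective": "Explain what a confidence interval is.",
        "Style": "Plain language with one short worked example.",
        "Tone": "Very supportive and encouraging.",
        "Audience": "First-year undergraduates who haven't had a math "
                    "course in several years.",
        # Asking for structured output lets a program process the reply:
        "Response": 'Respond only with JSON of the form '
                    '{"explanation": "...", "example": "..."}.',
    }

    prompt = "\n\n".join(f"{part.upper()}:\n{text}"
                         for part, text in co_star.items())
    print(prompt)  # paste into a chat model, or send through an API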