Campus Technology Insider Podcast August 2024

Shannon Brenner  15:36
For me, I'll echo a lot of the same things. In the college experience course that I teach, I've moved to a lot of audio/video in discussions and assignments, asking students to reflect. So even if there's a written assignment, I'll ask them to record something to go along with it, just to talk about the assignment process or how it connects back to their own experiences. I've found that to be really useful. Since I've moved to audio/video and started asking students to defend their choices on camera, I have not really had any issues with students using AI in that particular course.

I've also added an AI literacy unit to that course, and I'm working on getting it added to all of our college experience courses, because they reach almost all of our students. I think it's really important for our students to understand what AI is, how it works, what its limitations are, and how it might apply to their industries in the future. We're going to have to move to a point where we teach students in our courses how to use AI in a way that will help them when they get into their fields, right? So that's always in the back of my mind as well.

In terms of composition, it's a little more of a challenge, right? Because students have to write, and we're supposed to be teaching them those skills. But I've used AI to generate sample pieces for students to critique, which has been really useful. I have students really dig in and find all the flaws in a piece, but I don't tell them where it's from until after they've done the exercise. They're always surprised to hear it's from AI. That's really useful, because when students generate something with AI, they think it's good. Because it's got big words in it and the sentence structure looks really good, they tend to think it's going to meet the needs of the assignment. When they really dig in, they realize there's no substance, that it doesn't actually connect back to our learning materials, and that it doesn't do what I've asked in the assignment prompt. So that's been a learning experience for them.

I also like the fact that I can generate sample content for them to critique without them feeling like they're hurting someone's feelings. So sometimes I do tell them it's AI generated early on, and it works better than, say, a peer review. It's also a good way to introduce them to peer review before they actually look at what a peer has written: they learn how to provide feedback in a way that's constructive and won't hurt someone's feelings. So I've used it both of those ways.

And I use it sometimes to help provide feedback on student writing. I tell them this because I think transparency is important, but it's another time-saving tool. It allows me to spend less time providing inline feedback on things like sentence structure, so I can focus more on recording feedback for them and talking about the content of the piece, and whether it's clear that they're meeting learning objectives, which I think is a better use of my time and a better use of their time as well.


Rhea Kelly  18:30
Are there any challenges that you've run into along the way, in incorporating generative AI into your work?

Jordan O'Connell  18:38
I feel like the world's our oyster now with these tools, right? They're low-cost or free, and incredible, as long as you're paying attention and using them responsibly. Actually, one of the first things Shannon and I did: at Northeast Iowa Community College, we have a quality course design site. It's a Google site, and anyone can go to it. We wrote a lot about AI initially, because we needed to really wrap our heads around it. We wrote about the ethical considerations, and we tried to consider the student perspective, the staff perspective, and the faculty perspective, because we needed to understand those dynamics on some basic level. So we really had to wrestle with those things first. But once we did that work, I think our fear of what the technology was or would become was really muted, because we realized that we were still in control. We were the ones making the choices.

One challenge I'm seeing is that students are obviously trying to use it. They've tried to use Wikipedia; 20 years ago I was probably trying to use Google, but it was drilled into me by a lot of faculty not to do that. To echo Shannon's point, I think we have a huge responsibility to talk to students about responsible use of these tools at every opportunity we can, in the college experience course and everywhere else.

But a lot of students will make the argument that because they crafted the prompt, they can…. They haven't done that work I described of thinking through the ethical ramifications of what they're doing, and if they feel like they contributed to part of it, sometimes they feel like they contributed to all of it. If they put a little bit of themselves into what they did with AI, it's all them. And when you critique that, or suggest that it's not the outcome you're looking for, students don't have a framework for how to deal with it, because they feel like they were involved, right? It feels like googling to them, which is already part of the research process to some extent.

So I think we're going to have to make the case to students about why and how to use these tools responsibly. It won't come easily or naturally to them, so I think we have to be leaders and talk to them constantly about it. That's the number one challenge I see: students trying to use it, even reading what I'm telling them to do, but not really thinking through, philosophically, what's me and what's the computer? Where's the bright line between the two? It's super gray, even from my perspective. But for them, it's really tempting to say, "This is me, you know, I worked on this." That's my number one challenge, I would say.
