Campus Technology Insider Podcast July 2024

Rhea Kelly  05:07
It strikes me that you started that first report before the big explosion in generative AI. What was it like for that to throw a wrench into everything you were thinking and writing about?

Kevin Johnstun  05:19
Well, I think it helped us to situate the explosion in a trajectory, right? We had been tracking and seeing the ways in which branching had given way to deep learning. So we were tracking that kind of thing, and then LLMs are in a similar family, and we could see the ways in which one thing had led to another. But I think it also helped us to help the community think beyond just LLMs, because they were not the only technology to see tremendous progress in the last several years, and we knew that because we had been tracking the whole family of technologies that exists under AI.

Rhea Kelly  06:09
That's super interesting. So in the new report, one of the things that comes through really strongly is an emphasis on a shared responsibility for building trust in AI tools. Can you talk more about the key issues there?

Kevin Johnstun  06:24
Yeah, so I think it's really important to think about an ed tech tool as entering an ecosystem. It's not just about dropping it into a classroom; those classrooms are supported by a whole bunch of figures, including educators, but also district administrators and state folks and even federal folks. So what we wanted to say was, ultimately, there's a shared responsibility amongst all of those members of the ecosystem to make sure that this, at the end of the day, delivers a great service to kids, or to learners in the higher ed setting. And so we really wanted to push developers' thinking on: what are my responsibilities, and what are the responsibilities that I share? And in cases where I share those responsibilities, how am I supporting people in executing on them? So one of the things is, are you building AI literacy among the people you're working with? Because even though it's not new to you if you're an experienced AI or ML developer, it's going to be new to them. So how are you helping them understand, how are you helping them make informed decisions as they approach the implementation of AI technology?

Rhea Kelly  07:44
Could you maybe walk through the main recommendations for developers, sort of at a high level?

Kevin Johnstun  07:51
Yeah, happy to. So in the report, we have this kind of circles graphic. It's going to be hard to describe on a podcast, which is a good reason to go look at the report, so you can see the circles. But it has a series of overlapping circles, and the first one says, ultimately, these tools have to be designed for education. As we know, AI is a broad-use technology; in fact, it may be one of the broadest-use technologies. So what we want to think about is, how are you really making sure that whatever AI system you're using is purpose-built for an education context? That means a lot of things, but one thing I would particularly draw attention to is that it means working with educators in the design and development of education materials. These platforms have to be able to integrate into classrooms. Educators are still going to ride their e-bike, right? They're still going to be in charge. So we've got to work with them to make sure that it can work and that it's built for the right kind of classroom environment. And then also, we really wanted to make sure that people knew and were grounded in the literature around modern learning principles. We're not just expecting AI systems to drop knowledge into learners' heads; learning is socially situated, it's context driven, it's best when it's authentic, when you have opportunities for low-stakes formative assessment, all that kind of stuff. So we really wanted to say, hey, if you're a new entrant especially, take some time to familiarize yourself with the education literature so that you can build that into your tool, rather than just doing what's perhaps the most convenient thing with the set of technologies.
Then in the middle, there are three overlapping pieces that we think are the core of how you actually get to earning trust. The first is providing evidence. That's both the evidence of why you think this is going to work, meaning what pilot studies or other studies you're drawing on to say this is the right way to go, and also that you have a plan for how you're going to build evidence as you move along, and how you're going to help people know that this product is, in fact, improving learning experiences, and for whom and under what conditions it's doing that. Then there's the safety and security piece, which I think is just absolutely crucial for developers to understand. And this is evolving; we understand that it's evolving. As new tools emerge, they have different vulnerabilities, so it's really important that developers stay on top of those and that they're building products with assurances for folks that these products are going to be safe and reliable, and that they're not going to have vulnerabilities, both for the larger infrastructure and in terms of things like toxic outputs. And then there's the third one, which is advancing and protecting civil rights. Anyone who's followed the history of AI knows that these systems can absolutely have algorithmic bias programmed into them, and we have to make sure that we're clear that civil rights laws absolutely apply to education settings and that people are thinking about that front and center.
I think it's important to note that we're not offering any new rules here. This is simply helping people think about some of the things that already exist, and helping them start that process of asking, what does this mean? But absolutely, it's crucial that we protect civil rights and that we advance equity with these tools. And then at the end of this process, once you've designed for education and done those middle three things, we think you have a chance to really be transparent about what you've done and to ultimately earn trust in doing that. So work closely with your stakeholders so that they can know what's going to happen and how their interests were secured. And then ultimately, you can have a kind of trust throughout the ecosystem.

