Campus Technology Insider Podcast March 2024
Rhea Kelly 18:53
So were there any findings from the survey that surprised you?
Jenay Robert 18:58
Every time somebody asks me this, I think, I'm not sure what would surprise me anymore. When you do survey research for quite some time, you start to realize that anything is possible, anything can happen. But there were definitely some things that were really interesting, things that made me go, oh, okay, yeah, I can see that, that's cool, or not cool, as the case may be in some of these instances. One thing is that I think stakeholders in frontline roles, so thinking about faculty and staff, may not have a very accurate perception of their institutional leaders' perceptions of AI. The data kind of point to this idea that leaders are feeling a mix of optimism and, you know, speculation: they're cautious, some of them are pretty optimistic, and a few of them are super pessimistic about it. But when you disaggregate those data by job role, you can see that the frontline folks are more likely to say that their leaders are pessimistic. So this is interesting and a little concerning, in the sense that if people in frontline roles are working really hard to react to AI challenges, to retool their curriculum, or whatever the case may be, they should know that they have the support of their leadership. And again, this is a great place to collect that local data, because at your institution it might be different, but at least the large study is pointing to this potential for miscommunication about that general orientation toward AI. Linked to that, there was a very similar difference in terms of who folks thought was leading AI strategy at their institution. When we broke each stakeholder group down, leaders versus frontline folks, the leadership were more likely to say that the leaders were in charge of AI strategy, and people on the frontlines were more likely to say that people on the frontlines were in charge of AI strategy.
So again, that mismatch could seem kind of inconsequential at a surface level, but I think if you dig deeper into it, it could contribute to that sense of isolation, that disconnection from the resources and support that you would need. And that "don't know" across each category: I would love for people to read through the report and, every time they see a "don't know," jot it down. Oh, 30% don't know this, 28% don't know that, you know. So it just reinforces that need for communication and collaboration across units and job roles.
Rhea Kelly 21:38
In a way, I imagine seeing all those "don't knows" would make me feel better about not knowing things.
Jenay Robert 21:46
Yeah, right? I mean, it's the general state right now. We're all kind of figuring this out. And higher education in particular is an industry where we value expertise, we value this idea that some people are very much in the know. So I can't tell you how many times in my work over the last year related to AI, I would ask Educause members for their input on various AI topics, and one of the first things they'd say is, well, I'm not an expert. Over the course of the year, I've learned that one of the first things I have to say is, no one's an expert in this; there's truly no one I can go to who claims expertise in a particular question, whether that's, you know, the best way to implement generative AI tools in a writing course or something else. There's just no right answer to some of these things. And that, too, is reflected in the study. Toward the end of the report, we talk about appropriate uses of AI in higher ed and inappropriate uses of AI in higher ed. I specifically created that question because I understand that these things are fluid, that the word appropriate means different things to different people, but as a community we are starting to try to figure out how we want to use these technologies. So there again, in the spirit of "I don't know," the interesting thing is we saw things pop up on both sides of that coin. Assessing student work, for example, was listed in both columns: Some people were saying it's a great grading tool, and other folks said this should never be used to evaluate student work. And there were people who said AI should never be used for anything at all in higher ed. So, you know, I think in many ways we're all just figuring it out together, and there's not a single expert in this just yet.