Campus Technology Insider Podcast January 2024

Howard Holton  28:42
Well, the worst decision is no decision.

Noble Ackerson  28:44
Right.

Howard Holton  28:45
That's the hard thing to get over. It's like, how do we know we're gonna make the right decision? Okay, cool. I don't know, but I can tell you 100% you're making the wrong one right now. Right? Okay. So we need information to make a good decision. Great, well, here are seven different ways we can get better information so you can make a decision. I would rather you walked out into traffic, read tea leaves, and made a decision than continue to not make a decision.

David Weil  29:09
I do want to challenge what you said, though, about it having to come from the top. I don't necessarily agree with that. I think it depends on what your objective is. If it's to turn the whole institution in a different direction, yes, that has to come from the top leadership. But you can make a big impact in other ways. I'm not at all advocating that our entire institution suddenly becomes artificial intelligence driven in everything it does. No. It's a tool, just like all these other tools we have out there. And I think those tools can be adopted in various places throughout the organization. You'll have some areas that will be further along than others. They'll be able to show how it's changing how they're doing work or how it's adding efficiency. Some will fail as well. So I think it's a collection of things, and it comes back to the overall objective we're trying to achieve at our institutions. Sure, the top has to at least embrace acceptance of change. But I think you can actually create significant culture change throughout the organization in other ways too.

Howard Holton  30:12
But if the president came out two weeks later and said, "I'm opposed to AI, this is evil, this is wrong, no institution should adopt it," you've now created that rift again. Right? So sure, they don't have to be the one teaching AI. But they have to be supportive of all of that change, and an advocate for that change: "We don't know what's right, we don't know what's perfect. Look to David, he and his team have a really good plan, here are seven places it's been successful." You still have to have that support. All too often, they're not actually supportive. And even unwittingly, they make a statement that's in opposition to the change, and it just crumbles and falls away. Right? So even when it's a small change, you still have to have buy-in at that level, and everybody's kind of got to be on the same team.

David Weil  31:04
Depends on the institution.

Rhea Kelly  31:05
I think, like anything else in change management in higher education, you're gonna need your champions. And those can come at a lot of different levels, is what I'm hearing from you all. I wanted to make sure we talk about the product side, because another thing I've noticed over the past year is that every single ed tech company out there is trying to market the fact that they have some sort of AI. So I was wondering if you could all talk about how this is changing how higher education should evaluate their product choices, and what considerations to keep in mind. And Noble, I feel like this is right up your alley, so I'm gonna throw that one to you.

Noble Ackerson  31:45
I just did a whole talk on that. But if the objective function of your organization is to, say, improve the efficiency of x, or help increase student success by y, then your evaluation tactics should be anchored on that objective function, right? You have your metrics that are going to guide that as well. So earlier, there was a question, I can't remember who asked it, that made me enumerate a couple of archetypes of customers, where they are and how they deal with this madness, this deluge of "We need AI." The first one is the enterprising companies or organizations that have gone out, tried it, realized really quick that their fancy demo is brittle in production, and they say, "This is moving too fast. We cannot keep up with this. No, we're not doing this. We're just going to wait." Number two are the ones that experience number one, and then they go, "Wait a second, why did we go through all of this? AWS Bedrock, Google Cloud Platform, Microsoft Azure, OpenAI, Salesforce do all this for us at the application layer. So let's build on top of what's already there." Stand on the shoulders of giants, as it were. And the evaluation tactics, again, bind to your objective function. So if it is, "We have a lot of data, and we want to make decisions," step one to get there is, "Do we have the right data? Let's fix that first." And there's a third group that just don't learn. Right? It's just a patchwork of tech debt, a model of something that obviously shouldn't be modeled. And their evaluations don't really bind to any set of goals, be it student success, be it operational efficiency, whatever that is. It's all over the place. So just to sum up, it breaks down into: "Do we have the right data? Do we have the confidence to deliver? Do we understand the core problems and needs that require us to invest in this thing now, or is it better to hold and wait for Microsoft Copilot to help solve that problem for us?" That may be the best thing to do.
