Campus Technology Insider Podcast May 2024
Rhea Kelly 16:10
It kind of harkens back to the liberal arts approach to education, you know, it's kind of ironic that technology would be the thing that brings back that mindset.
Shlomo Argamon 16:22
Very much so, very much so. That's exactly the case. Going back to your first question, one of the things that I love about doing research in AI and teaching AI is that it's a field which brings together technology with many of the liberal arts: philosophy, linguistics, psychology, and so forth. So very much so.
Rhea Kelly 16:48
What are some of the challenges that you've encountered so far, especially when you're integrating AI education across so many different disciplines that I imagine are traditionally in their own silos? Is that a challenge?
Shlomo Argamon 17:05
It hasn't been a challenge so far, but I've only been in this job for two months, so I haven't had much time to get much done yet. I expect that there will be some challenges moving forward along those lines. The biggest challenge that I've had so far is actually a wonderful challenge: pretty much everybody, without exception, that I've spoken to about what we're trying to do with AI across the university is tremendously enthusiastic and very positive about it. And many, many people have lots and lots of ideas on how to do it. So my biggest challenge goes back to your earlier question about prioritization: of all the things that we could try to do, what should we be focusing on, and what should we not be focusing on yet? That's actually a wonderful problem to have. I do anticipate that there will probably be some questions in terms of balancing the needs of some units against other units when we're looking at integrating things, but so far I've really been very happy to discover that that hasn't been much of an issue yet.
Rhea Kelly 18:28
So you're not running into any opponents of the focus on AI? Or have you?
Shlomo Argamon 18:36
Well, no, actually, not yet. It's possible the opponents of AI are lying low right now and waiting to attack at the right time, but hopefully not.
Rhea Kelly 18:48
What would you say are the most important AI skills for students to master going forward? I know you've talked about a lot of things, but can we synthesize that into the vision of the future, so to speak?
Shlomo Argamon 19:03
Well, if we really focus in on the main things that students need to learn about AI across all of the different disciplines, I think the first thing is how to use generative AI usefully for their work. And this involves a couple of different things. One is understanding its capabilities, the area that's often called prompt engineering: how do you craft inputs to these systems that will get you the kinds of outputs that you're looking for? This is an interactive process, so learning how to manage that process effectively is an important thing to learn. Part of that, which is not often talked about, is how you evaluate the results and how you develop confidence that what you're getting is indeed what you want, both in terms of a specific product and, if you're trying to develop something that can be used again and again for similar questions, how do you know that your result on this question will work for the next question that you ask it? So evaluating these results is also fundamental. And this relates back to a more general point, which is the issue of critical understanding of what you're seeing and what you're getting. Not just in terms of using AI as a tool: this critical understanding is a critical tool for being an informed citizen today, in the age of AI, much more so than in the past.
One of the things that generative AI has done is that it's destroyed, in a way, one fundamental assumption that we've held for thousands of years about communication, which is: if you see a text, a document that's well written, fluent, coherent, well organized, and seems to express expertise in the topic, you can assume that it was written by somebody with intelligence and knowledge. And unless they're actively trying to trick you, which does happen sometimes, you can place some reliance on the source of the document, because the document is well written and put together well. That's no longer true. AI can produce well-written things that are total rubbish, or worse than that, if that can be said. So we need to learn, and we need to teach our students, how to be critical, how to look at these things and say, "Okay, this looks good, but what is the argument that's actually being made here? What are the sources? What's the evidence that's being presented?" These are classical things that have been taught in liberal education for hundreds of years, if not more, but their importance is much greater now. We need to separate that evaluation from the fluency of what we're reading and look much more deeply at the specific evidence and whether we can really trust what we're seeing. What can we trust? What can't we trust? Those, I think, are really the key things that students in all areas need to learn about AI.
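[Editor's note: the iterative prompt-and-evaluate process Argamon describes can be sketched in code. This is a minimal illustration, not any particular product's API; the `generate`, `acceptable`, and `refine` functions are hypothetical stand-ins for a real model call, a real evaluation criterion, and a real prompt revision.]

```python
# Sketch of the prompt-engineering loop described above: craft an input,
# evaluate the output, refine the prompt, and repeat. All three helper
# functions are hypothetical placeholders, not a real AI library.

def generate(prompt: str) -> str:
    """Stand-in for a generative AI call; returns canned text for the demo."""
    canned = {
        "summarize the report": "A short summary.",
        "summarize the report, citing a source": "A short summary. Source: p. 3.",
    }
    return canned.get(prompt, "I don't know.")

def acceptable(output: str) -> bool:
    """Toy evaluation criterion: the output must cite a source."""
    return "Source:" in output

def refine(prompt: str) -> str:
    """Toy refinement: tighten the instructions in the prompt."""
    return prompt + ", citing a source"

def prompt_loop(prompt: str, max_rounds: int = 3) -> str:
    """Generate, evaluate, and refine until the output is acceptable."""
    output = generate(prompt)
    for _ in range(max_rounds):
        if acceptable(output):
            break
        prompt = refine(prompt)      # the interactive step: revise the input
        output = generate(prompt)
    return output

print(prompt_loop("summarize the report"))
```

The point of the sketch is the loop itself: the evaluation step (`acceptable`) is where the critical judgment Argamon emphasizes lives, and it is the part students must supply themselves.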