Can Artificial Intelligence Expand Our Capacity for Human Learning?
Grush: So looking even more deeply into education specifically, what are a few more of the questions you find yourself asking as you see AI emerging in teaching and learning?
Campbell: I ask myself how we can teach students not only how or when or why to use these technologies, but how to exercise their own good judgment in using them. The checklists and guidelines we offer our students are good, but there's no substitute for building wisdom. In fact, building wisdom should be at the center of what we term "education."
And I ponder how we can use this AI phenomenon as an opportunity to re-examine the ways we think about our institutional missions, and indeed about how education might best contribute to meaningful human flourishing in the present and for the future.
And I wonder how we can use the advent of AI as a "teachable moment" about what it means to be human, and about how human beings' innate search for understanding might be better encouraged and supported.
It's an interesting side note that as humans, we have worked to establish some safeguards and shared understanding internationally around nuclear weapons. How might we try to do something similar in education to get our human minds around AI, quickly and at scale, before our teaching and learning systems, higher education institutions, and even society itself may suffer irreversible damage?
Grush: You've been tracking most of the conversations about generative AI in education. Of course, we can't cover all that research in a brief Q&A, but are there select resources that you think might be potential guideposts for educators? And allowing for change in this developing environment, can we use this set of resources like a compass, not a map?
Campbell: One of my go-to experts is Gary Marcus, whose newsletter "The Road to AI We Can Trust" has been greatly helpful. Rowan Cheung's "The Rundown" gathers many sources of information in one newsletter, and that's also tremendously helpful. And I continue to read David Weinberger, a clear-headed writer whose work on the Web influenced me greatly. Of course, there are many other thoughtful, articulate writers who should be included in a comprehensive bibliography of AI literature.
Two recent New Yorker essays by Ted Chiang are essential reading, in my view. One [https://www.newyorker.com/tech/annals-of-technology/chatgpt-is-a-blurry-jpeg-of-the-web] is a particularly thoughtful analysis of the cultural erosion and impoverishment that may result from generative AI, even if "bad actors" don't take advantage of it. The other [https://www.newyorker.com/science/annals-of-artificial-intelligence/will-ai-become-the-new-mckinsey] warns that "…the desire to get something without effort is the real problem."
One more important resource I'd urge my colleagues to use is their own judgment, grounded in firsthand experiments with these technologies before employing them in teaching and learning. Be careful not to reveal any personal information — yes, read the privacy policies and terms of service carefully! But get onto these platforms and try asking questions and follow-up questions, as well as posing difficult problems. See what you think. Take a look at experiments like Khan Academy's "Khanmigo" prototypes, discussed by Sal Khan in a TED Talk. And then read Peter Coy's New York Times op-ed on that project, in which Coy finds he can manipulate Khanmigo into simply giving him the answers, instead of tutoring him Socratically, merely by "playing dumb."