Can Artificial Intelligence Expand Our Capacity for Human Learning?

Generative AI's smoothly written, personable answers go down easy. Too easy!

Second, there are what researchers call emergent phenomena, by which they actually mean, "Oh, we didn't expect that to happen!" We've already seen troubling instances of generative AI making up false "facts" with spurious citations and, at worst, suggesting that people should leave their spouses or commit suicide. The generative AI developers insist they're continuing to improve the "guardrails" that will prevent false or otherwise harmful or polarizing answers from appearing, but significant damage has already been done, and I am not convinced that human beings will agree on what counts as "harmful" or "polarizing", especially where complex issues are involved. These are questions that must be addressed by human beings deliberately, openly, and deeply.

And third, the organizations using humanity as a test bed for these transformative leaps into perilous territory are huge for-profit corporations that, historically speaking, do not always aim for the betterment of humankind, to put it mildly and somewhat euphemistically. We're already hearing about AutoGPT, a tool that uses generative AI to execute an entire sequence of tasks or problem-solving assignments on its own. These technologies don't understand context, implications, or connotations, yet they'll be presented as tremendous time-saving conveniences. Can we trust them?

Grush: What you've been talking about here are all areas that should concern higher education institutions, but are there other concerns you might express that are even more specific to the teaching and learning context?

Campbell: Another emphatic yes! One thing that's coming up in the teaching and learning context is that the potential for superficial, thoughtless work or, at worst, cheating increases dramatically as machine-based generation drives the cost of that work down. We're already seeing this happen. And along with that sticky potential, we will also see education institutions relying on generative AI to automate their own communication and education tasks in ways that will drain meaningful human interaction from what we will continue to call, less and less authentically, "teaching and learning". In the end, so-called "evergreen" courses will run themselves, and automated grading of machine-generated assignments will result in self-certifying meaninglessness. Many of these things are already happening, but generative AI will exacerbate them by bringing costs down dramatically while permitting vast duplication of template-based course design.

And finally, as we become accustomed to machine-generated language, images, and so on, there are huge implications for the worth of human intellectual labor, and for the ability of creators of any kind to earn a fair profit from their labor. So, higher education will no longer be able to say either that it encourages human creativity and thoughtfulness or that it prepares learners for the workplace, since both of those areas will be substantially changed.

Grush: In all these concerns, are you referring mostly to "bots" — or are there other forms of generative AI that may emerge with potential issues for education?

Campbell: I've been referring to chatbots primarily so far, but these concerns extend to image generation — Midjourney, DALL-E, and the like — as well as to the emerging video and voice generation technologies. Deepfakes are especially concerning, but the larger issues always involve the ways we as human beings define and share what we consider reality.

