Can Artificial Intelligence Expand Our Capacity for Human Learning?

And second, because human beings are social animals, we are always looking for companions. In fact, human culture results from the way individual intelligences share, with others, an environment of thought and creativity. Generative AI bots are engineered to present themselves to us as companions.

No matter how many times these bots repeat their scripted warnings that they are not in fact human — that they have no intentions, motivations, or original thoughts — they continue to use "I" to refer to themselves. And they construct answers in the form of smoothly intelligible language — out of their statistical analyses of how human beings use language, mind you! The extent to which these "I" statements and smoothly crafted language resemble not only human speech but expert, thoughtful human speech is shocking to anyone interacting with them for the first time, and even many times afterward.

When we encounter this use of language, we can't help inferring personality — and usually competence or authority — because human language represents not only the world but also the personality and perceptions of the person who's communicating something about the world. Even in my description here, I've slipped into language suggesting that generative AI technologies "do" something, that they have a "self," though of course I know better.

That, in all-too-brief form, is the historical context I wanted to include here. We've seen all this before, to a lesser extent, with the "Eliza effect," named for a computer program Joseph Weizenbaum created in the 1960s. Modeled on the nondirective therapy developed by Carl Rogers, Eliza appeared to converse with you simply by reflecting aspects of your questions and answers back at you. People who knew and wholeheartedly believed that Eliza was nothing more than a clever computer program nevertheless found themselves spending hours with it, engrossed in what seemed to be the companionship of a tireless conversational partner. Weizenbaum [https://www.historyofinformation.com/detail.php?id=4137] was so alarmed by this that he ended up writing a book, Computer Power and Human Reason, about his concerns.
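To make that mechanism concrete, here is a minimal Python sketch of Eliza-style keyword matching and pronoun reflection. It is not Weizenbaum's original program (which dates to the mid-1960s), and the patterns, templates, and pronoun table below are illustrative assumptions rather than his actual script:

```python
import re

# Pronoun swaps used to "reflect" the user's words back at them.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my", "are": "am",
}

# Keyword patterns paired with Rogerian-style response templates.
# These three rules are illustrative stand-ins, not Weizenbaum's script.
RULES = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i feel (.*)", re.I), "Tell me more about feeling {0}."),
    (re.compile(r"because (.*)", re.I), "Is that the real reason?"),
]

FALLBACK = "Please go on."  # Eliza's all-purpose prompt when nothing matches

def reflect(fragment: str) -> str:
    """Swap first- and second-person words so the echo reads as a reply."""
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def respond(statement: str) -> str:
    """Apply the first matching rule, reflecting the captured fragment."""
    for pattern, template in RULES:
        match = pattern.search(statement)
        if match:
            return template.format(reflect(match.group(1)))
    return FALLBACK

print(respond("I feel ignored by my colleagues"))
# -> Tell me more about feeling ignored by your colleagues.
```

Even a handful of crude rules like these, expanded into a few dozen patterns, was enough to keep Eliza's users engaged for hours.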

Grush: Is anything different now? Why are we still so drawn into these generative AI interactions?

Campbell: It's the sheer scale and elegance of the thing that's inspiring very rapid uptake among millions of users. You enter a question and get back a short, apparently well-written essay answering your query in personal, professional-sounding prose. Or if you experiment with Bing, for example, you'll get back answers that are more chatty, sometimes even a bit sassy or edgy, peppered with emojis. Exchanges like those, repeated often enough, overcome our awareness that we are actually "conversing" with a computer program.

And the illusion of companionship is irresistible, in some cases because there's also narcissism at work on the human end of the exchange. Because the illusion is so pervasive and convincing, people tend to believe that it's not only real but somehow accurate, like the daily horoscope but infinitely customized to whatever is on your mind.

Grush: So the scale of AI adoption and acceptance is at least starting to reveal new dimensions of known issues. What else are we going to encounter that may raise entirely new issues as we move deeper into AI?

Campbell: Three things, for starters. First, there's the potential for a destructive and irreversible detachment from reality as the culture becomes a hall of mirrors, some of them fun-house distortions. And human beings may in turn normalize the illusions because we have a need to believe in something. Geoffrey Hinton, who pioneered the ideas on which generative AI is based, recently resigned from Google [in early May 2023] over his concerns [https://www.nytimes.com/2023/05/01/technology/ai-google-chatbot-engineer-quits-hinton.html], stating, "It is hard to see how you can prevent the bad actors from using it for bad things." Generative AI's smoothly written, personable answers go down easy. Too easy!

