How Can Schools Manage AI in Admissions?

Many questions remain around the role of artificial intelligence in admissions as schools navigate the balance between innovation and integrity.  

Artificial intelligence has officially taken root in higher education. Students are turning to ChatGPT and other generative AI tools for assignments and exams. Professors are assigning coursework that allows or even requires AI tools, and some are using AI as a tutor to give students formative feedback, all while institutions scramble to adopt policies and standards governing AI's use.

With AI increasingly prevalent across higher education, schools have had to quickly adapt and are gaining greater clarity and comfort around how students, faculty, and staff can leverage AI tools responsibly and ethically.

However, one area where institutions still face many open-ended questions is what role AI plays in admissions. How do schools know if applicants use AI tools when they apply? What can they do to enforce new policies and rules before a prospective student sets foot on campus? Does AI give an unfair advantage to applicants? Or can these tools be used to improve access for historically underserved populations?

As admissions leaders and administrators contemplate these difficult questions, they will need to find the right balance between innovation and integrity. By establishing clear, practical guidelines for AI tools from the start — and reexamining traditional admissions practices — schools can set expectations for prospective students and ensure AI tools are used ethically and effectively throughout their academic journey.

How AI Is Transforming the Admissions Landscape

There's a lot we still don't know about how students use AI when they apply to schools. But one thing is certain: Applicants are tempted to use AI tools to help write personal essays and other application materials, and many are already doing so.

In fact, nearly half of current undergraduate and graduate students in a recent survey said they would have used ChatGPT and other AI tools to help complete college admissions essays if these tools had been available. That's despite 56% of respondents saying that using AI tools provides an unfair advantage on college applications.

How can schools maintain a fair and robust admissions process in an academic environment where AI tools are increasingly prevalent and, in many cases, accepted? So far, admissions teams have remained generally skeptical of AI tools. In fact, many institutions have adopted AI detection tools to identify and reject AI-generated content in personal essays and statements. However, these detection tools are imperfect and have been known to falsely accuse students of cheating.

While schools are rightfully concerned about ethical standards and academic integrity, a zero-tolerance approach to AI is becoming increasingly difficult to enforce in a fair, consistent manner.

Moreover, these rigid admissions policies are out of step with the ways that students are allowed, or even encouraged, to use AI tools in their academic and professional careers. A candidate who relies on ChatGPT to write an entire essay, from start to finish, is very different from one who uses it to help with research or to organize their thoughts. It makes little sense to reject an otherwise qualified student for using AI in exactly the way they would be expected to in a class or a future job.

When used responsibly, AI can actually level the playing field for applicants from historically marginalized backgrounds or under-resourced communities, who may lack access to application support or be unfamiliar with admissions expectations. First-generation college applicants, for example, could use ChatGPT to review their application essays before submission.
