Ditch the DIY Approach to AI on Campus

Although it's starting to pick up steam, artificial intelligence (AI) adoption by educational institutions still lags behind that of other industries. According to one recent survey, this can be largely attributed to a lack of expertise and strategy among faculty and administrators. The technology can be daunting, and when combined with concerns about academic integrity, data security, and the tendency of AI to "hallucinate," or give faulty answers, AI faces significant headwinds at universities and colleges.

However, just as in any other industry, institutions that do not adopt AI will quickly fall behind. Large language models (LLMs) are here to stay and are rapidly becoming part of the standard toolkit for STEM work such as coding, data analysis, engineering, and biomedical research. On the administrative side, AI can help reduce costs and improve the student experience with IT help desk issues, registration and class scheduling, and other day-to-day operations. It's vital that educational institutions seize this opportunity to provide a safe and secure AI experience for students, faculty, and staff.

The question is: how can educational institutions do this systematically, securely, cost-effectively, and efficiently?

Building a Secure Yet Scalable AI Solution Fit for Your Institution

Many institutions are hesitant to use the more prevalent AI tools, such as ChatGPT, because of how those services handle training data. These tools pose significant security challenges to universities and researchers alike, since input data may be retained and folded into the dataset from which the AI can pull. This creates a risk that proprietary or private data will be inadvertently made public through the LLM, posing challenges from both a compliance and an intellectual property (IP) standpoint.

However, building your own LLM from scratch is prohibitively expensive for all but the wealthiest corporations and research labs, as it requires vast GPU server farms and millions of dollars of compute time to train the model. It's a non-starter for most colleges and universities, particularly those under budget and staffing constraints.

Utilizing open source LLMs, such as Granite, Llama, or Mistral, is a viable third way for universities and colleges to build a scalable AI solution tailored to an institution's needs. While training an LLM from scratch is expensive, the trained models themselves are comparatively lightweight and can run on a single server or endpoint on the network. These solutions can then be customized to a university's needs, with guardrails governing how external information is accessed, who has access, and how information is stored and made available.
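
To make this concrete, here is a minimal sketch of what hosting an open source model on a single internal machine can look like, using the Hugging Face transformers library. The specific checkpoint, hardware setup, and generation settings are illustrative assumptions, not recommendations:

```python
# Minimal sketch: an open source LLM served from a single internal server.
# The model checkpoint and settings are illustrative assumptions.
from transformers import pipeline

# Weights are downloaded once, then inference runs entirely on local
# hardware -- no prompts or data leave the institution's network.
generator = pipeline(
    "text-generation",
    model="mistralai/Mistral-7B-Instruct-v0.2",  # any open checkpoint works
    device_map="auto",                           # use a local GPU if present
)

prompt = "Summarize the add/drop deadline policy for undergraduates."
result = generator(prompt, max_new_tokens=200, do_sample=False)
print(result[0]["generated_text"])
```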

It's not enough to keep information secure. Colleges and universities must also be assured that the information they're receiving from the AI is trustworthy and accurate. Retrieval augmented generation (RAG), an approach that grounds an open source model's answers in your own curated data, can help. It can be customized to alleviate concerns about data being leaked or misused, and can create new user experiences based on per-app or per-user profile prompting. Implementing an open source model on an internal server with limited or no external access, built out with RAG entry points and other augmentations, can be done in just a few weeks.
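
The retrieval step at the heart of RAG is simple to sketch. The example below, a minimal illustration that assumes the sentence-transformers library and a few placeholder campus documents, shows how a question is matched against an institution's own data and how the retrieved text confines the model's answer:

```python
# Minimal RAG sketch: answer questions only from an institution's own
# documents. The embedding model, corpus, and prompt are placeholders.
import numpy as np
from sentence_transformers import SentenceTransformer

# 1. Embed the institution's internal documents once, offline.
docs = [
    "Fall registration opens August 1 for returning students.",
    "The IT help desk is open 8am-6pm in the library basement.",
    "Research data must be stored on university-managed servers.",
]
embedder = SentenceTransformer("all-MiniLM-L6-v2")
doc_vecs = embedder.encode(docs, normalize_embeddings=True)

def retrieve(question, k=2):
    """Return the k documents most similar to the question."""
    q_vec = embedder.encode([question], normalize_embeddings=True)[0]
    scores = doc_vecs @ q_vec  # cosine similarity (vectors are normalized)
    return [docs[i] for i in np.argsort(scores)[::-1][:k]]

# 2. Confine the model to the retrieved context.
question = "When does fall registration open?"
context = "\n".join(retrieve(question))
prompt = (
    "Answer using ONLY the context below. If the answer is not there, "
    f"say you don't know.\n\nContext:\n{context}\n\nQuestion: {question}"
)
# `prompt` is then passed to the locally hosted LLM described above.
```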

Single Point or Platform AI?

Choosing an open source AI solution will likely come down to either a single-point or a platform approach. The choice depends on your priorities.

A single-point deployment is useful if you're just getting started on your AI journey. It consists of a single LLM running on an internal server, where it can help with tasks such as IT help desk support, student registration, or student support services. It's efficient and allows IT teams to dip their toes into the world of AI, get comfortable, and scale from there.
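
As a rough illustration, a single-point deployment might wrap the locally hosted model in a small internal HTTP service. The sketch below uses FastAPI; the service name, route, and model are hypothetical choices for illustration:

```python
# Sketch of a single-point deployment: one locally hosted model behind a
# small internal HTTP service. FastAPI, the /helpdesk route, and the model
# are illustrative choices, not requirements.
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import pipeline

app = FastAPI()
generator = pipeline(
    "text-generation",
    model="mistralai/Mistral-7B-Instruct-v0.2",
    device_map="auto",
)

class Query(BaseModel):
    question: str

@app.post("/helpdesk")
def helpdesk(query: Query) -> dict:
    """Answer a help desk question; nothing leaves the campus network."""
    out = generator(query.question, max_new_tokens=150, do_sample=False)
    return {"answer": out[0]["generated_text"]}

# Run on an internal host only, e.g.:
#   uvicorn helpdesk_service:app --host <internal-ip> --port 8000
```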

