Addressing Non-Generative AI Tools Used On Campus
While a great deal of attention is being paid to generative AI, it is also important to consider how non-generative AI is being used on campus and how to address it in a campus AI policy or guidelines. AI is embedded in many higher education products, such as learning management systems, and systems used for admissions, student alerts, and human resources. It is important to ensure data privacy, transparency in processes, and appropriate oversight and accountability. Institutional policies on data classification, retention, storage, and distribution should be followed when using or incorporating these technologies into campus operations. Institutions should also ensure compliance with relevant privacy legislation (FERPA, GDPR, HIPAA).
Below are some examples of higher education tools that use AI, along with potential policy considerations.
Predictive Analytics
Early warning systems that analyze student data to predict retention or dropout risks, such as Civitas Learning and EAB Navigate, rely on predictive analytics. These can be powerful tools for supporting student success and retention, but there are important considerations for addressing them in AI policies and guidance.
Ethical Use and Intervention
AI policies and guidance should support ethical use of these tools. For example, policies and guidelines should recommend that predictions inform supportive interventions rather than punitive measures. They should also establish limitations and safeguards to prevent misuse of, or overreliance on, predictive scores or classifications. Finally, they should call for training for faculty and staff on how to interpret and ethically act on predictive analytics data (Wargo & Anderson, 2024; Ekowo & Palmer, 2017).
Equity and Bias Mitigation
Predictive analytics can produce biased results. For this reason, it is important that AI policies and guidelines call for regular audits of predictive models to identify and mitigate potential biases that disproportionately impact certain groups of students. Moreover, they should recommend the implementation of review processes to ensure that predictive interventions support equitable outcomes for historically underrepresented or marginalized students (Lee, Resnick, & Barton, 2019; AI Now Institute, 2018).
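For illustration, one element of such an audit is a simple comparison of how often a model flags students in different demographic groups as at risk. The sketch below, written in Python with entirely hypothetical data, group labels, and threshold, shows the kind of group-level comparison an audit team might start from; a real audit would also examine error rates, input features, and actual outcomes.

```python
# Hypothetical illustration of one element of a bias audit: comparing the rate
# at which a predictive model flags students in different demographic groups as
# "at risk." All groups, scores, and the threshold below are invented examples.

from collections import defaultdict

# (demographic_group, model_risk_score) for a sample of students
predictions = [
    ("group_a", 0.82), ("group_a", 0.35), ("group_a", 0.65), ("group_a", 0.51),
    ("group_b", 0.90), ("group_b", 0.72), ("group_b", 0.68), ("group_b", 0.44),
]

FLAG_THRESHOLD = 0.60  # score at or above which a student is flagged "at risk"

counts = defaultdict(lambda: {"flagged": 0, "total": 0})
for group, score in predictions:
    counts[group]["total"] += 1
    if score >= FLAG_THRESHOLD:
        counts[group]["flagged"] += 1

for group, c in counts.items():
    rate = c["flagged"] / c["total"]
    print(f"{group}: flagged {c['flagged']} of {c['total']} students ({rate:.0%})")

# A persistent, unexplained gap in flag rates (or in error rates, which would
# require outcome data) between groups is a signal to review the model, its
# features, and its training data before acting on its predictions.
```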
Student Engagement
Students are directly affected by the use of predictive analytics, and it is important that AI policies and guidelines recognize and address these effects. Policies and guidelines should call for involving students in discussions about predictive analytics, soliciting their feedback and incorporating their insights into policy improvements. They should also recommend strategies to ensure that students have the option to understand, challenge, or request a review of predictive outcomes or recommended interventions (Ekowo & Palmer, 2016; Ekowo & Palmer, 2017; Piepgras & Gandara, 2024).
AI-Powered Admissions and Enrollment Management
Many admissions and enrollment management tools use AI-powered algorithms for applicant screening, ranking, or recruitment targeting. These include enrollment forecasting software like Slate or Salesforce Education Cloud. These can be powerful tools for recruiting students, but there are important considerations for addressing them in AI policies and guidance.
Ensuring Fairness and Avoiding Bias
AI policies and guidelines should address potential biases in AI-based admissions and enrollment tools. First and foremost, AI policies and guidelines should clearly state the institution’s commitment to diversity, equity, and inclusion, ensuring that algorithms reinforce these values. They should also call for regular audits of algorithms for biases that may disadvantage applicants based on race, gender, socioeconomic background, or other demographic factors. Finally, policies should establish clear criteria for evaluating fairness and explicitly outline actions to rectify identified biases (Ekowo & Palmer, 2016; Piepgras & Gandara, 2024).
Human Oversight to Prevent Unintended Discrimination
Biased algorithms, left unchecked by human judgment and intervention, can lead to unfair discrimination. Carefully designed AI policies and guidance can help mitigate these outcomes by including several elements. First, they should outline practices to ensure that AI recommendations supplement human judgment, clearly defining when human review is mandatory. Second, they should call for documenting the decision-making process, particularly in cases where human decision-makers override AI-driven outcomes. Third, they should assign responsibility for overseeing admissions AI tools to clearly identified individuals or committees. Fourth, they should clearly define applicants’ rights concerning AI-influenced admissions decisions. Finally, they should establish accessible processes for applicants to request explanations, appeal AI-influenced decisions, or correct erroneous data.
Automated Grading and Assessment
Many AI-powered tools can be used by instructors to streamline assessment and grading. These range from platforms like Gradescope or Turnitin’s similarity detection tools to automated scoring for standardized tests or homework problems. These can be powerful tools to help instructors manage their workload, but AI policies and guidance should ensure that there are mechanisms in place for students to appeal AI-driven grading decisions. This includes developing a formal, accessible mechanism that allows students to appeal grades assigned or influenced by automated systems. Policies and guidance should outline clear procedures for manual re-evaluation and human review upon student request, including timelines and responsible contacts (Wargo & Anderson, 2024; Strunk & Wilis, 2025).
AI-Driven Academic Advising and Career Guidance
AI-driven tools can be used to advise and guide students by suggesting courses or majors based on student interests and academic performance. These include tools like Stellic and Degree Compass. While students can benefit from the support these tools provide, there are important considerations for addressing them in AI policies and guidance. First, policies and guidance should call for clear boundaries between AI guidance and human advising. Second, they should make provisions for oversight to avoid reinforcing existing stereotypes and biases. Finally, they should provide clear and specific recommendations for securing student data and ensuring student confidentiality.
Facial Recognition and Biometric Monitoring
A variety of AI-driven tools that use biometrics to identify and monitor students have entered the marketplace. These include attendance tracking tools that use facial recognition and remote proctoring services like ProctorU, Examity, and Respondus. Instructors and administrators may see value in these tools for protecting academic integrity, but there are important considerations for addressing them in AI policies and guidance. First, policies and guidance should call for procedures for ensuring privacy, informed consent, and data storage security. They should address issues around bias and accuracy, particularly for underrepresented groups. Finally, they should address ethical considerations of surveillance and student rights.
Campus Security and Surveillance Systems
Many security and surveillance systems use AI. These systems may include campus security cameras that detect anomalies or suspicious behavior, or automated access control systems with facial or biometric recognition. These tools may improve security on campus, but there are important considerations for addressing them in AI policies and guidance. First, policies and guidance should address the ethical implications of these systems and call for protecting the privacy rights of students and staff. Second, they should call for transparency regarding the scope and usage of surveillance data. Finally, they should include provisions governing data storage, access rights, and retention.
Smart Campus Infrastructure
AI is part of many infrastructure systems on campuses. Smart campus infrastructure includes AI-driven energy management systems, such as building automation systems (HVAC, lighting, and occupancy monitoring) used to optimize energy consumption. These tools can support efficiency on campus, but there are important considerations for addressing them in AI policies and guidance. These considerations include data privacy concerning location tracking or behavior profiling and clear standards for responsible data collection and use. Policies and guidelines should also address sustainability and environmental impacts.
Research Analytics and Impact Evaluation
AI-powered research analytics tools can help faculty understand and document the impact of their work by evaluating research productivity, citation impact, and other performance metrics. Examples of these tools include Elsevier Pure and Clarivate Web of Science analytics. While use of these tools can support faculty in assessing their work, there are important considerations for addressing them in AI policies and guidance. First, AI policies and guidance should call for transparency and fairness in evaluation metrics. Second, they should make clear that promotion or tenure decisions will not disproportionately rely on algorithmic judgments. Finally, they should recommend safeguards against bias and unintended negative impacts.
Chatbots and Virtual Assistants (Non-Generative)
Chatbots and virtual assistants are widely used on campuses, including customer service-style AI tools that answer student questions about administrative processes, scheduling, or support services. Examples of these tools are AdmitHub and Ivy.ai. While these tools can support students’ success by helping them navigate campus resources, there are important considerations for addressing them in AI policies and guidance. First, policies and guidance should distinguish between the types of student interactions that are appropriate for AI and those that should be handled by humans. Second, they should establish standards for accuracy, accountability, and clarity in automated responses. Finally, they should call for procedures to ensure data protection and privacy around sensitive student inquiries.