
Addressing Generative AI in Existing Institutional Policies

As institutions consider how to address the use of AI, it is worthwhile to examine existing policies that may apply to AI use. Most of these policies were developed years before the introduction of generative AI; for instance, many institutions have long-standing policies on academic dishonesty and academic computing. Many of these policies can be applied directly to AI use, and some institutions have updated them to address acceptable use of AI. As campuses consider developing universal AI guidelines or policies, it is helpful to identify campus policies that already address (or can be used to address) generative AI use. These existing institution-wide policies provide a foundation for consistent standards, ethical considerations, and risk management, and in many cases they are accompanied by professional development and training. Examining these policies can help institutions identify concerns that existing guidelines already alleviate and determine whether additional AI-specific guidelines or policies are needed. Examples of existing campus policies are included below; see Section VII for more examples.

A. Academic Integrity

Many institutions have updated their academic integrity policies to account for the potential use of generative AI tools for unauthorized purposes.

University at Buffalo

The Office of Academic Integrity at the University at Buffalo has issued Artificial Intelligence Guidance, a campus-wide statement that leaves instructors free to determine their own policies on artificial intelligence use. This approach provides flexibility but risks creating inconsistencies in AI expectations across campus:

UB has no universal policy about student use of artificial intelligence. Instructors have the academic freedom to determine what tools students can and cannot use in pursuit of meeting course learning objectives. This includes artificial intelligence tools such as ChatGPT. (University at Buffalo)

SUNY Canton

SUNY Canton revised its existing Academic Integrity Policy to include generative artificial intelligence as part of the definitions of academic dishonesty:

The State University of New York at Canton is dedicated to holding its academic community to the highest standards of academic integrity. We believe that in order for students to have successful careers in their chosen fields, they must master their own course work and not imitate or copy human or computer-generated content and claim it as their own. Academic integrity is essential to the success of the College’s educational mission, and violations of this policy are considered a serious matter. (SUNY Canton)

SUNY Cortland

SUNY Cortland offers faculty and students Generative AI Resources, including recommendations and ways to get started. However, the Academic Integrity Policy in the 2024 SUNY Cortland Handbook lists the use of generative AI under “other infractions” rather than under plagiarism:

Obtaining a paper or assignment from an online source, paper mill, another student, Generative AI, or other source and submitting it, wholly or in part, as one’s own work. (SUNY Cortland)

B. Campus Computing Policy

Many institutions have acceptable-use computing policies, but these may need to be revised to address AI use specifically. Academic integrity policies cover some issues, but an acceptable-use policy can also prohibit uses of AI outside the classroom, such as those covered by Governor Hochul’s ban on DeepSeek (New York State).

Boise State University

Boise State University provides a list of existing campus policies which inform AI use. This list includes the Information Technology Use Policy, which states:

Boise State University IT Resources are provided to support the university’s academic, research, and service missions; its business and administrative functions; and its student and campus life activities. Use of University IT Resources must comply with state and federal laws and regulations, executive orders, and policies of the Idaho Technology Authority (ITA), the Idaho State Board of Education, and University policies. (Boise State University)

Alfred University

Alfred University’s Responsible Use of Computing Resources Policy describes which activities are permitted and which are prohibited when using the university’s computing resources (computers, network, provided software, etc.). It opens with the following statement:

The computers and networks at Alfred University support our educational mission and promote communication among members of the AU community. Appropriate technology use can enhance your experience at AU. Unlawful or inappropriate use may result in the loss of privileges. The guiding principle for the use of computing resources at Alfred is respect for the rights of others. (Alfred University)

While the policy does not specifically mention AI, it goes on to describe the nature of prohibited activities (harassment or illegal behavior, for example). Such behavior is never permitted, regardless of the specific technology tool or program used.

C. Data Privacy and FERPA Requirements

Many AI tools may technically comply with institutional policies, but their providers are generally not responsible for users’ unauthorized use of AI. Without proper training and awareness, a user can unintentionally compromise FERPA-protected data. Data privacy is also becoming more complex: it is unclear how to make data truly unidentifiable when AI tools can draw connections across multiple data sources, and such tools may still be able to identify students from data only indirectly related to them. State-level guidelines are available but have not been adopted by SUNY (see Resources for this section).

In 2023, the U.S. Department of Education released Artificial Intelligence and the Future of Teaching and Learning: Insights and Recommendations. The report stresses the importance of protecting student data, as many of the AI tools being used in education were not developed for that purpose.

A central safety argument in the Department’s policies is the need for data privacy and security in the systems used by teachers, students, and others in educational institutions. The development and deployment of AI requires access to detailed data. This data goes beyond conventional student records (roster and gradebook information) to detailed information about what students do as they learn with technology and what teachers do as they use technology to teach. AI’s dependence on data requires renewed and strengthened attention to data privacy, security, and governance (as also indicated in the Blueprint). As AI models are not generally developed in consideration of educational usage or student privacy, the educational application of these models may not be aligned with the educational institution’s efforts to comply with federal student privacy laws, such as FERPA, or state privacy laws. (U.S. Department of Education, 2023, p. 8)

Additional Resources

License


AI in Action: A SUNY FACT2 Guide to Optimizing AI in Higher Education Copyright © 2025 by SUNY FACT2 Task Group on AI in Action; Kati Ahern; Nicola Marae Allain; Abigail Bechtel; Angie Chung; Billie Franchini; Meghanne Freivald; Ken Fujiuchi; Dana Gavin; Jack Harris; Keith Landa; Alla Myzelev; Victoria Pilato; Ahmad Pratama; Russell V. Rittenhouse; Carrie Solomon; Angela C. Thering; and Shyam Sharma is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License, except where otherwise noted.