Examples of Institutional Policies/Guidelines that Address AI Use
The second edition of Optimizing AI in Higher Education provides a sampling of several institutions’ guidance on syllabus statements and AI use as it relates to academic dishonesty policies, beginning on page 30. While many of these institutions provide helpful guidance for course-level policies, many do not have AI-specific university-wide policies, nor do they directly conflate the use of Generative AI with academic dishonesty (Aaron et al., 2024).
Some institutions do not have an overarching AI policy, in order to grant faculty members the academic freedom to determine which tools are appropriate for students to use as they seek to meet learning objectives. As mentioned earlier in this guide, the University at Buffalo is one example of an institution that provides guidance for faculty but does not have an institution-wide policy (University at Buffalo).
During its Winter 2025 Webinar, the National Council of Faculty Senates invited institutions from SUNY and across the country to present their AI policies and the procedures they implemented in order to develop those policies. Many of the participating institutions require faculty to include an AI statement in their syllabi and share information and guidelines related to AI. However, they do not have institution-wide policies that specifically police AI usage (National Council of Faculty Senates).
Empire State University
Empire State University is one institution that provides an informational AI Toolkit but does not currently have a university-wide policy:
Empire State University has not yet adopted university-wide AI policies. Therefore, AI policies and practices will vary between professors, courses, projects, and class assignments. Some faculty members may encourage or require the use of AI in an assignment. Others will prohibit it. Those decisions are based on the learning goals for the course and are at the heart of Academic Freedom and professional judgment. Be sure to communicate your expectations for each course so your students understand what is expected. (Empire State University)
Empire State University is in the process of developing and proposing an updated university-wide Technology Acceptable Use Policy which includes more restrictive guidelines for AI use.
Boise State University
Boise State University has a policy for AI use, and it requires individuals to use only AI tools that have been approved by the University. It also references a list of existing institutional policies that can be applied to AI use (referenced earlier in this guide). Data security is a top priority:
Boise State University has a number of policies that safeguard institutional data, which university faculty, staff, students, and affiliates must follow. Those using generative AI in their work should consider what data they are using and whether or not such data usage is prohibited by university policy or otherwise generally cautioned against.
Boise State University supports the responsible use of AI tools and has approved the education editions of the following: Zoom AI Companion, Google Gemini (Education edition only), Gemini for Google Workspace, and OpenAI ChatGPT (Education edition only). These tools have been vetted to meet the University’s standards for security, privacy, compliance, and legal requirements. If you wish to use other AI tools, they must first be submitted for review through the appropriate University processes, including Procurement, SARB, and legal review, and receive approval in accordance with those procedures. To ensure the safety and integrity of University data and systems, please avoid using unapproved AI tools on the University’s network, devices, or with your Boise State credentials (your Boise State username and password).
Even if your use is authorized, you should not enter personally identifiable information, confidential, sensitive, private, or restricted data into any generative AI tool or service. (Boise State University)
Southern Utah University
Many institutions have provided a set of guidelines for AI use, rather than a strict policy. Southern Utah University has created a set of principles that guide the institution’s use of Artificial Intelligence. SUU has committed to infusing its operations with the use of AI.
With a forward-thinking approach, Southern Utah University has consciously chosen to incorporate generative AI into its operations, guided by the following principles that encourage its responsible and ethical use. These principles reflect the University’s commitment not only to advance technologically but also to enrich SUU’s academic community, balancing progress with purpose.
- Responsible AI Development: The responsible design, development, and usage of generative AI are essential for its ethical applications and societal benefits.
- Human-Centered AI: AI should enhance human learning and creativity. AI should be primarily assistive and include human interaction.
- Academic Freedom: SUU recognizes that these principles should not be interpreted to diminish academic freedom.
- Accountability: Humans must be held accountable for their decisions and actions, even when assisted by AI.
- Purpose-Driven Learning: AI literacy and education should recognize the value of human knowledge, experience, emotion, and imagination, as well as foster fulfilling career paths and opportunities for students, faculty, and staff.
- Interdisciplinary Collaboration: Responsible AI development and implementation requires diverse expertise from fields such as ethics, law, social sciences, arts, sciences, and humanities.
- Equitable Access: Generative AI tools in higher education should be accessible and inclusive. This commitment to equitable access also includes ensuring that AI technologies are developed and implemented with consideration for diverse perspectives and experiences.
- Ethical Usage and Disclosure: Appropriate disclosure of AI-assisted work is essential to ethical usage. AI usage should align with the University’s applicable policies. Users of AI must be aware of their individual level of authorization to disclose information.
- Legal and Privacy: AI usage will adhere to data privacy and other applicable laws. Users of AI must be aware of the privacy risks.
- Continuous Assessment: A flexible and evolving response to the rapid advancements in AI technology will ensure SUU keeps pace with advancements in generative AI and that our policies remain effective and relevant. This continuous assessment includes conducting evidence-based, ongoing assessment of AI usage in higher education, evaluating its positive and negative impacts on learning outcomes. (Southern Utah University)
Stanford University
Stanford University’s Office of Community Standards has shared Generative AI Policy Guidance. This guidance states that unless instructors specify otherwise, using generative AI is treated like receiving help from another person. Students may use AI for support but may not use it to substantially complete assignments or exams, and they must acknowledge any AI use. Individual instructors may set their own policies in their syllabi. This example preserves the flexibility of instructor-set policies while providing a default policy that applies when no instructor guidance is given.
Absent a clear statement from a course instructor, use of or consultation with generative AI shall be treated analogously to assistance from another person. In particular, using generative AI tools to substantially complete an assignment or exam (e.g. by entering exam or assignment questions) is not permitted. Students should acknowledge the use of generative AI (other than incidental use) and default to disclosing such assistance when in doubt.
Individual course instructors are free to set their own policies regulating the use of generative AI tools in their courses, including allowing or disallowing some or all uses of such tools. Course instructors should set such policies in their course syllabi and clearly communicate such policies to students. Students who are unsure of policies regarding generative AI tools are encouraged to ask their instructors for clarification. (Stanford University)
Arizona State University
Similar to the institutions referenced above, Arizona State University encourages faculty to determine whether AI use is permitted or prohibited in their courses and to state this in their syllabi. ASU takes a different approach, however, by supporting faculty and staff through resources, professional development, and guidance.
ASU’s approach to artificial intelligence is rooted in Principled Innovation, empowering you to use AI thoughtfully and responsibly.
We’re committed to providing the tools and support you need to harness AI’s potential while prioritizing ethical and inclusive practices in your teaching.
AI can enhance your teaching and enrich the student experience. This website shares knowledge and resources to help you bring AI into your classroom with confidence and purpose. Here, you’ll find best practices, case studies, and workshops on incorporating AI tools in ways that foster academic integrity, engagement, and inclusivity. (Arizona State University)
While each of the institutions referenced above approaches Generative AI a bit differently, there are themes that many of them have in common, such as academic freedom and the requirement of AI syllabus statements. These and other themes will be discussed further in the next section.
Common Themes
As this subcommittee examined dozens of institutional policies and guidelines on the use of generative AI, we noticed several common themes. Overwhelmingly, institutions value academic freedom and allow faculty to determine the role that AI will play in their courses. For this reason, many institutions chose not to provide a university-wide policy governing how AI may or may not be used. Rather, these institutions provide a set of guidelines for use; many require faculty to include a syllabus statement which clearly communicates generative AI permissions and expectations within each course. This demonstrates an acknowledgement of generative AI and its potential for teaching and learning.
It is common for institutions to reference existing institutional policies which may impact or be impacted by AI use. Increasingly, institutions explicitly address AI in their academic integrity policies. Despite this, many institutions have declined to adopt AI detection software in favor of clear course-level policies and education; many AI detection tools, such as Turnitin’s, carry disclaimers due to a high level of inaccuracy. Many of the policies we encountered included an instructive element, such as a glossary of AI-related terms, a list of suggested guidelines, and data security considerations. Training is frequently available for faculty so they may learn how to successfully incorporate AI into their teaching practices.
Additional Resources
- Artificial Intelligence (AI) Policies, Guidelines & Resources, St. John’s University
- Generative AI Guidelines for MSU, Mississippi State University