Determining What Educational Information Policies and Guidelines Should Include
Many of the policies and guidelines that we encountered contained information aimed at educating the campus population on topics related to AI use. This includes, but is not limited to, a glossary of AI terms, ethical use guidelines and considerations, copyright, data privacy information, and training resources. Further explanations and examples are included below.
A. Definitions and Glossary of Terms
It may be helpful to establish a common language and clearly define essential AI-related terms and concepts from reputable sources. Two examples of glossaries are the NYS Office of Information Technology Services Glossary (NYS Office of Information Technology Services, 2025) and The Language of Trustworthy AI: An In-Depth Glossary of Terms from the National Institute of Standards and Technology (Atherton et al., 2023). In addition to the resources above, definitions of key terms such as algorithm, artificial intelligence, and machine learning are included in the appendices of the first and second editions of the SUNY FACT2 Guide to Optimizing AI in Higher Education.
B. Ethical and Responsible Use Guidelines
Many institutions share information regarding responsible use of AI and data concerns. This information may be included in a computing/acceptable use policy, or it may be provided on its own. For example, Arizona State University has created a set of Digital Trust Guidelines for the use of AI in and out of the classroom. These guidelines address awareness of what information is being submitted to an AI tool, the ownership of the data, the nature of the usage, and whether the tool vendor provides policies on privacy, ethics, accessibility, etc.
Arizona State University is committed to the practice of Principled Innovation, embracing innovation with curiosity and wisdom. Exploration of AI tools is vital to keeping up with the rapidly evolving generative AI landscape. It is important to explore responsibly, respect university policy, protect your own privacy and the privacy of others, and keep in mind important intellectual property considerations. Navigating all of this can be challenging. The purpose of the Digital Trust Guidelines is to foster confident, responsible exploration.
These guidelines have been reviewed and accepted by Enterprise Technology, the Office of General Counsel, and the University Provost. (Arizona State University)
Each campus should consider whether it would be helpful to clearly articulate roles, responsibilities, and ethical considerations tailored to each stakeholder group. A comprehensive section on ethical considerations is included in the first and second editions of the SUNY FACT2 Guide to Optimizing AI in Higher Education. Below are some examples of potential guidelines for specific campus populations.
Administrators and Staff
Guidance for administrators and staff should include AI ethics in administrative processes (e.g., admissions, evaluations, hiring), accountability and transparency guidelines, and recommended training and resources.
Faculty and Instructors
Guidance for faculty and instructors should address integrating AI into teaching and research. Teaching-specific guidelines should emphasize maintaining academic integrity in coursework and assessments while avoiding rashly punitive measures, such as judging students’ work based solely on AI detectors, which are not reliable (Weber-Wulff et al., 2023; Pratama, 2025), are prone to false positives (Giray, 2024), or have been trained in ways that incorporate bias against certain groups of people (Liang et al., 2023; Pratama, 2025). Teaching guidance should also include recommended methods for disclosure of AI use in course syllabi.
Students
Guidance for students should include ethical guidelines on using AI for coursework, requirements on disclosure and attribution when using generative AI, information explaining that each course may have different policies in terms of generative AI use, and resources to improve data and AI literacies.
C. Information on Copyright Law (Related to AI-Generated Images and Content)
Below is an overview of information that campuses may find helpful in informing their AI use guidelines, including a recent court case regarding copyright and examples of protectable human contributions in creating content with AI. The copyright landscape around AI is evolving rapidly as it is informed by case law. The information below was gathered in April 2025 by subcommittee member Jack Harris, with the assistance of ChatGPT (OpenAI, 2025).
Human Authorship Requirement
As of September 2025, only human-created content is eligible for copyright protection. Fully AI-generated material, where a machine determines all expressive elements, is not copyrightable. However, human contributions—such as selecting, arranging, modifying, or integrating AI outputs—may qualify for protection (U.S. Copyright Office, 2025).
In a March 2025 ruling, the U.S. Court of Appeals for the D.C. Circuit upheld the Copyright Office’s denial of copyright for a fully AI-generated work. This ruling reinforces the principle that copyrightable works must involve human authorship (Orru, 2025).
Examples of Protectable Human Contribution
- Zarya of the Dawn
Zarya of the Dawn is a short comic book written by Kris Kashtanova and illustrated entirely with the artificial intelligence tool Midjourney. When a copyright dispute arose in 2022, the U.S. Copyright Office ultimately determined that the written text and the arrangement of the AI-generated images received copyright protection. However, the AI-generated illustrations themselves were not granted copyright protection (“Zarya of the Dawn,” 2025).
- AI-Assisted Image Collage
In this hypothetical example, a composite image with significant manual selection and editing would be protected by copyright. However, the raw AI-generated image components would not be protected (“Zarya of the Dawn,” 2025).
- AI-Aided Writing
When humans use AI in writing, only the human-edited and structured versions of AI-generated drafts can be copyrighted. Unedited AI-generated content is not protected by copyright (U.S. Copyright Office, 2025).
Determining Copyright Protection
An applicant for copyright protection must disclose the use of AI and document human contributions. The U.S. Copyright Office reviews applications on a case-by-case basis and may grant partial registration, as seen in the examples above. Federal courts rule on disputes, determining the validity and originality of human contributions (Gewirtz, 2025).
Unresolved Copyright Questions
Laws and policies continue to develop, and some issues remain unresolved. These include questions of the legality of using copyrighted works to train AI and defining authorship in human-AI collaboration (Gewirtz, 2025). The legal landscape will undoubtedly continue to evolve as use of AI grows and additional forms of human-AI collaboration emerge.
D. Data Privacy and Security Education
Each AI tool has its own data privacy and security policy. It is best to assume that unless stated otherwise, any data and interactions with AI tools might be reviewed by humans and can be used to train and further improve their models. It is the responsibility of institutions and individuals to ensure that use of AI tools does not violate legal and ethical considerations, including FERPA compliance.
Some AI tools provide different terms of service for enterprise-level products vs. consumer-level products. For example, enterprise-level products offered by ChatGPT (OpenAI, 2024) and Gemini (Google, 2025) include security and privacy policies ensuring that users’ data and interactions with these AI tools are not shared outside of participating institutions or used to train the company’s models. However, it is critical to understand that this enterprise-grade security and privacy policy is only applicable with an institutional-level agreement between an institution and the AI tool vendor. It is not applicable if individuals at your institution use these AI tools without this specific arrangement, even if they sign up for these services using their institutional accounts.
It would be advantageous for policies to include recommendations for secure handling and storage of data, especially sensitive or student-generated data. It would also be helpful to provide guidelines on what types of data can and cannot be processed using AI tools.
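As a concrete illustration of such a guideline, a campus could pair its policy with a simple screening step that flags sensitive identifiers before text is submitted to an external AI tool. The sketch below is hypothetical: the pattern list (emails, SSN-like numbers, and an assumed campus student-ID format) is an example only, and each institution would define its own screened categories based on local policy and FERPA obligations.

```python
import re

# Hypothetical screening patterns; each campus would define its own list
# based on local policy (e.g., FERPA-protected identifiers).
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "student_id": re.compile(r"\bN\d{8}\b"),  # assumed campus ID format
}

def flag_sensitive_data(text):
    """Return a dict mapping pattern names to the matches found.

    An empty dict suggests the text contains none of the screened
    identifiers; a non-empty dict means the text should be reviewed
    (and likely redacted) before being sent to an external AI tool.
    """
    found = {}
    for name, pattern in PATTERNS.items():
        matches = pattern.findall(text)
        if matches:
            found[name] = matches
    return found
```

A check like this is not a substitute for policy or training; it only surfaces obvious identifiers so that the human submitting the text can make an informed decision.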
E. Training and Educational Resources
It is important to direct stakeholders to resources for help, clarification, or additional information as part of an AI policy or guidelines. For example, many AI policies/guidelines include recommendations for both internal and external training, workshops, or modules from reputable organizations. They may also include resources or repositories for continuous learning on responsible AI use, or alternatives to using AI, updated on a regular basis.
Additional Resources
- Welcome To The Generative AI Short Course by NLM/NIH (Network of the National Library of Medicine, 2025)
- Google AI Essentials (Coursera, 2025)
- Microsoft’s Introduction to generative AI for trainers (Microsoft, 2025)
- Belgian AI scientists resist the use of AI in academia (Walraven, 2025)
- Against AI, by Anna Kornbluh, Eric Hayot, and Krista Muratore (2025)
- Higher Ed’s Rush To Adopt AI Is About So Much More Than AI | Defector, by Justin Raden (2025)