
Engaging Stakeholders

As AI tools are integrated into institutional systems and practices, policy development must be guided by thoughtful, inclusive consultation. It is essential to involve a diverse group of stakeholders from the outset to ensure that AI adoption is ethical, equitable, and aligned with institutional mission, values, and priorities.

Stakeholder inclusion is not a procedural formality: each decision regarding AI policy has implications for multiple campus populations—faculty, students, staff, and leadership alike. Including these voices early minimizes unintended consequences, strengthens transparency, and fosters trust across the institution.

Key Stakeholder Groups and Considerations

Leadership and Upper Administration

University leadership plays a central role in setting the institutional vision for AI. Leaders are tasked with balancing innovation against responsible governance and ensuring that AI tools are deployed responsibly and sustainably. AI presents opportunities for improving campus operations through predictive analytics, enrollment management, and resource allocation. However, without input from a wide range of campus constituencies, decisions may overlook ethical concerns, compliance risks, environmental impacts, or the broader student experience.

Administrators must establish clear governance frameworks, aligning AI implementation with legal standards, data privacy regulations, sustainability commitments, and public trust. Moreover, they have the opportunity to position the institution as a leader in ethical, public-good, and Green AI research initiatives, workforce preparation, and industry partnerships. Engaging a wide range of stakeholders ensures that strategic decisions reflect both institutional and community needs.

Teaching and Research Faculty

Faculty members are directly impacted by AI tools in both instructional and research contexts.

For teaching faculty, AI offers opportunities to enhance pedagogy and assessment, personalize learning experiences, and streamline administrative tasks. However, questions of academic integrity, assessment standards, transparency and disclosure of AI use, and intellectual engagement require clear policies and discipline-specific guidance.

Research faculty increasingly utilize AI for data analysis, modeling, and interdisciplinary collaborations. However, AI-driven research raises complex issues around data ethics, reproducibility, intellectual property, and authorship. Policies governing AI-assisted research must address these challenges while supporting innovation and academic freedom.

Faculty participation in AI policy development ensures that policies are practical, responsive to disciplinary differences, and consistent with institutional values. Institutions should also invest in professional development to equip faculty with the skills and knowledge necessary for responsible AI integration in teaching and research.

Professional Staff and Student Support Services

AI-driven automation has the potential to enhance efficiency across administrative functions such as admissions, finance, human resources, IT services, and student support. AI-powered chatbots, virtual assistants, and data analysis tools can improve student services, providing timely support and reducing workload for frontline staff.

However, these changes may raise concerns about job displacement, shifting skill requirements, and over-automation. Staff representatives from administrative units, libraries, writing centers, and academic support services should be included in policy discussions to ensure that AI supports, rather than diminishes, their roles. Institutions should prioritize staff development programs, emphasizing reskilling and ensuring that AI adoption enhances—not replaces—human decision-making and student-centered services.

Accessibility and Accommodations Offices

AI technologies have significant potential to advance accessibility through features like real-time captioning, speech-to-text tools, and adaptive learning platforms. However, these benefits will only be realized if accessibility considerations are incorporated from the start. In addition, AI-powered accessibility tools should be assessed for FERPA compliance and general privacy protections.

Representatives from the institution’s accessibility or accommodations office play a critical role in ensuring AI tools meet current electronic information technology accessibility standards (including compliance with the Americans with Disabilities Act (ADA)/Title II and NYS Executive Law Section 170f) and universal design principles. Failure to consult accessibility leaders early in the process may result in policies or tools that unintentionally reinforce barriers for students, faculty, or staff with disabilities.

Students and Student Groups

Students are among the most active adopters of AI technologies, using AI tools for coursework, content creation, problem-solving, and everyday tasks. Including student representatives—especially those from diverse disciplines, backgrounds, and interest groups—ensures that policies address concerns related to privacy, academic integrity, equity of access, and digital literacy.

Students are encouraged to engage directly with institutional AI policy processes. This may include participating in student government initiatives, serving on AI advisory committees, or organizing forums to discuss student perspectives on AI usage. Students should feel empowered to ask how AI policies affect their academic rights, data privacy, and learning environments, and to advocate for clear communication and ethical guidelines around AI adoption. Institutions should also offer AI literacy workshops and support student-led initiatives to cultivate responsible and informed AI use.

Ethics, Compliance, and Legal Affairs

AI adoption raises a range of ethical, legal, and regulatory questions. Issues related to data privacy, algorithmic bias, intellectual property rights, and compliance with local, state, and federal laws require careful attention. Representatives from the institution’s ethics committee, legal counsel, and compliance office should be engaged early to provide expertise on risk mitigation and institutional responsibilities.

In addition, Institutional Review Boards (IRBs) should be consulted to ensure AI-related research involving human subjects adheres to ethical standards, particularly regarding informed consent and data protection. Institutions should also raise awareness of evolving policies by external entities—including publishers, businesses, and digital platforms—that may incorporate AI-generated content clauses or open-access provisions into their privacy policies, potentially impacting ownership, confidentiality, and intellectual property rights.

The AI Legal Institute at SUNY (ALIS) is a pioneering initiative at the intersection of artificial intelligence and the law, furthering SUNY’s mission to harness AI for the public good. ALIS is a critical resource, providing comprehensive legal guidance and best practices for the responsible implementation and utilization of AI tools. A collaboration between industry leaders and legal scholars, ALIS develops and provides expert legal resources that organizations may adapt to maximize AI’s transformative benefits while enhancing institutional integrity and workforce empowerment. The ALIS Playbook, which can be requested from the ALIS website, provides template policies and guidance documents that any organization may adapt and tailor to their specific needs when implementing and utilizing generative AI tools.

Environmental Sustainability Offices

AI systems, particularly large-scale machine learning models, often require substantial computational resources, contributing to increased energy consumption and environmental impact. Including representatives from the institution’s sustainability or environmental affairs office ensures that the environmental implications of AI adoption are considered alongside other policy concerns.

These representatives can provide guidance on how to integrate green AI principles—such as energy-efficient computing practices, responsible data management, and carbon footprint monitoring—into campus AI strategies. Institutions should explore options for sustainable AI usage, including the use of renewable energy sources for AI operations, limiting unnecessary computational tasks, and educating the campus community about the environmental costs of AI technologies.

Information Technology Services, Computing Services, and AI Institutes

Information Technology Services (ITS), computing services departments, and any campus-affiliated AI research institutes play a crucial role in AI policy development. These units maintain the infrastructure and security protocols necessary for AI integration while ensuring compliance with data privacy, intellectual property, and cybersecurity standards. Their involvement is essential to address AI-specific challenges such as algorithmic bias, data confidentiality, and the ethical use of AI tools in teaching, research, and administration.

Institutions should regularly update IT policies to reflect the unique issues introduced by AI technologies, including establishing clear guidelines on data privacy, copyright, intellectual property, confidentiality, and responsible AI use. Involving AI research institutes can also help ensure that other stakeholders and policies remain informed by current developments in AI ethics and innovation.

Tailoring Stakeholder Involvement to Institutional Context

While the stakeholder groups outlined above form the foundation of inclusive AI policy development, each institution will have unique structures, cultures, and priorities. Additional voices may include representatives from Diversity, Equity, and Inclusion offices, community partners, alumni, external advisory boards, or specialized research centers, depending on the nature of AI initiatives and institutional goals.

The successful integration of AI in higher education depends not only on the technology itself but on the collaborative process guiding its adoption. Institutions must prioritize meaningful, sustained engagement with stakeholders across all areas of campus life. Through inclusive policy development, institutions can maximize the benefits of AI while safeguarding equity, transparency, and academic integrity.

This section was written with the assistance of ChatGPT (OpenAI, 2025).

License

AI in Action: A SUNY FACT2 Guide to Optimizing AI in Higher Education Copyright © 2025 by SUNY FACT2 Task Group on AI in Action; Kati Ahern; Nicola Marae Allain; Abigail Bechtel; Angie Chung; Billie Franchini; Meghanne Freivald; Ken Fujiuchi; Dana Gavin; Jack Harris; Keith Landa; Alla Myzelev; Victoria Pilato; Ahmad Pratama; Russell V. Rittenhouse; Carrie Solomon; Angela C. Thering; and Shyam Sharma is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License, except where otherwise noted.