Using Student Input in AI Adoption and Use
To encourage the most ethical and effective implementation of AI across the curriculum, students must be central to shaping AI policy and usage in their learning experiences. Recent student surveys from Western Governors University (WGU) reveal both opportunities and concerning disparities in how students perceive and engage with AI technologies. Understanding these nuanced perspectives is essential for developing inclusive AI strategies that serve all learners effectively.
Current Student Perspectives on AI
The most recent research from WGU’s Student Insights Council, which surveyed over 4,500 students, reveals that students approach AI with cautious optimism. Fifty-nine percent of students expressed positive attitudes toward the use of AI in education, while 24 percent remained neutral and 17 percent reported negative views (WGU Labs, 2025). This overall optimism, however, masks gender disparities that institutions must address. Women reported being 12 percent less confident than men in their ability to use AI tools, revealing a concerning gap that could exacerbate existing inequities in technology access and career opportunities.
These confidence disparities become particularly important when considering students’ comfort with different AI applications. While 58 percent of students expressed that they are comfortable receiving AI-generated feedback and 66 percent were open to real-time feedback during exams or assignments, only 35 percent trusted AI to grade their work (WGU Labs, 2025). This distinction between AI as a supportive tool versus evaluative authority suggests students maintain clear boundaries about where human oversight remains essential.
Students also expressed thoughtful skepticism about AI’s role in personal support systems. Only about one-third of students supported the use of AI for social or emotional support, with just 32 percent seeing AI as beneficial for emotional or mental health guidance (WGU Labs, 2025). This finding underscores students’ preference for human connection in areas requiring emotional intelligence and personal understanding.
Building on Local Insights
These student sentiments are reflected in smaller-scale institutional experiences that provide additional context for policy development. In Fall 2024, business students at Monroe Community College completed projects designed to address existing challenges with AI and recommend process improvements. Seven student groups each defined a question related to AI use, surveyed students and faculty (sample sizes ranging from 50 to 85 respondents), analyzed data, and proposed solutions.
The MCC findings aligned with broader national trends while revealing specific implementation preferences. Overwhelmingly, students and faculty supported the integration of AI into teaching and learning, with both groups using AI primarily for brainstorming and reviewing writing. However, students diverged from faculty in usage patterns: the percentage of faculty who did not use AI was double that of students (27% of students versus 55% of faculty). This usage gap highlights the importance of parallel training initiatives that simultaneously address both student and faculty competency development.
The MCC students proposed four high-level solutions to encourage desired AI use in higher education:
- Develop clear guidelines and policies that govern the ethical use of AI in the classroom.
- Require training for staff and students on the ethical use of AI.
- Position AI as a supportive tool to enhance learning rather than a replacement for academic work.
- Implement AI plagiarism- and cheating-detection tools.
Students also suggested specific implementations: courses on AI and its proper use, in-course content that reviews AI, clubs focused on AI usage, tutors who specialize in AI, AI resources on the college website, and accessible support and training.
Translating Student Input into Institutional Strategy
Understanding student perspectives should guide institutions in developing responsive AI policies. The WGU research reveals several key areas where student input should directly inform institutional decision-making. First, transparency in AI-supported learning is a top priority for students, with 92 percent stating it is important to know when they are interacting with AI (WGU Labs, 2025). This overwhelming consensus suggests that clear disclosure policies should be foundational to any AI implementation.
Student preferences also emphasized the importance of choice and human access: 84 percent wanted the option to opt out of AI-driven experiences, 83 percent believed access to a human is essential, and 79 percent wanted clear disclosure when content is AI-generated (WGU Labs, 2025). These findings indicate that successful AI integration requires maintaining human alternatives and ensuring students retain agency in their learning experiences.
Systematic Data Collection Strategies
To effectively incorporate these insights into ongoing policy development, institutions need systematic approaches to gathering student feedback. Campus-wide surveys can capture broad institutional trends while preserving anonymity, particularly important given confidence and competency gaps that might otherwise limit honest responses. Course-specific surveys, meanwhile, provide granular insights into how AI tools function within specific disciplinary contexts and pedagogical approaches.
Both MCC students and the broader WGU research point to similar implementation strategies that institutions can pursue based on student recommendations. These include developing clear guidelines governing AI use, providing mandatory training for both students and faculty, positioning AI as learning enhancement rather than replacement, and implementing robust oversight systems that maintain human connection and evaluation authority.
Addressing Equity Through Student-Centered Design
The gender confidence gap revealed in the WGU research requires immediate attention. Institutions must rapidly expand AI training and support specifically designed to address confidence disparities among underrepresented groups. This approach aligns with student preferences for personalized learning while ensuring that AI tools reduce rather than perpetuate existing educational inequities.
Student feedback consistently demonstrates that successful AI integration depends on centering learner voices throughout the design and implementation process. By systematically gathering and responding to student input, institutions can develop AI strategies that align with learner goals, maintain necessary human connections, and prepare students for AI-enabled workplaces while preserving the educational values students prioritize most.
Student Coursework on Topics of AI
Student coursework offers a non-traditional channel for learning from students. When students research, explore, and reflect on AI topics, they contribute to the scholarship available to inform AI policy and usage. Instructors can ensure that students develop AI literacy by designing assignments that help students learn to do the following:
- Locate AI-related information effectively using tools appropriate to their academic discipline (e.g., prompt engineering in ChatGPT, evaluating AI-generated summaries, comparing search vs. synthesis tools like Perplexity or Copilot).
- Evaluate AI outputs from a variety of tools and sources, with attention to authority, bias, origin, and validity (e.g., comparing human-written vs. AI-written texts; analyzing bias in LLM responses).
- Demonstrate ethical awareness of AI tools by reflecting on the implications of using generative AI in academic work, including privacy concerns, authorship, and misinformation.
- Create or critique AI-generated content to explore its value and limitations in communication, research, or creative expression.
- Engage in discussions or reflections about how AI is shaping knowledge creation and access, and how that connects to issues of digital equity and academic integrity.
- Use AI literacy projects (e.g., annotated prompt logs, source-checking activities, or tool comparison reports) to apply critical thinking across disciplines.
These approaches ensure that student coursework reflects not only familiarity with emerging AI tools, but also a thoughtful, ethical, and critically informed approach to their use.