Diversity, Equity, and Inclusion (DEI) Evaluation Strategies
As generative AI becomes integrated into higher education, institutions must ensure that its use aligns with diversity, equity, and inclusion goals. The strategies described below can help faculty make equitable, reflective, and inclusive decisions about AI tools and assignments.
Assess Representation and Bias
AI tools can inadvertently reinforce systemic bias due to limitations in their training data (including the absence of knowledge about many non-mainstream topics and the lack of accurate, adequate, or unbiased knowledge about marginalized communities), the nature of model architecture, or problems in deployment and use. In higher education, where inclusive pedagogy and academic integrity are essential, AI tools must be evaluated for equitable representation across race, gender, culture, nationality, language, ability, and other identity categories. Because AI-generated content may not reflect a wide range of lived experiences, it must be used with the awareness that it can perpetuate false narratives, tokenism, or harmful stereotypes; users must take responsibility for AI-assisted content and actions.
Faculty and students should be made aware that they can provide structured feedback on generative AI outputs—flagging inaccuracies, biased language, or exclusionary content—through integrated feedback mechanisms within the platform. This user feedback loop plays a critical role in model refinement and offers a tangible way for academic communities to participate in the ethical shaping of AI systems.
Institutions should also consider building internal protocols for reviewing AI-generated content used in curriculum or student work to ensure alignment with inclusive education goals and ethical academic standards. By embedding these practices into the educational framework, higher education institutions can harness the potential of generative AI while upholding their commitment to diversity, equity, and inclusion.
Use Strategic Prompt Engineering
In addition to reactive feedback, faculty can employ strategic prompt engineering to proactively guide AI outputs toward more inclusive and representative results. This involves crafting prompts that specify diverse perspectives, inclusive language, and culturally responsive framing. For instance, educators might prompt AI tools with questions like, “How does this content align with our institution’s DEI principles?” or “Does this output consider diverse cultural perspectives relevant to our student body?” Teaching students to formulate such prompts fosters digital literacy and models responsible engagement with AI tools. Expecting AI tools to produce accurate, adequate, unbiased, or equitable output simply by “learning to ask” is naive at best. Even so, the capacity to interact critically with AI—to doubt, to reject, and to cross-check outputs against human sources—is far more valuable than the mere ability to “use” AI without such critical and cultural literacy.
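As a minimal sketch of how such critique-based prompts could be made reusable (the helper name, default wording, and structure here are illustrative assumptions, not part of any specific AI platform), a short template function can wrap any AI-generated draft in the kind of DEI-review questions described above before it is resubmitted to a model for critique:

```python
def dei_review_prompt(draft: str, audience: str = "our student body") -> str:
    """Wrap an AI-generated draft in a critique-based prompt that asks the
    model to evaluate the text against DEI-oriented questions.

    Hypothetical helper for illustration; adapt the question wording to your
    institution's own DEI principles.
    """
    questions = [
        "How does this content align with our institution's DEI principles?",
        f"Does it consider diverse cultural perspectives relevant to {audience}?",
        "Does it rely on stereotypes, tokenism, or exclusionary language?",
    ]
    numbered = "\n".join(f"{i}. {q}" for i, q in enumerate(questions, 1))
    return (
        "Review the following draft critically.\n\n"
        f"Draft:\n{draft}\n\n"
        f"Answer each question, citing specific passages:\n{numbered}"
    )

# Example: build a review prompt for a generated case study
prompt = dei_review_prompt("A day in the life of a typical college student...")
print(prompt)
```

Keeping the questions in a template like this makes them easy to revise as institutional DEI language evolves, and gives students a concrete starting point for writing their own critical prompts.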
Promote Inclusive Pedagogy and Andragogy
Inclusive pedagogy encourages educators to examine how power, privilege, and identity shape both the formal and hidden curriculum. It calls for intentional design of classroom norms, assignments, assessments, and interactions that foster equity and belonging. In higher education, particularly when teaching adult learners, inclusive andragogy builds on this foundation by emphasizing principles such as learner autonomy, relevance to life and work, the importance of prior experience, and the need for immediately applicable knowledge.
Faculty can actively apply these principles by designing learning experiences that integrate generative AI tools in ways that promote personalization, autonomy, and critical engagement. For instance, instructors might invite students to use AI to draft case studies based on their own professional contexts, then analyze the output for bias, accuracy, or cultural sensitivity. These activities both validate learners’ experiences and develop their capacity for digital and ethical literacy.
By using AI in ways that honor adult learning needs and inclusive design, faculty can co-create more equitable, relevant, and empowering learning environments. Ultimately, inclusive pedagogical and andragogical use of AI invites educators to reflect on how their design choices shape access, engagement, and outcomes for a diverse community of adult learners.
Engage in Continuous DEI Reflection and Training When Integrating AI
Faculty are encouraged to bring a DEI lens to the way they evaluate and use generative AI tools in their teaching. This includes considering how AI platforms may encode bias, whose perspectives are centered or excluded in AI-generated content, and how tool selection and assignment design impact equity in student learning. Given the rapid pace at which AI technologies evolve, faculty should approach each use as a new opportunity for critical evaluation. A tool or output that once seemed appropriate may shift significantly due to system updates, training data changes, or new use contexts.
Developing and sustaining this critical lens requires ongoing reflection, peer dialogue, and professional learning. DEI-focused training can help faculty enhance their ability to assess tools, design inclusive assignments, and model ethical AI use. Rather than a one-time intervention, DEI development is best treated as an iterative process that supports inclusive innovation in higher education.
Faculty might reflect on questions such as:
- Who benefits most from this tool’s design?
- Whose perspectives or voices might be missing?
- How transparent is the tool about its sources and limitations?
- Would this tool produce equitable results for all learners in my course?
Establish Feedback Mechanisms
Creating formal channels for students and faculty to report DEI-related concerns is essential for iterative improvement. Feedback mechanisms can include surveys, focus groups, and anonymous reporting tools. These insights should inform institutional policies and the evaluation of learning technologies, including AI. Faculty can contribute by participating in feedback channels and collaborating with administrators to ensure AI use remains aligned with shared DEI values.
Align AI Use with Institutional DEI Goals
The use of AI tools in education must align with broader institutional goals around diversity, equity, and inclusion. Evaluations should consider whether a tool supports or hinders campus-wide DEI efforts and accords with evolving commitments to social justice. Regular review and alignment ensure that technology enhances, rather than undermines, inclusive learning environments. Faculty can play an active role by advocating for alignment, participating in institutional review efforts, and fostering critical AI literacy that foregrounds DEI goals.

Getting the best results from AI tools requires prompting skills that go beyond simple queries. Techniques such as context-rich, chain-of-thought, and critique-based prompting help clarify context, demand reasoning, and reveal weaknesses in AI responses, provided students are also educated about cultural and other biases. These prompt-engineering skills not only improve output quality but also train students to detect bias, oversimplification, and dominant assumptions in AI responses. By reframing prompts or asking for multiple perspectives, students can expose cultural blind spots or disciplinary limitations. In this way, prompting becomes a form of critical literacy, essential for using AI both effectively and ethically.
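The multi-perspective reframing described above can be sketched as a simple prompt builder (a hypothetical helper for illustration, not any tool's actual API): given one question, it asks a model to reason step by step from several cultural or disciplinary viewpoints and then audit its own synthesis for overlooked assumptions:

```python
def multi_perspective_prompt(question: str, perspectives: list[str]) -> str:
    """Build a chain-of-thought, critique-based prompt that asks a model to
    answer one question from several viewpoints, then audit its own answer.

    Illustrative only; the perspective labels should come from the actual
    course context and student body.
    """
    viewpoints = "\n".join(f"- {p}" for p in perspectives)
    return (
        f"Question: {question}\n\n"
        "Answer the question separately from each perspective below, "
        "reasoning step by step:\n"
        f"{viewpoints}\n\n"
        "Then compare the answers and state which assumptions, voices, or "
        "cultural contexts your synthesis may have overlooked."
    )

# Example: surface disciplinary and cultural blind spots in one query
print(multi_perspective_prompt(
    "What barriers do first-generation students face in online courses?",
    ["a rural community college student", "an international graduate student"],
))
```

Because the critique step is built into the prompt itself, students practice demanding reasoning and self-correction from the model rather than accepting its first, often majority-framed, answer.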