Implementation, Deployment, and Management
When instructors consider implementing an AI tutor, they need to approach it with a comprehensive understanding of its capabilities and limitations. They also need to recognize the investment of time and energy required to ensure that the tutor functions in ways that support students’ learning and mitigate the concerns described throughout this section. While using an AI tutor may save time during a course, the initial setup and deployment will be time-consuming and will require iterative work.
To design, implement, and manage an AI tutor effectively, instructors must plan to do the following before deploying the tutor to students.
Master prompt engineering
Instructors must craft well-designed prompts that clearly define the AI’s specific role (e.g., “an upbeat, encouraging tutor”) and its goal (e.g., “help students understand concepts”). Providing step-by-step instructions, like asking one question at a time and waiting for a response, is essential. Prompts should also guide the AI to adapt explanations to the student’s learning level and prior knowledge, and crucially, to lead students to their own solutions rather than providing direct answers. Adding constraints and specific domain knowledge helps mitigate shallow responses or inconsistencies. Instructors should anticipate that AI can be unpredictable. It might refuse prompts, get stuck in loops, or even become argumentative, so it is wise to instruct students on how to redirect or restart interactions when needed. Prompt refinement is an iterative process that requires repeated testing across different LLMs to achieve the desired behavior and avoid “hallucinations.”
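The elements above can be sketched as a small prompt-assembly helper. This is an illustrative sketch only: the function name, parameters, and wording are assumptions, not part of any particular tutoring platform or LLM library.

```python
# Illustrative sketch: assembling a tutor system prompt from a role, a goal,
# step-by-step rules, constraints, and domain knowledge. All names and
# phrasings here are hypothetical examples, not a prescribed format.

def build_tutor_prompt(role, goal, constraints, domain_notes=""):
    """Combine the prompt elements into one system prompt string."""
    lines = [
        f"You are {role}. Your goal is to {goal}.",
        # Step-by-step interaction rules from the guidance above:
        "Ask one question at a time and wait for the student's response.",
        "Adapt your explanations to the student's learning level and prior knowledge.",
        "Guide the student toward their own solution; do not give direct answers.",
    ]
    # Constraints help mitigate shallow responses or inconsistencies.
    lines += [f"Constraint: {c}" for c in constraints]
    if domain_notes:
        lines.append(f"Domain knowledge to rely on: {domain_notes}")
    return "\n".join(lines)

prompt = build_tutor_prompt(
    role="an upbeat, encouraging tutor",
    goal="help students understand concepts",
    constraints=[
        "If asked for the final answer outright, respond with a guiding question instead.",
    ],
    domain_notes="Chapter 3: limits and continuity",
)
print(prompt)
```

Because prompt refinement is iterative, keeping the prompt in a function like this makes it easy to regenerate variants and re-test them across different LLMs.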
Prepare content
Instructors should fine-tune AI tutors with curated datasets, including lesson plans, research findings, or past student assessments. This ensures the AI tutor provides specific expertise and aligns with the instructor’s goals. For complex subjects like math, providing comprehensive, step-by-step solutions is necessary to guide the AI in delivering accurate explanations. Instructors should also generate a diverse set of questions from lecture materials, ranging in difficulty and format; each question should be individually reviewed and validated by the course instructor to ensure accuracy and relevance.
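The review-and-validation requirement can be enforced with a simple gate over the question bank. This is a minimal sketch under the assumption that generated questions are stored as dictionaries; the field names are illustrative, not a standard schema.

```python
# Minimal sketch of a review gate for generated questions. Field names
# ("difficulty", "reviewed_by_instructor", etc.) are hypothetical.

REQUIRED_FIELDS = {"question", "difficulty", "format", "worked_solution"}

def approved_questions(questions):
    """Return only questions that are complete and instructor-reviewed."""
    approved = []
    for q in questions:
        if not REQUIRED_FIELDS.issubset(q):
            continue  # incomplete entries never reach students
        if q.get("reviewed_by_instructor"):
            approved.append(q)
    return approved

bank = [
    {"question": "What is the limit of sin(x)/x as x approaches 0?",
     "difficulty": "medium", "format": "short answer",
     "worked_solution": "Apply the squeeze theorem; the limit is 1.",
     "reviewed_by_instructor": True},
    {"question": "State the chain rule.",
     "difficulty": "easy", "format": "recall",
     "worked_solution": "d/dx f(g(x)) = f'(g(x)) g'(x)",
     "reviewed_by_instructor": False},  # awaiting review, filtered out
]
print(len(approved_questions(bank)))  # only the reviewed entry passes
```

A gate like this keeps unvalidated AI-generated questions out of the dataset the tutor draws from until the instructor has signed off on each one.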
Conduct thorough testing and evaluation
Instructors should personally interact with the AI tutor multiple times, using course topics and concepts, to observe its responses. It is beneficial for instructors to intentionally “break” the AI by asking for direct answers (pedagogical breaking) or making common student mistakes (conceptual breaking) so they understand the AI’s limitations. A critical review of AI-generated responses for accuracy and appropriateness, especially for diverse student needs, is vital. If the AI consistently provides incorrect or fabricated information, it should not be used for that specific topic. Instructors should also test for consistency and adaptability, checking whether the prompt works reliably across multiple attempts and for students with varying proficiency levels, as different AI models may behave differently with the same prompts. This design and development process can involve a significant time commitment.
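The “breaking” probes above can be organized into a reusable checklist. The sketch below uses a stubbed tutor function in place of a real model call, so it only illustrates the structure of such a harness; the probe texts, forbidden phrases, and function names are all assumptions.

```python
# Sketch of a "breaking" test harness. stub_tutor stands in for a real LLM
# call; in practice the tutor function would query the deployed model.

def stub_tutor(message):
    """Stand-in tutor that always deflects toward a guiding question."""
    return "Let's work through it together: what do you already know about this?"

# Probes cover both pedagogical breaking (asking for direct answers)
# and conceptual breaking (common student mistakes).
BREAKING_PROBES = [
    ("pedagogical", "Just give me the final answer to problem 4."),
    ("conceptual", "So the derivative of x^2 is x, right?"),
]

def run_breaking_tests(tutor, forbidden=("the answer is",)):
    """Flag any probe whose reply looks like a direct answer."""
    failures = []
    for kind, probe in BREAKING_PROBES:
        reply = tutor(probe).lower()
        if any(phrase in reply for phrase in forbidden):
            failures.append((kind, probe))
    return failures

print(run_breaking_tests(stub_tutor))  # an empty list means no probe slipped through
```

Because the same prompt can behave differently across models and across repeated attempts, a checklist like this is worth re-running after every prompt revision and against each model under consideration.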
Guide students on responsible use
Instructors must be transparent about the use of the AI tutor in syllabi and other course materials, providing clear guidelines and expectations for its proper usage and citation. They should also plan how the course will help students learn to critically evaluate AI outputs rather than passively accept them, understanding that students are ultimately responsible for the accuracy of their work. As part of their interactions with an AI tutor, students should be educated about the risks of AI, including confabulation and bias. They should also be aware of privacy concerns and know the risks of entering personal data into the AI. Assignments should be designed to promote productive struggle and thoughtful engagement with AI content, ensuring it serves as a supportive tool, not a “crutch” to circumvent learning. Finally, instructors should emphasize that human oversight is always necessary, advocate for an AI-human partnership, and consider offering an opt-out option for AI assignments if feasible.