Developing Syllabus Statements for AI

Generative AI is now part of many students’ daily lives, and it is increasingly integrated into the tools they use to read, write, and complete the everyday tasks of learning. Students also encounter vastly different expectations around AI use as they move from course to course, or between internships and academic work. Clear, course-specific guidance helps students understand what is permitted, what is not, and, most importantly, why those boundaries matter for learning.
Students benefit when you explain not just what the rules are, but why certain uses of AI might harm (or support) the specific learning outcomes of your course or an assignment (e.g., this example policy, which pairs allowable and non-allowable uses with the learning rationale). When students can see that the goal is to protect skill development, critical thinking, and authentic learning, they’re better able to make choices that follow the policy in meaningful ways.
When I posed the question, “Can we come to a single, consensus class policy we all agree on?” several students said no. That moment of disagreement was a gift. It opened up a space to explore divergent opinions and values. - Demian Hemmel, AI in the Classroom
AI guidance is most effective when it is reinforced through course design. Alongside a syllabus statement, consider adding assignment-level directions that clarify how the policy applies to that task, sharing brief explanations of the learning tradeoffs involved (e.g., CELT's Podcast How is AI Helping (or Hindering) Learning?), and building in low-stakes opportunities for students to reflect on their process and learning. You can also revisit expectations during the term and, in some cases, refine them with student input on their experiences with AI use.
Example AI Policies
Below you’ll find three example AI policy approaches: restricting, directing, and encouraging AI use. Each is followed by examples from Tufts. As you review them, consider:
- Values & Goals: What does this policy communicate about what the instructor values in learning?
- Student Perspective: If you were a student in this course, what questions or confusion might arise?
- Implementation: How would an instructor know if the policy is being followed? What assignment designs or structures might support it?
An Example Statement Restricting AI Use
|
The central learning goals of this course are for you to develop your own thinking, writing, and analytical skills. These capacities are built through the cognitive work of grappling with ideas, drafting and revising your own prose, and developing your own analytical voice. Using AI tools to generate text, arguments, or solutions bypasses this essential work and prevents you from building these skills. For this course, AI tools may not be used at any stage of your work—including brainstorming, drafting, outlining, revising, or editing. All submitted work must be entirely your own.
What counts as generative AI? For this course, we are referring to tools such as ChatGPT and other chatbots, as well as AI features embedded in software that generate, rewrite, summarize, or synthesize text. If you are unsure whether a tool or feature is considered generative AI, please ask before using it.
Why does this matter for my learning? Building these skills requires productive struggle—the work of trying, revising, and clarifying your thinking over time. That process is how you develop critical thinking patterns and the ability to articulate complex ideas in your own voice. If AI does that cognitive work for you, you miss opportunities to practice and strengthen the skills the course is meant to build.
Accessibility and support: If you use technology as part of an approved accommodation or accessibility support, please connect with me early so we can clarify expectations in a way that supports your learning.
My commitment to you: I’ve designed assignments in this course to help you build skills that will serve you long after you graduate. If you’re struggling, please reach out—I’m here to support your learning. |
Artificial Intelligence (AI) has recently gained academic attention both for its ability to facilitate cheating and its potential to facilitate learning. Tufts University does not have an institution-wide policy on AI use in classes, on assignments, etc. Therefore, each class will have its own policies; it is your responsibility to be aware of the differing policies amongst your classes.
“Generative Artificial Intelligence” (GAI) includes, but is not limited to: Bing Chat Enterprise, ChatGPT, Google Bard, any other Large Language Model (LLM), DALL-E, Midjourney, any other diffusion model, and other algorithms/models/methods that can generate text, images, video, music, voice, program code, or other things. Submitting work created by a Generative AI as your own in any assignment is considered plagiarism, and therefore an academic integrity violation, just the same as copying work from any other source. (The only exception is if the assignment instructions explicitly tell you to do so.)
While GAI has the potential to benefit learning in the same ways collaboration with peers does (brainstorming ideas, getting feedback, revising or editing your work, etc.), the concern is that GAI output can replace your own voice and thoughts, reduce your ability to analyze ideas, and shortcut the learning process. Because it is difficult to determine for yourself when GAI is facilitating versus hampering your learning, the current rule in this class is to NOT allow the use of Generative AI on assignments. If a more refined approach is determined, this statement will be updated and an announcement will be made in class.
An Example Statement Directing AI Use
|
In this course, AI tools may be used as learning aids to support your process, similar to how you might collaborate with peers or a TA for brainstorming, clarifying concepts, and getting feedback. However, all submitted work must reflect your own understanding and effort. This means you should be able to explain your reasoning, defend your arguments, and recreate your work without AI assistance.
Permitted uses (as learning support)
If you use technology as part of an approved accommodation or accessibility support, please connect with me so we can clarify expectations in a way that supports your learning.
Not permitted
If you're unsure whether a particular use is appropriate, ask me! I'm here to support your learning.
Disclosure (AI acknowledgment): If you used AI beyond basic spelling/grammar checks, add a brief note at the end of your assignment:
Never include AI-generated text verbatim without quotation marks and citation.
Important cautions: AI can produce confident but inaccurate information (including invented references) and may reproduce bias. You are responsible for verifying accuracy and ensuring your work reflects your own understanding and voice. You should also reflect on the impact of any AI use on your learning process. In the end, you should be able to walk through your problem-solving process, explain the choices you made, revise or extend your work without AI, and defend the arguments in your work without referring back to AI. |
What you should know about AI platforms
AI writing platforms have become savvy enough to write essays, create apps, and handle nearly any writing task that relies on linguistic patterns. They may be particularly helpful in the following situations, as outlined in AUA’s ChatGPT (AI) in Education guide:
- improving equity, since students can get personalized learning and scaffolding;
- saving time, e.g., when brainstorming or troubleshooting;
- motivating learners when they feel stuck with a certain task; and
- developing certain critical thinking skills.
You should also be aware that:
- AI platforms are not 100% accurate and may be trained on outdated data. They will even make up information that sounds convincing but is not real.
- AI platforms rely on language patterns to predict what an answer to a prompt should look like. They do not “think” about the right response like you do.
- AI platforms are biased. They have been trained with datasets that contain dominant worldviews (many of which follow several planes of power that you may not want to ascribe to) and will replicate those ways of thinking.
- AI platforms often depersonalize your writing. Overreliance may lead to a lack of voice and distinctive style, which hurts effective written communication.
Guiding Principles for AI Use in PSY 32
- Cognitive dimension: Working with AI should not reduce your ability to think clearly or shortchange your learning process.
- Ethical dimension: You should be transparent about your AI use and make sure it aligns with academic integrity and research ethics.
Our PSY 32 Course Policies for using AI
- AI use is allowed for the brainstorming and preparation of assignments and project milestones. Students may use AI platforms to help prepare for cumulative assignments and project milestones (but not quizzes or QALMRIs).
- E.g., you may use ChatGPT to create a list of 50 possible research ideas, edit grammatical mistakes, or correct your APA citations.
- Examples of unacceptable use of AI in PSY 32: asking ChatGPT to answer your assignment questions or outline a project proposal.
- AI use must be tracked and acknowledged in your submissions, whether at the interim or final stage. Include that information as part of your CRediT statement and highlight the AI-generated text in a different color.
- Any writing, media, or other submissions not explicitly identified as AI-generated will be assumed to be original to the student(s). Submitting AI-generated work without identifying it as such is a violation of Academic Integrity.
Per ChatGPT’s current data usage policy, you must explicitly opt out of letting ChatGPT record your data for model training. But of course, our decision to opt into or out of technology has societal and environmental implications. Part of the 2023 Writers Guild of America strike was about the potential of AI tools to replace human writers. Works by artists have been used to train AI algorithms without consent or compensation. If your use of AI tools in PSY 32 is consistent with our course policy, your work will not be graded differently from the work of those who opt out of said tools. What matters is the broader narratives that you align with, and that is a ball I leave in your court.
[NOTE] A large portion of the text in this section is adapted from Dr. Joel Gladd’s sample AI policy. I also consulted Teaching@Tufts’ ChatGPT guidance and BU’s CDS Generative AI Assistance (GAIA) Policy.
An Example Statement Encouraging AI Use
|
This course treats generative AI as a professional tool you'll increasingly encounter in academic and workplace settings. You are encouraged to use AI strategically to explore ideas, get feedback, and strengthen your work. The goal is to help you develop critical judgment about when AI use deepens learning and when it short-circuits the practice an assignment is designed to provide. This judgment is meant to transfer to professional environments where you may increasingly be asked to integrate generative AI into your work. However, it's essential that your own thinking, judgment, and disciplinary reasoning drive all work you submit, and that your work reflect your own understanding and voice. You should be able to explain and defend any AI-assisted work in your own words, including the key choices you made and why.
Recommended Uses: These align with practices that support deep learning:
Inappropriate uses:
Your Responsibilities:
Required Disclosure: Documenting AI use is an important professional practice, though norms are continually evolving. In this course, if AI influenced your thinking, structure, or content in any substantive way, document it. Routine spell-check doesn't require documentation, but if you're unsure whether your use crosses that line, include a brief acknowledgment—it's better to over-document than under-document. For any assignment where you used AI beyond routine spelling/grammar, add an AI Acknowledgment that includes:
Place any AI-generated text in quotation marks with attribution. If you paraphrase or build on AI ideas, acknowledge this and describe how it shaped your work. Learning Resources: If you're looking to develop your AI skills, explore resources like Anthropic's AI Fluency course or discuss strategies in office hours. |
Policy on the Use of Artificial Intelligence (AI)
In this course, the use of AI is encouraged to enhance your learning. We hope you will adopt a growth mindset around AI, continuously exploring new technologies and best practices for applying them. You should feel free to use AI tools to deepen your understanding of learning materials, brainstorm, get feedback, synthesize, revise, and/or edit your work. However, AI is a supporting tool rather than a replacement for human creativity and critical thinking. You should therefore integrate AI-generated outputs thoughtfully into your work, particularly for course assignments and deliverables. Submitting any work generated by an AI program as your own is a violation of Tufts Academic Integrity policies.
There are specific recommendations on how to use or not use AI on each assignment. In addition, please consider the following guidelines if/when using AI:
- Familiarize yourself with how AI works and its limitations, including bias and production of false or inaccurate information.
- Identify and cite the AI tools you use. If the AI tool you use allows you to generate and/or share a link to the conversation, you should include it in the reference. Here is an example without the conversation link:
- Reference: OpenAI. (2024). ChatGPT (Mar 14 version) [Large language model] http://chat.openai.com/chat
- Be transparent about how you used it, and include an acknowledgment section and/or as footnotes. Here is an example:
- AI Acknowledgment: We collectively authored this text and used Bard to 'review for cohesion, grammar and spelling, and suggest 3 revision recommendations.' The AI's review corrected our grammar and spelling, and it provided us with a few ideas on how to structure our introduction and conclusion sections. We didn’t agree with one suggestion, but incorporated the other two into our writing, as they made the content clearer.
Additional Resources
Planning and Assessment Design
- Artificial Intelligence and Academic Integrity: Expectations Worksheet - A tool to help clarify AI boundaries for specific assignments
- The AI Assessment Scale - A practical framework to guide the appropriate and ethical use of generative AI in assessment design, empowering educators to make purposeful, evidence-based decisions
- Designing an Aligned Generative AI Course Policy - Sarah McCorkle's article on learner-centered, equitable approaches to AI policies
- What We're Learning About Learning from the Latest AI Studies - Teaching@Tufts blog post reviewing research on AI and learning
- CELT Podcast: How is AI Helping (or Hindering) Learning? - Discussion of research findings on AI's effects on student learning
University Resources
- Draft University-Wide AI Guidelines - Institutional framework being considered January 2026
- Addressing Academic Integrity in the Age of AI from Teaching@Tufts
- Generative Artificial Intelligence (AI) Resources - From Tufts Technology Services (includes privacy and security considerations)
- Support for AI and Teaching - a guide from Educational Technology Services
- Tufts' AI Resource Hub for students, faculty & staff
Return to Artificial Intelligence Resources for Tufts Faculty and Staff