
*Note: this information will continue to evolve and change.

An AI is a computer-based system that can process large volumes of data, make calculations and inferences, and generate content. AIs are designed to interact with humans and assist with a wide range of tasks. The software behind AIs' capabilities draws on many types of advanced computation, including artificial neural networks and other algorithms. AI systems are already part of our daily lives through map and navigation tools (e.g., Google Maps), writing aids (grammar/spelling check and suggested completion), social media platforms (which use algorithms to curate feeds and content), entertainment apps (which use algorithms to suggest a piece of music or a video to watch next), and security and surveillance software, including face detection, fraud detection for financial services apps, and cheating detection software.

Large language models are AI systems designed to converse using text, and they can be interacted with via a simple chat interface. New interfaces allow easy access to these models by a wide range of users, including those without technical skills. Large language models are trained on very large data sets of published and online content, and are designed to distill and present information in coherent language. In teaching and learning we are particularly interested in large language models because they respond almost instantly to requests for information and can generate outputs that mimic many of the formats students are asked to produce. For example, these models are adept at generating responses to some types of exam questions or essays. Some of the most well-known large language models currently available to the public are OpenAI's ChatGPT, Microsoft's Bing Chat, Google's Bard, and Anthropic's Claude. DALL-E 2 and Midjourney are related generative AI tools that create images from text descriptions.

Faculty are particularly concerned with the ability of AI tools to mimic tasks students are commonly asked to do in course assessments. Tools such as ChatGPT can respond to prompts and questions, generate creative works, revise written or coded content, and solve certain types of problems. These capabilities do not yet always produce accurate results, as some responses include incorrect information, biased content, and even fabricated references. However, these capabilities are constantly and quickly evolving to become more sophisticated. Right now, many faculty are concerned with how some students might use these tools to avoid engaging intellectually with assignments and graded course activities. While some tools exist for detecting AI-generated content, they are not reliable plagiarism detection tools.

What do recent advances in AI mean for higher education?

Technologies that replicate work we associate with the human brain and human skills are not new. Ironically, very smart humans have been creating these technologies for a long time. What is different right now? The field of artificial intelligence has been evolving for years, but in the last two years significant advances in the artificial neural network technology behind these systems have greatly increased their pattern recognition and language abilities. Now that these models are being used by millions of people, that user data will be incorporated into even newer models, further improving their performance. In higher education, these rapid advances in AI tools raise important questions about our role moving forward: what our students need to learn, and how they can best learn it. They also raise questions about the safety of these tools, which may have embedded bias or other harmful traits. The new availability of AIs requires a paradigm shift, and dialogue at all levels of the university as we think through what this means for Tufts. For more, see Resilient and Equitable Teaching and Assessment Require a Paradigm Shift.

A Continuum of Resisting and Deflecting - Reflecting and Adapting - Embracing and Redesigning

Change is hard, and we all respond in different ways, which is normal. Individual faculty responses will fall somewhere on a continuum: from resistance and wanting to prevent "cheating"; to reflection on whether their assignments and assessments (as currently designed) will continue to nurture and assess student learning, and how they might adapt them; to embracing new tools, trying to understand how to leverage them for learning, and possibly redesigning their courses or assessments. At CELT, we view this moment as an opportunity for interesting dialogues in community, not only to improve learning and student engagement, but to reflect on and be creative in our teaching.


Below, we address how we might engage in dialogue around the implications of AI for Tufts at different levels. At each level, engaging faculty, staff, and students will enhance the conversation, improve our understanding, and lead to shared approaches for how we think about new AI tools and how we will manage them.

When AI tools are able to replicate or mimic some or much of the work we ask our students to do in higher education - say, writing an essay or a term paper, coding, an admissions essay, or the preface to a grant proposal - we need to reflect deeply on our role in educating students and on our own professional standards. Important questions need to be addressed globally, but also across higher education as an important agent of change, progress, and democracy.

  • What does this mean for Tufts’ role in graduating an educated citizenry?
  • What opportunities do these new tools offer us? 
  • Are there equity concerns or affordances for different students? 
  • What ethical, legal, and safety considerations should we be wrestling with vis-à-vis AI, in education and in society?
  • How does this change how we think about academic integrity?
  • What policies for academic integrity do we need to put in place regarding AI use that are meaningful and flexible enough to evolve with these tools? Who should be working on them?
  • What will be the impact on intellectual property?
  • What are the privacy issues we need to address for our students?
  • What other big questions should we be asking ourselves?


At the school level, it will be useful to create cross-cutting working groups and structured spaces for open dialogue among faculty, staff, and students. Soliciting questions from your department chairs, faculty, staff, and students will be important in establishing a robust path for inquiry and dialogue.

These learning groups might focus on overarching questions such as:

  • What is the relationship between AI tools and learning - and what does that mean for our teaching, assessment, and the role of content?
  • Which aspects of how we are currently teaching will still be effective in the age of AI, and which are not?
    • What curricular or pedagogical approaches will no longer serve students’ learning?
    • Which of our current approaches might need to be repositioned or strengthened? 
    • What new pedagogical approaches might make use of AI as a learning tool?
  • How might we define, communicate and engage students in dialogue about the value of academic integrity?
    •  What policies for academic integrity and professionalism do we need to put in place that are meaningful and flexible enough to evolve with these tools?
  • What will our students need to be prepared to enter their chosen field of study in an age of AI, and what does that mean for our school’s curriculum? 
    • How will AI be used in the field, and what do students need to learn to be prepared for that work?
    • What might need to change in our curriculum, and what will stay the same?
    • What ethical and safety considerations should we be wrestling with vis-à-vis the use of AI at Tufts and in society?

The department level is where we believe the greatest opportunity lies for reflective and critical dialogue, developing shared language and approaches, and brainstorming ideas to increase curricular and pedagogical resilience. In this case, curricular and pedagogical resilience means that we are evolving our pedagogies with these tools and in response to changes in the larger landscape of available AI tools.

As we have tried to live our institutional commitment to antiracism in recent years, we have held ongoing important dialogues about disciplinary curricula through the lenses of equity, inclusion and justice. Changes toward these goals have required that we ask hard questions of ourselves, and that we critically examine our practices, attitudes and beliefs. Hopefully this process has prepared us somewhat for dialogues centered on AI that are rooted in our commitment to equity, inclusion and justice. 

Questions at the department level might include: 

  • How are our beliefs about what is necessary and important for learning in our discipline challenged by AI? 
  • What could be the relationship between AI tools and learning in our discipline? 
  • How might our learning outcomes for our majors change or be focused differently to reflect new possibilities presented by AI tools?
  • What might this mean for our departmental curriculum?
    • What might we do to strengthen our departmental curriculum, and what might stay the same?
    • How will AI be used in the field, and what do students need to learn to be prepared for that work?
    • Are there new skills we need to introduce and develop - for example, ethical reasoning, information literacy, critical thinking, equity and inclusion, etc.?
  • What does this mean for our teaching and assessment practices? For the role of content? 
    • In our department, what are our dominant forms of assessing learning?
    • What might need to change in our pedagogy, and what will stay the same?
    • Which aspects of our teaching and assessment are resilient in the age of AI, and which are not? Which might need to be repositioned or strengthened?
    • If our current assessments, or parts of them, can be done using AI, what is the importance of students performing these activities? (This means moving beyond "it's good for them" toward articulating why students should do X, and asking ourselves whether that rationale still holds if these activities can be performed by AI.)
    • What resources might a shift toward developing more authentic assessments require?  
    • How does this change the role of an instructor in a course?
  • What are the ethical questions we need to wrestle with together and with our students related to knowledge production and these new tools?

There are a range of ways faculty can begin to address the implications of AI tools in their courses. Some are asking how these tools might change what students need and want to learn; others are exploring ways these tools can enhance students' learning. Most of us, however, are trying to think about how to adapt our individual courses. We recommend starting with the following as you prioritize your efforts and explore the impacts of AI on teaching:

  • Communicate with your students about AI policies and expectations in your course
  • Revise course assignments to minimize the value of outsourcing thinking to an AI
  • Become familiar with generative AI
  • Consider ways AI tools could enhance your course
  • Begin conversations about what these tools might mean for Tufts and higher education

See Designing Courses in the Age of AI for an example syllabus policy, prompts for designing authentic learning experiences, advice on teaching students to write with AI, and discussion of each of the practices above.


Where to start to learn more