Artificial Intelligence Resources for Tufts Faculty and Staff
*Note: This information will continue to evolve and change.
An AI is a computer-based system that can process large volumes of data, make calculations and inferences, and generate content. AIs are designed to interact with humans and assist with a wide range of tasks. The software behind these capabilities draws on many types of advanced computation, including artificial neural networks and other algorithms. AI systems are already part of our daily lives through map and navigation tools (e.g., Google Maps), writing aids (grammar/spelling check and suggested completion), social media platforms (which use algorithms to curate feeds and content), entertainment apps (which use algorithms to suggest a piece of music or a video to watch next), and security and surveillance software, including face detection, fraud detection for financial services, and cheating-detection tools.
Large language models are AI systems designed to converse using text, and they can be interacted with via a simple chat interface. New interfaces allow easy access to these models for a wide range of users, including those without technical skills. Large language models are trained on very large data sets of published and online content and are designed to distill and present information in coherent language. In teaching and learning we are particularly interested in large language models because they respond almost instantly to requests for information and can generate outputs that mimic many of the formats students are asked to create. For example, they are adept at generating responses to some types of exam questions or essays. Some of the best-known large language models currently available to the public are OpenAI's ChatGPT, Microsoft's Bing Chat, Google's Bard, and Anthropic's Claude. DALL-E 2 and Midjourney are related generative AI tools that create images from text descriptions.
Faculty are particularly concerned with the ability of AI tools to mimic tasks students are commonly asked to do in course assessments. Tools such as ChatGPT can respond to prompts and questions, generate creative works, revise written or coded content, and solve certain types of problems. These tools do not always produce accurate results: some responses include incorrect information, biased content, and even fabricated references. However, their capabilities are evolving quickly and becoming more sophisticated. Right now many faculty are concerned with how some students might use these tools to avoid engaging intellectually with assignments and graded course activities. While some tools exist for detecting AI-generated content, they are not reliable plagiarism detectors.
Technologies that replicate work we associate with the human brain and human skills are not new; ironically, very smart humans have been creating them for a long time. What is different right now? The field of artificial intelligence has been evolving for years, but in the last two years significant advances in the artificial neural network technology behind these systems have greatly increased their pattern recognition and language abilities. Now that these models are being used by millions of people, that user data will be incorporated into newer models, further improving their performance. In higher education, these rapid advances raise important questions about our role moving forward: what do our students need to learn, and how can they best learn it? They also raise questions about the safety of these tools, which may have embedded bias or other harmful traits. The new availability of AIs requires a paradigm shift and dialogue at all levels of the university as we think through what this means for Tufts. For more see Resilient and Equitable Teaching and Assessment Require a Paradigm Shift.
Change is hard, and we all respond to it in different ways; this is normal. Individual faculty responses will fall somewhere on a continuum: from resistance and a desire to prevent “cheating”; to reflection on whether assignments and assessments, as currently designed, will continue to nurture and assess student learning, and how they might be adapted; to embracing new tools, trying to understand how to leverage them for learning, and possibly redesigning courses or assessments. At CELT, we view this moment as an opportunity for interesting dialogues in community, not only to improve learning and student engagement, but also to reflect on and be creative in our teaching.
Below, we address how we might engage in dialogue around the implications of AI for Tufts at different levels. At each level, engaging faculty, staff, and students will enhance the conversation, improve our understanding, and lead to shared approaches for how we think about new AI tools and how we will manage them.
When AI tools can replicate or mimic some or much of the work we ask our students to do in higher education - say, writing an essay or term paper, coding, drafting an admissions essay, or preparing the preface to a grant proposal - we need to reflect deeply on our role in educating students and on our own professional standards. Important questions need to be addressed globally, but also within higher education, which serves as an important agent of change, progress, and democracy.
At the school level, it will be useful to create cross-cutting working groups and structured spaces for open dialogue among faculty, staff, and students. Soliciting questions from department chairs, faculty, staff, and students will be important for establishing a robust path of inquiry and dialogue.
These learning groups might focus on overarching questions such as:
The department level is where we believe the greatest opportunity lies for reflective and critical dialogue, for developing shared language and approaches, and for brainstorming ideas to increase curricular and pedagogical resilience. Here, curricular and pedagogical resilience means evolving our pedagogies alongside these tools and in response to changes in the larger landscape of available AI tools.
As we have tried to live our institutional commitment to antiracism in recent years, we have held ongoing important dialogues about disciplinary curricula through the lenses of equity, inclusion and justice. Changes toward these goals have required that we ask hard questions of ourselves, and that we critically examine our practices, attitudes and beliefs. Hopefully this process has prepared us somewhat for dialogues centered on AI that are rooted in our commitment to equity, inclusion and justice.
There is a range of ways faculty can begin to address the implications of AI tools in their courses. Some are asking how these tools might change what students need and want to learn, while others are exploring ways the tools can enhance students’ learning. Most of us, however, are trying to think about how to adapt our individual courses. We recommend the following priorities as you explore the impacts of AI on your teaching:
See Designing Courses in the Age of AI for an example syllabus policy, prompts for designing authentic learning experiences, advice for teaching students to write with AI, and a discussion of each of the practices above.