ChatGPT and GenAI
ChatGPT and other generative AI (GenAI) writing tools can generate convincing and unique text when prompted. Because this content is difficult or impossible to distinguish from human-authored text, it's important for students, instructors, and researchers to have a clear understanding of how these tools can or should be used in a university environment. These are general guidelines to help the UW community understand GenAI; faculty members, departments, and administrators are encouraged to discuss and establish discipline-specific practices. Also note that the focus of this primer is on text GenAI, but similar AI tools exist for other formats as well, such as images, video, computer code, etc.
On This Page:
- Generative AI Explained
- Limitations of Generative AI
- Using GenAI in the Classroom
- Strategies for Recognizing AI-generated Text
- Further Reading
Generative AI Explained
ChatGPT and other GenAI tools use large language models and machine learning to produce text that appears, at least initially, to have been written by a human. To accomplish this, they are first "trained" on a large amount of human-generated content (like books, Wikipedia, websites, etc.). During this training, they are programmed to recognize patterns in how the text is written and the probabilities that certain words occur together. These patterns and probabilities are then used to generate new text that replicates the style and content of human writing, based on a user prompt.
ChatGPT will output text that resembles the text found in its training materials, but the remarkable thing is that, because the text is constructed from patterns and probabilities, it can also generate completely new and unique combinations of words that appear nowhere in the training data, yet still follow the same patterns as human-generated text. This is how it can generate text of almost any kind about almost any topic.
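To make the "patterns and probabilities" idea concrete, here is a highly simplified sketch in Python. It builds word-pair (bigram) statistics from a tiny made-up corpus and then generates new text by repeatedly sampling a likely next word. Real LLMs use neural networks trained on billions of tokens, not word-pair counts, so this is only an illustration of the underlying principle, not how ChatGPT actually works.

```python
import random
from collections import Counter, defaultdict

# A tiny "training corpus" (real models train on billions of words).
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count how often each word follows each other word (bigram counts).
next_counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    next_counts[current][nxt] += 1

def generate(start, length=8, seed=0):
    """Emit words by sampling the next word in proportion to how
    often it followed the previous word in the training corpus."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length):
        counts = next_counts[words[-1]]
        if not counts:
            break
        choices, weights = zip(*counts.items())
        words.append(rng.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))
```

Note that the output can contain sentences that never appear verbatim in the corpus (e.g. "the dog sat on the mat"), yet every word transition was observed during "training" — the same sense in which GenAI produces novel text that still mirrors its training data.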
Limitations of Generative AI
While the capabilities of GenAI are impressive, there are certain factors that, at least for now, limit its uses:
- It's not up-to-date: GenAI will only be as current as its last training date. Prompting it about current events or developments after this date will yield incorrect and unreliable results.
- It's not a search engine: While some GenAI tools integrate with search engines, by default the generated text does not draw on live Internet searches the way a human author would.
- It doesn't have experience with the physical world: Because its outputs are generated only from patterns in existing writing, it's often unable to produce descriptions of things that have not already been described in text.
- It struggles with factual information and quotations: GenAI does not verify its own outputs like a person would. For this reason, it may produce incorrect statements with no basis in fact, or invent information, quotations, or references to non-existent works.
- Some topics/concepts/languages are not reflected in the training data: GenAI can only output reasonable text for things that are adequately reflected in the data. Certain prompts will yield nonsense if there is not a mass of text from which it can draw patterns.
- It has built-in limitations: Concerns around sensitive topics, privacy, copyrighted works, and intellectual property rights have led the designers of GenAI tools to build restrictions into their systems to avoid legal trouble.
All of these limitations, and others, may result in inaccurate outputs from GenAI. Remember, GenAI is a tool for generating unique text out of the language patterns in a large collection of human-written content. It does not "look up" information or verify its outputs. So the outputs generated by GenAI should always be critically evaluated.
Using GenAI in the Classroom
Whether we like it or not, students are aware of GenAI and many will be interested in using it to ease or improve their work. So, it's very important to provide them with clear guidelines about how they should or shouldn't use it in your course. Here are some course and assignment design principles that you can consider when developing and delivering your course.
- Discuss GenAI tools with your students: Make sure you and your students understand what GenAI text tools are and how they work. Discuss the ethical and unethical uses of GenAI with your class. Agree on guidelines for the ethical use of artificial intelligence with your students.
- Model acceptable uses of GenAI: Demonstrate to your students the uses of GenAI that you consider acceptable for a given assignment, such as brainstorming or generating topics/ideas, and hold yourself to the same standard as you hold your students.
- Demonstrate the limitations: Show students where GenAI text falters (generating citations and references, math/logic, current events, factual information, etc.).
- Critique the outputs: Generate text on a relevant topic and have students critique and fact-check the output.
- Encourage keeping drafts: Ask students to keep the drafts of their writing (or set up automatic version history), so they can demonstrate their writing process.
- Promote authentic writing: Encourage your students to find ways to demonstrate the authenticity of their writing. Discuss with them how GenAI text differs from human writing, and provide them with examples.
- Use in-class writing: Have students complete writing assignments in class to eliminate the possibility of them using AI for that text. Keep in mind that many students feel more comfortable drafting and revising over time than producing text quickly and on the spot, so avoid using this for every assignment.
- Use oral presentations: Have students deliver oral presentations, either as a standalone assignment or in conjunction with written assignments. This can serve to verify student understanding. Again, many students will feel uncomfortable with this and so it should not be overused.
- Use experiential and service learning: Include assignments that are connected to individual experiences in the real world. These will be harder (though not impossible) for GenAI to complete. This has the added benefit of showing how learning can be applied to students’ social lives and used to improve their communities.
Strategies for Recognizing AI-generated Text
Even if you have given clear guidance to students about the use of AI in your course, you may still receive assignment submissions that you suspect have misused, or over-relied on, GenAI tools. The following strategies are listed from strongest to weakest indicators of AI-generated text. Remember, it's important to exercise judgement when assessing students' work for authenticity in order to avoid making unfounded accusations.
- Student admission: Student admission to using AI for a writing assignment provides the strongest evidence; however, it's advisable to still ask them for details of how it was used. This can provide you with valuable context about how students may approach AI tools, and allow you to make an informed judgement of whether this use constitutes misconduct.
- Discussions with Student: The UW Academic Misconduct Policy requires instructors who suspect misconduct to contact the student to discuss their concerns. In these discussions, you can give the student the opportunity to restate their understanding of the subject matter and explain their research and writing process. If the student is unable to discuss the content of their own writing or reasonably describe their process, the text may have been generated by AI.
- False information, citations, or quotations: ChatGPT has a habit of inventing specific pieces of information. The presence of fake references and citations is strong evidence that AI may have been involved in writing a piece of text. Even when the references exist, the quotations or paraphrased ideas often won't align with the original text. You may wish to check with a librarian to confirm whether a source actually exists.
- Quality of writing: ChatGPT generates technically and grammatically correct text. If a text seems too polished or doesn't align with the student's previous quality of writing, it may be an indication it was generated by AI. Of course, the student may have made efforts to produce excellent writing, or may have improved in their writing abilities, so please carefully exercise your judgement.
- Writing style: ChatGPT and other AI tools will create text in a fairly broad or generic style. Unless cleverly prompted, the text will not address specific elements of the course content and will lack a strong connection to the themes explored in class lectures and discussions. The student's voice, as exemplified by previous writing and in-class contributions, will be absent from the text. Unusual word choice or phrasing is often an indication that a text was written by a human, while bland and perfect writing may have been AI-generated.
- AI-detection tools: AI-detection services do not provide strong evidence of AI-generated text. These tools require a large amount of text to get meaningful results, and even then they are imperfect, often producing false negatives and false positives. Additionally, focusing on detection furthers the existing tension between students and instructors and sours the learning environment. Finally, and perhaps most importantly, students didn't consent to having their work pasted into these systems, and they don't know how those services will use it in the future. In some cases, submitting assignments to certain tools without permission would violate students' copyright in their own work. Therefore, it is not recommended that instructors use AI-detection tools as evidence of academic misconduct.
Further Reading
- Generative Artificial Intelligence: Practical Uses in Education / Troy Heaps (Open Educational Resource)
- Generative Artificial Intelligence in the Classroom / UofT Office of the Vice-Provost, Innovation in Undergraduate Education (Webpage)
- Guidelines for Teaching with Generative Artificial Intelligence / Concordia University Centre for Teaching and Learning (Webpage)