Developing AI Syllabus Statements and Communicating with Students about AI

Instructors are encouraged to include guidance in their course syllabi that clearly defines appropriate and inappropriate uses of generative AI. A syllabus statement helps ensure that instructor expectations regarding generative AI tools are clear to students. These statements should be unique to each course, based on its learning goals and the needs of its students.

Because instructors and academic departments develop their own AI course policies, students may encounter different guidelines across their courses. To avoid confusion, we encourage all instructors to include a well-defined generative AI statement in their course syllabi that delineates whether and how students may use AI in their coursework.

As instructors and academic departments develop AI course statements, it may be helpful to consider the three sample syllabus statements shared below, which range from least restrictive (allowing broad AI use) to most restrictive (prohibiting AI use entirely). Instructors may adapt or revise any of these examples to fit the needs of their courses. Once instructors have settled on their guidance, we recommend discussing it clearly and regularly with TAs and students so that students understand what is expected of them. Instructors are also encouraged to post the guidance on their CourseWorks site. Please feel free to reach out to the Center for Teaching and Learning (ColumbiaCTL@columbia.edu) or attend one of the sessions listed below to discuss adapting any of these policies for your own teaching context.

AI Syllabus Statement Templates


1. Fully encouraging draft guidance:

This course encourages students to explore the use of generative artificial intelligence (AI) tools for all assignments and assessments. Any such use must be appropriately acknowledged and cited. Each student is responsible for assessing the validity and applicability of any generative AI output that is submitted; you bear the final responsibility for the work you submit. Violations of this guidance will be considered academic misconduct. Please note that different courses at Columbia may have different AI policies, and it is the student’s responsibility to conform to the expectations for each course.

2. Mixed draft guidance:

Certain assignments in this course will permit or even encourage the use of generative artificial intelligence (AI) tools. Unless an assignment explicitly permits it, such use is not allowed. Any permitted use must be appropriately acknowledged and cited. Each student is responsible for assessing the validity and applicability of any generative AI output that is submitted; you bear the final responsibility for the work you submit. Violations of this guidance will be considered academic misconduct. Please note that different courses at Columbia may have different AI policies, and it is the student’s responsibility to conform to the expectations for each course.

3. Maximally restrictive draft guidance:

We expect that all work students submit for this course will be their own. In instances when collaborative work is assigned, the assignment should list all team members who participated. The use of any generative artificial intelligence (AI) tools at any stage of the work process, including preliminary stages, is strictly prohibited. Violations of this guidance will be considered academic misconduct. Please note that different courses at Columbia may have different policies regarding AI, and it is the student’s responsibility to conform to the expectations for each course.

 

Explore AI Fundamentals

Browse CTL resources on teaching with AI, real-world use cases from faculty, ethical use guidelines, and Columbia-specific guidance.

Related Events

A selection of upcoming Teaching and Learning in the Age of AI events from the Columbia Center for Teaching and Learning.

Setting AI Expectations and Crafting Your AI Syllabus Statement

Wed, January 21, 2026 | 11:00 AM – 12:00 PM
Fri, January 23, 2026 | 1:00 PM – 2:30 PM

Join the CTL for a hands-on workshop where you’ll explore example syllabus statements, discuss effective ways to explain your reasoning for allowing or restricting AI, and develop language that aligns your expectations with your course goals. You’ll leave with a draft AI policy ready to include on your syllabus, along with early-semester talking points and strategies for helping students understand and engage with your approach to AI.

Teaching and Learning in the Age of AI Events

The CTL hosts many live workshops, seminars, and clinics that support the purposeful exploration of AI in teaching and learning.

Browse our full list of events open to Columbia faculty, postdocs, and graduate students on our Events page.

Additional Resources


Columbia University Generative AI Policy:
The Office of the Provost has convened a working group of faculty and senior administrators from across the University to develop policies and guidelines around the responsible use of generative AI tools.

Assignment Breakdowns:
In addition to the overall guidance recommended above, it may be useful to include an assignment-by-assignment breakdown or offer guidance for specific coursework (e.g., AI tools may be used to develop a study guide but not to generate essay responses). For support in developing such guidance, please reach out to the Center for Teaching and Learning or review the additional teaching and learning with AI resources on this website.

Appropriate Use:
Some instructors have partnered with their students to determine what constitutes appropriate use. This partnership can create opportunities for instructors and students to discuss in detail the evolution of particular tools, their potential benefits in specific disciplines, and their limitations. It is also an opportunity to be explicit about the course objectives and how the use of AI tools might interfere with or aid students’ learning and their achievement of particular goals.

Academic Integrity:
Instructors should encourage students to reach out when they need support rather than risk an academic integrity violation. For additional resources for instructors on supporting academic integrity, please see “Best Practices for Faculty: Promoting Academic Integrity and Preventing Academic Dishonesty.”

If an instructor believes a violation of academic integrity has occurred, or suspects that a student used AI on an assignment when it was not permitted, it is essential that they report the alleged violation to Student Conduct via the Academic Integrity Violation Referral form as close to the date of detection as possible. While a conversation between the instructor and the student is always encouraged, instructors should not conduct a full investigation on their own (e.g., running the student’s materials through an AI detection tool, or holding onto the suspicious assignment until the end of the course for comparison purposes). Doing so can compromise the case when it is eventually reported to the Student Conduct office, create challenges in the investigation process, and cause additional stress and anxiety for the student. Reporting early and consistently increases transparency within the process and reduces or eliminates claims of bias made by students against instructors.

AI Detection Tools:
Since the introduction of AI tools, there has been a parallel rise in tools claiming to accurately detect AI-generated work. As with any form of detection software, there are risks of misidentification, which can have consequences in the classroom. These products are best used with careful consideration and as one of many ways to work with students. It is also important to address the potentially problematic use of these tools in any discussion with students about course policies, making clear why and how such services may be used in the course. As with other plagiarism detection tools, AI detection should be treated as a guideline and not a grading metric. In addition, instructors should note that when submitted as evidence in an incident report of suspected AI use, a report from an AI detection tool cannot serve as the sole piece of proof in a Dean’s Discipline case to find a student responsible for a violation of policy (Weber-Wulff D, et al., 2023; Sadasivan VS, et al., 2023; D’Agostino S, 2023; Liang W, et al., 2023; Gegg-Harrison W and Quarterman C, 2024; Holtermann C, 2025; Yildiz FE, et al., 2025).

Data Protection:
Protect confidential or sensitive information: do not enter confidential University data into generative AI tools unless the tool is CUIT-approved.

Syllabus Design:
For additional resources and information about designing an inclusive, transparent syllabus, please review the CTL’s Designing an Inclusive Syllabus resource. To schedule a consultation to discuss syllabus design with the CTL, please email ColumbiaCTL@columbia.edu.


Looking for more resources on AI use in higher education?

We invite Columbia University faculty and graduate students to connect with the Center for Teaching and Learning (CTL) team to discuss how AI can be used purposefully and ethically in higher education. Schedule an in-person consultation with our team, visit our open Office Hours, or log in for our virtual chats to explore which AI tools align with your teaching goals and how to implement them effectively.