
Artificial Intelligence

Learn more about generative AI, its potential uses in teaching and learning, and the opportunities and challenges presented by this emerging technology.

The MRU Library, Academic Development Centre and Student Learning Services have collaborated to bring you the information on this living page, whose intent is to provide information and resources about generative AI and higher education to the MRU community. For concerns related to academic conduct, please contact the Office of Student Community Standards.

Last updated February 9, 2024. 

AI vs. GenAI

Artificial intelligence: Machines that imitate some features of human intelligence, such as perception, learning, reasoning, problem-solving, language interaction and creative work (UNESCO, 2022).

 

Artificial intelligence (AI) is a general term for a number of different, specific systems. We encounter and use AI every day: from navigating with Google or Apple Maps, to asking Siri or Alexa to set a timer, to searching a library catalogue. AI is a part of our lives.

Generative AI (GenAI) is “a type of artificial intelligence that involves creating machines or computer programs that can generate new content, such as images, text, or music. Unlike traditional AI systems that rely on predefined rules or pre-existing data to make decisions, generative AI models use algorithms and neural networks to learn patterns and relationships in data and generate new outputs based on that learning” (Kwantlen Polytechnic University, n.d., p. 1).

Algorithm

The “brains” of an AI system, an algorithm is a complex set of rules and decisions that determines which action the AI system takes. Algorithms can be rule-based, with human programmers inputting the rules, or machine learning algorithms can discover their own rules.

Machine Learning (ML)

A field of study with a range of approaches to developing the algorithms used in AI systems. Machine learning algorithms can discover rules and patterns in data without a human specifying them, which can sometimes lead to the system perpetuating biases.

Training Data

The data, generated by humans, used to train the algorithm or machine learning model. Training data is an essential component of an AI system and may perpetuate the systemic biases of its source data when the system is deployed.

For these and related definitions, browse the Glossary of Artificial Intelligence Terms for Educators (CIRCLS, n.d.).
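The concepts above (training data, an algorithm that learns patterns, and the generation of new output) can be illustrated with a deliberately tiny sketch. The following toy bigram model is a hypothetical classroom example, not a real GenAI system: it "learns" which word tends to follow which in a small body of training text, then samples from those learned patterns to produce new text.

```python
import random

random.seed(0)  # fix the random choices so runs are repeatable

# A tiny corpus standing in for "training data" (human-generated text).
training_data = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# "Training": record, for each word, the words observed to follow it.
follows = {}
for current, nxt in zip(training_data, training_data[1:]):
    follows.setdefault(current, []).append(nxt)

# "Generation": start from a word and repeatedly sample a plausible next word.
word = "the"
output = [word]
for _ in range(8):
    word = random.choice(follows.get(word, ["."]))
    output.append(word)

print(" ".join(output))
```

Real GenAI systems replace the lookup table with neural networks trained on vastly larger datasets, but the basic loop of learning patterns from data and sampling new output is the same, which is also why biases present in the training data reappear in the output.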

Functions of GenAI

GenAI tools perform a wide variety of practical functions and tasks (examples retrieved from NVIDIA, n.d.; Upshall, 2022). For a comprehensive directory of AI tools, explore Futurepedia.

Examples of Functions

Generate text from a prompt 

“Write a paper on the impact of fake news on education”
“Write a poem about existentialism in the style of Walt Whitman”
“Simplify the following radiology report”

Synthesize information

(e.g., summarize a text, combine information from multiple sources)

Create an image or digital illustration from a prompt

“A Cubist oil painting of a couple lounging next to a creek”
“A photorealistic image of a half-eaten pumpkin pie”

Generate computer code

(e.g., generate new code from a comment, fix flawed code)

Translate text

“Translate the following text from Turkish to English”
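The "generate new code from a comment" function above can be illustrated with the kind of exchange a GenAI coding assistant might produce. This is a hypothetical example; the prompt, function name, and behaviour are illustrative and not drawn from any specific tool.

```python
# Prompt given to a GenAI coding assistant, written as a comment:
# "Write a function that returns the n most frequent words in a text."

from collections import Counter

# The kind of code such a tool might generate in response:
def most_frequent_words(text: str, n: int) -> list[tuple[str, int]]:
    """Return the n most common words in `text` with their counts."""
    words = text.lower().split()
    return Counter(words).most_common(n)

print(most_frequent_words("the cat sat on the mat the end", 2))
# → [('the', 3), ('cat', 1)]
```

As with any GenAI output, generated code still needs human verification: it may be subtly wrong, insecure, or mismatched to the task even when it looks plausible.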

Opportunities and Challenges

While emerging GenAI technologies present a number of opportunities for learners and educators, there are also challenges to integrating these systems into curriculum and coursework.

Opportunities

  • AI LITERACY DEVELOPMENT: New GenAI tools offer opportunities to introduce discussion and instruction centering on AI Literacy (Upshall, 2022). For example, instructors could use GenAI output for activities designed to help learners build skills in AI tool appraisal and to practise critical thinking.
  • IMPROVE TEACHING: There is a wide range of potential uses of GenAI to improve teaching, many of which are still being explored (metaLAB at Harvard, n.d.). For example:
    • The rise of GenAI has prompted educators to rethink their assessment practices (Bearman et al., 2023; UNESCO, 2023, p. 37).
    • Educators could use GenAI tools as curriculum or course co-designers (UNESCO, 2023, p. 31). For example, a GenAI tool could help an instructor to draft learning outcomes for a course or for a specific assessment.
    • Educators could use GenAI tools as teaching assistants that could provide learners with individualized support (UNESCO, 2023, p. 31) that is personalized to their learning style, interests, abilities, and learning needs (Kwantlen Polytechnic University, n.d., p. 3).
  • IMPROVE LEARNING: GenAI tools may help augment learning environments. Mike Sharples (Professor Emeritus of Educational Technology, Open University, UK) has devised 10 possible roles that a GenAI tool could play in augmenting learning for students, including Possibility Engine, Socratic Opponent, Personal Tutor, and Motivator (UNESCO, 2023, p. 9).
  • ACCESSIBILITY: GenAI may be used as an assistive tool for those with accessibility needs (Kwantlen Polytechnic University, n.d.). This could include auto-generating captions or sign language interpretation for audio or visual content that lacks it, or generating audio descriptions of textual or visual material (UNESCO, 2023, p. 35).
  • COGNITIVE OFFLOADING: Users may delegate certain tasks to GenAI to reduce cognitive demand, thereby freeing up the user’s time and effort for other tasks (Grinschgl & Neubauer, 2022).

Challenges and Ethical Implications

  • UNRELIABLE CONTENT: GenAI tools have no grounded knowledge of the real world, so their output needs to be paired with human verification. For example, citations or sources provided by ChatGPT will need to be checked to ensure that they are not fabricated and that they actually contain the information attributed to them.
  • ACADEMIC MISCONDUCT: GenAI systems may be manipulated or used in unethical ways, such as when a student uses them to bypass learning. In addition, identifying when a learner has used GenAI-generated text in their writing can be very difficult, posing a challenge to educators (Kumar et al., 2022; Fowler, 2023; Elkhatat et al., 2023).
  • BIAS AND DISCRIMINATION: GenAI systems perpetuate existing human biases, as they generate outputs based on patterns in the data they were trained on. For example, GenAI photo editing tools have exhibited racial biases (Poisson, 2022), and large language models such as ChatGPT have perpetuated gender biases and stereotypes in their outputs (Lucy & Bamman, 2021; Snyder, 2023).
  • INTELLECTUAL PROPERTY: Developing laws and ongoing court cases regarding the use of GenAI tools and copyright currently create significant legal uncertainty in Canada and around the world. Some of the concerns include (University of Toronto, n.d.):
    • Input (i.e., training data): The legality of the content used to train AI models is unknown in some cases. A number of US lawsuits allege that GenAI tools infringe copyright, and it remains unclear whether and how the fair use doctrine applies. No GenAI lawsuits have yet been filed in Canada, so it remains uncertain to what extent existing exceptions in the copyright framework, such as fair dealing, apply to this activity.
    • Output (i.e., text, images, etc. generated by GenAI tools): Authorship and ownership of works created by AI are unclear. Traditionally, Canadian law has indicated that an author must be a natural person (a human) who exercises skill and judgement in the creation of a work. Because generated content can involve varying degrees of human input, it is unclear how Canadian courts will determine the appropriate author and owner of such works. The Government of Canada has sought public feedback on this concern.
  • SUSTAINABILITY: Concerns have been raised about the environmental costs involved both in initial training of GenAI models and in their daily use once they have been rolled out to the public. Specifically, researchers are analyzing their electricity use and carbon emissions (de Vries, 2023; Luccioni, 2023).
  • PRIVACY: GenAI systems are trained on enormous datasets that may include personal information previously posted to the internet that could be used to identify individuals (Gal, 2023; Kwantlen Polytechnic University, n.d.). Additionally, there are considerable privacy concerns related to the information that users supply when prompting GenAI systems, and the possibility that this user information will later be used to train the model (Gal, 2023).

Suggestions for Use

Suggestions for Faculty

  • ETHICS: Ask yourself whether you are comfortable with the ethical implications of using GenAI tools (e.g., environmental sustainability, unethical labour practices by tech companies, bias and discrimination). See the Challenges and Ethical Implications section of this page for more information.
  • CONVERSATIONS WITH STUDENTS: Make time in class to have open conversations with students about GenAI and the implications of its use in their academic work (Ward et al., 2023). Examples of questions:
    • What do you know about artificial intelligence tools?
    • How have you been using them?
    • What potential opportunities and challenges do you see?
  • CLEAR EXPECTATIONS: Mention GenAI tools explicitly in your course outline (see University of Alberta’s sample statements). Each time you introduce an assessment, clarify your expectations with respect to GenAI tool use.
  • ACADEMIC INTEGRITY: Help students acquire a foundational understanding of academic integrity (e.g., have them complete MRU’s academic integrity online training module).
  • EXPERIMENT: Engage, explore and experiment (Eaton & Anselmo, 2023). Trying out AI tools yourself will help you understand what is possible; learning more about prompt engineering may increase the likelihood of generating useful results (Lo, 2023). Be aware of the data you are providing and available privacy settings (e.g., the ability to turn off chat history in ChatGPT).
  • ACKNOWLEDGEMENT/CITATION: Give clear guidance to students on how to acknowledge and potentially cite GenAI outputs according to the citation style used in your course. Openly acknowledge your own use of GenAI tools in your teaching and scholarship (e.g., how you have used them to design learning materials and assessments).
  • ASSESSMENT: Think more deeply than ever (D'Agostino, 2023) about the learning outcomes of your course and how your assessments align with those outcomes. Identify the cognitive tasks your students need to perform without assistance (Bearman et al., 2023).
  • SPACE FOR FAILURE: Encourage productive struggle and learning from failure by allowing resubmissions/rewrites where feasible (see the linked slide in this resource) (Trust, n.d.). Fear of failure can be a factor in a student’s decision to use GenAI in ways that may bypass learning.

Suggestions for Students

  • ETHICS: Ask yourself whether you are comfortable with the ethical implications of using GenAI tools (e.g., environmental sustainability, unethical labour practices by tech companies, and bias in GenAI training data and discrimination in their output). See the Challenges and Ethical Implications section of this page for more information.
  • INSTRUCTOR EXPECTATIONS: For every assignment and test, make sure you understand your instructor’s expectations with respect to GenAI use. Check your course outline and assignment guidelines for this information. If you are unsure, ask your instructor. Where GenAI use is allowed, be sure to check expectations for acknowledgement of tool use and, potentially, citation.
  • ACADEMIC INTEGRITY:  To learn more about academic integrity and what constitutes academic misconduct, complete MRU’s online training module. (Log in using your @mtroyal.ca credentials, and then select the “Enroll in Course” button. If you’re already enrolled, you’ll see “Open Course.”).
  • EXPERIMENT: Take the time to experiment with GenAI tools to better understand what they can and cannot do. Learning more about prompt engineering may increase the likelihood of generating useful results (Lo, 2023). Critically analyze the output; sometimes it looks great on the surface, but not when you look more deeply. These tools are great synthesizers, but the critical thinker is you.
  • IMPLICATIONS FOR YOUR LEARNING: Before using a GenAI tool for a particular task, ask yourself how it will affect your learning. Will it enhance learning, or diminish it? Will it give you opportunities to think more deeply or less deeply? In the case of using GenAI for writing, be aware of how using the tool could impact your own writer’s voice.
  • PRIVACY: Ask yourself whether the information you are feeding into the GenAI tool is even yours to share. Do you have the appropriate rights or permissions? If you do, could sharing this information impact you negatively in the future?
  • APPLICATIONS IN THE WORKPLACE: Be curious about how GenAI tools are being used by professionals in your discipline. Ask your professors, and ask people in your network.

GenAI and the Law

Copyright Law

Canadian copyright law implies that AI cannot own the copyright to creative works. Determining the author of an AI-created work will require a legislative amendment and careful consideration of who (or what) can author AI-generated works. In 2021, the Government of Canada released A Consultation on a Modern Copyright Framework for Artificial Intelligence and the Internet of Things (Government of Canada, 2021), which gathered public feedback on potential legislative amendments to the Copyright Act regarding AI. Following this consultation, the Government of Canada released the Consultation on Copyright in the Age of Generative Artificial Intelligence; the opportunity for public feedback has now closed, and the consultation will inform the government’s policy development process.

Terms and Conditions

If you plan to use GenAI tools, ensure you have read and understood the Terms and Conditions of the developer(s). For any clarification, reach out to the MRU Copyright Advisor (mrucopyright@mtroyal.ca).

Artificial Intelligence and Data Act (AIDA)

AIDA is a part of the Digital Charter Implementation Act and is currently working its way through the House of Commons under Bill C-27. AIDA is meant to create a “new regulatory system designed to guide AI innovation in a positive direction, and to encourage the responsible adoption of AI technologies by Canadians and Canadian businesses” (Government of Canada, n.d.). AIDA would require that appropriate measures be put in place to identify, assess, and mitigate risks of harm or biased output prior to the system being available to the public. These obligations would be guided by the following principles:

  • Human oversight & monitoring
  • Transparency
  • Fairness and equity
  • Safety
  • Accountability
  • Validity & robustness

Upcoming Events

(Online) Pedagogical Strategies in the AI Environment

Date: Friday, March 15, 2024
Time: 11:30am - 1:00pm
Presenters: Bill Bunn, Steven Engler, and Nate Wagenaar
Where: Online - Virtual

The viral popularity of generative AI tools follows from their ability to do image and text creation tasks reasonably or very well. Applications of these tools are multiplying quickly as users recognize opportunities. How are faculty using and talking about AI in support of pedagogy? Join colleagues Bill Bunn, Steven Engler, and Nate Wagenaar for some insights, and a discussion moderated by David Hyttenrauch.

Facilitators: Bill Bunn (English, Languages, and Cultures), Steven Engler (Humanities), Nate Wagenaar (Interior Design), & David Hyttenrauch (MRU Arts)

Recommended Readings and Resources

How AI chatbots like ChatGPT or Bard work – visual explainer

Clarke, S., Milmo, D., & Blight, G. (2023, November 1). The Guardian.

  • Clarke et al. provide a visual walkthrough of how large language models work to predict the next word in a sequence of text.

Teaching and learning with artificial intelligence apps

Eaton, S., & Anselmo, L. (2023, January). Taylor Institute for Teaching and Learning.

  • Advice on using AI apps in the classroom.  “If we think of artificial intelligence apps as another tool that students can use to ethically demonstrate their knowledge and learning, then we can emphasize learning as a process not a product.”  

Evaluating the efficacy of AI content detection tools in differentiating between human and AI-generated text 

Elkhatat, A. M., Elsaid, K., & Almeer, S. (2023). International Journal for Educational Integrity.

  • This paper is an analysis of the AI content detection tools developed by OpenAI, Writer, Copyleaks, GPTZero, and CrossPlag, and their accuracy at detecting AI-generated text.

ENAI recommendations on the ethical use of artificial intelligence in education

Foltynek, T., Bjelobaba, S., Glendinning, I., Khan, Z. R., Santos, R., Pavletic, P., & Kravjar, J. (2023). International Journal for Educational Integrity, 19. Article 12.

  • The European Network for Academic Integrity shares its recommendations on the ethical use of AI in education.

 

Cohere AI CEO Aidan Gomez on the emerging legal and regulatory challenges for artificial intelligence [Audio podcast episode]

Geist, M. (Host). (2023, April 17). In Law Bytes. Michael Geist.

  • Law Bytes host Michael Geist is joined by Cohere AI CEO Aidan Gomez to discuss complex legal and regulatory issues related to AI.

 

An Indigenous perspective on generative AI [Audio podcast episode]

Hendrix, J. (Host). (2023, January 29). In The Sunday Show. Tech Policy Press.

  • Justin Hendrix interviews Michael Running Wolf, a PhD student in computer science at McGill University and a Northern Cheyenne and Lakota man. Michael Running Wolf is also the founder of the non-profit Indigenous in AI. He provides his perspective on generative AI.

 

AI observatory

Higher Education Strategy Associates. (n.d.). 

  • Higher Education Strategy Associates (HESA) launched this Observatory on AI Policies in Canadian Post-Secondary Education. HESA’s AI Observatory “will act as a Canadian clearinghouse for post-secondary institutions’ policies and guidelines with respect to AI.”

 

The mounting human and environmental costs of generative AI

Luccioni, S. (2023, April 12). Ars Technica.

  • Dr. Sasha Luccioni explores the human and environmental costs of generative AI.

 

Initial guidance for evaluating the use of AI in scholarship and creativity.

Modern Language Association and Conference on College Composition and Communication Joint Task Force on Writing and AI. (2024, January 28). 

  • The MLA-CCCC’s Joint Task Force on Writing and AI offers its “provisional guidance for evaluating the use of AI in Scholarship and Creativity, including basic standards for the ethical use of these technologies.”

 

OpenAI used Kenyan workers on less than $2 per hour to make ChatGPT less toxic

Perrigo, B. (2023, January 18). Time.

  • Important reporting in Time about the unethical labour practices that were used to train ChatGPT.

 

ChatGPT and Artificial Intelligence in higher education: Quick start guide

Sabzalieva, E., & Valentini, A. (2023). United Nations Educational, Scientific and Cultural Organization and UNESCO International Institute for Higher Education in Latin America and the Caribbean.

  • “The Quick Start Guide provides an overview of how ChatGPT works and explains how it can be used in higher education. The Quick Start Guide raises some of the main challenges and ethical implications of AI in higher education and offers practical steps that higher education institutions can take.”

 

Inside the secret list of websites that make AI like ChatGPT sound smart

Schaul, K., Chen, S. Y., & Tiku, N. (2023, April 19). The Washington Post.

  • Reporters from The Washington Post drill down into Google’s C4 data set, “a massive snapshot of the contents of 15 million websites that have been used to instruct some high-profile English-language AIs, called large language models, including Google’s T5 and Facebook’s LLaMA.”

 

AI dialogues [Audio podcast]

Verkoeyen, S. (Host). (2023–present). MacPherson Institute.

  • “AI Dialogues delves into the ethical and practical questions of generative AI for McMaster University and post-secondary education, bridging the gap between knowledgeable educators, students, and practitioners and those less familiar with AI technology. Each episode, we’ll explore the complexities of AI, its potential for innovation, and the challenges it poses. We’ll tackle questions like: How can AI enhance the learning experience? What are the ethical considerations? What’s the future of AI in education?”

 

Generative AI exists because of the transformer: This is how it works

Visual Storytelling Team & Murgia, M. (2023, September 11). Financial Times.

  • Similar to Clarke et al.’s article for The Guardian, this piece is a detailed explanation of transformer models with helpful visual representations of the different steps involved in text generation.

 

Case tracker: artificial intelligence, copyrights and class actions

Weisenberger, T. M. (n.d.). BakerHostetler.

  • This page monitors ongoing copyright infringement lawsuits involving generative AI in the United States.

Additional Information

Office of Student Community Standards (OSCS)

The Office of Student Community Standards is responsible for promoting the rights and responsibilities of students through the administration of the Code of Student Community Standards and the Code of Student Academic Integrity. It also supports the MRU campus community in navigating conflict using various resolution pathways.

If you have questions or concerns about the use of GenAI in an assignment, course or academic assessment at MRU, please contact the Office of Student Community Standards by emailing studentcommunitystandards@mtroyal.ca.

References

Bearman, M., Ajjawi, R., Boud, D., Tai, J. & Dawson, P. (2023). CRADLE Suggests… assessment and genAI. Centre for Research in Assessment and Digital Learning, Deakin University, Melbourne, Australia. https://doi.org/10.6084/m9.figshare.22494178

 

Bill C-27, An Act to enact the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act and the Artificial Intelligence and Data Act and to make consequential and related amendments to other Acts, 1st Sess, 44th Parl, 2021.

 

Craig, C. J. (2021). AI and copyright. In F. Martin-Bariteau & T. Scassa (Eds.), Artificial intelligence and the law in Canada. LexisNexis Canada.
https://ssrn.com/abstract=3733958 

 

Creative Commons. (2021, September 17). Government of Canada consultation on a modern copyright framework for artificial intelligence and the Internet of Things. Innovation, Science and Economic Development Canada. https://ised-isde.canada.ca/site/strategic-policy-sector/en/marketplace-framework-policy/copyright-policy/submissions-consultation-modern-copyright-framework-artificial-intelligence-and-internet-things/creative-commons 

 

D’Agostino, S. (2023, January 12). ChatGPT advice academics can use now. Inside Higher Ed. https://www.insidehighered.com/news/2023/01/12/academic-experts-offer-advice-chatgpt

 

Eaton, S., & Anselmo, L. (2023, January). Teaching and learning with artificial intelligence apps. Taylor Institute for Teaching and Learning.
https://taylorinstitute.ucalgary.ca/teaching-with-AI-apps

 

Fowler, G. A. (2023, April 3). We tested a new ChatGPT-detector for teachers. It flagged an innocent student. The Washington Post. https://www.washingtonpost.com/technology/2023/04/01/chatgpt-cheating-detection-turnitin

 

Fricke, V. (2022, October 17). The end of creativity?! – AI-generated content under the Canadian Copyright Act. McGill University. https://www.mcgill.ca/business-law/article/end-creativity-ai-generated-content-under-canadian-copyright-act 

 

Gal, U. (2023, February 7). ChatGPT is a data privacy nightmare. If you’ve ever posted online, you ought to be concerned. The Conversation. https://theconversation.com/chatgpt-is-a-data-privacy-nightmare-if-youve-ever-posted-online-you-ought-to-be-concerned-199283

 

Government of Canada. (2021). A consultation on a modern copyright framework for artificial intelligence and the Internet of Things.
https://ised-isde.canada.ca/site/strategic-policy-sector/en/marketplace-framework-policy/copyright-policy/consultation-modern-copyright-framework-artificial-intelligence-and-internet-things-0

 

Government of Canada. (2023, March 14). The Artificial Intelligence and Data Act (AIDA) – Companion document. Innovation, Science and Economic Development Canada. https://ised-isde.canada.ca/site/innovation-better-canada/en/artificial-intelligence-and-data-act-aida-companion-document

 

Grinschgl, S., & Neubauer, A. C. (2022). Supporting cognition with modern technology: Distributed cognition today and in an AI-enhanced future. Frontiers in Artificial Intelligence, 5, Article 908261. https://doi.org/10.3389/frai.2022.908261

 

Kumar, R., Mindzak, M., Eaton, S. E., & Morrison, R. (2022, May 17). AI & AI: Exploring the contemporary intersections of artificial intelligence and academic integrity [Conference presentation]. Canadian Society for the Study of Higher Education Annual Conference, Online. https://dx.doi.org/10.11575/PRISM/39762

 

Kwantlen Polytechnic University. (n.d.). Generative AI: An overview for teaching and learning. Retrieved October 11, 2023, from https://wordpress.kpu.ca/generativeaitlkpu/files/2023/04/Generative-AI-An-Overview-for-Teaching-and-Learning-03042023.pdf

 

Lo, L. S. (2023). The CLEAR path: A framework for enhancing information literacy through prompt engineering. The Journal of Academic Librarianship, 49(4), Article 102720. https://doi.org/10.1016/j.acalib.2023.102720

 

Lucy, L., & Bamman, D. (2021). Gender and representation bias in GPT-3 generated stories. Proceedings of the Third Workshop on Narrative Understanding, 48–55. https://doi.org/10.18653/v1/2021.nuse-1.5 

 

metaLAB at Harvard. (n.d.). AI pedagogy project. https://aipedagogy.org/

 

NVIDIA. (n.d.). NVIDIA large language models (LLMs). Retrieved January 18, 2023, from https://web.archive.org/web/20230117121919/https://www.nvidia.com/en-us/deep-learning-ai/solutions/large-language-models/ 

 

Poisson, J. (Host).  (2022, December 14). AI art and text is getting smarter, what comes next? [Audio podcast episode]. In Frontburner. CBC.
https://www.cbc.ca/radio/frontburner/ai-art-and-text-is-getting-smarter-what-comes-next-1.6684148

 

Snyder, K. (2023, February 3). We asked ChatGPT to write performance reviews and they are wildly sexist (and racist). Fast Company. https://www.fastcompany.com/90844066/chatgpt-write-performance-reviews-sexist-and-racist

 

Trust, T. (n.d.). ChatGPT & education [Google slides]. https://docs.google.com/presentation/d/1Vo9w4ftPx-rizdWyaYoB-pQ3DzK1n325OgDgXsnt0X0

 

UNESCO. (2022). K-12 AI curricula: A mapping of government-endorsed AI curricula. UNESDOC Digital Library.
https://unesdoc.unesco.org/ark:/48223/pf0000380602 

 

UNESCO. (2023). Guidance for generative AI in education and research. UNESDOC Digital Library. https://unesdoc.unesco.org/ark:/48223/pf0000386693

 

University of Toronto. (n.d.). ChatGPT and generative AI in the classroom. Retrieved December 7, 2023, from https://www.viceprovostundergrad.utoronto.ca/strategic-priorities/digital-learning/special-initiative-artificial-intelligence/

 

Upshall, M. (2022). An AI toolkit for libraries. Insights, 35(18). https://doi.org/10.1629/uksg.592

 

de Vries, A. (2023). The growing energy footprint of artificial intelligence. Joule, 7, 1–4. https://doi.org/10.1016/j.joule.2023.09.004

 

Ward, D., Gibbs, A., Henkel, T., Loshbaugh, H. G., Siering, G., Williamson, J., & Kayser, M. (2023, December 1). Indecision about AI in classes is so last week. Inside Higher Ed. https://www.insidehighered.com/opinion/career-advice/2023/12/01/advice-about-ai-classroom-coming-new-year-opinion