Artificial Intelligence

Learn more about generative AI, its potential uses in teaching and learning, and the opportunities and challenges presented by this emerging technology.

The MRU Library, Academic Development Centre and Student Learning Services have collaborated to bring you this living page, which is intended to provide the MRU community with information and resources about generative AI in higher education. For concerns related to academic conduct, please contact the Office of Student Community Standards.

Last updated April 19, 2024. 

AI vs. GenAI

Artificial intelligence: Machines that imitate some features of human intelligence, such as perception, learning, reasoning, problem-solving, language interaction and creative work (UNESCO, 2022).


Artificial intelligence (AI) is a general term used to describe a number of different, specific systems. We encounter and use AI every day: from navigating maps on Google or Apple, to asking Siri or Alexa to set a timer, to searching a library catalogue. AI is a part of our lives. 

Generative AI (GenAI) is “a type of artificial intelligence that involves creating machines or computer programs that can generate new content, such as images, text, or music. Unlike traditional AI systems that rely on predefined rules or pre-existing data to make decisions, generative AI models use algorithms and neural networks to learn patterns and relationships in data and generate new outputs based on that learning” (Kwantlen Polytechnic University, n.d., p. 1).


Algorithms

The “brains” of an AI system, algorithms are complex sets of rules and decisions that determine which action the AI system takes. Machine learning algorithms can discover their own rules, or they can be rule-based, in which case human programmers input the rules.

Machine Learning (ML)

A field of study with a range of approaches to developing the algorithms used in AI systems. Machine learning algorithms can discover rules and patterns in data without a human specifying them, which can sometimes lead to the system perpetuating biases.
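As a minimal, hypothetical sketch of this idea (a toy example, not how any real GenAI system works), the "learner" below discovers its own decision rule, a numeric threshold, from labelled examples, rather than having a human program the rule:

```python
# Toy sketch of machine learning: the system derives its own rule
# (a threshold) from labelled training data instead of being given one.
def learn_threshold(examples):
    """examples: list of (value, label) pairs, where label is 0 or 1.
    Returns the midpoint between the two class averages as the learned rule."""
    zeros = [v for v, label in examples if label == 0]
    ones = [v for v, label in examples if label == 1]
    return (sum(zeros) / len(zeros) + sum(ones) / len(ones)) / 2

# Hypothetical training data: essay lengths in words,
# labelled 0 = rough draft, 1 = polished final version.
training_data = [(180, 0), (220, 0), (260, 0), (740, 1), (800, 1), (860, 1)]

rule = learn_threshold(training_data)
print(rule)         # the threshold the system discovered on its own: 510.0
print(600 > rule)   # classify a new, unseen example: True (predicted "final")
```

Note that the learned rule depends entirely on the examples supplied: skewed training data yields a skewed rule, which is how biases in source data carry through to a system's behaviour.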

Training Data

The data, generated by humans, used to train the algorithm or machine learning model. Training data is an essential component of the AI system and may perpetuate the systemic biases of the source data when the system is deployed.

For these and related definitions, browse the Glossary of Artificial Intelligence Terms for Educators (CIRCLS, n.d.).

Functions of GenAI

GenAI tools perform a wide variety of practical functions and tasks (examples retrieved from NVIDIA, n.d.; Upshall, 2022). For a comprehensive directory of AI tools, explore Futurepedia.

Examples of Functions

Generate text from a prompt 

“Write a paper on the impact of fake news on education”
“Write a poem about existentialism in the style of Walt Whitman”
“Simplify the following radiology report”

Synthesize information

(e.g., summarize a text, combine information from multiple sources)

Create an image or digital illustration from a prompt

“A Cubist oil painting of a couple lounging next to a creek”
“A photorealistic image of a half-eaten pumpkin pie”

Generate computer code

(e.g., generate new code from a comment, fix flawed code)

Translate text

“Translate the following text from Turkish to English”
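To make the code-generation function above concrete, a GenAI coding assistant given a one-line comment might produce something like the following (a hypothetical, plausible output, not taken from any particular tool):

```python
# Prompt given to the assistant:
# "Write a function that returns the median of a list of numbers."

# Plausible generated output:
def median(numbers):
    """Return the median of a non-empty list of numbers."""
    ordered = sorted(numbers)
    mid = len(ordered) // 2
    if len(ordered) % 2 == 1:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2

print(median([3, 1, 2]))     # 2
print(median([1, 2, 3, 4]))  # 2.5
```

As the “fix flawed code” example suggests, generated code still needs human review: output like this can be subtly wrong even when it looks plausible.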

Opportunities and Challenges

Emerging GenAI technologies present both opportunities and challenges for learners and educators.


  • AI LITERACY DEVELOPMENT: GenAI tools offer opportunities to focus discussion and instruction on AI Literacy (Bali, 2024; Upshall, 2022). For example, activities where learners analyze GenAI output could help develop AI tool appraisal and critical thinking skills.
  • IMPROVE TEACHING: There is a wide range of potential uses of GenAI to improve teaching, many of which are still being explored (metaLAB at Harvard, n.d.). For example:
    • The rise of GenAI has prompted educators to rethink their assessment practices (Bearman et al., 2023; UNESCO, 2023, p. 37).
    • Educators could use GenAI tools as curriculum or course co-designers (UNESCO, 2023, p. 31). For example, a GenAI tool could help an instructor to draft learning outcomes for a course or for a specific assessment.
    • Educators could use GenAI tools as teaching assistants that could provide learners with individualized support (UNESCO, 2023, p. 31) that is personalized to their learning style, interests, abilities, and learning needs (Kwantlen Polytechnic University, n.d., p. 3).
  • IMPROVE LEARNING: GenAI tools may help augment learning. Mike Sharples has devised 10 roles that a GenAI tool could play in augmenting learning, including Possibility Engine, Personal Tutor, and Motivator (Sabzalieva & Valentini, 2023, p. 9). Interacting with GenAI tools can help learners to develop their evaluative judgement capabilities, essential for academic success and lifelong learning (Bearman et al., 2024).
  • ACCESSIBILITY: GenAI may be used as an assistive tool for those with accessibility needs (Heidt, 2024; Kwantlen Polytechnic University, n.d.). This could include auto-generating captions or sign language interpretation for audio or visual content that lacks it, or generating audio descriptions of textual or visual material (UNESCO, 2023, p. 35).
  • COGNITIVE OFFLOADING: Users may delegate certain tasks to GenAI to reduce cognitive demand, thereby freeing up the user’s time and effort for other tasks (Grinschgl & Neubauer, 2022).

Challenges and Ethical Implications

  • UNRELIABLE CONTENT: GenAI use must be paired with human verification. GenAI output should be treated as one data point to be cross-checked against data from other credible sources. More recent GenAI models are less likely to generate citations for sources that do not exist, but it is always important to check 1) that a source is real and 2) that the GenAI output actually matches the cited source.
  • ACADEMIC MISCONDUCT: GenAI systems may be manipulated or used in unethical ways, such as when a student knowingly uses them to bypass learning. In addition, identifying when a learner has used GenAI-generated text in their writing can be very difficult, posing a challenge to educators (Elkhatat et al., 2023; Fowler, 2023; Furze, 2024; Kumar et al., 2022).
  • BIAS AND DISCRIMINATION: GenAI systems perpetuate existing human biases, as they generate outputs based on patterns in the data they were trained on. For example, GenAI photo editing tools have exhibited racial biases (Poisson, 2022), and large language models such as ChatGPT have perpetuated gender biases and stereotypes (Lucy & Bamman, 2021; Snyder, 2023) in their outputs.
  • INTELLECTUAL PROPERTY: Developing laws and ongoing court cases regarding the use of genAI tools and copyright currently lead to significant legal uncertainty in Canada and around the world. Some of the concerns include (University of Toronto, n.d.):
    • Input (i.e., training data): The legality of the content used to train AI models is in some cases unknown. A number of lawsuits originating in the US allege that GenAI tools infringe copyright, and it remains unclear whether and how the fair use doctrine applies. No comparable lawsuits have yet been launched in Canada, so it remains uncertain to what extent existing exceptions in the Canadian copyright framework, such as fair dealing, apply to this activity.
    • Output (i.e., text, images, etc. generated by GenAI tools): Authorship and ownership of works created by AI are unclear. Traditionally, Canadian law has indicated that an author must be a natural person (a human) who exercises skill and judgement in the creation of a work. Because generated content involves varying degrees of human input, it is unclear how Canadian law will determine the appropriate author and owner of such works. The Federal Government of Canada is currently seeking public feedback on this concern.
  • SUSTAINABILITY: Concerns have been raised about the environmental costs involved both in initial training of GenAI models and in their daily use once they have been rolled out to the public. Specifically, researchers are analyzing their electricity use and carbon emissions (de Vries, 2023; Luccioni, 2023).
  • PRIVACY: GenAI systems are trained on enormous datasets that may include personal information previously posted to the internet that could be used to identify individuals (Gal, 2023; Kwantlen Polytechnic University, n.d.). There are also considerable privacy concerns about the information users supply when prompting GenAI systems, which may then be used to train future versions of the model (Gal, 2023).

Suggestions for Use

Suggestions for Faculty

  • ETHICS: Ask yourself whether you are comfortable with the ethical implications of using GenAI tools (e.g., environmental sustainability, unethical labour practices by tech companies, bias and discrimination). See the Challenges and Ethical Implications section of this page for more information.
  • CONVERSATIONS WITH STUDENTS: Make time in class to have open conversations with students about GenAI and the implications of its use in their academic work (Ward et al., 2023). Examples of questions:
    • What do you know about artificial intelligence tools?
    • How have you been using them?
    • What potential opportunities and challenges do you see?
  • CLEAR EXPECTATIONS: Mention GenAI tools explicitly in your course outline (see University of Alberta’s sample statements). Each time you introduce an assessment, clarify your expectations with respect to GenAI tool use and provide a rationale tied to the purpose of the assignment.
  • ACADEMIC INTEGRITY: Help students acquire a foundational understanding of academic integrity (e.g., have them complete MRU’s academic integrity online training module).
  • EXPERIMENT: Engage, explore and experiment (Eaton & Anselmo, 2023). Trying out AI tools yourself will help you understand what is possible. Be aware of the data you are providing and available privacy settings (e.g., the ability to turn off chat history in ChatGPT).
  • ACKNOWLEDGEMENT: Give clear guidance to students on how to acknowledge GenAI output. Openly acknowledge your own use of GenAI tool use in your teaching and scholarship (e.g., how you have used it to design learning materials and assessments).
  • ASSESSMENT: Think more deeply than ever (D'Agostino, 2023) about the learning outcomes of your course and how your assessments align with those outcomes. Identify the cognitive tasks your students need to perform without assistance (Bearman et al., 2023).
  • SPACE FOR FAILURE: Encourage productive struggle and learning from failure by allowing resubmissions/rewrites where feasible (see the linked slide in this resource) (Trust, n.d.). Fear of failure can be a factor in a student’s decision to use GenAI in ways that may bypass learning.

Suggestions for Students

  • ETHICS: Ask yourself whether you are comfortable with the ethical implications of using GenAI tools (e.g., environmental sustainability, unethical labour practices by tech companies, and bias in GenAI training data and discrimination in their output). See the Challenges and Ethical Implications section of this page for more information.
  • INSTRUCTOR EXPECTATIONS: For every assignment and test, make sure you understand your instructor’s expectations with respect to GenAI use. Check your course outline and assignment guidelines for this information. If you are unsure, ask your instructor. Where GenAI use is allowed, be sure to check expectations for acknowledging tool use.
  • ACADEMIC INTEGRITY: To learn more about academic integrity and what constitutes academic misconduct, complete MRU’s online training module. (Log in using your credentials, and then select the “Enroll in Course” button. If you’re already enrolled, you’ll see “Open Course.”)
  • EXPERIMENT: Take the time to experiment with GenAI tools to better understand what they can and cannot do. Critically analyze the output; sometimes it looks great on the surface, but not when you look more deeply. These tools are great synthesizers, but the critical thinker is you.
  • IMPLICATIONS FOR YOUR LEARNING: Before using a GenAI tool for a particular task, ask yourself how it will affect your learning. Will it enhance learning, or diminish it? Will it give you opportunities to think more deeply or less deeply? In the case of using GenAI for writing, be aware of how using the tool could impact your own writer’s voice.
  • PRIVACY: Ask yourself whether the information you are feeding into the GenAI tool is even yours to share. Do you have the appropriate rights or permissions? If you do, could sharing this information impact you negatively in the future?
  • APPLICATIONS IN THE WORKPLACE: Be curious about how GenAI tools are being used by professionals in your discipline. Ask your professors, and ask people in your network.

GenAI and the Law

Copyright Law

Canadian copyright law implies that an AI cannot own copyright in a creative work. Determining the author of an AI-created work will require a legislative amendment and careful consideration of who (or what) can author AI-generated works. In 2021, the Government of Canada released A Consultation on a Modern Copyright Framework for Artificial Intelligence and the Internet of Things (Government of Canada, 2021), which gathered public feedback on potential amendments to the Copyright Act regarding AI. Following this consultation, the Government of Canada released the Consultation on Copyright in the Age of Generative Artificial Intelligence, which will inform the government’s policy development process.

Terms and Conditions

If you plan to use GenAI tools, ensure you have read and understood the tool's Terms and Conditions. For clarification, reach out to the MRU Copyright Advisor.

Artificial Intelligence and Data Act (AIDA)

AIDA is a part of the Digital Charter Implementation Act and is currently working its way through the House of Commons under Bill C-27. AIDA is meant to create a “new regulatory system designed to guide AI innovation in a positive direction, and to encourage the responsible adoption of AI technologies by Canadians and Canadian businesses” (Government of Canada, n.d.). AIDA would require that appropriate measures be put in place to identify, assess, and mitigate risks of harm or biased output prior to the system being available to the public. These obligations would be guided by the following principles:

  • Human oversight & monitoring
  • Transparency
  • Fairness and equity
  • Safety
  • Accountability
  • Validity & robustness

Upcoming Events

Using AI in tried-and-tested pedagogical approaches to enhance learning: An exploratory study (Celebrate! Teaching & Learning at MRU)

Date: Thursday May 9, 2024
Time: 10:00 - 10:25am
Presenter: Uthpala Senarathne Tennakoon
Where: Ross Glen Hall (EC 1050)

Artificial intelligence (AI) took to new heights with the release of ChatGPT in November 2022, and an immediate impact was felt in the education sector. AI has various potential educational uses, including improving productivity, learning outcomes, personalized instruction, instant feedback, and student engagement. However, since the introduction of ChatGPT, most universities have struggled to understand, monitor, and limit students’ misuse of the technology. As we prepare our students to succeed in this changing world, educators must adopt technological advancements to enhance the student learning experience. Additionally, students need to be made aware of the drawbacks and limitations of these technologies as they navigate these uncertain and emerging trends. There is a lack of empirical evidence on the use of AI tools in undergraduate education and the effectiveness of such use. This session will present the results of an ongoing exploratory study that examines the effectiveness of intentional and strategically designed AI use in the classroom.

Student-to-Student Connections: Developing Artificial Intelligence Guidelines for an International Audience (Celebrate! Teaching and Learning at MRU)

Date: Thursday May 9, 2024
Time: 10:30 - 10:55am
Presenters: Matt Bondea, Casey Buss
Where: Ross Glen Hall (EC 1050)

This session sheds light on the experience of being part of an international student working group under the direction of the International Center for Academic Integrity for developing AI policies from a student’s perspective. It will focus on three specific areas: what the group accomplished, what the experience taught them, and the key principles they advocate for in the development of AI policies and initiatives. This includes highlighting an infographic and mindmap created to illustrate the proper use of AI in academic life from a student's perspective. The session will also explore the challenges of group work and tools the group found helpful for accomplishing their goals. Lastly, they will discuss universal principles found to be integral when drafting AI policies and initiatives.

Living into AI in Higher Education: Exploring the Burning Questions that Remain (Celebrate! Teaching and Learning at MRU)

Date: Thursday May 9, 2024
Time: 1:30 - 1:55pm
Presenters: Joel Blechinger, David Hyttenrauch, Andrea Phillipson, Silvia Rossi, Erika Smith
Where: Ross Glen Hall (EC 1050)

Since the introduction of large-scale access to generative Artificial Intelligence (genAI) over the past year-and-a-half, those working and learning in higher education have had an opportunity to “live into” the potential benefits and challenges presented by these new tools. In this roundtable discussion, participants will have an opportunity to discuss some burning, critical questions that remain at the fore of their experience, and explore implications for the ways in which genAI has and will continue to impact teaching and learning in the undergraduate context.

Participants’ burning questions will include discussion of authentic and meaningful learning opportunities in the genAI context, how to help students develop a sound decision-making process for when and when not to use genAI tools, and the problems that arise when citing genAI output.

The AI Edge: Enhancing Communication Skills for the Digital Era (Celebrate! Teaching and Learning at MRU)

Date: Thursday May 9, 2024
Time: 2:00 - 2:55pm
Presenters: May Hall, Kris Hans, Brenda Lang
Where: Ross Glen Hall (EC 1050)

Kris Hans, Brenda Lang, and May Hall (Project Assistant) will share the transformative journey of integrating AI-powered tools into business communication education. Our project, funded by the Provost’s Teaching and Learning Enhancement Grant (TLEG), aimed to revolutionize MGMT 3210 - Business Communication Theory and Practice by incorporating AI to enrich student learning experiences. We will share insights from our journey, including the development and execution of AI-integrated exercises and assessments, and the crucial role of student feedback in shaping our approach. The presentation will highlight the effectiveness of these innovative methods in improving writing, critical thinking, and ethical considerations in using AI tools. Through engaging narratives and empirical evidence, we aim to spark discussions on the future of education in the rapidly evolving digital landscape, the pedagogical implications of AI integration, and strategies for overcoming challenges encountered along the way.

Recommended Readings and Resources

Developing evaluative judgement for a time of generative artificial intelligence

Bearman, M., Tai, J., Dawson, P., Boud, D., & Ajjawi, R. (2024). Assessment & Evaluation in Higher Education.

  • Bearman et al. argue that “generative AI can be a partner in the development of human evaluative judgement capability,” an essential and uniquely human ability.

How AI chatbots like ChatGPT or Bard work – visual explainer

Clarke, S., Milmo, D., & Blight, G. (2023, November 1). The Guardian.

  • Clarke et al. provide a visual walkthrough of how large language models work to predict the next word in a sequence of text.

Teaching and learning with artificial intelligence apps

Eaton, S., & Anselmo, L. (2023, January). Taylor Institute for Teaching and Learning.

  • Advice on using AI apps in the classroom.  “If we think of artificial intelligence apps as another tool that students can use to ethically demonstrate their knowledge and learning, then we can emphasize learning as a process not a product.”  

Evaluating the efficacy of AI content detection tools in differentiating between human and AI-generated text 

Elkhatat, A. M., Elsaid, K., & Almeer, S. (2023). International Journal for Educational Integrity.

  • This paper is an analysis of the AI content detection tools developed by OpenAI, Writer, Copyleaks, GPTZero, and CrossPlag, and their accuracy at detecting AI-generated text.

ENAI recommendations on the ethical use of artificial intelligence in education

Foltynek, T., Bjelobaba, S., Glendinning, I., Khan, Z. R., Santos, R., Pavletic, P., & Kravjar, J. (2023). International Journal for Educational Integrity, 19. Article 12.

  • The European Network for Academic Integrity shares its recommendations on the ethical use of AI in education.


Cohere AI CEO Aidan Gomez on the emerging legal and regulatory challenges for artificial intelligence [Audio podcast episode]

Geist, M. (Host). (2023, April 17). In Law Bytes. Michael Geist.

  • Law Bytes host Michael Geist is joined by Cohere AI CEO Aidan Gomez to discuss complex legal and regulatory issues related to AI.


An Indigenous perspective on generative AI [Audio podcast episode]

Hendrix, J. (Host). (2023, January 29). In The Sunday Show. Tech Policy Press.

  • Justin Hendrix interviews Michael Running Wolf, a PhD student in computer science at McGill University and a Northern Cheyenne and Lakota man. Michael Running Wolf is also the founder of the non-profit Indigenous in AI. He provides his perspective on generative AI.


AI observatory

Higher Education Strategy Associates. (n.d.). 

  • Higher Education Strategy Associates (HESA) launched this Observatory on AI Policies in Canadian Post-Secondary Education. HESA’s AI Observatory “will act as a Canadian clearinghouse for post-secondary institutions’ policies and guidelines with respect to AI.”


The mounting human and environmental costs of generative AI

Luccioni, S. (2023, April 12). Ars Technica.

  • Dr. Sasha Luccioni explores the human and environmental costs of generative AI.


Initial guidance for evaluating the use of AI in scholarship and creativity.

Modern Language Association and Conference on College Composition and Communication Joint Task Force on Writing and AI. (2024, January 28). 

  • The MLA-CCCC’s Joint Task Force on Writing and AI offers its “provisional guidance for evaluating the use of AI in Scholarship and Creativity, including basic standards for the ethical use of these technologies.”


OpenAI used Kenyan workers on less than $2 per hour to make ChatGPT less toxic

Perrigo, B. (2023, January 18). Time.

  • Important reporting in Time about the unethical labour practices that were used to train ChatGPT.


ChatGPT and Artificial Intelligence in higher education: Quick start guide

Sabzalieva, E., & Valentini, A. (2023). United Nations Educational, Scientific and Cultural Organization and UNESCO International Institute for Higher Education in Latin America and the Caribbean.

  • “The Quick Start Guide provides an overview of how ChatGPT works and explains how it can be used in higher education. The Quick Start Guide raises some of the main challenges and ethical implications of AI in higher education and offers practical steps that higher education institutions can take.”


Inside the secret list of websites that make AI like ChatGPT sound smart

Schaul, K., Chen, S. Y., & Tiku, N. (2023, April 19). The Washington Post.

  • Reporters from The Washington Post drill down into Google’s C4 data set, “a massive snapshot of the contents of 15 million websites that have been used to instruct some high-profile English-language AIs, called large language models, including Google’s T5 and Facebook’s LLaMA.”


AI dialogues [Audio podcast]

Verkoeyen, S. (Host). (2023–present). MacPherson Institute.

  • “AI Dialogues delves into the ethical and practical questions of generative AI for McMaster University and post-secondary education, bridging the gap between knowledgeable educators, students, and practitioners and those less familiar with AI technology. Each episode, we’ll explore the complexities of AI, its potential for innovation, and the challenges it poses. We’ll tackle questions like: How can AI enhance the learning experience? What are the ethical considerations? What’s the future of AI in education?”


Generative AI exists because of the transformer: This is how it works

Visual Storytelling Team & Murgia, M. (2023, September 11). Financial Times.

  • Similar to Clarke et al.’s article for The Guardian, this piece is a detailed explanation of transformer models with helpful visual representations of the different steps involved in text generation.


Case tracker: Artificial intelligence, copyrights and class actions

Weisenberger, T. M. (n.d.). BakerHostetler.

  • This page monitors ongoing copyright infringement lawsuits involving generative AI in the United States.

Additional Information

Office of Student Community Standards (OSCS)

The Office of Student Community Standards is responsible for promoting the rights and responsibilities of students through the administration of the Code of Student Community Standards and the Code of Student Academic Integrity. It also supports the MRU campus community in navigating conflict using various resolution pathways.

If you have questions or concerns about the use of GenAI in an assignment, course or academic assessment at MRU, please contact the Office of Student Community Standards.


Bali, M. (2024, February 26). Where are the crescents in AI? LSE Higher Education Blog.


Bearman, M., Ajjawi, R., Boud, D., Tai, J. & Dawson, P. (2023). CRADLE Suggests… assessment and genAI. Centre for Research in Assessment and Digital Learning, Deakin University, Melbourne, Australia.


Bearman, M., Tai, J., Dawson, P., Boud, D., & Ajjawi, R. (2024). Developing evaluative judgement for a time of generative artificial intelligence. Assessment & Evaluation in Higher Education.


Bill C-27, An Act to enact the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act and the Artificial Intelligence and Data Act and to make consequential and related amendments to other Acts, 1st Sess, 44th Parl, 2021.


Craig, C. J. (2021). AI and copyright. In F. Martin-Bariteau & T. Scassa (Eds.), Artificial intelligence and the law in Canada. LexisNexis Canada. 


Creative Commons. (2021, September 17). Government of Canada consultation on a modern copyright framework for artificial intelligence and the Internet of Things. Innovation, Science and Economic Development Canada. 


D’Agostino, S. (2023, January 12). ChatGPT advice academics can use now. Inside Higher Ed.


Eaton, S., & Anselmo, L. (2023, January). Teaching and learning with artificial intelligence apps. Taylor Institute for Teaching and Learning.


Fricke, V. (2022, October 17). The end of creativity?! – AI-generated content under the Canadian Copyright Act. McGill University. 


Furze, L. (2024, April 9). AI detection in education is a dead end. Leon Furze.


Gal, U. (2023, February 7). ChatGPT is a data privacy nightmare. If you’ve ever posted online, you ought to be concerned. The Conversation.


Government of Canada. (2021). A consultation on a modern copyright framework for artificial intelligence and the Internet of Things.


Government of Canada. (2023, March 14). The Artificial Intelligence and Data Act (AIDA) – Companion document. Innovation, Science and Economic Development Canada.


Grinschgl, S., & Neubauer, A. C. (2022). Supporting cognition with modern technology: Distributed cognition today and in an AI-enhanced future. Frontiers in Artificial Intelligence, 5, Article 908261.


Heidt, A. (2024, April 8). ‘Without these tools, I’d be lost’: How generative AI aids in accessibility. Nature.


Kumar, R., Mindzak, M., Eaton, S. E., & Morrison, R. (2022, May 17). AI & AI: Exploring the contemporary intersections of artificial intelligence and academic integrity [Conference presentation]. Canadian Society for the Study of Higher Education Annual Conference, Online.


Kwantlen Polytechnic University. (n.d.). Generative AI: An overview for teaching and learning. Retrieved October 11, 2023, from


Lucy, L., & Bamman, D. (2021). Gender and representation bias in GPT-3 generated stories. Proceedings of the Third Workshop on Narrative Understanding, 48–55. 


metaLAB at Harvard. (n.d.). AI pedagogy project.


NVIDIA. (n.d.). NVIDIA large language models (LLMs). Retrieved January 18, 2023, from 


Poisson, J. (Host). (2022, December 14). AI art and text is getting smarter, what comes next? [Audio podcast episode]. In Frontburner. CBC.


Sabzalieva, E., & Valentini, A. (2023). ChatGPT and Artificial Intelligence in higher education: Quick start guide. United Nations Educational, Scientific and Cultural Organization and UNESCO International Institute for Higher Education in Latin America and the Caribbean.


Snyder, K. (2023, February 3). We asked ChatGPT to write performance reviews and they are wildly sexist (and racist). Fast Company.


Trust, T. (n.d.). ChatGPT & education [Google slides].


UNESCO. (2022). K-12 AI curricula: A mapping of government-endorsed AI curricula. UNESDOC Digital Library. 


UNESCO. (2023). Guidance for generative AI in education and research. UNESDOC Digital Library.


University of Toronto. (n.d.). ChatGPT and generative AI in the classroom. Retrieved December 7, 2023, from


Upshall, M. (2022). An AI toolkit for libraries. Insights, 35(18).


de Vries, A. (2023). The growing energy footprint of artificial intelligence. Joule, 7, 1–4.


Ward, D., Gibbs, A., Henkel, T., Loshbaugh, H. G., Siering, G., Williamson, J., & Kayser, M. (2023, December 1). Indecision about AI in classes is so last week. Inside Higher Ed.