
Artificial Intelligence

Learn more about generative AI, its potential uses in teaching and learning, and the opportunities and challenges presented by this emerging technology.

About

The MRU Library, Academic Development Centre and Student Learning Services have collaborated to bring you the information on this living page, whose intent is to provide information and resources about generative AI and higher education to the MRU community. For concerns related to academic conduct, please contact the Office of Student Community Standards.

Last updated December 9, 2024. 

AI vs. GenAI

Artificial Intelligence: Machines that imitate some features of human intelligence, such as perception, learning, reasoning, problem-solving, language interaction and creative work (UNESCO, 2022).

Artificial Intelligence (AI)

A general term used to describe a number of different, specific systems. We encounter and use AI every day: from navigating maps on Google or Apple, to asking Siri or Alexa to set a timer, to searching a library catalogue. AI is a part of our lives. 

Generative AI (GenAI)

“A type of artificial intelligence that involves creating machines or computer programs that can generate new content, such as images, text, or music. Unlike traditional AI systems that rely on predefined rules or pre-existing data to make decisions, generative AI models use algorithms and neural networks to learn patterns and relationships in data and generate new outputs based on that learning” (Kwantlen Polytechnic University, n.d., p. 1).


Algorithm

The “brains” of an AI system, algorithms are a complex set of rules and decisions that determine which action the AI system takes. Algorithms can be rule-based, in which case human programmers specify the rules, or they can use machine learning to discover rules on their own.

Machine Learning (ML)

A field of study with a range of approaches to developing the algorithms used in AI systems. Machine learning algorithms can discover rules and patterns in data without a human specifying them, which can sometimes lead to the system perpetuating biases.
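To make the contrast concrete, here is a minimal, hypothetical sketch (all scores and labels below are invented for illustration): a rule-based system applies a threshold a human chose, while a simple learning routine derives its threshold from labelled training data — along with any skew that data happens to contain.

```python
# Hypothetical exam scores paired with labels (1 = pass), invented for illustration.
data = [(35, 0), (42, 0), (48, 0), (55, 1), (61, 1), (70, 1)]

# Rule-based approach: a human programmer specifies the rule directly.
def rule_based(score):
    return 1 if score >= 50 else 0

# Machine-learning approach: search the data for the threshold that best
# separates the two labels -- the "rule" is discovered, not hand-written.
def learn_threshold(examples):
    candidates = sorted(score for score, _ in examples)
    def accuracy(t):
        return sum((score >= t) == bool(label) for score, label in examples)
    return max(candidates, key=accuracy)

threshold = learn_threshold(data)  # derived entirely from the training data
```

If the training data were skewed — say, every passing example came from one group — the learned threshold would inherit that skew, which is the bias concern described above.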

Training Data

The data, generated by humans, used to train the algorithm or machine learning model. Training data is an essential component of the AI system, and it may carry the systemic biases of its sources into the system’s outputs.

For these and related definitions, browse the Glossary of Artificial Intelligence Terms for Educators (CIRCLS, n.d.).

Types of GenAI

There are many different types of generative AI that can create text, images, sound, video, and more. This section describes common types of generative AI and includes examples of tools.

Text Generators

Create new text that is similar to the data they were trained on. The training process involves consuming large amounts of text from webpages, books, and other sources, then analyzing that text to find patterns and relationships in human language. Because of this training process, these tools are commonly referred to as Large Language Models (LLMs). They use probability to predict which words should appear in sequence.
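The probabilistic next-word prediction described above can be sketched with a toy bigram model. This is a drastic simplification — real LLMs use neural networks trained on billions of words — and the tiny corpus below is invented purely for illustration:

```python
from collections import Counter, defaultdict

# A tiny, invented training corpus; real models ingest billions of words.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent next word and its estimated probability."""
    counts = following[word]
    best, n = counts.most_common(1)[0]
    return best, n / sum(counts.values())

# In this corpus, "the" is followed by "cat" twice, "mat" once, and
# "fish" once, so the model predicts "cat" with probability 0.5.
word, prob = predict_next("the")
```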

AI chatbots can produce essays, blogs, scripts, news articles, reflective statements, and even poetry.

Examples of tools: ChatGPT, Perplexity AI, and Microsoft Copilot.

Image Generators

Learn by analyzing datasets of images paired with captions or text descriptions. If a model knows what two different concepts are, like a cat and a skateboard, it can merge those concepts when prompted to create an image of a cat on a skateboard.

Generative AI image tools can produce diverse images in a range of styles and media, from photorealism and oil painting to anime.

Examples of tools: DALL·E, Midjourney and Stable Diffusion.

Sound and Music Generators

Analyze music tracks and metadata (artist name, album title, genre, year song was released, associated playlists) to identify patterns and features in particular music genres. They may also be trained on song lyrics.

Examples of tools: AIVA and Soundful.

Video Generators

Creating a video typically requires the use of audio, visual, and text elements. Some generative AI video programs have harvested existing videos to learn how to create new ones, while others combine separate audio, visual, and text sources to assemble new videos. There are even generative AI video programs trained to use video editing software, so they can apply effects to a video that you have created, such as adding captions, transitions, and animations.

Examples of tools: Runway Gen-1 and Invideo.

Research Discovery and Explanation Generators

Some generative AI tools can automate parts of the research process and make long, complex texts easier to decipher. This type of AI often analyzes research papers that users upload to extract key information or summarize a paper.

Examples of tools: Elicit and Scite.

Note: Many of these tools cost money to use or to access premium features, like more recent content and faster processing speeds. However, in some cases you can create a basic account for free or explore the tool with a short-term trial.

The text in this section was adapted from Types of Generative AI by the University of Alberta Library, which is licensed under CC BY-NC-SA 4.0.

Opportunities and Challenges

Emerging GenAI technologies present both opportunities and challenges for learners and educators.

Opportunities

GenAI tools offer opportunities to focus discussion and instruction on AI Literacy (Bali, 2024; Upshall, 2022). For example, activities where learners analyze GenAI output could help develop AI tool appraisal and critical thinking skills.

There is a wide range of potential uses of GenAI to improve teaching, many of which are still being explored (metaLAB at Harvard, n.d.). For example:

  • The rise of GenAI has prompted educators to rethink their assessment practices (Bearman et al., 2023; UNESCO, 2023, p. 37).
  • Educators could use GenAI tools as curriculum or course co-designers (UNESCO, 2023, p. 31). For example, a GenAI tool could help an instructor to draft learning outcomes for a course or for a specific assessment.
  • Educators could use GenAI tools as teaching assistants that could provide learners with individualized support (UNESCO, 2023, p. 31) that is personalized to their learning style, interests, abilities, and learning needs (Kwantlen Polytechnic University, n.d., p. 3).

GenAI tools may help augment learning. Mike Sharples has devised 10 roles that a GenAI tool could play in augmenting learning, including Possibility Engine, Personal Tutor, and Motivator (Sabzalieva & Valentini, 2023, p. 9). Interacting with GenAI tools can help learners to develop their evaluative judgement capabilities, essential for academic success and lifelong learning (Bearman et al., 2024).

GenAI may be used as an assistive tool for those with accessibility needs (Heidt, 2024; Kwantlen Polytechnic University, n.d.). This could include auto-generating captions or sign language interpretation for audio or visual content that lacks it, or generating audio descriptions of textual or visual material (UNESCO, 2023, p. 35).

Users may delegate certain tasks to GenAI to reduce cognitive demand, thereby freeing up the user’s time and effort for other tasks (Grinschgl & Neubauer, 2022).

Challenges and Ethical Implications

GenAI use must be paired with human verification. GenAI output should be considered one data point to be cross-checked against data from other credible sources. More recent GenAI models are less likely to generate citations for sources that do not exist, but it is always important to check 1) that a source is real and 2) that the GenAI output actually matches the cited source.

GenAI systems may be manipulated or used in unethical ways, such as when a student knowingly uses them to bypass learning. In addition, identifying when a learner has used GenAI-generated text in their writing can be very difficult, posing a challenge to educators (Elkhatat et al., 2023; Fowler, 2023; Furze, 2024; Kumar et al., 2022).

GenAI systems perpetuate existing human biases, as they generate outputs based on patterns in the data they were trained on (Scheuer-Larsen, 2023). For example, GenAI photo editing tools have exhibited racial biases (Poisson, 2022), and large language models such as ChatGPT and Gemini have perpetuated gender biases (UNESCO/IRCAI, 2024), racial biases (Snyder, 2023), and ability biases (Urbina et al., in press) in their outputs.

Developing laws and ongoing court cases regarding the use of GenAI tools and copyright currently create significant legal uncertainty in Canada and around the world. Some of the concerns include (University of Toronto, n.d.):

  • Input (i.e., training data): The legality of the content used to train AI models is unknown in some cases. A number of lawsuits originating in the US allege that GenAI tools infringe copyright, and it remains unclear if and how the fair use doctrine can be applied. To date, no GenAI lawsuits have been launched in Canada, so uncertainty remains regarding the extent to which existing exceptions in the copyright framework, such as fair dealing, apply to this activity.
  • Output (i.e., text, images, etc. generated by GenAI tools): The authorship and ownership of works created by AI are unclear. Traditionally, Canadian copyright law has held that an author must be a natural person (a human) who exercises skill and judgement in the creation of a work. Because generated content is likely to involve varying degrees of human input, it is unclear in Canada how the appropriate author and owner of such works will be determined.

Concerns have been raised about the environmental costs involved both in initial training of GenAI models and in their daily use once they have been rolled out to the public. Specifically, researchers are analyzing their energy, water, and mineral uses, and greenhouse gas emissions (Luccioni et al., 2024).

GenAI systems are trained on enormous datasets that may include personal information previously posted to the internet that could be used to identify individuals (Gal, 2023; Kwantlen Polytechnic University, n.d.). Additionally, there are considerable privacy concerns related to the information that users supply when prompting GenAI systems and that user information then being used to train the model in the future (Gal, 2023).

Suggestions for Use

Suggestions for Faculty
Ethics Ask yourself whether you are comfortable with the ethical implications of using GenAI tools (e.g., environmental sustainability, unethical labour practices by tech companies, and bias and discrimination). See the Challenges and Ethical Implications section of this page for more information.
Conversations with Students Make time in class to have open conversations with students about GenAI and the implications of its use in their academic work (Ward et al., 2023). Examples of questions:
  • What do you know about artificial intelligence tools?
  • How have you been using them?
  • What potential opportunities and challenges do you see?
Clear Expectations Mention GenAI tools explicitly in your course outline (see University of Alberta’s sample statements). Each time you introduce an assessment, clarify your expectations with respect to GenAI tool use and provide a rationale tied to the purpose of the assignment.
Academic Integrity Help students acquire a foundational understanding of academic integrity (e.g., have them complete MRU’s academic integrity online training module).
Experiment Engage, explore and experiment (Eaton & Anselmo, 2023). Trying out AI tools yourself will help you understand what is possible. Be aware of the data you are providing and available privacy settings (e.g., the ability to turn off chat history in ChatGPT).
Acknowledgement Give clear guidance to students on how to acknowledge GenAI output. Openly acknowledge your own use of GenAI tool use in your teaching and scholarship (e.g., how you have used it to design learning materials and assessments).
Assessment Think more deeply than ever (D'Agostino, 2023) about the learning outcomes of your course and how your assessments align with those outcomes. Identify the cognitive tasks your students need to perform without assistance (Bearman et al., 2023).
Space for Failure Encourage productive struggle and learning from failure by allowing resubmissions/rewrites where feasible (see the linked slide in this resource) (Trust, n.d.). Fear of failure can be a factor in a student’s decision to use GenAI in ways that may bypass learning.
Suggestions for Students
Ethics Ask yourself whether you are comfortable with the ethical implications of using GenAI tools (e.g., environmental sustainability, unethical labour practices by tech companies, and bias in GenAI training data and discrimination in their output). See the Challenges and Ethical Implications section of this page for more information.
Instructor Expectations For every assignment and test, make sure you understand your instructor’s expectations with respect to GenAI use. Check your course outline, and check assignment guidelines documents for this information. If you are unsure, ask your instructor. Where GenAI use is allowed, be sure to check expectations for acknowledgement of tool use.
Academic Integrity To learn more about academic integrity and what constitutes academic misconduct, complete MRU’s online training module. (Log in using your @mtroyal.ca credentials, and then select the “Enroll in Course” button. If you’re already enrolled, you’ll see “Open Course.”).
Experiment Take the time to experiment with GenAI tools to better understand what they can and cannot do. Critically analyze the output; sometimes it looks great on the surface, but not when you look more deeply. These tools are great synthesizers, but the critical thinker is you.
Implications for your Learning Before using a GenAI tool for a particular task, ask yourself how it will affect your learning. Will it enhance learning, or diminish it? Will it give you opportunities to think more deeply or less deeply? In the case of using GenAI for writing, be aware of how using the tool could impact your own writer’s voice.
Privacy Ask yourself whether the information you are feeding into the GenAI tool is even yours to share. Do you have the appropriate rights or permissions? If you do, could sharing this information impact you negatively in the future?
Applications in the Workplace Be curious about how GenAI tools are being used by professionals in your discipline. Ask your professors, and ask people in your network.

Recommended Readings and Resources

Understanding the Tech

How AI chatbots like ChatGPT or Bard work – visual explainer
Clarke, S., Milmo, D., & Blight, G. (2023, November 1). The Guardian.
Clarke et al. provide a visual walkthrough of how large language models work to predict the next word in a sequence of text.

Generative AI exists because of the transformer: This is how it works
Visual Storytelling Team & Murgia, M. (2023, September 11). Financial Times.
Similar to Clarke et al.’s article for The Guardian, this piece is a detailed explanation of transformer models with helpful visual representations of the different steps involved in text generation.

GenAI in Teaching and Learning

AI observatory
Higher Education Strategy Associates. (n.d.).
Higher Education Strategy Associates (HESA) launched this Observatory on AI Policies in Canadian Post-Secondary Education. HESA’s AI Observatory “will act as a Canadian clearinghouse for post-secondary institutions’ policies and guidelines with respect to AI.”

AI dialogues [Audio podcast]
Verkoeyen, S. (Host). (2023–present). MacPherson Institute.
“AI Dialogues delves into the ethical and practical questions of generative AI for McMaster University and post-secondary education, bridging the gap between knowledgeable educators, students, and practitioners and those less familiar with AI technology. Each episode, we’ll explore the complexities of AI, its potential for innovation, and the challenges it poses. We’ll tackle questions like: How can AI enhance the learning experience? What are the ethical considerations? What’s the future of AI in education?”

ChatGPT and Artificial Intelligence in higher education: Quick start guide
Sabzalieva, E., & Valentini, A. (2023). United Nations Educational, Scientific and Cultural Organization and UNESCO International Institute for Higher Education in Latin America and the Caribbean.
“The Quick Start Guide provides an overview of how ChatGPT works and explains how it can be used in higher education. The Quick Start Guide raises some of the main challenges and ethical implications of AI in higher education and offers practical steps that higher education institutions can take.”

Teaching and learning with artificial intelligence apps
Eaton, S., & Anselmo, L. (2023, January). Taylor Institute for Teaching and Learning.
Advice on using AI apps in the classroom. “If we think of artificial intelligence apps as another tool that students can use to ethically demonstrate their knowledge and learning, then we can emphasize learning as a process not a product.”

Developing evaluative judgement for a time of generative artificial intelligence
Bearman, M., Tai, J., Dawson, P., Boud, D., & Ajjawi, R. (2024). Assessment & Evaluation in Higher Education.
Bearman et al. argue that “generative AI can be a partner in the development of human evaluative judgement capability,” an essential and uniquely human ability.

ENAI recommendations on the ethical use of artificial intelligence in education
Foltynek, T., Bjelobaba, S., Glendinning, I., Khan, Z. R., Santos, R., Pavletic, P., & Kravjar, J. (2023). International Journal for Educational Integrity, 19. Article 12.
The European Network for Academic Integrity shares its recommendations on the ethical use of AI in education.

Initial guidance for evaluating the use of AI in scholarship and creativity.
Modern Language Association and Conference on College Composition and Communication Joint Task Force on Writing and AI. (2024, January 28).
The MLA-CCCC’s Joint Task Force on Writing and AI offers its “provisional guidance for evaluating the use of AI in Scholarship and Creativity, including basic standards for the ethical use of these technologies.”

The problem with “perfect” answers: GenAI and academic research tools
Munoz, R. (2024, October 15). EDUCAUSE Review.
Munoz explores GenAI for academic research and Gen Z information behaviours in this short, thoughtful piece.

AI Misuse

Evaluating the efficacy of AI content detection tools in differentiating between human and AI-generated text
Elkhatat, A. M., Elsaid, K., & Almeer, S. (2023). International Journal for Educational Integrity.
This paper is an analysis of the AI content detection tools developed by OpenAI, Writer, Copyleaks, GPTZero, and CrossPlag, and their accuracy at detecting AI-generated text.

Validity matters more than cheating
Dawson, R., Bearman, M., Dollinger, M., & Boud, D. (2024). Assessment & Evaluation in Higher Education, 1-12.
The authors argue that over-emphasizing cheating can actually harm assessment validity, which should be the central concern of educators. They further argue that validity subsumes the concept of cheating, so addressing validity will also address cheating.

Indigenous Perspectives on AI

An Indigenous perspective on generative AI [Audio podcast episode]
Hendrix, J. (Host). (2023, January 29). In The Sunday Show. Tech Policy Press.
Justin Hendrix interviews Michael Running Wolf, a PhD student in computer science at McGill University and a Northern Cheyenne and Lakota man. Michael Running Wolf is also the founder of the non-profit Indigenous in AI. He provides his perspective on generative AI.

#DataBack Vol. 2: Truth Before “Indigenous AI”
Animikii. (2024).
“A critical review of Generative AI for Indigenous-focused organizations, governments, and anyone who wants to better understand and protect their rights and privacy.”

Indigenous AI
Indigenous Protocol and Artificial Intelligence Working Group. (n.d.).
“The Indigenous Protocol and Artificial Intelligence (A.I.) Working Group develops new conceptual and practical approaches to building the next generation of A.I. systems.”

Sustainability and Ethical Concerns

AI is a lot of work
Dzieza, J. (2023, June 20). The Verge.
Dzieza explores the “vast tasker underclass” that powers AI.

The environmental impacts of AI – Primer
Luccioni, S., Trevelin, B., & Mitchell, M. (2024, September 3). Hugging Face.
Luccioni, Trevelin, and Mitchell wrote this primer “to shed light on the environmental impacts of the full AI lifecycle, describing which kinds of impacts are at play when, and why they matter.”

OpenAI used Kenyan workers on less than $2 per hour to make ChatGPT less toxic
Perrigo, B. (2023, January 18). Time.
Important reporting in Time about the unethical labour practices that were used to train ChatGPT.

GenAI and Copyright

Case tracker: Artificial intelligence, copyrights and class actions
Weisenberger, T. M. (n.d.). BakerHostetler.
This page monitors ongoing copyright infringement lawsuits involving generative AI in the United States.

DAIL – the Database of AI Litigation
Ethical Tech Initiative of DC at George Washington Law School. (n.d.).
This page monitors ongoing copyright infringement lawsuits involving AI more broadly in the United States.

Cohere AI CEO Aidan Gomez on the emerging legal and regulatory challenges for artificial intelligence [Audio podcast episode]
Geist, M. (Host). (2023, April 17). In Law Bytes. Michael Geist.
Law Bytes host Michael Geist is joined by Cohere AI CEO Aidan Gomez to discuss complex legal and regulatory issues related to AI.

Inside the secret list of websites that make AI like ChatGPT sound smart
Schaul, K., Chen, S. Y., & Tiku, N. (2023, April 19). The Washington Post.
Reporters from The Washington Post drill down into Google’s C4 data set, “a massive snapshot of the contents of 15 million websites that have been used to instruct some high-profile English-language AIs, called large language models, including Google’s T5 and Facebook’s LLaMA.”

Additional Information

Office of Student Community Standards (OSCS)

The Office of Student Community Standards is responsible for promoting the rights and responsibilities of students through the administration of the Code of Student Community Standards and the Code of Student Academic Integrity. They also support the MRU campus community in navigating conflict using various resolution pathways.

If you have questions or concerns about the use of GenAI in an assignment, course or academic assessment at MRU, please contact the Office of Student Community Standards by emailing studentcommunitystandards@mtroyal.ca.

References