Artificial Intelligence
Learn more about generative AI, its potential uses in teaching and learning, and the opportunities and challenges presented by this emerging technology.
The MRU Library, Academic Development Centre and Student Learning Services have collaborated on this living page, which provides information and resources about generative AI in higher education to the MRU community. For concerns related to academic conduct, please contact the Office of Student Community Standards.
Last updated October 10, 2024.
AI vs. GenAI
Artificial intelligence: Machines that imitate some features of human intelligence, such as perception, learning, reasoning, problem-solving, language interaction and creative work (UNESCO, 2022).
Artificial intelligence (AI) is a general term used to describe a number of different, specific systems. We encounter and use AI every day: from navigating maps on Google or Apple, to asking Siri or Alexa to set a timer, to searching a library catalogue. AI is a part of our lives.
Generative AI (GenAI) is “a type of artificial intelligence that involves creating machines or computer programs that can generate new content, such as images, text, or music. Unlike traditional AI systems that rely on predefined rules or pre-existing data to make decisions, generative AI models use algorithms and neural networks to learn patterns and relationships in data and generate new outputs based on that learning” (Kwantlen Polytechnic University, n.d., p. 1).
Algorithm
The “brains” of an AI system, algorithms are a complex set of rules and decisions that determine which action the AI system takes. Algorithms may be rule-based, in which case human programmers input the rules, or machine learning algorithms that discover their own rules.
Machine Learning (ML)
A field of study with a range of approaches to developing the algorithms used in AI systems. Machine learning algorithms can discover rules and patterns in data without a human specifying them, which can sometimes lead to the system perpetuating biases.
Training Data
The data, generated by humans, used to train the algorithm or machine learning model. Training data is an essential component of an AI system, and may perpetuate the systemic biases of the source data when the system is implemented.
For these and related definitions, browse the Glossary of Artificial Intelligence Terms for Educators (CIRCLS, n.d.).
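To make the relationship between these terms concrete, here is a minimal sketch, assuming the scikit-learn Python library and a tiny hypothetical dataset, of a machine learning model discovering patterns from human-labelled training data rather than following rules a programmer wrote:

```python
# Minimal sketch: a machine learning algorithm infers its own "rules"
# from training data instead of having a human program them in.
# Assumes scikit-learn is installed; the tiny dataset is hypothetical.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Training data: human-generated examples. Whatever patterns (including
# biases) these examples contain, the trained model will reproduce.
texts = ["great course", "terrible lecture", "loved the lab", "boring seminar"]
labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative

model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(texts, labels)  # the "learning" step: patterns are discovered, not programmed

print(model.predict(["great lab"]))  # -> [1], inferred from the training patterns
```

The model's behaviour is determined entirely by its training data: change the examples and the learned "rules" change with them, which is how biased source data becomes biased output.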
Functions of GenAI
GenAI tools perform a wide variety of practical functions and tasks (examples retrieved from NVIDIA, n.d.; Upshall, 2022). For a comprehensive directory of AI tools, explore Futurepedia and Ithaka S+R's Generative AI Product Tracker.
| Function | Example |
| --- | --- |
| Generate text from a prompt | “Write a paper on the impact of fake news on education” |
| Synthesize information | e.g., summarize a text, combine information from multiple sources |
| Create an image or digital illustration from a prompt | “A Cubist oil painting of a couple lounging next to a creek” |
| Generate computer code | e.g., generate new code from a comment, fix flawed code |
| Translate text | “Translate the following text from Turkish to English” |
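As an illustration of the first function in the table, here is a minimal sketch of generating text from a prompt, assuming the Hugging Face transformers library and the small (and dated) GPT-2 model; any text-generation model would work the same way:

```python
# Minimal sketch: generate text from a prompt.
# Assumes the Hugging Face transformers library and the GPT-2 model.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# The model extends the prompt by repeatedly predicting the next token.
result = generator(
    "The impact of fake news on education is",
    max_new_tokens=40,
)
print(result[0]["generated_text"])
```

The same prompt-in, generation-out pattern underlies the other functions in the table; only the model and the modality (text, image, code) change.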
Opportunities and Challenges
Emerging GenAI technologies present both opportunities and challenges for learners and educators.
Opportunities
- AI LITERACY DEVELOPMENT: GenAI tools offer opportunities to focus discussion and instruction on AI Literacy (Bali, 2024; Upshall, 2022). For example, activities where learners analyze GenAI output could help develop AI tool appraisal and critical thinking skills.
- IMPROVE TEACHING: There is a wide range of potential uses of GenAI to improve teaching, many of which are still being explored (metaLAB at Harvard, n.d.). For example:
- The rise of GenAI has prompted educators to rethink their assessment practices (Bearman et al., 2023; UNESCO, 2023, p. 37).
- Educators could use GenAI tools as curriculum or course co-designers (UNESCO, 2023, p. 31). For example, a GenAI tool could help an instructor to draft learning outcomes for a course or for a specific assessment.
- Educators could use GenAI tools as teaching assistants that could provide learners with individualized support (UNESCO, 2023, p. 31) that is personalized to their learning style, interests, abilities, and learning needs (Kwantlen Polytechnic University, n.d., p. 3).
- IMPROVE LEARNING: GenAI tools may help augment learning. Mike Sharples has devised 10 roles that a GenAI tool could play in augmenting learning, including Possibility Engine, Personal Tutor, and Motivator (Sabzalieva & Valentini, 2023, p. 9). Interacting with GenAI tools can help learners to develop their evaluative judgement capabilities, essential for academic success and lifelong learning (Bearman et al., 2024).
- ACCESSIBILITY: GenAI may be used as an assistive tool for those with accessibility needs (Heidt, 2024; Kwantlen Polytechnic University, n.d.). This could include auto-generating captions or sign language interpretation for audio or visual content that lacks it, or generating audio descriptions of textual or visual material (UNESCO, 2023, p. 35); a minimal captioning sketch follows this list.
- COGNITIVE OFFLOADING: Users may delegate certain tasks to GenAI to reduce cognitive demand, thereby freeing up the user’s time and effort for other tasks (Grinschgl & Neubauer, 2022).
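As a concrete example of the accessibility point above, here is a minimal captioning sketch, assuming the Hugging Face transformers library and OpenAI's small Whisper speech-recognition model; the audio file name is hypothetical:

```python
# Minimal sketch: auto-generate a transcript (the raw material for captions)
# from an audio file. Assumes the transformers library and the whisper-tiny
# model; "lecture.wav" is a hypothetical local file.
from transformers import pipeline

transcriber = pipeline("automatic-speech-recognition", model="openai/whisper-tiny")

transcript = transcriber("lecture.wav")["text"]
print(transcript)  # plain-text transcript, a starting point for captions
```

As with any GenAI output, the transcript should be reviewed by a human before being published as captions.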
Challenges and Ethical Implications
- UNRELIABLE CONTENT: GenAI use must be paired with human verification. GenAI output should be treated as one data point to be cross-checked against data from other credible sources. More recent GenAI models are less likely to generate citations for sources that do not exist, but it is always important to check 1) that a source is real and 2) that the GenAI output actually matches the cited source.
- ACADEMIC MISCONDUCT: GenAI systems may be manipulated or used in unethical ways, such as when a student knowingly uses them to bypass learning. In addition, identifying when a learner has used GenAI-generated text in their writing can be very difficult, posing a challenge to educators (Elkhatat et al., 2023; Fowler, 2023; Furze, 2024; Kumar et al., 2022).
- BIAS AND DISCRIMINATION: GenAI systems perpetuate existing human biases, as they generate outputs based on patterns in the data they were trained on (Scheuer-Larsen, 2023). For example, GenAI photo editing tools have exhibited racial biases (Poisson, 2022), and large language models such as ChatGPT and Gemini have perpetuated gender biases (UNESCO/IRCAI, 2024), racial biases (Snyder, 2023), and ability biases (Urbina et al., in press) in their outputs.
- INTELLECTUAL PROPERTY: Evolving legislation and ongoing court cases regarding the use of GenAI tools and copyright currently create significant legal uncertainty in Canada and around the world. Some of the concerns include (University of Toronto, n.d.):
- Input (i.e., training data): The legality of the content used to train AI models is unknown in some cases. A number of lawsuits originating in the US allege that GenAI tools infringe copyright, and it remains unclear if and how the fair use doctrine can be applied. To date, no GenAI lawsuits have been filed in Canada, so uncertainty remains about the extent to which existing exceptions in the copyright framework, such as fair dealing, apply to this activity.
- Output (i.e., text, images, etc. generated by GenAI tools): Authorship and ownership of works created by AI are unclear. Traditionally, Canadian copyright law has indicated that an author must be a natural person (human) who exercises skill and judgement in the creation of a work. Because generated content is likely to involve varying degrees of human input, it is unclear how Canadian courts will determine the appropriate author and owner of such works.
- SUSTAINABILITY: Concerns have been raised about the environmental costs involved both in the initial training of GenAI models and in their daily use once they have been rolled out to the public. Specifically, researchers are analyzing these systems' energy, water, and mineral use, as well as their greenhouse gas emissions (Luccioni et al., 2024).
- PRIVACY: GenAI systems are trained on enormous datasets that may include personal information previously posted to the internet that could be used to identify individuals (Gal, 2023; Kwantlen Polytechnic University, n.d.). Additionally, there are considerable privacy concerns about the information that users supply when prompting GenAI systems, and about that information then being used to train future versions of the model (Gal, 2023).
Suggestions for Use
Suggestions for Faculty
- ETHICS: Ask yourself whether you are comfortable with the ethical implications of using GenAI tools (e.g., environmental sustainability, unethical labour practices by tech companies, and bias and discrimination). See the Challenges and Ethical Implications section of this page for more information.
- CONVERSATIONS WITH STUDENTS: Make time in class to have open conversations with students about GenAI and the implications of its use in their academic work (Ward et al., 2023). Examples of questions:
- What do you know about artificial intelligence tools?
- How have you been using them?
- What potential opportunities and challenges do you see?
- CLEAR EXPECTATIONS: Mention GenAI tools explicitly in your course outline (see University of Alberta’s sample statements). Each time you introduce an assessment, clarify your expectations with respect to GenAI tool use and provide a rationale tied to the purpose of the assignment.
- ACADEMIC INTEGRITY: Help students acquire a foundational understanding of academic integrity (e.g., have them complete MRU’s academic integrity online training module).
- EXPERIMENT: Engage, explore and experiment (Eaton & Anselmo, 2023). Trying out AI tools yourself will help you understand what is possible. Be aware of the data you are providing and available privacy settings (e.g., the ability to turn off chat history in ChatGPT).
- ACKNOWLEDGEMENT: Give clear guidance to students on how to acknowledge GenAI output. Openly acknowledge your own use of GenAI tools in your teaching and scholarship (e.g., how you have used them to design learning materials and assessments).
- ASSESSMENT: Think more deeply than ever (D'Agostino, 2023) about the learning outcomes of your course and how your assessments align with those outcomes. Identify the cognitive tasks your students need to perform without assistance (Bearman et al., 2023).
- SPACE FOR FAILURE: Encourage productive struggle and learning from failure by allowing resubmissions/rewrites where feasible (see the linked slide in this resource) (Trust, n.d.). Fear of failure can be a factor in a student’s decision to use GenAI in ways that may bypass learning.
Suggestions for Students
- ETHICS: Ask yourself whether you are comfortable with the ethical implications of using GenAI tools (e.g., environmental sustainability, unethical labour practices by tech companies, and bias in GenAI training data and discrimination in their output). See the Challenges and Ethical Implications section of this page for more information.
- INSTRUCTOR EXPECTATIONS: For every assignment and test, make sure you understand your instructor’s expectations with respect to GenAI use. Check your course outline and assignment guidelines for this information. If you are unsure, ask your instructor. Where GenAI use is allowed, be sure to check expectations for acknowledgement of tool use.
- ACADEMIC INTEGRITY: To learn more about academic integrity and what constitutes academic misconduct, complete MRU’s online training module. (Log in using your @mtroyal.ca credentials, and then select the “Enroll in Course” button. If you’re already enrolled, you’ll see “Open Course.”).
- EXPERIMENT: Take the time to experiment with GenAI tools to better understand what they can and cannot do. Critically analyze the output; sometimes it looks great on the surface, but not when you look more deeply. These tools are great synthesizers, but the critical thinker is you.
- IMPLICATIONS FOR YOUR LEARNING: Before using a GenAI tool for a particular task, ask yourself how it will affect your learning. Will it enhance learning, or diminish it? Will it give you opportunities to think more deeply or less deeply? In the case of using GenAI for writing, be aware of how using the tool could impact your own writer’s voice.
- PRIVACY: Ask yourself whether the information you are feeding into the GenAI tool is even yours to share. Do you have the appropriate rights or permissions? If you do, could sharing this information impact you negatively in the future?
- APPLICATIONS IN THE WORKPLACE: Be curious about how GenAI tools are being used by professionals in your discipline. Ask your professors, and ask people in your network.
GenAI and the Law
Copyright Law
Canadian copyright law implies that non-human entities (i.e., AI) cannot own copyright in creative works. Determining the author of an AI-created work would require a legislative amendment and careful consideration of who (or what) can author AI-generated works. In 2021, the Government of Canada released A Consultation on a Modern Copyright Framework for Artificial Intelligence and the Internet of Things (Government of Canada, 2021), which aimed to gather public feedback on potential legislative amendments to the Copyright Act regarding AI. Following this consultation, the Government of Canada released the Consultation on Copyright in the Age of Generative Artificial Intelligence, which will inform the government's policy development process.
Are there copyright considerations I need to think about when using generative AI tools?
Terms and Conditions
If you plan to use GenAI tools, ensure you have read and understood the tool's Terms and Conditions. For any clarification, reach out to the MRU Copyright Advisor (mrucopyright@mtroyal.ca).
Artificial Intelligence and Data Act (AIDA)
AIDA is a part of the Digital Charter Implementation Act and is currently working its way through the House of Commons under Bill C-27. AIDA is meant to create a “new regulatory system designed to guide AI innovation in a positive direction, and to encourage the responsible adoption of AI technologies by Canadians and Canadian businesses” (Government of Canada, n.d.). AIDA would require that appropriate measures be put in place to identify, assess, and mitigate risks of harm or biased output prior to the system being available to the public. These obligations would be guided by the following principles:
- Human oversight & monitoring
- Transparency
- Fairness and equity
- Safety
- Accountability
- Validity & robustness
Recommended Readings and Resources
Developing evaluative judgement for a time of generative artificial intelligence
Bearman, M., Tai, J., Dawson, P., Boud, D., & Ajjawi, R. (2024). Assessment & Evaluation in Higher Education.
- Bearman et al. argue that “generative AI can be a partner in the development of human evaluative judgement capability,” an essential and uniquely human ability.
How AI chatbots like ChatGPT or Bard work – visual explainer
Clarke, S., Milmo, D., & Blight, G. (2023, November 1). The Guardian.
- Clarke et al. provide a visual walkthrough of how large language models work to predict the next word in a sequence of text.
Teaching and learning with artificial intelligence apps
Eaton, S., & Anselmo, L. (2023, January). Taylor Institute for Teaching and Learning.
- Advice on using AI apps in the classroom. “If we think of artificial intelligence apps as another tool that students can use to ethically demonstrate their knowledge and learning, then we can emphasize learning as a process not a product.”
Evaluating the efficacy of AI content detection tools in differentiating between human and AI-generated text
Elkhatat, A. M., Elsaid, K., & Almeer, S. (2023). International Journal for Educational Integrity.
- This paper is an analysis of the AI content detection tools developed by OpenAI, Writer, Copyleaks, GPTZero, and CrossPlag, and their accuracy at detecting AI-generated text.
ENAI recommendations on the ethical use of artificial intelligence in education
Foltynek, T., Bjelobaba, S., Glendinning, I., Khan, Z. R., Santos, R., Pavletic, P., & Kravjar, J. (2023). International Journal for Educational Integrity, 19, Article 12.
- The European Network for Academic Integrity shares its recommendations on the ethical use of AI in education.
Cohere AI CEO Aidan Gomez on the emerging legal and regulatory challenges for artificial intelligence [Audio podcast episode]
Geist, M. (Host). (2023, April 17). In Law Bytes. Michael Geist.
- Law Bytes host Michael Geist is joined by Cohere AI CEO Aidan Gomez to discuss complex legal and regulatory issues related to AI.
An Indigenous perspective on generative AI [Audio podcast episode]
Hendrix, J. (Host). (2023, January 29). In The Sunday Show. Tech Policy Press.
- Justin Hendrix interviews Michael Running Wolf, a PhD student in computer science at McGill University and a Northern Cheyenne and Lakota man. Michael Running Wolf is also the founder of the non-profit Indigenous in AI. He provides his perspective on generative AI.
AI observatory
Higher Education Strategy Associates. (n.d.).
- Higher Education Strategy Associates (HESA) launched this Observatory on AI Policies in Canadian Post-Secondary Education. HESA’s AI Observatory “will act as a Canadian clearinghouse for post-secondary institutions’ policies and guidelines with respect to AI.”
The mounting human and environmental costs of generative AI
Luccioni, S. (2023, April 12). Ars Technica.
- Dr. Sasha Luccioni explores the human and environmental costs of generative AI.
Initial guidance for evaluating the use of AI in scholarship and creativity
Modern Language Association and Conference on College Composition and Communication Joint Task Force on Writing and AI. (2024, January 28).
- The MLA-CCCC’s Joint Task Force on Writing and AI offers its “provisional guidance for evaluating the use of AI in Scholarship and Creativity, including basic standards for the ethical use of these technologies.”
OpenAI used Kenyan workers on less than $2 per hour to make ChatGPT less toxic
Perrigo, B. (2023, January 18). Time.
- Important reporting in Time about the unethical labour practices that were used to train ChatGPT.
ChatGPT and Artificial Intelligence in higher education: Quick start guide
Sabzalieva, E., & Valentini, A. (2023). United Nations Educational, Scientific and Cultural Organization and UNESCO International Institute for Higher Education in Latin America and the Caribbean.
- “The Quick Start Guide provides an overview of how ChatGPT works and explains how it can be used in higher education. The Quick Start Guide raises some of the main challenges and ethical implications of AI in higher education and offers practical steps that higher education institutions can take.”
Inside the secret list of websites that make AI like ChatGPT sound smart
Schaul, K., Chen, S. Y., & Tiku, N. (2023, April 19). The Washington Post.
- Reporters from The Washington Post drill down into Google’s C4 data set, “a massive snapshot of the contents of 15 million websites that have been used to instruct some high-profile English-language AIs, called large language models, including Google’s T5 and Facebook’s LLaMA.”
AI dialogues [Audio podcast]
Verkoeyen, S. (Host). (2023–present). MacPherson Institute.
- “AI Dialogues delves into the ethical and practical questions of generative AI for McMaster University and post-secondary education, bridging the gap between knowledgeable educators, students, and practitioners and those less familiar with AI technology. Each episode, we’ll explore the complexities of AI, its potential for innovation, and the challenges it poses. We’ll tackle questions like: How can AI enhance the learning experience? What are the ethical considerations? What’s the future of AI in education?”
Generative AI exists because of the transformer: This is how it works
Visual Storytelling Team & Murgia, M. (2023, September 11). Financial Times.
- Similar to Clarke et al.’s article for The Guardian, this piece is a detailed explanation of transformer models with helpful visual representations of the different steps involved in text generation.
Case tracker: Artificial intelligence, copyrights and class actions
Weisenberger, T. M. (n.d.). BakerHostetler.
- This page monitors ongoing copyright infringement lawsuits involving generative AI in the United States.
Additional Information
Office of Student Community Standards (OSCS)
The Office of Student Community Standards is responsible for promoting the rights and responsibilities of students through the administration of the Code of Student Community Standards and the Code of Student Academic Integrity. It also supports the MRU campus community in navigating conflict using various resolution pathways.
If you have questions or concerns about the use of GenAI in an assignment, course or academic assessment at MRU, please contact the Office of Student Community Standards by emailing studentcommunitystandards@mtroyal.ca.
References
Bali, M. (2024, February 26). Where are the crescents in AI? LSE Higher Education Blog. https://blogs.lse.ac.uk/highereducation/2024/02/26/where-are-the-crescents-in-ai/
Bearman, M., Ajjawi, R., Boud, D., Tai, J. & Dawson, P. (2023). CRADLE Suggests… assessment and genAI. Centre for Research in Assessment and Digital Learning, Deakin University, Melbourne, Australia. https://doi.org/10.6084/m9.figshare.22494178
Bearman, M., Tai, J., Dawson, P., Boud, D., & Ajjawi, R. (2024). Developing evaluative judgement for a time of generative artificial intelligence. Assessment & Evaluation in Higher Education. https://doi.org/10.1080/02602938.2024.2335321
Bowen, D., & Fleming, R. (2024, September 12). Assessment and Swiss cheese - Phill Dawson (No. 9) [Audio podcast episode]. In AI in Education. https://aipodcast.education/assessment-and-swiss-cheese-phill-dawson-episode-9-of-series-9
D’Agostino, S. (2023, January 12). ChatGPT advice academics can use now. Inside Higher Ed. https://www.insidehighered.com/news/2023/01/12/academic-experts-offer-advice-chatgpt
Dawson, P., Bearman, M., Dollinger, M., & Boud, D. (2024). Validity matters more than cheating. Assessment & Evaluation in Higher Education, 1-12. https://doi.org/10.1080/02602938.2024.2386662
Eaton, S., & Anselmo, L. (2023, January). Teaching and learning with artificial intelligence apps. Taylor Institute for Teaching and Learning. https://taylorinstitute.ucalgary.ca/teaching-with-AI-apps
Furze, L. (2024, April 9). AI detection in education is a dead end. Leon Furze. https://leonfurze.com/2024/04/09/ai-detection-in-education-is-a-dead-end/
Gal, U. (2023, February 7). ChatGPT is a data privacy nightmare. If you’ve ever posted online, you ought to be concerned. The Conversation. https://theconversation.com/chatgpt-is-a-data-privacy-nightmare-if-youve-ever-posted-online-you-ought-to-be-concerned-199283
Government of Canada. (2023, March 14). The Artificial Intelligence and Data Act (AIDA) – Companion document. Innovation, Science and Economic Development Canada. https://ised-isde.canada.ca/site/innovation-better-canada/en/artificial-intelligence-and-data-act-aida-companion-document
Grinschgl, S., & Neubauer, A. C. (2022). Supporting cognition with modern technology: Distributed cognition today and in an AI-enhanced future. Frontiers in Artificial Intelligence, 5, Article 908261. https://doi.org/10.3389/frai.2022.908261
Heidt, A. (2024, April 8). ‘Without these tools, I’d be lost’: How generative AI aids in accessibility. Nature. https://www.nature.com/articles/d41586-024-01003-w
Kumar, R., Mindzak, M., Eaton, S. E., & Morrison, R. (2022, May 17). AI & AI: Exploring the contemporary intersections of artificial intelligence and academic integrity [Conference presentation]. Canadian Society for the Study of Higher Education Annual Conference, Online. https://dx.doi.org/10.11575/PRISM/39762
Kwantlen Polytechnic University. (n.d.). Generative AI: An overview for teaching and learning. Retrieved October 11, 2023, from https://wordpress.kpu.ca/generativeaitlkpu/files/2023/04/Generative-AI-An-Overview-for-Teaching-and-Learning-03042023.pdf
metaLAB at Harvard. (n.d.). AI pedagogy project. https://aipedagogy.org/
NVIDIA. (n.d.). NVIDIA large language models (LLMs). Retrieved January 18, 2023, from https://web.archive.org/web/20230117121919/https://www.nvidia.com/en-us/deep-learning-ai/solutions/large-language-models/
Poisson, J. (Host). (2022, December 14). AI art and text is getting smarter, what comes next? [Audio podcast episode]. In Frontburner. CBC. https://www.cbc.ca/radio/frontburner/ai-art-and-text-is-getting-smarter-what-comes-next-1.6684148
Sabzalieva, E., & Valentini, A. (2023). ChatGPT and Artificial Intelligence in higher education: Quick start guide. United Nations Educational, Scientific and Cultural Organization and UNESCO International Institute for Higher Education in Latin America and the Caribbean. https://www.iesalc.unesco.org/wp-content/uploads/2023/04/ChatGPT-and-Artificial-Intelligence-in-higher-education-Quick-Start-guide_EN_FINAL.pdf
Scheuer-Larsen, C. (2023, September 27). A nuanced view of bias in language models. Viden.ai. https://viden.ai/en/a-nuanced-view-of-bias-in-language-models/
Snyder, K. (2023, February 3). We asked ChatGPT to write performance reviews and they are wildly sexist (and racist). Fast Company. https://www.fastcompany.com/90844066/chatgpt-write-performance-reviews-sexist-and-racist
Trust, T. (n.d.). ChatGPT & education [Google slides]. https://docs.google.com/presentation/d/1Vo9w4ftPx-rizdWyaYoB-pQ3DzK1n325OgDgXsnt0X0
UNESCO. (2022). K-12 AI curricula: A mapping of government-endorsed AI curricula. UNESDOC Digital Library. https://unesdoc.unesco.org/ark:/48223/pf0000380602
UNESCO. (2023). Guidance for generative AI in education and research. UNESDOC Digital Library. https://unesdoc.unesco.org/ark:/48223/pf0000386693
UNESCO/International Research Centre on Artificial Intelligence. (2024). Challenging systematic prejudices: An investigation into gender bias in large language models. https://unesdoc.unesco.org/ark:/48223/pf0000388971/PDF/388971eng.pdf.multi
University of Toronto. (n.d.). ChatGPT and generative AI in the classroom. Retrieved December 7, 2023, from https://www.viceprovostundergrad.utoronto.ca/strategic-priorities/digital-learning/special-initiative-artificial-intelligence/
Upshall, M. (2022). An AI toolkit for libraries. Insights, 35(18). https://doi.org/10.1629/uksg.592
Urbina, J. T., Vu, P. D., & Nguyen, M. V. (in press). Disability ethics and education in the age of artificial intelligence: Identifying ability bias in ChatGPT and Gemini. Archives of Physical Medicine and Rehabilitation, 1-6. https://doi.org/10.1016/j.apmr.2024.08.014
Ward, D., Gibbs, A., Henkel, T., Loshbaugh, H. G., Siering, G., Williamson, J., & Kayser, M. (2023, December 1). Indecision about AI in classes is so last week. Inside Higher Ed. https://www.insidehighered.com/opinion/career-advice/2023/12/01/advice-about-ai-classroom-coming-new-year-opinion