
Artificial Intelligence

Learn more about generative AI, its potential uses in teaching and learning, and the opportunities and challenges presented by this emerging technology.

The MRU Library, Academic Development Centre and Student Learning Services have collaborated to bring you this living page, which provides information and resources about generative AI in higher education to the MRU community. For concerns related to academic conduct, please contact the Office of Student Community Standards.

Last updated October 20, 2023

AI vs. GenAI

Artificial intelligence: Machines that imitate some features of human intelligence, such as perception, learning, reasoning, problem-solving, language interaction and creative work (UNESCO, 2022).


Artificial intelligence (AI) is a general term covering a number of different, specific systems. We encounter and use AI every day: navigating with Google or Apple Maps, asking Siri or Alexa to set a timer, searching a library catalogue. AI is a part of our lives.

Generative AI (GenAI) is “a type of artificial intelligence that involves creating machines or computer programs that can generate new content, such as images, text, or music. Unlike traditional AI systems that rely on predefined rules or pre-existing data to make decisions, generative AI models use algorithms and neural networks to learn patterns and relationships in data and generate new outputs based on that learning” (Kwantlen Polytechnic University, n.d., p. 1).
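To make "learning patterns in data and generating new outputs" concrete, here is a deliberately tiny, hypothetical sketch in Python (not a real GenAI system). It records which word follows which in a toy corpus, then samples from those learned patterns to produce a new word sequence. Real GenAI models use neural networks trained on vastly larger datasets, but the core idea of generating from learned patterns is the same.

```python
import random
from collections import defaultdict

# Toy illustration only: "train" on a tiny corpus by recording,
# for each word, the words that were observed to follow it.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

follows = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current].append(nxt)

def generate(start, length=6, seed=0):
    """Generate new text by repeatedly sampling a learned next word."""
    random.seed(seed)  # fixed seed so the sketch is reproducible
    words = [start]
    for _ in range(length - 1):
        options = follows.get(words[-1])
        if not options:  # dead end: no observed continuation
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the"))
```

The generated sequence is new (it need not appear verbatim in the corpus), yet every transition in it was learned from the training data, which is also why skewed training data yields skewed output.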

Algorithm

The “brains” of an AI system, algorithms are a complex set of rules and decisions that determine which action the AI system takes. Machine learning algorithms can discover their own rules or be rule-based, in which case human programmers input the rules.

Machine Learning (ML)

A field of study with a range of approaches to developing the algorithms used in AI systems. Machine learning algorithms can discover rules and patterns in data without a human specifying them, which can sometimes lead to the system perpetuating biases.
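As a minimal, hypothetical illustration of an algorithm "discovering its own rule," the Python sketch below searches labeled examples for a numeric threshold that best separates two classes, rather than having a human hard-code that threshold. The scores and labels are invented for demonstration; real machine learning uses far richer data and models, but the principle is the same.

```python
# Toy machine learning: the program finds its own decision rule
# (a pass/fail score threshold) from labeled examples.
data = [(2, "fail"), (3, "fail"), (4, "fail"),
        (7, "pass"), (8, "pass"), (9, "pass")]

def learn_threshold(examples):
    """Pick the threshold that classifies the training examples best."""
    best_t, best_correct = None, -1
    for t in range(0, 11):  # candidate rules: "pass if score >= t"
        correct = sum(
            (label == "pass") == (score >= t) for score, label in examples
        )
        if correct > best_correct:
            best_t, best_correct = t, correct
    return best_t

print(learn_threshold(data))  # a rule discovered from data, not hand-written
```

Note that the learned rule is only as good as the examples it was fit to: if the labels in `data` were skewed, the discovered threshold would be too, which is the root of the bias concerns discussed elsewhere on this page.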

Training Data

The data, generated by humans, used to train an algorithm or machine learning model. Training data is an essential component of an AI system, and a trained system may perpetuate the systemic biases present in its source data.

For these and related definitions, browse the Glossary of Artificial Intelligence Terms for Educators (CIRCLS, n.d.).

Functions of GenAI

GenAI tools perform a wide variety of practical functions and tasks (examples retrieved from NVIDIA, n.d.; Upshall, 2022). For a comprehensive directory of AI tools, explore Futurepedia.

Examples of Functions

Generate text from a prompt 

“Write a paper on the impact of fake news on education”
“Write a poem about existentialism in the style of Walt Whitman”
“Simplify the following radiology report”

Synthesize information

(e.g., summarize a text, combine information from multiple sources)

Create an image or digital illustration from a prompt

“A Cubist oil painting of a couple lounging next to a creek”
“A photorealistic image of a half-eaten pumpkin pie”

Generate computer code

(e.g., generate new code from a comment, fix flawed code)

Translate text

“Translate the following text from Turkish to English”


Opportunities and Challenges

While emerging GenAI technologies present a number of opportunities for learners and educators, there are also challenges to integrating these systems into curriculum and coursework.

Opportunities

  • AI LITERACY DEVELOPMENT: New GenAI tools offer opportunities to introduce discussion and instruction centering on AI Literacy (Upshall, 2022). For example, instructors could use GenAI output for activities designed to help learners build skills in AI tool appraisal and to practise critical thinking.
  • IMPROVE TEACHING: There is a wide range of potential uses of GenAI to improve teaching, many of which are still being explored. For example:
    • The rise of GenAI has prompted educators to rethink their assessment practices (Bearman et al., 2023; UNESCO, 2023, p. 37).
    • Educators could use GenAI tools as curriculum or course co-designers (UNESCO, 2023, p. 31). For example, a GenAI tool could help an instructor to draft learning outcomes for a course or for a specific assessment.
    • Educators could use GenAI tools as teaching assistants that could provide learners with individualized support (UNESCO, 2023, p. 31) that is personalized to their learning style, interests, abilities, and learning needs (Kwantlen Polytechnic University, n.d., p. 3).
  • IMPROVE LEARNING: GenAI tools may help augment learning environments. Mike Sharples (Professor Emeritus of Educational Technology, Open University, UK) has devised 10 possible roles that a GenAI tool could play in augmenting learning for students, including Possibility Engine, Socratic Opponent, Personal Tutor, and Motivator (UNESCO, 2023, p. 9).
  • ACCESSIBILITY: GenAI may be used as an assistive tool for those with accessibility needs (Kwantlen Polytechnic University, n.d.). This could include auto-generating captions or sign language interpretation for audio or visual content that lacks it, or generating audio descriptions of textual or visual material (UNESCO, 2023, p. 35).

Challenges and Ethical Implications

  • UNRELIABLE CONTENT: GenAI tools have no knowledge of the real world and need to be paired with human verification. For example, citations or sources provided by ChatGPT must be checked to ensure that they are not fabricated and that they actually contain the information attributed to them.
  • ACADEMIC MISCONDUCT: GenAI systems may be manipulated or used in unethical ways, such as when a student uses them to bypass learning. In addition, identifying when a learner has used GenAI-generated text in their writing can be very difficult, posing a challenge to educators (Kumar et al., 2022; Fowler, 2023; Elkhatat et al., 2023).
  • BIAS AND DISCRIMINATION: GenAI systems perpetuate existing human biases, as they generate outputs based on patterns in the data they were trained on. For example, GenAI photo editing tools have exhibited racial biases (Poisson, 2022), and large language models such as ChatGPT have perpetuated gender biases and stereotypes in their outputs (Lucy & Bamman, 2021; Snyder, 2023).
  • SUSTAINABILITY: Concerns have been raised about the environmental costs involved both in initial training of GenAI models and in their daily use once they have been rolled out to the public. Specifically, researchers are analyzing their electricity use and carbon emissions (de Vries, 2023; Luccioni, 2023).
  • PRIVACY: GenAI systems are trained on enormous datasets that may include personal information previously posted to the internet that could be used to identify individuals (Gal, 2023; Kwantlen Polytechnic University, n.d.). Additionally, there are considerable privacy concerns related to the information that users supply when prompting GenAI systems and that user information then being used to train the model in the future (Gal, 2023).
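The bias concern above can be illustrated with a deliberately skewed toy example: a "model" that only learns co-occurrence frequencies from its training sentences will reproduce whatever imbalance those sentences contain. The sentences and counts below are invented purely for demonstration and are not drawn from any real system's training data.

```python
from collections import Counter

# Invented, deliberately skewed "training corpus" (demonstration only).
training_sentences = [
    "the engineer fixed his code",
    "the engineer debugged his program",
    "the engineer shipped his release",
    "the engineer reviewed her design",
]

# "Training": count which pronoun (the fourth word) appears after
# "the engineer <verb>" in each sentence.
pronouns = Counter(sentence.split()[3] for sentence in training_sentences)

# The model's "prediction" is simply its most frequent observation,
# so the skew in the data becomes a skew in the output.
print(pronouns.most_common(1))  # the majority pronoun dominates
```

Real GenAI systems are vastly more complex than a frequency count, but the mechanism is analogous: outputs reflect the statistical patterns, including the biases, of the training data.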

Suggestions for Use

Suggestions for Faculty

  • EXPERIMENT: Engage, explore and experiment (Eaton & Anselmo, 2023). Trying out AI tools yourself will help you understand what is possible. Be aware of the data you are providing and available privacy settings (e.g., the ability to turn off chat history in ChatGPT).
  • CONVERSATIONS WITH STUDENTS: Make time in class to have open conversations with students about GenAI and the implications of its use in their academic work. Examples of questions:
    • What do you know about artificial intelligence tools?
    • How have you been using them?
    • What potential opportunities and challenges do you see?
  • CLEAR EXPECTATIONS: Mention GenAI tools explicitly in your course outline (see University of Alberta’s sample statements). Each time you introduce an assessment, clarify your expectations with respect to GenAI tool use.
  • ACKNOWLEDGEMENT/CITATION: Give clear guidance to students on how to acknowledge and potentially cite GenAI outputs according to the citation style used in your course. Openly acknowledge your own use of GenAI tools in your teaching and scholarship (e.g., how you have used them to design learning materials and assessments).
  • ACADEMIC INTEGRITY: Help students acquire a foundational understanding of academic integrity (e.g., have them complete MRU’s academic integrity online training module).
  • ASSESSMENT: Think more deeply than ever (D'Agostino, 2023) about the learning outcomes of your course and how your assessments align with those outcomes. Identify the cognitive tasks your students need to perform without assistance (Bearman et al., 2023).
  • SPACE FOR FAILURE: Encourage productive struggle and learning from failure by allowing resubmissions/rewrites where feasible (see the linked slide in this resource) (Trust, n.d.). Fear of failure can be a factor in a student’s decision to use GenAI in ways that may bypass learning.

Suggestions for Students

  • EXPERIMENT: Take the time to experiment with GenAI tools to better understand what they can and cannot do. Critically analyze the output; sometimes it looks great on the surface, but not when you look more deeply. These tools are great synthesizers, but the critical thinker is you.
  • INSTRUCTOR EXPECTATIONS: For every assignment and test, make sure you understand your instructor’s expectations with respect to GenAI use. Check your course outline and assignment guidelines for this information. If you are unsure, ask your instructor. Where GenAI use is allowed, be sure to check expectations for acknowledgement of tool use and, potentially, citation.
  • ACADEMIC INTEGRITY: To learn more about academic integrity and what constitutes academic misconduct, complete MRU’s online training module. (Log in using your @mtroyal.ca credentials, and then select the “Enroll in Course” button. If you’re already enrolled, you’ll see “Open Course.”)
  • IMPLICATIONS FOR YOUR LEARNING: Before using a GenAI tool for a particular task, ask yourself how it will affect your learning. Will it enhance learning, or diminish it? Will it give you opportunities to think more deeply or less deeply? In the case of using GenAI for writing, be aware of how using the tool could impact your own writer’s voice.
  • PRIVACY: Ask yourself whether the information you are feeding into the GenAI tool is even yours to share. Do you have the appropriate rights or permissions? If you do, could sharing this information impact you negatively in the future?
  • ETHICS: Ask yourself whether you are comfortable with the ethical implications of using GenAI tools (e.g., environmental sustainability, unethical labour practices by tech companies, bias and discrimination). See the Challenges and Ethical Implications section of this page for more information.
  • APPLICATIONS IN THE WORKPLACE: Be curious about how GenAI tools are being used by professionals in your discipline. Ask your professors, and ask people in your network.

GenAI and the Law

Copyright Law

Canadian copyright law implies that an AI cannot own the copyright to creative works. Determining the author of an AI-created work will require a legislative amendment and careful consideration of who (or what) can author AI-generated works. In 2021, the Government of Canada released A Consultation on a Modern Copyright Framework for Artificial Intelligence and the Internet of Things (Government of Canada, 2021), which gathered public feedback on potential legislative amendments to the Copyright Act regarding AI. The Government of Canada has since released the Consultation on Copyright in the Age of Generative Artificial Intelligence, with feedback open until December 4, 2023; this latest consultation will inform the government’s policy development process.

Terms and Conditions

If you plan to use GenAI tools, ensure you have read and understood the Terms and Conditions of the developer(s). For any clarification, reach out to the MRU Copyright Advisor (mrucopyright@mtroyal.ca).

Artificial Intelligence and Data Act (AIDA)

AIDA is a part of the Digital Charter Implementation Act and is currently working its way through the House of Commons under Bill C-27. AIDA is meant to create a “new regulatory system designed to guide AI innovation in a positive direction, and to encourage the responsible adoption of AI technologies by Canadians and Canadian businesses” (Government of Canada, n.d.). AIDA would require that appropriate measures be put in place to identify, assess, and mitigate risks of harm or biased output prior to the system being available to the public. These obligations would be guided by the following principles:

  • Human oversight & monitoring
  • Transparency
  • Fairness and equity
  • Safety
  • Accountability
  • Validity & robustness

Upcoming Events

(Online) Text-matching with D2L Brightspace

Date: Tuesday, October 17, 2023
Time: 9:30 - 10:30am
Presenter: ADC
Where: Online - Virtual

Text-matching software is a learning technology that helps students improve their academic writing and promotes academic integrity. This one-hour session will provide information about leveraging the Brightspace LMS for text-matching.

Facilitators: Andrew Reil (Academic Development Centre) & John Cheeseman (Academic Development Centre)

Please note: As this is an event intended for the MRU community, a valid MRU email address is required to register.

(Online) Text-matching with D2L Brightspace

Date: Tuesday, October 17, 2023
Time: 1:30 - 2:30pm
Presenter: ADC
Where: Online - Virtual

Text-matching software is a learning technology that helps students improve their academic writing and promotes academic integrity. This one-hour session will provide information about leveraging the Brightspace LMS for text-matching.

Facilitators: Andrew Reil (Academic Development Centre) & John Cheeseman (Academic Development Centre)

Please note: As this is an event intended for the MRU community, a valid MRU email address is required to register.

(Online) Considering How Citation Guidance for AI Can Support Student Learning

Date: Thursday, October 19, 2023
Time: 10:00 - 11:00am
Presenter: ADC
Where: Online - Virtual

In this session, we’ll discuss recent guidance on how to cite content from generative AI (such as ChatGPT) from associations such as APA and MLA, and explore how this guidance may be used to help inform student learning or be integrated into assessment, such as research assignment instructions.

Facilitators: Erika Smith (Academic Development Centre) & Joel Blechinger (Library)

Recommended Readings and Resources

Teaching and learning with artificial intelligence apps

Eaton, S., & Anselmo, L. (2023, January). Taylor Institute for Teaching and Learning.

  • Advice on using AI apps in the classroom. “If we think of artificial intelligence apps as another tool that students can use to ethically demonstrate their knowledge and learning, then we can emphasize learning as a process not a product.”

Evaluating the efficacy of AI content detection tools in differentiating between human and AI-generated text 

Elkhatat, A. M., Elsaid, K., & Almeer, S. (2023). International Journal for Educational Integrity.

  • This paper analyzes the AI content detection tools developed by OpenAI, Writer, Copyleaks, GPTZero, and CrossPlag, and their accuracy in detecting AI-generated text.

ENAI recommendations on the ethical use of artificial intelligence in education

Foltynek, T., Bjelobaba, S., Glendinning, I., Khan, Z. R., Santos, R., Pavletic, P., & Kravjar, J. (2023). International Journal for Educational Integrity, 19. Article 12.

  • The European Network for Academic Integrity shares its recommendations on the ethical use of AI in education.

 

How to cheat on your final paper: Assigning AI for student writing

Fyfe, P. (2022). AI & Society.

  • “This paper shares results from a pedagogical experiment that assigns undergraduates to ‘cheat’ on a final class essay by requiring their use of text-generating AI software. For this assignment, students harvested content from an installation of GPT-2, then wove that content into their final essay. At the end, students offered a ‘revealed’ version of the essay as well as their own reflections on the experiment. In this assignment, students were specifically asked to confront the oncoming availability of AI as a writing tool.”

 

Cohere AI CEO Aidan Gomez on the emerging legal and regulatory challenges for artificial intelligence [Audio podcast episode]

Geist, M. (Host). (2023, April 17). In Law Bytes. Michael Geist.

  • Law Bytes host Michael Geist is joined by Cohere AI CEO Aidan Gomez to discuss complex legal and regulatory issues related to AI.

 

An Indigenous perspective on generative AI [Audio podcast episode]

Hendrix, J. (Host). (2023, January 29). In The Sunday Show. Tech Policy Press.

  • Justin Hendrix interviews Michael Running Wolf, a PhD student in computer science at McGill University and a Northern Cheyenne and Lakota man. Michael Running Wolf is also the founder of the non-profit Indigenous in AI. He provides his perspective on generative AI.

 

A.I. is mastering language. Should we trust what it says?

Johnson, S. (2022, April 15). New York Times Magazine.

  • This longform piece from the New York Times Magazine provides a useful overview of large language models (LLMs) and the history of OpenAI, the company behind GPT-3 (and 3.5 and 4) and DALL·E 2.

 

A jargon-free explanation of how AI large language models work

Lee, T. B., & Trott, S. (2023, July 31). Ars Technica.

  • Writers Lee and Trott provide a plain language explanation of the different components of large language models including word vectors, transformers, and the training process.

 

The mounting human and environmental costs of generative AI

Luccioni, S. (2023, April 12). Ars Technica.

  • Dr. Sasha Luccioni explores the human and environmental costs of generative AI.

 

OpenAI used Kenyan workers on less than $2 per hour to make ChatGPT less toxic

Perrigo, B. (2023, January 18). Time.

  • Important reporting in Time about the unethical labour practices that were used to train ChatGPT.

 

ChatGPT and Artificial Intelligence in higher education: Quick start guide

Sabzalieva, E., & Valentini, A. (2023). United Nations Educational, Scientific and Cultural Organization and UNESCO International Institute for Higher Education in Latin America and the Caribbean.

  • “The Quick Start Guide provides an overview of how ChatGPT works and explains how it can be used in higher education. The Quick Start Guide raises some of the main challenges and ethical implications of AI in higher education and offers practical steps that higher education institutions can take.”

 

Generative AI exists because of the transformer: This is how it works

Visual Storytelling Team & Murgia, M. (2023, September 11). Financial Times.

  • Similar to Lee and Trott’s article for Ars Technica, this piece is a detailed explanation of transformer models with helpful visual representations of the different steps involved in text generation.

Additional Information

Office of Student Community Standards (OSCS)

The Office of Student Community Standards is responsible for promoting the rights and responsibilities of students through the administration of the Code of Student Community Standards and the Code of Student Academic Integrity. It also supports the MRU campus community in navigating conflict using various resolution pathways.

If you have questions or concerns about the use of GenAI in an assignment, course or academic assessment at MRU, please contact the Office of Student Community Standards by emailing studentcommunitystandards@mtroyal.ca.

References

Bearman, M., Ajjawi, R., Boud, D., Tai, J. & Dawson, P. (2023). CRADLE Suggests… assessment and genAI. Centre for Research in Assessment and Digital Learning, Deakin University, Melbourne, Australia. https://doi.org/10.6084/m9.figshare.22494178

Bill C-27, An Act to enact the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act and the Artificial Intelligence and Data Act and to make consequential and related amendments to other Acts, 1st Sess, 44th Parl, 2021.

Craig, C. J. (2021). AI and copyright. In F. Martin-Bariteau & T. Scassa (Eds.), Artificial intelligence and the law in Canada. LexisNexis Canada.
https://ssrn.com/abstract=3733958 

Creative Commons. (2021, September 17). Government of Canada consultation on a modern copyright framework for artificial intelligence and the Internet of Things. Innovation, Science and Economic Development Canada. https://ised-isde.canada.ca/site/strategic-policy-sector/en/marketplace-framework-policy/copyright-policy/submissions-consultation-modern-copyright-framework-artificial-intelligence-and-internet-things/creative-commons 

D’Agostino, S. (2023, January 12). ChatGPT advice academics can use now. Inside Higher Ed. https://www.insidehighered.com/news/2023/01/12/academic-experts-offer-advice-chatgpt

de Vries, A. (2023). The growing energy footprint of artificial intelligence. Joule 7, 1-4. https://doi.org/10.1016/j.joule.2023.09.004

Eaton, S., & Anselmo, L. (2023, January). Teaching and learning with artificial intelligence apps. Taylor Institute for Teaching and Learning.
https://taylorinstitute.ucalgary.ca/teaching-with-AI-apps

Fowler, G. A. (2023, April 3). We tested a new ChatGPT-detector for teachers. It flagged an innocent student. The Washington Post. https://www.washingtonpost.com/technology/2023/04/01/chatgpt-cheating-detection-turnitin

Fricke, V. (2022, October 17). The end of creativity?! – AI-generated content under the Canadian Copyright Act. McGill University. https://www.mcgill.ca/business-law/article/end-creativity-ai-generated-content-under-canadian-copyright-act 

Gal, U. (2023, February 7). ChatGPT is a data privacy nightmare. If you’ve ever posted online, you ought to be concerned. The Conversation. https://theconversation.com/chatgpt-is-a-data-privacy-nightmare-if-youve-ever-posted-online-you-ought-to-be-concerned-199283

Government of Canada. (2021). A consultation on a modern copyright framework for artificial intelligence and the Internet of Things.
https://ised-isde.canada.ca/site/strategic-policy-sector/en/marketplace-framework-policy/copyright-policy/consultation-modern-copyright-framework-artificial-intelligence-and-internet-things-0

Government of Canada. (2023, March 14). The Artificial Intelligence and Data Act (AIDA) – Companion document. Innovation, Science and Economic Development Canada. https://ised-isde.canada.ca/site/innovation-better-canada/en/artificial-intelligence-and-data-act-aida-companion-document

Kumar, R., Mindzak, M., Eaton, S. E., & Morrison, R. (2022, May 17). AI & AI: Exploring the contemporary intersections of artificial intelligence and academic integrity [Conference presentation]. Canadian Society for the Study of Higher Education Annual Conference, Online. https://dx.doi.org/10.11575/PRISM/39762

Kwantlen Polytechnic University. (n.d.). Generative AI: An overview for teaching and learning. Retrieved October 11, 2023, from https://wordpress.kpu.ca/generativeaitlkpu/files/2023/04/Generative-AI-An-Overview-for-Teaching-and-Learning-03042023.pdf

Lucy, L., & Bamman, D. (2021). Gender and representation bias in GPT-3 generated stories. Proceedings of the Third Workshop on Narrative Understanding, 48–55.
https://doi.org/10.18653/v1/2021.nuse-1.5 

NVIDIA. (n.d.). NVIDIA large language models (LLMs). Retrieved January 18, 2023, from https://web.archive.org/web/20230117121919/https://www.nvidia.com/en-us/deep-learning-ai/solutions/large-language-models/ 

Poisson, J. (Host). (2022, December 14). AI art and text is getting smarter, what comes next? [Audio podcast episode]. In Frontburner. CBC.
https://www.cbc.ca/radio/frontburner/ai-art-and-text-is-getting-smarter-what-comes-next-1.6684148

Snyder, K. (2023, February 3). We asked ChatGPT to write performance reviews and they are wildly sexist (and racist). Fast Company. https://www.fastcompany.com/90844066/chatgpt-write-performance-reviews-sexist-and-racist

Trust, T. (n.d.). ChatGPT & education [Google slides]. https://docs.google.com/presentation/d/1Vo9w4ftPx-rizdWyaYoB-pQ3DzK1n325OgDgXsnt0X0

UNESCO. (2022). K-12 AI curricula: A mapping of government-endorsed AI curricula. UNESDOC Digital Library.
https://unesdoc.unesco.org/ark:/48223/pf0000380602 

UNESCO. (2023). Guidance for generative AI in education and research. UNESDOC Digital Library. https://unesdoc.unesco.org/ark:/48223/pf0000386693

Upshall, M. (2022). An AI toolkit for libraries. Insights, 35(18).
https://doi.org/10.1629/uksg.592