Artificial Intelligence
Artificial Intelligence tools are continuing to grow in popularity. Learn more about AI, its potential uses in teaching and learning, and the opportunities and challenges presented by this emerging technology.
The MRU Library, Academic Development Centre and Student Learning Services have collaborated to bring you the information on this living page. It will continue to be updated with new resources and information related to AI and higher education as they arise. The intent of this page is to provide information and resources about AI to the MRU community; for concerns related to academic misconduct and integrity, please contact the Office of Student Community Standards.
Last updated May 3, 2023
What is AI?
Artificial intelligence: Machines that imitate some features of human intelligence, such as perception, learning, reasoning, problem-solving, language interaction and creative work (UNESCO, 2022).
Artificial intelligence-based tools are widely used in academia and beyond, and new tools are continually emerging (see this Futurepedia directory, for example). AI tools can be implemented with positive purposes in education by supporting learners and helping to reduce the burden of routine or repetitive tasks, allowing for more focus on learning and research. Consider, for example, the benefits of automatic transcription of a lecture, grammar or spellcheck, or related reading suggestions in a library database. Conversely, there are also concerns associated with these tools, such as embedded bias, privacy risks, or the potential for misuse if students use them in unauthorized ways to complete an assignment.
Eaton and Anselmo (2023) provide a succinct and practical overview of the use of AI apps in the classroom. They recommend instructors engage with these tools, explain them to students and explore possibilities for enhancing teaching and learning.
Artificial intelligence (AI) is a general term used to describe a number of different, specific systems. We encounter and use AI every day: from navigating maps on Google or Apple, to asking Siri or Alexa to set a timer, to searching a library catalogue. AI is a part of our lives.
Terms like algorithm, machine learning, training data, neural networks and deep learning are often referenced in discussions related to AI.
Algorithm
The “brains” of an AI system, algorithms are a complex set of rules and decisions that determine which action the AI system takes. Algorithms can be rule-based, in which case human programmers input the rules, or, as in machine learning, can discover their own rules.
Machine Learning (ML)
A field of study with a range of approaches to developing the algorithms used in AI systems. Machine learning algorithms can discover rules and patterns in data without a human specifying them, which can sometimes lead to the system perpetuating biases.
Training Data
The data, generated by humans, used to train the algorithm or machine learning model. Training data is an essential component of an AI system, and a system may perpetuate the systemic biases of its source data when implemented.
For these and related definitions, browse the Glossary of Artificial Intelligence Terms for Educators (CIRCLS, n.d.).
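The distinction between a rule-based algorithm and a machine-learned one can be sketched in a few lines of Python. This is an illustrative toy example (not drawn from any cited source): a hand-written rule is fixed in advance by a programmer, while the machine learning approach derives its decision rule, including any skew present in the examples, from the training data.

```python
from statistics import mean

# Rule-based approach: a human programmer fixes the decision rule in advance.
def rule_based_spam_check(num_links: int) -> bool:
    return num_links > 3  # threshold chosen by a person

# Machine learning approach: the threshold is discovered from labelled
# training data rather than specified by a programmer.
def learn_threshold(training_data):
    spam = [links for links, is_spam in training_data if is_spam]
    ham = [links for links, is_spam in training_data if not is_spam]
    # Midpoint between the class averages. Any skew in the training data
    # is "learned" too -- this is how systemic biases in source data can
    # end up in a model's behaviour.
    return (mean(spam) + mean(ham)) / 2

# Toy training data: (number of links in a message, is it spam?)
training_data = [(0, False), (1, False), (2, False),
                 (5, True), (7, True), (9, True)]
threshold = learn_threshold(training_data)

print(threshold)                 # 4.0 for this toy data
print(rule_based_spam_check(5))  # True
print(5 > threshold)             # the learned rule agrees: True
```

Real machine learning systems learn far more complex patterns from far larger datasets, but the principle is the same: the rule comes from the data, so the data's limitations become the system's limitations.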
Functions of AI
AI systems are powerful enough to perform a wide variety of practical functions and tasks (examples retrieved from NVIDIA, n.d.; Upshall, 2022).
| Function | Examples of Tools* |
| --- | --- |
| Generate text from a prompt (e.g., “Write a paper on the impact of fake news on education”) | ChatGPT, CopyAI, Jasper |
| Synthesize information (e.g., summarize a text, create a procedure from multiple sources) | Scholarcy, ChatGPT, Quillbot |
| Create an image or digital illustration from a prompt (e.g., “A Cubist oil painting of a couple lounging next to a creek”) | DALL·E 2, Midjourney, Craiyon, Stable Diffusion |
| Generate computer code (e.g., generate new code from a comment, fix flawed code) | GitHub Copilot, ChatGPT |
| Translate text (e.g., “Translate the following text from Turkish to English”) | Google Translate, ChatGPT |
| Enhance productivity in teaching and research by assisting with repetitive tasks such as writing, transcribing and suggesting related resources | Grammarly [writing support], Web of Science Reviewer Locator [suggests peer reviewers], Transkribus [handwriting transcription for archival documents], Turnitin [text matching] |
Opportunities and Challenges
While emerging AI technologies present a number of opportunities for learners and educators, there are also challenges to integrating these systems into curriculum and coursework.
Opportunities
- New AI tools offer opportunities to introduce discussion and instruction centering on AI Literacy (Upshall, 2022). For example, instructors could use AI output for activities designed to help learners build skills in AI tool appraisal and practise critical thinking.
- AI tools may help increase efficiency in learning environments. One instructor using ChatGPT describes it as a “learning companion” and a “multiplier of ability” (Wingard, 2023). For example:
- AI systems may assist faculty or students in the initial stages of a project, such as brainstorming.
- Students could use ChatGPT as a virtual study partner, using it to summarize content or generate test questions (Wingard, 2023).
- Faculty could use AI tools to generate a set of learning outcomes as they design a new course.
- AI may be used as an assistive tool for those with accessibility needs.
- The rise of AI has prompted educators to rethink their assessment practices.
Challenges
- AI technologies are advancing rapidly, making it difficult to keep up with which tools are available and what functions they can and cannot perform. For example, ChatGPT did not originally have access to the internet, but this capability has since been integrated.
- AI tools have no knowledge of the real world and may need to be paired with human verification. For example, text matching tools identify matching text, but only a human can determine if plagiarism has occurred.
- AI tools may be used in situations where they lack validity, such as when journal impact factors are used to judge the value of individual research papers.
- AI systems may be manipulated or used in unethical ways, such as when a student uses them to bypass learning.
- AI-generated outputs can be difficult to detect; identifying when a learner has used AI-generated text in their writing poses a significant challenge to educators (Kumar et al., 2022).
- AI systems perpetuate existing human biases, as they generate outputs based on patterns in the data they were trained on. For example, AI photo editing tools have expressed racial biases (Poisson, 2022), and large language models such as ChatGPT have perpetuated gender biases and stereotypes in their outputs (Lucy & Bamman, 2021; Snyder, 2023).
Suggestions for Use
Suggestions for Faculty
- Have open conversations with students about AI and the implications of its use in their academic work. Examples of questions:
- What do you know about artificial intelligence tools?
- How have you been using them?
- What potential opportunities and challenges do you see?
- Engage, explore and experiment (Eaton & Anselmo, 2023). Trying out AI tools yourself will help you understand what is possible. Be aware of the data you are providing and the available privacy settings (e.g., the ability to turn off chat history in ChatGPT).
- Help students acquire a foundational understanding of academic integrity (e.g., have them complete MRU’s academic integrity online training module).
- Mention AI tools explicitly in your course outline. Each time you introduce an assessment, clarify your expectations with respect to AI tool use (see University of Alberta’s sample statements).
- Think more deeply than ever (D'Agostino, 2023) about the learning outcomes of your course and how your assignments align with those outcomes. Ask yourself: What are the cognitive tasks your students need to perform without assistance?
- Encourage productive struggle and learning from failure by allowing resubmissions/rewrites where feasible (see Trust, n.d.). Fear of failure can be a factor in a student’s decision to engage in academic misconduct.
- Design activities where students analyze, evaluate and revise AI output, or consider developing multimodal “performance task” assignments (Alby, n.d.).
- Focus on designing assignments that enhance interactions students may already have with AI. While it may be tempting to increase the difficulty level of an assignment to make it harder for ChatGPT, consider how this might also impact student learning and present barriers to students with disabilities.
Suggestions for Students
- For every assignment and test, ask your instructor what their expectations are with respect to AI use. If you are unsure whether use of a particular tool is allowed in your course, reach out to your instructor.
- Experiment with AI tools to better understand what they can and cannot do. Take the time to critically analyze the output. (Sometimes it looks great on the surface, but not when you look more deeply at the content. These tools are great synthesizers, but the critical thinker is you.)
- Learn more about academic integrity by completing MRU’s online training module.
- Ask yourself these key questions:
- If I use this tool for a particular task, how will it affect my learning? Will it enhance or diminish my learning? Will it give me opportunities to think more deeply or less deeply? If I use AI to generate writing, will I lose my own voice?
- If I use this tool, will it be fair to other students?
- If I use this tool, what are the privacy considerations?
- Are there other ethical implications to consider (e.g., users providing free labour; unethical labour practices)?
- Encourage your peers to ask themselves the above questions, too.
AI and the Law
Copyright law
Canadian copyright law implies that AI cannot own the copyright to creative works. Determining the author of an AI-created work would require a legislative amendment and careful consideration of who (or what) can author AI-generated works. In 2021, the Government of Canada released A Consultation on a Modern Copyright Framework for Artificial Intelligence and the Internet of Things (Government of Canada, 2021), which aimed to gather public feedback on potential legislative amendments to the Copyright Act regarding AI. At this time, no AI-focused amendments to the Copyright Act have been proposed. If amendments are proposed, opinions vary on how AI should be integrated with copyright.
- From Fricke (2022): “When adapting the use of AI to the Canadian Copyright Act, the goal should be to strike a balance between protecting creativity and skills, on the one hand, and not restricting the use of works that do not deserve such protection, on the other, regardless of how the legislator decides when it comes to who is entitled to copyright.”
- From Creative Commons (2021): “Copyright is not the right mechanism to encourage economic investment in the development of AI systems. Copyright’s utilitarian doctrine and incentives theory cannot support a claim that AI be afforded rights for any generated output because AI fails to meet the role of the author and its contribution to human-led social progress. Granting AI-outputs the status of copyright work goes against the social purpose for which copyright was created.”
Terms and Conditions
If you plan to use generative AI tools, ensure you have read and understood the Terms and Conditions of the developer(s). For any clarification, reach out to the Copyright Advisor (mrucopyright@mtroyal.ca).
Artificial Intelligence and Data Act (AIDA)
AIDA is a part of the Digital Charter Implementation Act and is currently working its way through the House of Commons under Bill C-27. AIDA is meant to create a “new regulatory system designed to guide AI innovation in a positive direction, and to encourage the responsible adoption of AI technologies by Canadians and Canadian businesses” (Government of Canada, n.d.). AIDA would require that appropriate measures be put in place to identify, assess and mitigate risks of harm or biased output before a system is made available to the public. These obligations would be guided by the following principles:
- Human oversight & monitoring
- Transparency
- Fairness and equity
- Safety
- Accountability
- Validity & robustness
Upcoming Events
(Online) Lunch and Learn: Misinformation and Fact-Checking Content from AI (ChatGPT)
Date: Tuesday, June 6, 2023
Time: 12:00pm - 1:00pm
Presenter: ADC
Where: Online - Virtual
In this Lunch and Learn, we'll discuss the potential and pitfalls of misinformation and fact-checking content from AI (such as ChatGPT), and consider the ways in which we might guide our students to consider these problems.
Facilitator: Erika Smith
Recommended Readings and Resources
ChatGPT advice academics can use now
D’Agostino, S. (2023, January 12). Inside Higher Ed.
- D’Agostino solicits advice from eleven leading academics for their thoughts on ChatGPT and learning, both the risks posed and opportunities offered by the technology.
Teaching and learning with artificial intelligence apps
Eaton, S., & Anselmo, L. (2023, January). Taylor Institute for Teaching and Learning.
- Advice on using AI apps in the classroom. “If we think of artificial intelligence apps as another tool that students can use to ethically demonstrate their knowledge and learning, then we can emphasize learning as a process not a product.”
ENAI recommendations on the ethical use of artificial intelligence in education
Foltynek, T., Bjelobaba, S., Glendinning, I., Khan, Z. R., Santos, R., Pavletic, P., & Kravjar, J. (2023). International Journal for Educational Integrity, 19, Article 12.
- The European Network for Academic Integrity shares its recommendations on the ethical use of AI in education.
How to cheat on your final paper: Assigning AI for student writing
Fyfe, P. (2022). AI & Society.
- “This paper shares results from a pedagogical experiment that assigns undergraduates to ‘cheat’ on a final class essay by requiring their use of text-generating AI software. For this assignment, students harvested content from an installation of GPT-2, then wove that content into their final essay. At the end, students offered a ‘revealed’ version of the essay as well as their own reflections on the experiment. In this assignment, students were specifically asked to confront the oncoming availability of AI as a writing tool.”
Cohere AI CEO Aidan Gomez on the emerging legal and regulatory challenges for artificial intelligence [Audio podcast episode]
Geist, M. (Host). (2023, April 17). In Law Bytes. Michael Geist.
- Law Bytes host Michael Geist is joined by Cohere AI CEO Aidan Gomez to discuss complex legal and regulatory issues related to AI.
An Indigenous perspective on generative AI [Audio podcast episode]
Hendrix, J. (Host). (2023, January 29). In The Sunday Show. Tech Policy Press.
- Justin Hendrix interviews Michael Running Wolf, a PhD student in computer science at McGill University and a Northern Cheyenne and Lakota man. Michael Running Wolf is also the founder of the non-profit Indigenous in AI. He provides his perspective on generative AI.
A.I. is mastering language. Should we trust what it says?
Johnson, S. (2022, April 15). New York Times Magazine.
- This longform piece from the New York Times Magazine provides a useful overview of large language models (LLMs) and the history of OpenAI, the company behind GPT-3 (and 3.5 and 4) and DALL·E 2.
The mounting human and environmental costs of generative AI
Luccioni, S. (2023, April 12). Ars Technica.
- Dr. Sasha Luccioni explores the human and environmental costs of generative AI.
OpenAI used Kenyan workers on less than $2 per hour to make ChatGPT less toxic
Perrigo, B. (2023, January 18). Time.
- Important reporting in Time about the unethical labour practices that were used to train ChatGPT.
ChatGPT and Artificial Intelligence in higher education: Quick start guide
Sabzalieva, E., & Valentini, A. (2023). United Nations Educational, Scientific and Cultural Organization and UNESCO International Institute for Higher Education in Latin America and the Caribbean.
- “The Quick Start Guide provides an overview of how ChatGPT works and explains how it can be used in higher education. The Quick Start Guide raises some of the main challenges and ethical implications of AI in higher education and offers practical steps that higher education institutions can take.”
Inside the secret list of websites that make AI like ChatGPT sound smart
Schaul, K., Chen, S. Y., & Tiku, N. (2023, April 19). The Washington Post.
- Reporters from The Washington Post drill down into Google’s C4 data set, “a massive snapshot of the contents of 15 million websites that have been used to instruct some high-profile English-language AIs, called large language models, including Google’s T5 and Facebook’s LLaMA.”
ChatGPT and education
Trust, T. (n.d.). [Google slides].
- This Google Slides deck provides an overview of what ChatGPT is, what it can and cannot do, and what educators can do about it, with links to additional resources at the end. Screenshots of the ChatGPT interface illustrate tasks it can perform; these can be useful to refer to or share with others when the OpenAI ChatGPT website is at capacity.
Additional Information
Office of Student Community Standards (OSCS)
The Office of Student Community Standards is responsible for promoting the rights and responsibilities of students through the administration of the Code of Student Community Standards and the Code of Student Academic Integrity. The office also supports the MRU campus community in navigating conflict using various resolution pathways.
If you have questions or concerns about the use of AI in an assignment, course or academic assessment at MRU, please contact the Office of Student Community Standards by emailing studentcommunitystandards@mtroyal.ca.
Feedback
As new AI technologies emerge, this page will be routinely updated with additional information and resources to support the MRU community. Have a suggestion for what to include? Get in touch with us.
References
Bill C-27, An Act to enact the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act and the Artificial Intelligence and Data Act and to make consequential and related amendments to other Acts, 1st Sess, 44th Parl, 2021.
CIRCLS. (n.d.). Glossary of artificial intelligence terms for educators. Educator CIRCLS Blog. Retrieved January 18, 2023, from https://circls.org/educatorcircls/ai-glossary
Craig, C. J. (2021). AI and copyright. In F. Martin-Bariteau & T. Scassa (Eds.), Artificial intelligence and the law in Canada. LexisNexis Canada. https://ssrn.com/abstract=3733958
Creative Commons. (2021, September 17). Government of Canada consultation on a modern copyright framework for artificial intelligence and the Internet of Things. Innovation, Science and Economic Development Canada. https://ised-isde.canada.ca/site/strategic-policy-sector/en/marketplace-framework-policy/copyright-policy/submissions-consultation-modern-copyright-framework-artificial-intelligence-and-internet-things/creative-commons
Eaton, S., & Anselmo, L. (2023, January). Teaching and learning with artificial intelligence apps. Taylor Institute for Teaching and Learning. https://taylorinstitute.ucalgary.ca/teaching-with-AI-apps
Fowler, G. A. (2023, April 3). We tested a new ChatGPT-detector for teachers. It flagged an innocent student. The Washington Post. https://www.washingtonpost.com/technology/2023/04/01/chatgpt-cheating-detection-turnitin
Fricke, V. (2022, October 17). The end of creativity?! – AI-generated content under the Canadian Copyright Act. McGill University. https://www.mcgill.ca/business-law/article/end-creativity-ai-generated-content-under-canadian-copyright-act
Government of Canada. (2021). A consultation on a modern copyright framework for artificial intelligence and the Internet of Things. https://ised-isde.canada.ca/site/strategic-policy-sector/en/marketplace-framework-policy/copyright-policy/consultation-modern-copyright-framework-artificial-intelligence-and-internet-things-0
Government of Canada. (2023, March 14). The Artificial Intelligence and Data Act (AIDA) – Companion document. Innovation, Science and Economic Development Canada. https://ised-isde.canada.ca/site/innovation-better-canada/en/artificial-intelligence-and-data-act-aida-companion-document
Kumar, R., Mindzak, M., Eaton, S. E., & Morrison, R. (2022, May 17). AI & AI: Exploring the contemporary intersections of artificial intelligence and academic integrity [Conference presentation]. Canadian Society for the Study of Higher Education Annual Conference, online. http://hdl.handle.net/1880/114647
Lucy, L., & Bamman, D. (2021). Gender and representation bias in GPT-3 generated stories. Proceedings of the Third Workshop on Narrative Understanding, 48–55. https://doi.org/10.18653/v1/2021.nuse-1.5
NVIDIA. (n.d.). NVIDIA large language models (LLMs). Retrieved January 18, 2023, from https://www.nvidia.com/en-us/deep-learning-ai/solutions/large-language-models/
Poisson, J. (Host). (2022, December 14). AI art and text is getting smarter, what comes next? [Audio podcast episode]. In Frontburner. CBC. https://www.cbc.ca/radio/frontburner/ai-art-and-text-is-getting-smarter-what-comes-next-1.6684148
Snyder, K. (2023, February 3). We asked ChatGPT to write performance reviews and they are wildly sexist (and racist). Fast Company. https://www.fastcompany.com/90844066/chatgpt-write-performance-reviews-sexist-and-racist
Trust, T. (n.d.). ChatGPT & education [Google slides]. https://docs.google.com/presentation/d/1Vo9w4ftPx-rizdWyaYoB-pQ3DzK1n325OgDgXsnt0X0
UNESCO. (2022). K-12 AI curricula: A mapping of government-endorsed AI curricula. UNESCO Digital Library. https://unesdoc.unesco.org/ark:/48223/pf0000380602
Upshall, M. (2022). An AI toolkit for libraries. Insights, 35(18). https://doi.org/10.1629/uksg.592
Wingard, J. (2023, January 10). ChatGPT: A threat to higher education? Forbes. https://www.forbes.com/sites/jasonwingard/2023/01/10/chatgpt-a-threat-to-higher-education/