
SCIE 5010 - Fall 2025 Library Session

1. Have you used GenAI?
Yes: 39 votes (79.59%)
No: 4 votes (8.16%)
I don't know...maybe? It's everywhere!: 6 votes (12.24%)
Total Votes: 49

2. What do your SCIE 5010 syllabus and/or assignments state about GenAI or AI use for this class?

Some Definitions:

 

Artificial intelligence (AI):

"Machines that imitate some features of human intelligence, such as perception, learning, reasoning, problem-solving, language interaction and creative work" (UNESCO, 2022).

Generative AI (GenAI):

“A type of artificial intelligence that involves creating machines or computer programs that can generate new content, such as images, text, or music. Unlike traditional AI systems that rely on predefined rules or pre-existing data to make decisions, generative AI models use algorithms and neural networks to learn patterns and relationships in data and generate new outputs based on that learning” (Kwantlen Polytechnic University, n.d., p. 1).

Large Language Models (LLMs):

"A language model is a type of artificial intelligence model that is trained to understand and generate human language. It learns the patterns, structures, and relationships within a given language and has traditionally been used for narrow AI tasks such as text translation. The quality of a language model depends on its size, the amount and diversity of data it was trained on, and the complexity of the learning algorithms used during training.

A large language model (LLM) refers to a specific class of language model that has significantly more parameters than traditional language models. Parameters are the internal variables of the model that are learned during the training process and represent the knowledge the model has acquired" (Rouse, 2024).
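To make the "parameters" idea concrete, here is a small illustrative sketch (not from any of the sources above) that counts the learned variables in a hypothetical tiny feed-forward language model. The vocabulary and layer sizes are invented for illustration; the point is that every weight and bias is one parameter, and the counts multiply quickly.

```python
# Toy illustration: counting the "parameters" (learned internal variables)
# in a hypothetical tiny language model. Each weight and each bias is one
# parameter, which is why counts balloon as models grow.

def dense_params(n_in, n_out):
    """A fully connected layer has n_in * n_out weights plus one bias per output."""
    return n_in * n_out + n_out

# Invented sizes: 10,000-word vocabulary, 64-dimensional word embeddings,
# one 128-unit hidden layer, and a projection back to the vocabulary.
vocab, embed, hidden = 10_000, 64, 128

total = (
    vocab * embed                  # embedding table: one vector per word
    + dense_params(embed, hidden)  # hidden layer
    + dense_params(hidden, vocab)  # output projection over the vocabulary
)

print(f"{total:,} parameters")  # just under 2 million for this toy model
```

Even this toy model has close to two million parameters; by comparison, GPT-3 was reported to have 175 billion, which is what the "large" in LLM refers to.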

More information about GenAI and teaching and learning can be found on the MRU GenAI webpage: https://library.mtroyal.ca/ai

 

But how does it work?

This September 11, 2023 article in the Financial Times provides a good visual overview: Generative AI exists because of the transformer 

Keep in mind:  

  • These models work by predicting the most likely next word in a sequence, based on patterns learned from analyzing a large pre-loaded dataset. 

  • GenAI models are not search engines, or at least they weren't originally designed as search engines. Some of them now have search engine functionality and will provide footnotes (like Copilot/Bing), but it is still worth examining the linked source to see how the chatbot has represented it. This is different from Google Search snippets, where an excerpt from the actual source is shown to the user.
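The next-word-prediction idea above can be sketched with a deliberately simple bigram model (this is an illustration of the general principle, not how any specific product works): count which word follows which in a tiny "training" corpus, then predict the most frequent continuation. The corpus below is invented.

```python
# A minimal sketch of next-word prediction: a bigram model that counts
# which word follows which in a tiny corpus, then predicts the most
# frequently observed continuation. Real LLMs use neural networks over
# vastly larger datasets, but the prediction framing is the same.
from collections import Counter, defaultdict

corpus = (
    "the cat sat on the mat . "
    "the cat sat on the rug . "
    "the cat chased a dog ."
).split()

# For each word, count what tends to come next.
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Return the word most frequently observed after `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # -> "cat" (follows "the" three times here)
print(predict_next("sat"))  # -> "on"
```

Notice that the model only reflects its training data: it has no notion of truth, which is one way to understand why chatbots can confidently produce fabricated sources.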

Click on the Padlet link and contribute your thoughts on the benefits and issues of GenAI: https://padlet.com/kkeavey/generative-ai-8u8xxlrx0fmzs8nt

Or use the QR code below to get to the Padlet Activity:

There are many issues with GenAI tools. Here is a non-exhaustive list of things to keep in mind:

  • Academic integrity—probably the issue that has been most talked about in higher education. What does it mean if GenAI can "pass" an assignment/test? Should the assignment/test be altered? How much—if at all—should students be taught about how to use GenAI? (Answers to that question vary significantly by discipline, from total embrace to banning.)

    • Example: ChatGPT "passing" the MCAT: "Depending on its visual item response strategy, ChatGPT performed at or above the median performance of 276,779 student test takers on the MCAT." (Bommineni et al., 2023). Other examples of tests.
       

  • Research integrity—can GenAI be considered an author? Should researchers have to disclose if they've used GenAI in their research or their writing? How should that disclosure be made? There have been notable instances of GenAI output making it past peer review, particularly early after ChatGPT's popularization. (image example)

  • User privacy and protection of user information—if people disclose private information to GenAI tools, will it be used to train/refine the tools? (For example: AI therapy chatbots)
     

  • Bias—both in GenAI training data and in generated output. (Textual examples and image examples.) (Over-correction for bias has been its own problem, too!) Also, GenAI features in library search interfaces can introduce bias by searching only their own database contents rather than the sum of all available information.
     

  • Digital Divide - Approximately 32% of the global population does not have internet access. Additionally, many GenAI tools have tiers of access, where paid levels have access to more accurate or up-to-date models and more robust outputs.
     

  • Information quality—sometimes called the "hallucination" problem or fabrication problem. A notable example of this problem from early after ChatGPT became popular was lawyers who got in trouble for referencing fake legal cases generated by ChatGPT. More recently, Microsoft released Copilot GenAI into all of its products, including Excel, with an important caveat about accuracy (LLMs are not calculators!).
     

  • Deskilling—if we come to rely too much on GenAI, will we lose valuable skills? Are there some skills that we're okay with losing because we'll save time and energy for other, more important tasks? (For example: calculators and long division)
     

  • Copyright infringement—in training data and in generated output (For example: GenAI image generation tools and lawsuits—here's a list of some American cases)
     

  • Distinguishing machines from humans—do people have a right to know if/when they're interacting with a bot that is convincingly "human"? If so, how will people respond to the disclosure that they're being served by AI? Are there communicative contexts where this disclosure may not be met positively? (For example: Vanderbilt University)
     

  • Environmental impacts/sustainability—concerns about the energy consumed and carbon emitted both in training GenAI models and in integrating them into preexisting software, workflows, etc. One estimate of the water used to generate a 100-word email with ChatGPT: 519 ml (a little over one bottle of water).

AI as a Research Tool

Generative artificial intelligence (AI) is a hot topic these days that is having an impact on many areas of cultural life, education, and the economy.

Many people in education (including myself) are still getting their heads around generative AI as a topic, and this is made difficult by how quickly the technology changes and how little non-experts understand about it. It is incredibly complex, black-boxed technology.

If you do choose to use generative AI, you may want to use it as a brainstorming partner early on in your exploration of a topic, similar to how you might browse a Wikipedia article on a subject to get a quick grasp of it in the early stages of your research. Do not use generative AI as a complete replacement for research tools like LibrarySearch or Google Scholar. If you do so, your research (and thinking) will suffer.

Specifically, be sure to scrutinize any source(s) that generative AI provides you with on a topic. This is because, at this point, it is prone to error: what some have called "hallucination," but that I prefer to call "fabrication."

If generative AI provides you with a source:

(1) make sure that the source actually exists; and, if it does exist,

(2) make sure that the source actually contains the information that generative AI has attributed to it.


What is fabrication?

An Investigation of ChatGPT's Sources

  1. Book: Influencer Marketing for Dummies by Kristy Sammis, Cat Lincoln, and Stefania Pomponi

    • This source does exist and it was written by these authors, but it is a For Dummies book that wouldn't be considered scholarly.

  2. Book: Influencer Marketing: Building Brand in a Digital Age by Duncan Brown and Nick Hayes

    • This source does exist and it was written by those authors, but ChatGPT has fabricated a subtitle that the book doesn't actually have.

  3. Academic Article: "The Rise of Influencer Marketing and Its Impact on Consumer Behavior" by Liu, Hu, and Zhang (2019)

    • To the best of my searching abilities, this source does not exist.

  4. Academic Article: "Ethical and Legal Issues in Influencer Marketing" by Brenner, A. and Capron, L. (2019)

    • To the best of my searching abilities, this source does not exist.

  5. Academic Article: "The Dark Side of Social Media: A Consumer Psychology Perspective" by Phua, J., Jin, S.V., and Kim, J.J. (2017)

    • This source is a Frankenstein composite of two sources. The authors have been taken from this article and the title has been taken from this edited book, with which those authors had no involvement.

Federal Government of Canada Guide on the use of generative artificial intelligence

FASTER principles (developed by the Treasury Board of Canada Secretariat):

To maintain public trust and ensure the responsible use of generative AI tools by federal institutions, institutions should align with the “FASTER” principles that TBS has developed:

  • Fair: ensure that content from these tools does not include or amplify biases and that it complies with human rights, accessibility, and procedural and substantive fairness obligations; engage with affected stakeholders before deployment

  • Accountable: take responsibility for the content generated by these tools and the impacts of their use. This includes making sure generated content is accurate, legal, ethical, and compliant with the terms of use; establish monitoring and oversight mechanisms

  • Secure: ensure that the infrastructure and tools are appropriate for the security classification of the information and that privacy and personal information are protected; assess and manage cyber security risks and robustness when deploying a system

  • Transparent: identify content that has been produced using generative AI; notify users that they are interacting with an AI tool; provide information on institutional policies, appropriate use, training data and the model when deploying these tools; document decisions and be able to provide explanations if tools are used to support decision-making

  • Educated: learn about the strengths, limitations and responsible use of the tools; learn how to create effective prompts and to identify potential weaknesses in the outputs

  • Relevant: make sure the use of generative AI tools supports user and organizational needs and contributes to better outcomes for clients; consider the environmental impacts when choosing to use a tool; identify appropriate tools for the task; AI tools aren’t the best choice in every situation


The ROBOT Test

Hervieux, S. & Wheatley, A. (2020). The ROBOT test [Evaluation tool]. The LibrAIry. https://thelibrairy.wordpress.com/2020/03/11/the-robot-test


Reliability

  • How reliable is the information available about the AI technology?

  • If it’s not produced by the party responsible for the AI, what are the author’s credentials? Bias?

  • If it is produced by the party responsible for the AI, how much information are they making available? 

    • Is information only partially available due to trade secrets?

    • How biased is the information that they produce?

Objective

  • What is the goal or objective of the use of AI?

  • What is the goal of sharing information about it?

    • To inform?

    • To convince?

    • To find financial support?

Bias

  • What could create bias in the AI technology?

  • Are there ethical issues associated with this?

  • Are bias or ethical issues acknowledged?

    • By the source of information?

    • By the party responsible for the AI?

    • By its users?

Owner

  • Who is the owner or developer of the AI technology?

  • Who is responsible for it?

    • Is it a private company?

    • The government?

    • A think tank or research group?

  • Who has access to it? Who can use it?

Type

  • Which subtype of AI is it?

  • Is the technology theoretical or applied?

  • What kind of information system does it rely on?

  • Does it rely on human intervention? 


--> Of particular note when considering using AI: it is the responsibility of the user to confirm the veracity of AI outputs. As students still learning the discipline, you may find it difficult to discern accuracy, so weigh this responsibility against the potential for academic misconduct.

 

Figure 1: When is it safe to use ChatGPT?

"Is it safe to use ChatGPT for your task?" by Aleksandr Tiulkanov, AI and Data Policy Lawyer, licensed under CC BY 4.0

In-Class Activity

As a group, pick a GenAI tool to experiment with from this list: Generative AI Product Tracker (Ithaka S+R).

That list is super long and a little unwieldy, so you might want to first pick a category of GenAI product and then narrow down to a specific one within that category.

The categories are:

  • General Purpose Tools

  • Discovery Tools

  • Teaching & Learning Tools 

  • Research Workflow Tools

  • Writing Tools

  • Coding Tools

  • Image Generation Tools

  • Other

Spend a few minutes familiarizing yourselves with the tool as a group.

If the tool requires account creation to access it and you're not comfortable with that, please choose another tool that provides free functionality without an account, or just watch a demo video of the tool.

Then answer the following questions:

  1. Which tool did you choose to investigate?

  2. What was the tool made for? What are the claimed benefits of a person using it?

  3. How did your experimentation process go? Did you get the tool to successfully generate output?

    1. If so, were you satisfied with the tool's generated output? Did the tool seem useful to you?

    2. If not, what was it that prevented you from generating output?

  4. Can you see any issues with the tool (maybe from the list of issues provided earlier in class or from the FASTER or ROBOT evaluation tools)?

Librarian

Kalen Keavey

Contact:
Email: kkeavey@mtroyal.ca
Phone: 403.440.8516
Office: EL4423O