"Machines that imitate some features of human intelligence, such as perception, learning, reasoning, problem-solving, language interaction and creative work" (UNESCO, 2022).
Generative AI (GenAI):
“A type of artificial intelligence that involves creating machines or computer programs that can generate new content, such as images, text, or music. Unlike traditional AI systems that rely on predefined rules or pre-existing data to make decisions, generative AI models use algorithms and neural networks to learn patterns and relationships in data and generate new outputs based on that learning” (Kwantlen Polytechnic University, n.d., p. 1).
Large Language Models (LLMs):
"A language model is a type of artificial intelligence model that is trained to understand and generate human language. It learns the patterns, structures, and relationships within a given language and has traditionally been used for narrow AI tasks such as text translation. The quality of a language model depends on its size, the amount and diversity of data it was trained on, and the complexity of the learning algorithms used during training.
A large language model (LLM) refers to a specific class of language model that has significantly more parameters than traditional language models. Parameters are the internal variables of the model that are learned during the training process and represent the knowledge the model has acquired" (Rouse, 2024).
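To make "parameters" a bit more concrete, here is a toy back-of-the-envelope sketch in Python. The model shape and sizes here are hypothetical and drastically simplified; real LLMs are transformer networks, not two multiplied grids of numbers.

# A toy illustration of what "parameters" means: the learned numbers inside a model.
# The sizes below are made up for illustration; real LLMs are far larger and more complex.
vocab_size = 10_000   # imagine a vocabulary of 10,000 words
embed_dim = 256       # each word gets a learned vector of 256 numbers

# An embedding table (one vector per word) and an output layer (one score per
# vocabulary word) are each just big grids of numbers learned during training.
embedding_params = vocab_size * embed_dim   # 2,560,000
output_params = embed_dim * vocab_size      # 2,560,000

total_params = embedding_params + output_params
print(f"Toy model parameters: {total_params:,}")   # 5,120,000 (about 5 million)

# For scale: GPT-3 was reported to have about 175 billion parameters,
# tens of thousands of times more than this toy model.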
More information about GenAI and teaching and learning can be found on the MRU GenAI webpage: https://library.mtroyal.ca/ai
For the purposes of text generation, here are a few GenAI LLM chatbot tools you could use (this list is not exhaustive):
OpenAI's ChatGPT (requires a free account, which provides a limited number of ChatGPT 4o uses plus ChatGPT 4o mini, their free chatbot)
Google AI's Gemini (formerly known as Bard; requires a Google account to use the chatbot)
Perplexity AI (doesn't require an account, but a free account is required to try Perplexity AI Pro and to save chats/threads)
Microsoft's Copilot/Bing search (doesn't require an account, but supposedly works best with a Microsoft account and in the Microsoft Edge browser)
HuggingFace's HuggingChat (doesn't require an account to use the chatbot)
Keep in mind:
These models work by calculating which word is most likely to come next in a sequence (see the sketch after this list).
These models are not search engines, or at least they weren't originally designed as search engines. Some of them now have search functionality and will provide footnotes (like Copilot/Bing), but it is still worth examining the linked source to see how the chatbot has represented it. This is different from Google Search snippets, where an excerpt from the actual source is shown to the user.
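To make "predict the next most likely word" concrete, here is a minimal Python sketch using simple bigram counts over a toy corpus. Real LLMs use neural networks trained on enormous datasets rather than a lookup table, but the core task, scoring candidate next words and picking a likely one, is the same.

# A minimal sketch of next-word prediction using bigram counts.
# A toy stand-in for what LLMs do with neural networks at vastly larger scale.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count how often each word follows each other word in the corpus.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the word that most often followed `word` in the training text."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))   # "cat" (ties with "mat" in this tiny corpus)
print(predict_next("cat"))   # "sat" (ties with "slept")

Note that nothing in this process checks whether a continuation is true; the model simply reproduces patterns in its training text, which is one root of the "fabrication" problem discussed below.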
There are many issues with GenAI tools. Here is a non-exhaustive list of some to keep in mind:
Academic integrity—probably the issue that has been most talked about in higher education. What does it mean if GenAI can "pass" an assignment/test? Should the assignment/test be altered? How much—if at all—should students be taught about how to use GenAI? (Answers to that question vary significantly by discipline, from total embrace to outright banning.)
Example: ChatGPT "passing" the MCAT: "Depending on its visual item response strategy, ChatGPT performed at or above the median performance of 276,779 student test takers on the MCAT" (Bommineni et al., 2023). Similar results have been reported for other standardized tests.
Research integrity—can GenAI be considered an author? Should researchers have to disclose if they've used GenAI in their research or their writing? How should that disclosure be made? There have been notable instances of GenAI output (both text and images) making it past peer review, particularly soon after ChatGPT's popularization.
User privacy and protection of user information—if people disclose private information to GenAI tools, will it be used to train/refine the tools? (For example: AI therapy chatbots)
Bias—both in GenAI training data and in generated output, in text and images alike. (Over-correction for bias has been its own problem, too!)
Deskilling—if we come to rely too much on GenAI, will we lose valuable skills? Are there some skills that we're okay with losing because we'll save time and energy for other, more important tasks? (For example: calculators and long division)
Copyright infringement—in training data and in generated output. (For example: GenAI image generation tools have been the target of a number of lawsuits, many of them American cases.)
Distinguishing machines from humans—do people have a right to know if/when they're interacting with a bot that is convincingly "human"? If so, how will people respond to the disclosure that they're being served by AI? Are there communicative contexts where this disclosure may not be met positively? (For example: Vanderbilt University)
Environmental impacts/sustainability—concerns about the energy consumed and the carbon emitted both in training GenAI models and then in integrating them into preexisting software, workflows, etc. One estimate of the water used to generate a 100-word email using ChatGPT: 519 ml (a little over 1 bottle of water).
Many people in education (myself included) are still getting their heads around generative AI as a topic, and this is made difficult by how quickly the technology changes and how little non-experts understand about it. It is incredibly complex, black-boxed technology.
If you do choose to use generative AI, you may want to use it as a brainstorming partner early in your exploration of a topic, similar to how you might browse a Wikipedia article to get a quick grasp of a subject in the early stages of your research. Do not use generative AI as a complete replacement for research tools like LibrarySearch or Google Scholar. If you do, your research (and thinking) will suffer.
Specifically, be sure to scrutinize any source(s) that generative AI provides you with on a topic. This is because, at this point, it is prone to error: what some have called "hallucination," but what I prefer to call "fabrication."
If generative AI provides you with a source:
(1) make sure that the source actually exists; and, if it does exist,
(2) make sure that the source actually contains the information that generative AI has attributed to it.
(A short sketch of one way to automate the first check follows the examples below.)
For example, when ChatGPT was asked for sources on influencer marketing, it offered the following:
Book: Influencer Marketing for Dummies by Kristy Sammis, Cat Lincoln, and Stefania Pomponi
This source does exist and it was written by these authors, but it is a For Dummies book that wouldn't be considered scholarly.
Book: Influencer Marketing: Building Brand in a Digital Age by Duncan Brown and Nick Hayes
This source does exist and it was written by those authors, but ChatGPT has fabricated a subtitle that the book doesn't have.
Academic Article: "The Rise of Influencer Marketing and Its Impact on Consumer Behavior" by Liu, Hu, and Zhang (2019)
To the best of my searching abilities, this source does not exist.
Academic Article: "Ethical and Legal Issues in Influencer Marketing" by Brenner, A. and Capron, L. (2019)
To the best of my searching abilities, this source does not exist.
Academic Article: "The Dark Side of Social Media: A Consumer Psychology Perspective" by Phua, J., Jin, S.V., and Kim, J.J. (2017)
This source is a Frankenstein composite of two sources: the authors were taken from one real article, and the title was taken from an edited book with which those authors had no involvement.
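If you want to automate step (1), one option is Crossref's free public API, which indexes a large share of scholarly literature. Here is a minimal Python sketch; it assumes you have the third-party requests package installed. Two caveats: Crossref doesn't index everything (books especially), so a miss is a prompt to keep searching in LibrarySearch or Google Scholar, not proof of fabrication; and even a title match doesn't verify step (2), that the source actually says what the AI claims.

# A minimal sketch of checking whether a citation exists in Crossref's index.
# Requires the third-party `requests` package (pip install requests).
import requests

def crossref_lookup(citation, rows=3):
    """Print Crossref's closest matches for a free-text citation string."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": citation, "rows": rows},
        timeout=30,
    )
    resp.raise_for_status()
    for item in resp.json()["message"]["items"]:
        title = (item.get("title") or ["(no title)"])[0]
        authors = ", ".join(a.get("family", "?") for a in item.get("author", []))
        print(f"- {title} ({authors}) DOI: {item.get('DOI')}")

# One of the fabricated citations from above; eyeball whether any result's
# title, authors, and year actually line up with what ChatGPT claimed.
crossref_lookup("The Rise of Influencer Marketing and Its Impact on Consumer Behavior Liu Hu Zhang 2019")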
FASTER principles (developed by the Treasury Board of Canada Secretariat, or TBS):
To maintain public trust and ensure the responsible use of generative AI tools by federal institutions, institutions should align with the “FASTER” principles that TBS has developed:
Fair: ensure that content from these tools does not include or amplify biases and that it complies with human rights, accessibility, and procedural and substantive fairness obligations; engage with affected stakeholders before deployment
Accountable: take responsibility for the content generated by these tools and the impacts of their use. This includes making sure generated content is accurate, legal, ethical, and compliant with the terms of use; establish monitoring and oversight mechanisms
Secure: ensure that the infrastructure and tools are appropriate for the security classification of the information and that privacy and personal information are protected; assess and manage cyber security risks and robustness when deploying a system
Transparent: identify content that has been produced using generative AI; notify users that they are interacting with an AI tool; provide information on institutional policies, appropriate use, training data and the model when deploying these tools; document decisions and be able to provide explanations if tools are used to support decision-making
Educated: learn about the strengths, limitations and responsible use of the tools; learn how to create effective prompts and to identify potential weaknesses in the outputs
Relevant: make sure the use of generative AI tools supports user and organizational needs and contributes to better outcomes for clients; consider the environmental impacts when choosing to use a tool; identify appropriate tools for the task; AI tools aren’t the best choice in every situation
The linked list of GenAI tools is super long and a little unwieldy, so you might want to first pick a category of GenAI product and then narrow down to a specific tool within that category.
The categories are:
General Purpose Tools (pp. 1-9)
Discovery Tools (pp. 10-19)
Teaching & Learning Tools (pp. 19-31)
Workflow Tools (pp. 31-42)
Writing Tools (pp. 42-46)
Coding Tools (pp. 46-48)
Image Generation Tools (pp. 49-50)
Other (pp. 50-53)
Spend a few minutes familiarizing yourselves with the tool as a group.
If the tool requires account creation to access it and you're not comfortable with that, please choose another tool that provides free functionality without an account, or just watch a demo video of the tool.
Then answer the following questions:
Which tool did you choose to investigate?
What was the tool made for? What are the claimed benefits of a person using it?
How did your experimentation process go? Did you get the tool to successfully generate output?
If so, were you satisfied with the tool's generated output? Did the tool seem useful to you?
If not, what was it that prohibited you from generating output?
Can you see any issues with the tool (maybe 1 or more from the list of issues provided earlier in class)?