
ENGL 2207 - Winter 2024

Artificial intelligence (AI):

"Machines that imitate some features of human intelligence, such as perception, learning, reasoning, problem-solving, language interaction and creative work" (UNESCO, 2022).

Generative AI (GenAI):

“A type of artificial intelligence that involves creating machines or computer programs that can generate new content, such as images, text, or music. Unlike traditional AI systems that rely on predefined rules or pre-existing data to make decisions, generative AI models use algorithms and neural networks to learn patterns and relationships in data and generate new outputs based on that learning” (Kwantlen Polytechnic University, n.d., p. 1).

Large Language Models (LLMs):

"A language model is a type of artificial intelligence model that is trained to understand and generate human language. It learns the patterns, structures, and relationships within a given language and has traditionally been used for narrow AI tasks such as text translation. The quality of a language model depends on its size, the amount and diversity of data it was trained on, and the complexity of the learning algorithms used during training.

A large language model (LLM) refers to a specific class of language model that has significantly more parameters than traditional language models. Parameters are the internal variables of the model that are learned during the training process and represent the knowledge the model has acquired" (Rouse, 2024).
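To make the idea of "parameters" concrete, here is a minimal, purely illustrative Python sketch (not how a real LLM is actually built): a model's learned knowledge is stored as a table of numbers, and counting those numbers gives the parameter count. The vocabulary and table sizes below are made up for illustration.

```python
# Toy sketch: a model's "parameters" are just stored numbers (weights).
# Here, a pretend 4-word vocabulary with a 4x4 table of next-word scores
# gives 16 parameters; GPT-3-class models have on the order of 175 billion.
vocab = ["the", "cat", "sat", "mat"]

# One score per (current word, next word) pair; each entry would be
# adjusted during training. We initialize them all to 0.0 here.
weights = [[0.0] * len(vocab) for _ in vocab]

num_parameters = sum(len(row) for row in weights)
print(num_parameters)  # prints: 16
```

The point of the sketch is only scale: "more parameters" means a bigger table of learned numbers, which is what lets larger models capture more patterns from more data.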

More information about GenAI and teaching and learning can be found on the MRU GenAI webpage: https://library.mtroyal.ca/ai

For the purposes of text generation, here are a few GenAI LLM chatbot tools you could use for the assignment (this list is not exhaustive):

  • OpenAI's ChatGPT (requires a free account to use GPT-3.5, the free version of the chatbot)

  • Google's Gemini (formerly known as Bard; requires a Google account to use the Gemini chatbot)

  • Perplexity AI (doesn't require an account, but a free account is required to try Perplexity Pro and to save chats/threads)

  • Microsoft's Copilot/Bing search (doesn't require an account, but reportedly works best with a Microsoft account and in the Microsoft Edge browser)

  • HuggingFace's HuggingChat (doesn't require an account to use the chatbot)

  • Meta's Llama 2 (doesn't require an account to use hosted chatbot demos)

Keep in mind:

  • These models work by calculating the most likely next word (token) in a sequence, based on patterns learned from their training data.

  • These models are not search engines, or at least they weren't originally designed as search engines. Some of them now have search functionality and will provide footnotes (like Copilot/Bing), but it is still worth examining the linked source to see how the chatbot has represented it. This differs from Google Search snippets, where an excerpt from the actual source is shown to the user.
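As a rough illustration of the "predict the next word" idea in the first point above, here is a toy Python sketch: a simple bigram counter, not an actual neural LLM. It predicts the most likely next word purely from counts in a tiny made-up corpus; real LLMs do something analogous, but with learned weights over enormous datasets.

```python
from collections import Counter, defaultdict

# A tiny "training corpus" (made up for illustration).
corpus = "the cat sat on the mat and the cat slept".split()

# Count which word follows each word in the corpus.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the most frequent word seen after `word`, or None if unseen."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # prints: cat ("cat" follows "the" twice, "mat" once)
```

Note what this implies: the model picks whatever is statistically likely, not whatever is true, which is one way to understand the "hallucination" problem discussed below.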


Example prompt:

Write about a genre of music that you like: pop music. Provide some examples of musicians in this genre and of the albums and songs that they have written. Include a personal anecdote describing your experiences with this music. Include at least two reputable sources about the genre.

There are many issues with GenAI tools. Some to keep in mind are:

  • User privacy and protection of user information—if people disclose private information to GenAI tools, will it be used to train/refine the tools? (For example: AI therapy chatbots)

  • Bias—both in GenAI training data and in generated output. (Over-correction for bias has recently become a problem of its own, too!)

  • Information quality—sometimes called the "hallucination" or fabrication problem

  • Deskilling—if we come to rely too much on GenAI, will we lose valuable skills? Are there some skills that we're okay with losing because we'll save time and energy for other, more important tasks? (For example: calculators and long division)

  • Copyright infringement—in training data and in generated output (For example: GenAI image generation tools and lawsuits—here's a list of some American cases)

  • Distinguishing machines from humans—do people have a right to know if/when they're interacting with a bot that is convincingly "human"? If so, how will people respond to the disclosure that they're being served by AI? Are there communicative contexts where this disclosure may not be met positively? (For example: Vanderbilt University)

  • Environmental impacts—concerns about the energy consumption and carbon emissions involved both in training GenAI models and in integrating them into preexisting software, workflows, etc.

Librarian

Joel Blechinger
he/him/his
Contact:
Email: jblechinger@mtroyal.ca
Phone: 403.440.8624
Office: EL4423E