
Getting Started with AI

Learn how to use generative AI effectively and safely.

AI & Ethics

There is currently no way to ethically use the most popular, publicly available generative AI models. To use AI ethically, the sources of the training data must be disclosed and entirely in the public domain. None of the companies behind these models, such as OpenAI, Google, and Microsoft, currently meets these criteria. The data on which these popular and widely available generative AI models (ChatGPT, Claude, Gemini, Copilot, DALL-E) have been trained is tainted by work stolen from artists and writers.

Generative AI isn't intrinsically unethical, but the people who select the training data can be. Training a model exclusively on ethically sourced data is time intensive, but it is possible.

Regardless, AI is here to stay, and employers will expect prospective employees to understand how to use it, so the best we can do is focus on causing the least harm.

  • Be aware of SOWELA's Academic Integrity Policy on AI
    • Poor academic integrity examples include, but are not limited to, the following: using ChatGPT or other AI (artificial intelligence) to produce any portion of assigned work unless authorized by the course instructor.
  • Be aware of your instructor's AI policy in their syllabus
  • Do not use AI on assignments unless you have your instructor's explicit permission
  • Cite any and all AI use regardless of whether it is for class or personal use
  • Evaluate the models you select to use: do they transparently disclose the sources of their training data?
  • Remember that your instructors have AI-detection software. Don't cheat!

Why Cite AI Usage?

Beyond keeping in line with SOWELA's Academic Integrity Policy, why should you cite AI usage?

[Image: baby peacock results from a Google image search. Half are AI generated.]

The fallout of uncited AI use can be seen everywhere online. Message boards are filled with bots, social media has been swarmed with fake and misleading images, and Google image search has turned into a pile of AI-generated slop.

Google image search was once a reliable place to quickly find images for whatever you were looking for, such as a presentation. Now the images there can no longer be trusted. For example, if you search 'baby peacock', roughly half of the results will be fake.

If AI-generated content is not marked as such, there is no way to filter it out, and the internet as a whole becomes much worse. Always mark AI-generated content for what it is, regardless of whether it is for schoolwork or not.


Dead Internet Theory

Dead internet theory is a conspiracy theory from around 2021 claiming that nearly everyone else posting online was a bot and that all content was inorganic, curated in an effort to manipulate.

With the advent of easily accessible generative AI, that hoax is coming to fruition, because unethical users refuse to label AI-generated content for what it is and hosting companies place no barriers against it. You must now ask yourself whether what you are seeing is real. This has hit everything:

  • Unvetted music streams and playlists now contain AI-generated music.
  • Long-form videos on YouTube use AI voices to read AI-generated scripts over AI-generated images of the topic being discussed.
  • Facebook is full of viral AI images of religious figures or 'fake history'.
  • Twitter and Reddit are filled to the brim with bot accounts farming likes or karma.

The internet truly feels dead now.

The only way to combat this is to label AI-generated content for what it is. If you generate an AI image and post it somewhere, be sure to cite it, or it too may end up on Google image search!

Be Aware of Bias

It is important to keep in mind that generative AI can be biased. If the data a model is trained on is biased, the AI will give biased output (just as it will give inaccurate information if it is trained on inaccurate data). Bias is a massive issue in facial recognition AI and image generators, but language models are not free from bias either.

  • OpenAI acknowledges that its model, ChatGPT, is skewed towards Western views and admits that the LLM performs best in English, as some steps to prevent harmful content have only been tested in English. The company also says that the model's agreeable nature can reinforce a user's stance, even if that stance has a strong bias.
  • In its work to reduce bias in Claude, Anthropic cites a paper (Bias Benchmark for Question Answering) noting that it is well documented that language models learn social biases. In testing, models tended to select unsupported answers rather than express uncertainty, and those unsupported answers often reflected social bias.