There is currently no way to use the most popular, publicly available generative AI models ethically. To use AI ethically, the source of the training data must be disclosed and must be entirely in the public domain. At present, none of the companies behind these models, such as OpenAI, Google, and Microsoft, meets these criteria. The data on which these popular and widely available generative AI models (ChatGPT, Claude, Gemini, Copilot, DALL-E) have been trained is tainted by work stolen from artists and writers.
Generative AI is not intrinsically unethical, but the people who select the training data can be. Training a model exclusively on ethically sourced data is time-intensive, but it is possible.
Regardless, AI is here to stay, and employers will expect prospective employees to understand how to use it, so the best we can do is focus on causing the least harm.
- Poor academic integrity examples include, but are not limited to, the following: using ChatGPT or other AI (artificial intelligence) to produce any portion of assigned work unless authorized by the course instructor.
Beyond complying with SOWELA's Academic Integrity Policy, why should you cite AI usage?
The fallout of uncited AI use can be seen everywhere online. Message boards are filled with bots, social media has been swarmed with fake and misleading images, and Google image search has turned into a pile of AI-generated slop.
Google image search was once a good place to quickly find images for whatever you were looking for, such as a presentation. Now the images there can no longer be trusted. For example, if you search 'baby peacock', roughly half of the results will be fake.
If AI-generated content is not marked as AI-generated, then there is no way to filter it out, and the internet as a whole becomes much worse. Always mark AI-generated content for what it is, whether it is for schoolwork or not.
Dead internet theory was a conspiracy theory from 2021 whose proponents tried to convince others that everyone else posting online was a bot and that all content was inorganic, curated in an effort to manipulate.
With the advent of easily accessible generative AI, that hoax is actually coming to fruition, because unethical users refuse to label AI-generated content for what it is and hosting companies place no barriers against it. You must now ask yourself whether what you are seeing is real. This has hit everything:
- Unvetted music streams and playlists now contain AI-generated music.
- Long-form videos on YouTube now use AI voices to read AI-generated scripts over AI-generated images of the topic being discussed.
- Facebook is full of viral AI images of religious figures or 'fake history'.
- Twitter and Reddit are filled to the brim with bot accounts farming likes or karma.
The internet truly feels dead now.
The only way to combat this is to label AI-generated content for what it is. If you generate an AI image and post it somewhere, be sure to cite it, or else it too may end up in Google image search!
It is important to keep in mind that generative AI can be biased. If the data it is trained on is biased, then the AI will give biased output, just as it will give inaccurate information if it is trained on inaccurate data. Bias is a massive issue in facial recognition AI and image generators, but language models are not free from bias either.