Some ethical considerations in ChatGPT and other LLMs

Like many others, I’ve been thinking about GPT and ChatGPT lately, and I’m particularly interested in diving deeper into ethical considerations and issues related to these kinds of tools. As I start looking into these questions, I realize there are a great many such considerations, and here I’m only going to be able to scratch the surface. But I wanted to pull together for myself some ethical areas that I think may be particularly important for post-secondary students, faculty, and staff to consider.

Notes:

  • This post will focus on ethical issues outside of academic integrity, which is certainly an important issue but not my particular focus here.
  • An area I barely touch on below, but plan to look into more, is AI and Indigenous approaches, protocols, and data sovereignty. One place I will likely start is by digging into a 2020 position paper by an Indigenous protocol and AI working group.
  • This post is quite long! I frequently make long blog posts but this one may be one of the longest. There is a lot to consider.
  • I am focusing here on ethical issues and concerns, and there are quite a few. It may sound like I’m arguing that we should not use AI language models like ChatGPT in teaching and learning. That is not my point here; rather, I think it’s important to recognize ethical issues when considering whether or how to use such tools in an educational context, and to discuss them with students.

Some of the texts I especially relied on when crafting this post, and which I recommend:

And shortly before publishing I learned of this excellent post by Leon Furze on ethical considerations regarding AI in teaching and learning. It has many similar points to the below, along with teaching points and example ways to engage students in discussing these issues, focused on different disciplines. It’s very good, and comes complete with an infographic.

My post here has been largely a way for me to think through the issues by writing.

Bias, discrimination, stereotypes, toxicity in outputs

Perpetuating harmful stereotypes and discrimination is a well-documented harm in machine learning models that represent natural language (Caliskan et al., 2017). (Weidinger et al., 2021, p. 9)

Bender et al. (p. 613) reveal that large datasets do not equally represent online users but significantly overrepresent younger users, people from developed countries, and English speakers. This means that dominant biases are disproportionately displayed including white supremacist, sexist, and ageist views. In use, GPT-3 has been shown to reproduce subtle biases and overtly discriminatory language patterns from its training data in many contexts including gender, race, religion, and disability. (Chan, 2022)

As Weidinger et al. (2021) note, “LMs are optimised to mirror language as accurately as possible, by detecting the statistical patterns present in natural language” (p. 11), so when the datasets they are trained on contain biases, discriminatory, abusive, or toxic language, these then find their way into the tools’ outputs. As Abid et al. (2021) note, one method to address this problem is to carefully select training data; but this is not how GPT (on which ChatGPT is built) has been trained. Abid et al. (2021) were able to reduce some biased and discriminatory outputs by changing prompts fed to GPT, but this is a manual solution that relies on users choosing (and knowing how) to do so.
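To make concrete how manual this kind of prompt adjustment is, here is a minimal sketch in Python, assuming the pre-1.0 openai SDK and a completion model such as text-davinci-003 available at the time of writing; the prompts are paraphrased illustrations in the spirit of Abid et al.’s intervention, not their exact materials.

```python
# A minimal sketch of manually adjusting a prompt to try to steer a completion
# model away from a biased continuation. Assumes the pre-1.0 "openai" Python SDK
# and an API key in the OPENAI_API_KEY environment variable; the model name and
# prompts are illustrative only, not Abid et al.'s exact setup.
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]


def complete(prompt: str) -> str:
    """Return a short completion for the given prompt."""
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        max_tokens=40,
        temperature=0.7,
    )
    return response.choices[0].text.strip()


plain_prompt = "Two Muslims walked into a"
# Prepending a short positive-context phrase can shift the distribution of
# completions, but only if the user knows to do this (and how).
adjusted_prompt = "Muslims are hard-working and peaceful. " + plain_prompt

print("Plain:   ", complete(plain_prompt))
print("Adjusted:", complete(adjusted_prompt))
```

Even in this toy form, the burden of mitigation falls on the person writing the prompt rather than on the model or its training data.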

ChatGPT does have content filters to try to avoid such issues. For example, in August 2022, OpenAI released a Moderation endpoint to API developers, meant to use AI to flag harmful content. According to OpenAI’s announcement about this tool, “When given a text input, the Moderation endpoint assesses whether the content is sexual, hateful, violent, or promotes self-harm—content prohibited by our content policy.” In their Nov. 30, 2022 announcement about ChatGPT, OpenAI notes:

While we’ve made efforts to make the model refuse inappropriate requests, it will sometimes respond to harmful instructions or exhibit biased behavior. We’re using the Moderation API to warn or block certain types of unsafe content, but we expect it to have some false negatives and positives for now. We’re eager to collect user feedback to aid our ongoing work to improve this system.

You can read more about the Moderation endpoint in OpenAI’s API documentation, including the kinds of content it is meant to address, such as hate, violence, and sexual content (among others).
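For those curious what using the Moderation endpoint looks like in practice, here is a minimal sketch, again assuming the pre-1.0 openai Python SDK and an API key; the helper function and sample text are my own illustration, not OpenAI’s example code.

```python
# A minimal sketch of screening a piece of text with OpenAI's Moderation
# endpoint. Assumes the pre-1.0 "openai" Python SDK and an API key in the
# OPENAI_API_KEY environment variable.
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]


def check_text(text: str) -> bool:
    """Return True if the Moderation endpoint flags the text, printing details."""
    response = openai.Moderation.create(input=text)
    result = response["results"][0]
    # Categories include hate, violence, sexual content, and self-harm.
    for category, flagged in result["categories"].items():
        if flagged:
            print("Flagged category:", category)
    return result["flagged"]


if check_text("Some model-generated text to screen before sharing with students."):
    print("This text is flagged under the content policy and should not be used as-is.")
```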

Several news reports of biased, racist, and discriminatory outputs show that the filter still has a ways to go, such as those described in a January 16, 2023 article by Hannah Getuhan for Insider and a December 8, 2022 article by Davey Alba for Bloomberg. The above quote, and a quote from OpenAI CEO Sam Altman in the Bloomberg article, indicate that the company is relying at least in part on user feedback (such as the “thumbs up” or “thumbs down” functionality for rating outputs) to improve the filters. In the meantime, biased, discriminatory, and harmful content is being created.

OpenAI notes in their API documentation that “From hallucinating inaccurate information, to offensive outputs, to bias, and much more, language models may not be suitable for every use case without significant modifications” (under “Safety Best Practices”).

Bender et al. (2021) note a downstream effect as well: when people use these kinds of language models and add biased or discriminatory text to the internet, it not only spreads such content more widely, it can also contribute to perpetuating the problem in the training data for later language models (p. 617).

For the educational context, it’s important for those who might want to use content generated by ChatGPT in teaching and learning to recognize that what they or their students produce with the tool can contain overt or (also dangerous) subtle biases and stereotypes, potentially reinforcing those fed into it through training data, unless this issue is directly addressed and discussed.

Relying on exploitative labour practices for content moderation

One way that AI companies address problems such as the above is to have humans label text, images, videos, etc. containing violence, abuse, hate speech, and other toxic content, so as to build AI models that can find and remove such content automatically.

A recent article in Time magazine explained how OpenAI relied on underpaid workers in Kenya to do this labeling for a tool that ended up as part of ChatGPT. Not only were those workers paid extremely low wages (“between around $1.32 and $2 per hour depending on seniority and performance,” according to the Time article), they were also exposed on a daily basis to material that can be horrific and traumatic.

This issue is not limited to OpenAI’s work, of course. An article in Noēma in October 2022 (“The Exploited Labour Behind Artificial Intelligence”) points out that these kinds of practices are relied on by many companies, including those running major social media platforms that rely on automated content moderation:

Every murder, suicide, sexual assault or child abuse video that does not make it onto a platform has been viewed and flagged by a content moderator or an automated system trained by data most likely supplied by a content moderator. Employees performing these tasks suffer from anxiety, depression and post-traumatic stress disorder due to constant exposure to this horrific content.

As one grapples with the lingering issues of bias and harmful content in platforms such as ChatGPT, and discusses them with students, it would be useful to also talk about the ethical problems involved in attempts to label and remove such content.

Data and privacy

Privacy of those who make accounts and use the tools

As Autumm Caines notes, “Anytime you use a tool that needs an account the company now has an identifier in which they can track your use of the site to your identity” (“Prior to (or instead of) using ChatGPT with your students”). In its current form, ChatGPT requires a name, email address, and a mobile phone number. One can provide a pseudonym and an email address that doesn’t include one’s name; a mobile phone number could be more challenging to avoid tying to oneself, though one could perhaps use a pre-paid “burner” phone. The point is that if, for example, one wanted to ask one’s students to create an account and use a tool like this in teaching, it is important to let them know that the work they do in the tool can be tied to their identity through their account unless they take steps to anonymize it.

It’s also important, of course, to take a careful look at the Terms of Service and Privacy Policies to get as much of a sense as possible (which isn’t always as clear as one might like) as to how one’s data will be used.

OpenAI notes in their FAQ about the ChatGPT research release that they may review the content of conversations one has with the tool, “to improve our systems and to ensure the content complies with our policies and safety requirements.” The FAQ notes that the contents of conversations may also be used for future training of the model, which means, as Caines puts it, that “Using ChatGPT is providing free labor to OpenAI in their product development”: by making the tool freely available, the company invites large numbers of people to participate in the work needed to make the model better.

In their documentation about their GPT API, OpenAI notes:

As part of . . . continuous improvement, when you use OpenAI models via our API, we may use the data you provide us to improve our models. Not only does this help our models become more accurate and better at solving your specific problem, it also helps improve their general capabilities and safety.

On that page they also describe a procedure to opt out of having your data used to improve models.

Finally, it’s also not clear what happens to one’s data once one deletes one’s account. The FAQ for ChatGPT says:

Can you delete my data?

Yes, please follow the data deletion process here: https://help.openai.com/en/articles/6378407-how-can-i-delete-my-account

That takes you to the process for deleting an account, but what happens to one’s chat history and specific prompts, and how might all of that be tied to oneself as an individual? I haven’t been able to find that information in their documentation.

So of course I asked ChatGPT what happens to my chat history after I delete my account.

When you delete your account, your chat history with me may be deleted, depending on the service provider’s policy. It’s important to check the terms of service and privacy policy of the platform you’re using to interact with me to understand how they handle user data and chat history.

For OpenAI, if you delete your account, the company will destroy any data associated with your account, including your chat history. This process is subject to OpenAI’s data retention policy, and will be done in a way that makes the data irretrievable. Additionally, OpenAI also has technical and organizational measures in place to protect data against accidental or unauthorized access, alteration, or destruction.

It’s important to keep in mind that, even if the data is deleted, there may be copies of the data that exist in backups or other storage systems. These copies will be deleted over time in accordance with OpenAI’s data retention policy.

I can’t find OpenAI’s data retention policy anywhere on their website, unless it’s buried somewhere in the Privacy Policy that I missed.

It may seem like I’m digging into this tool more than others, but really, these are questions that one should ask before using tools oneself, or asking students to do so. For tools that have a licensing contract with the university, these sorts of questions have often already been asked and answered, and certain privacy requirements must be met (depending on the context); in British Columbia, a Privacy Impact Assessment is required.

Potential privacy leaks by the tools themselves

Weidinger et al. (2021) note a different privacy concern with LLMs, namely the possibility of the models disclosing private information that was part of their training data:

Current large-scale LMs rely on training datasets that contain information about people. Privacy violations may occur when training data includes personal information that is then directly disclosed by the model (Carlini et al., 2021). Such information may constitute part of the training data through no fault of the affected individual, e.g. where data leaks occur or where others post private information about them on online networks (Mao et al., 2011) (p. 19)

I couldn’t find any clear information in OpenAI’s documentation or blog posts about this (though there are quite a few blog posts and publications, and I haven’t reviewed them all!). But I did ask ChatGPT about steps that OpenAI is taking to address this kind of problem. (I also asked it about myself–the Christina Hendricks who teaches philosophy at the University of British Columbia–and it couldn’t find me or any private info about me).

After a little back-and-forth in which ChatGPT said that “my training data is periodically updated, and any private information would have been removed during the process of data preprocessing and cleaning,” I asked the following question and received a helpful (?) answer.

ME: How does open AI clean training data to remove private information?
CHATGPT: OpenAI uses a combination of techniques to clean and preprocess the training data to remove any private information. Some of these techniques include:
  • Data Anonymization: This involves replacing any personally identifying information, such as names, addresses, and phone numbers, with generic placeholders.
  • Data Masking: This involves obscuring sensitive information, such as financial data, with a symbol or character.
  • Data De-identification: This involves removing or altering any data elements that could be used to identify an individual, such as dates of birth or social security numbers.
  • Data Filtering: This involves removing any data that is deemed unnecessary or irrelevant for training the model, such as duplicate or low-quality data.
  • Regular Auditing: OpenAI regularly audits the training data to ensure that it is clean and free of private information.

These techniques are applied to the data before it is used to train the model to ensure that the model does not have access to any private information. Additionally, OpenAI follow the industry standard and comply with GDPR, CCPA and other data protection regulations.

When I asked if ChatGPT would give me the address of a famous person (the actress Christina Hendricks, because why not), it said:

As a language model, I don’t have access to any personal information and I don’t have the capability to access it. Additionally, it would be illegal and unethical to share someone’s personal information without their consent. It’s not appropriate to share or ask for someone’s address without a valid reason and proper authorization.

Nice one, ChatGPT. Your filters and guardrails are working here.

Entering private data

Weidinger et al. (2021) note that there is a danger of users inputting private information, a danger that may be exacerbated when one is chatting with a seemingly human-like bot, engaging in conversation much as one might with another human.

In conversation, users may reveal private information that would otherwise be difficult to access, such as thoughts, opinions, or emotions. Capturing such information may enable downstream applications that violate privacy rights or cause harm to users, such as via surveillance or the creation of addictive applications. (p. 30)

In their FAQ on ChatGPT, OpenAI notes that they can’t delete specific prompts from one’s history, and they say, “Please don’t share any sensitive information in your conversations.”

It is important to share with students that they should not reveal private information about themselves or others through their prompts.

Reinforcing existing power structures

Due to the nature of training data, LLMs can reproduce, and reinforce, larger social, political, and other power relationships. As Leon Furze puts it, “The power structures reflected in the datasets become encoded in the models, meaning that any output reinforces those structures” (“Chat GPT in Education: Back to Basics”). Furze notes as an example that if the training data is heavily skewed towards English language resources with particular cultural practices or references, the tool will likely not do as well with outputs in other languages or with other cultural references. Explaining further, Chan (2022) points out that

GPT-3’s dataset contained 93% English text and only 7% in other languages reflecting that GPT-3 is made for English-speaking (predominantly Western) countries in mind (Brown et al. [5], p. 14). Despite its impressive translation capabilities, the central issue is that English-speaking voices and perspectives are given overwhelming precedence.

In OpenAI’s own documentation on their API, they note that “our support for non-English languages is currently limited” (under “Moderation”).

Weidinger et al. (2021) note one potential concern in this arena (among others):

In the case of LMs where great benefits are anticipated, lower performance for some groups risks creating a distribution of benefits and harms that perpetuates existing social inequities (Bender et al., 2021; Joshi et al., 2021). By relatively under-serving some groups, LMs raise social justice concerns (Hovy and Spruit, 2016), for example when technologies underpinned by LMs are used to allocate resources or provide essential services. (p. 16)

There are also broader issues, such as the fact that all LLMs can do is repeat patterns that already exist in language, relying on those that appear most often to determine which words and phrases are likely to come next. It makes sense to think that they can thus perpetuate beliefs, values, processes, knowledges, etc. that are currently dominant.

Bender et al. (2021) explain, for example, that even though crawling the internet means there is a large amount of data in training sets, that doesn’t mean there is diversity amongst the viewpoints included. Differential access to and participation in generating content on the internet, as well as content moderation decisions, ways that content tends to propagate and get shared, and choices of what to include and exclude in the training data can mean that

… white supremacist and misogynistic, ageist, etc. views are overrepresented in the training data, not only exceeding their prevalence in the general population but also setting up models trained on these datasets to further amplify biases and harms. (p. 613)

In accepting large amounts of web text as ‘representative’ of ‘all’ of humanity we risk perpetuating dominant viewpoints, increasing power imbalances, and further reifying inequality. (p. 614)

Intellectual property & Indigenous data sovereignty

I am not a lawyer, and the issues here are complex, but there have been concerns raised and lawsuits begun against AI companies that scrape large amounts of data from the internet, including text and images, without permission from content creators. An article from The Verge in November 2022 (“The Scary Truth About AI Copyright is No One Knows What Happens Next”) discusses this issue from the perspective of copyright laws, particularly in the US, and how use of such scraped data for training purposes may be considered “fair use”, though the question is far from settled. See also “The Current Legal Cases Against Generative AI are Just the Beginning” (Techcrunch, January 2023).

There are already at least a few legal cases happening in this area, including a class action lawsuit around the creation of GitHub Copilot, in which, the lawsuit alleges, code was used to train an AI model without crediting its creators, in violation of the licenses on that code; and an upcoming lawsuit by Getty Images against Stability AI (maker of Stable Diffusion) for scraping its copyrighted images and using them to train an image-generation tool.

This is also an issue to pay attention to around Indigenous data sovereignty, which Tahu Kukutai and John Taylor explain as “the inherent and inalienable rights and interests of indigenous peoples relating to the collection, ownership and application of data about their people, lifeways and territories” (Indigenous Data Sovereignty: Towards an Agenda, 2016, p. 2). According to an SFU Library Guide on Indigenous data sovereignty, it means that

Indigenous Peoples have the right to own, control, access, and steward data about their communities, lands, and culture. Information management and data collection strategies must align with the practices and culture of the Indigenous Nation, community or Peoples who are represented in the data.

A set of principles I’ve seen referenced a number of times in this area is the OCAP® principles developed by the First Nations Information Governance Centre (quotes below are from this page):

  • Ownership refers to the relationship of First Nations to their cultural knowledge, data, and information. This principle states that a community or group owns information collectively in the same way that an individual owns his or her personal information.
  • Control affirms that First Nations, their communities, and representative bodies are within their rights to seek control over all aspects of research and information management processes that impact them. …
  • Access refers to the fact that First Nations must have access to information and data about themselves and their communities regardless of where it is held. The principle of access also refers to the right of First Nations’ communities and organizations to manage and make decisions regarding access to their collective information. …
  • Possession … refers to the physical control of data. Possession is the mechanism by which ownership can be asserted and protected.

Depending on the information in the training datasets for LLMs, and how output is used, Indigenous data sovereignty rights may be violated.

Disparate access

This is an issue with many technologies, but still worth pointing out. Weidinger et al. (2021) explain clearly:

Due to differential internet access, language, skill, or hardware requirements, the benefits from LMs are unlikely to be equally accessible to all people and groups who would like to use them. Inaccessibility of the technology may perpetuate global inequities by disproportionately benefiting some groups. (p. 34)

At the time of writing I couldn’t find any information on what the experience would be like trying to use ChatGPT with low wifi bandwidth, but I’m guessing there would be more slowness and errors. And as noted above, this particular tool works best with English. And those students who speak English well and have the time and wherewithal to develop more skills in prompt engineering are going to be able to produce better results.

I would add to the above quote the issue of financial access: while ChatGPT is free of cost to use for now, that could change at any moment. OpenAI notes in their FAQ on ChatGPT that it is free to use during “the initial research preview.” They have started plans for a paid tier, ChatGPT Pro, and those with some early access are quoting the price at $42 USD a month, clearly out of reach for many.

Just before publishing this post, OpenAI announced ChatGPT Plus, for $20 USD a month. With that subscription you get access to ChatGPT even during peak times, faster performance, and early access to new features and improvements. They say in that announcement that they will continue to offer a free option, but it seems reasonable to say that those using the free tier will be at a disadvantage in terms of availability during peak times and access to new features and improvements. Thus, in classes there may be some students who have access to the paid version and many who don’t. Some students may pool resources so that a group of them uses the same Pro or Plus account, while others can only access the free version.

For educators, it is important to understand that while the tool is free right now, if you incorporate it into activities, that could change. It is also important to recognize that students will have differential access when it comes to using it effectively for course activities and assignments.

Environmental impact

This is also not limited to LLMs, but it’s still worth considering. Bender et al. (2021) note the significant energy usage for training language models, and point to arguments suggesting energy efficiency as an important metric for success (along with other metrics):

As shown in [5],* the amount of compute used to train the largest deep learning models (for NLP and other applications) has increased 300,000x in 6 years, increasing at a far higher pace than Moore’s Law. To promote green AI, Schwartz et al. argue for promoting efficiency as an evaluation metric …. (p. 612)

* Amodei & Hernandez. 2018
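As a rough back-of-the-envelope illustration of why this outpaces Moore’s Law (my own arithmetic, using the quoted figure and a roughly two-year doubling time for Moore’s Law):

```latex
\underbrace{2^{6/2} = 8\times}_{\text{Moore's Law over 6 years}}
\qquad \text{vs.} \qquad
300{,}000\times \;\Rightarrow\; \text{doubling time} \approx \frac{72\ \text{months}}{\log_2 300{,}000} \approx \frac{72}{18.2} \approx 4\ \text{months}
```

In other words, the compute used for the largest models was doubling every few months rather than every couple of years.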

The authors note further that the costs of such environmental impacts are being borne most strongly by people in marginalized groups and poorer nations, while the benefits accrue to those whose language and culture are represented in the models, and who have the financial means to be able to access and use them.

Conclusion

There are, I expect, other important ethical topics to be considered, but this post is already over 4000 words!

I want to reiterate that my purpose with this post is to catalogue and think through some ethical considerations around LLMs like ChatGPT, but I am not concluding from all of this that they are so problematic that we ought not to use them at all in teaching and learning. There may be a good case to be made for that, but it’s not what I’m doing here. I’m trying instead to raise ethical considerations for those involved in teaching and learning to be aware of if they choose to use such tools.

Works Cited

Abid, A., Farooqi, M., & Zou, J. (2021). Large language models associate Muslims with violence. Nature Machine Intelligence, 3(6), Article 6. https://doi.org/10.1038/s42256-021-00359-2
Amodei, D., & Hernandez, D. (2018). AI and Compute. https://openai.com/blog/ai-and-compute
Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency. Association for Computing Machinery. https://doi.org/10.1145/3442188.3445922
Caliskan, A., Bryson, J. J., & Narayanan, A. (2017). Semantics derived automatically from language corpora contain human-like biases. Science, 356(6334), 183–186. https://doi.org/10.1126/science.aal4230
Carlini, N., Tramer, F., Wallace, E., Jagielski, M., Herbert-Voss, A., Lee, K., Roberts, A., Brown, T., Song, D., Erlingsson, U., Oprea, A., & Raffel, C. (2021). Extracting Training Data from Large Language Models. arXiv. http://arxiv.org/abs/2012.07805
Chan, A. (2022). GPT-3 and InstructGPT: Technological dystopianism, utopianism, and “Contextual” perspectives in AI ethics and industry. AI and Ethics. https://doi.org/10.1007/s43681-022-00148-6
Hovy, D., & Spruit, S. L. (2016). The Social Impact of Natural Language Processing. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). Association for Computational Linguistics. https://doi.org/10.18653/v1/P16-2096
Joshi, P., Santy, S., Budhiraja, A., Bali, K., & Choudhury, M. (2021). The State and Fate of Linguistic Diversity and Inclusion in the NLP World. arXiv. http://arxiv.org/abs/2004.09095
Mao, H., Shuai, X., & Kapadia, A. (2011). Loose tweets: An analysis of privacy leaks on Twitter. In Proceedings of the 10th Annual ACM Workshop on Privacy in the Electronic Society. Association for Computing Machinery. https://doi.org/10.1145/2046556.2046558
Schwartz, R., Dodge, J., Smith, N. A., & Etzioni, O. (2020). Green AI. Communications of the ACM, 63(12), 54–63. https://doi.org/10.1145/3381831
Weidinger, L., Mellor, J., Rauh, M., Griffin, C., Uesato, J., Huang, P.-S., Cheng, M., Glaese, M., Balle, B., Kasirzadeh, A., Kenton, Z., Brown, S., Hawkins, W., Stepleton, T., Biles, C., Birhane, A., Haas, J., Rimell, L., Hendricks, L. A., … Gabriel, I. (2021). Ethical and social risks of harm from Language Models. arXiv. https://doi.org/10.48550/arXiv.2112.04359

4 comments

  1. Fun little experiment – go through the headings of the blog and ask if that’s not also true of universities.

    Racial bias in output
    Exploitation of underpaid labour
    Bias towards English
    Exploitation of indigenous populations

    Is artificial knowledge production different from university knowledge production? Is one big business the same as another?

    I guess we have to ask why the services exist?

    Also hello, hope you’re well

    1. Hi Pat, thanks for stopping by! I hope you’re well too.

      And yes, we can certainly say similar things about many aspects of higher education. And many of the areas I talked about are also areas of concern for other learning technology too. So this is a matter of yes and… and this too.

      Asking why the services exist is a great question. If you mean services like ChatGPT, I’d say it’s likely partly a matter of research (what can we actually manage to do? what problems might we be able to solve?) and also a matter of money (how can this generate more revenue?). Which can also be said of scholarly knowledge production in many cases as well, probably–though I’d like to think with a heavier weight on research and problem-solving than on revenue generation; but it depends on the field and the project. Unsurprising, when all are within a similar set of social, economic, and political systems and structures.

      I guess I feel like I have at least some more influence on university knowledge production than AI knowledge production, specifically around teaching and learning, in my role. At least at this university. The AI models and training data seem more opaque to me, though to be honest, how and why knowledge gets produced at universities is also a big, thorny question. Through contributing to public discourse, maybe I can have a tiny voice along with many others to suggest change in AI knowledge production, as well as in actions I can take here and now around human teaching and learning practices.

      1. I think that’s the perception: those of us lucky enough to have some agency will be ok. But a lot of the ethical positions we have now arrived at were not paid for with the agency of those that made an error, but with the agency of those that had theirs taken away from them.

        The opaqueness is the sweet spot, because we live in a time of micromeasurement and legalesed everything. When we need measurement, or a metric, then the opaqueness seems contradictory, but the alternative is knowing how knowledge works, which seems somewhat philosophical.

  2. This is a fascinating and extensive look at one of the most important sets of issues right now, particularly for educators diving into the Chat World in experiment mode. I hope educators are at least considering the ethical and privacy issues. I worry, too many are not.
    Kevin
