AI Ethics

The Large Language Models (LLMs) underpinning popular AI chatbots are trained on human-generated creative content, which they recombine in their outputs. In most cases, this content has been collected without permission and without attribution. The legal status of LLMs and their outputs remains in flux as governments draft policy to regulate the sector.

For now, designers using chatbots should be aware that the Terms of Service for many platforms shift responsibility for the use and distribution of copyrighted content to users. In addition, you often have limited rights to the content you produce with an LLM; the platform generally reserves the right to reproduce your work for any purpose, including further model training.

As the ethical and legal landscape evolves, be cautious about how you use chatbots:

  • Be aware that LLM outputs are often incomplete or inaccurate, and may reflect whatever social and cultural biases exist in their training data.
  • Verify training data sources wherever possible.
  • Confirm outputs with reputable sources to ensure accuracy. Consider using platforms like Perplexity AI, which cite their sources and allow selection of source types.
  • Reject LLM outputs when necessary.
  • When you use a chatbot in your work, always cite the platform you used.
