There are 5 critical ethical concerns related to Generative AI and Large Language Models (LLMs) that we need to be aware of and mitigate.

1. Trust and Lack of Interpretability
2. Data Privacy
3. Plagiarism
4. Baked-in Bias
5. Environmental Impact and Sustainability 🧵
Deep learning models and LLMs in particular are so large and opaque that even the model developers are often unable to understand why their models are making certain predictions. /2
If one were to use ChatGPT to get first aid instructions, how can we know the response is reliable, accurate, and derived from trustworthy sources?

The ramifications of this lack of transparency are especially troubling in an era of fake news and misinformation. /3
Because LLMs come pre-trained and are subsequently fine-tuned for specific tasks, their training data and behavior are hard to audit, which creates a number of issues and security risks. Without knowing the model’s degree of confidence (or uncertainty), it’s difficult for us to decide when to trust its output. /4
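One rough way to approximate that missing confidence signal (a sketch, not something from this thread) is self-consistency: sample several answers to the same question and measure how much they agree. The `samples` list below is a hypothetical stand-in for repeated LLM calls at a nonzero temperature.

```python
from collections import Counter

def agreement_confidence(answers):
    """Return the majority answer and the fraction of samples that agree
    with it. Low agreement is a hint that the model's output should not
    be trusted without verification."""
    counts = Counter(answers)
    majority, votes = counts.most_common(1)[0]
    return majority, votes / len(answers)

# Hypothetical answers sampled from repeated calls to the same model.
samples = ["apply pressure", "apply pressure", "apply pressure",
           "elevate the wound", "apply pressure"]
answer, confidence = agreement_confidence(samples)
print(answer, confidence)  # apply pressure 0.8
```

This is only a proxy: a model can be consistently wrong, so high agreement is necessary but not sufficient for trust.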
GPT-3 has been shown to exhibit common gender stereotypes, associating women with family and appearance and describing them as less powerful than male characters. /5
Concerns over ChatGPT’s ability to turn our children into mindless, plagiarizing cheats are top of mind for many educators and are leading some school districts to ban the use of ChatGPT. /6
Any new tech will have pros and cons. While I am also excited by the new possibilities and the promise these models hold for each of us, it is our responsibility to make sure any model used in the public domain is monitored, explained, and regularly audited for bias. /end
