When implementing AI in an educational setting, one should first consider ethical issues such as discrimination, bias, and surveillance. AI-based pattern recognition is itself problematic: like humans, these systems are shaped by the environment in which they learn, so the data they are trained on ultimately determines what they learn. Pattern matching also has more insidious manifestations. The AI behind a social media platform captures a user's browsing history, often without the user realizing they are being watched (McMullan, 2015), and uses it to keep them in an echo chamber of feeds that can trap them in an increasingly narrow worldview.
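To see the feedback loop in the abstract, consider the toy sketch below. All names and data are hypothetical, and it is not any real platform's algorithm; it only illustrates how a ranker that favors previously clicked topics collapses the feed onto a single topic.

```python
# Toy echo-chamber sketch (hypothetical topics and clicks, not a real feed).
from collections import Counter
import random

TOPICS = ["politics", "sports", "science", "cooking", "travel"]

def recommend(click_history, n=5):
    """Rank topics by past click counts; ties broken at random."""
    counts = Counter(click_history)
    return sorted(TOPICS, key=lambda t: (-counts[t], random.random()))[:n]

history = ["politics"]  # a single early click...
for _ in range(10):
    feed = recommend(history)
    history.append(feed[0])  # ...and the user keeps clicking the top item

print(Counter(history))  # the history collapses onto one topic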
DALL-E 2 is an AI-powered image generation tool created by OpenAI that can create unique and complex images from text prompts. It uses a combination of neural networks and natural language processing (NLP) to interpret the user's input and generate an image that matches the description. The platform is designed to be highly versatile and can produce images ranging from the realistic to the highly abstract or surreal. Its potential applications include graphic design, art, and visual storytelling.
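For a sense of how such a tool is invoked programmatically, here is a minimal sketch using OpenAI's Python client. It assumes an OPENAI_API_KEY environment variable, and the exact method names and parameters vary between library versions, so treat it as illustrative rather than definitive.

```python
# Minimal sketch using the openai Python package (v1+ interface assumed).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.images.generate(
    model="dall-e-2",
    prompt="a watercolor painting of a classroom of the future",  # example prompt of my own
    n=1,
    size="512x512",
)
print(response.data[0].url)  # URL of the generated image
```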
In the educational setting, AI is often presented as being entirely disembodied from its human creators, which somehow implies that it is objective. There is a mountain of evidence that this is not the case. Since option 1 of this week's task asks us to take on the role of a county judge at a bail hearing, I decided to put "detain or release individual defendant" in the search bar. The screenshot of this search shows a clear bias: most of the defendants pictured are people of color.

As I continued to research this topic, I found an example of bias creeping into BERT, a universal language model used by Google's search engine for tasks like sentence prediction. It learns from digitized text, including all the biases contained in that content. For example, it has been found that, in general, BERT didn't "give women enough credit," and that when fed 100 random words, in "99 cases out of 100, BERT was more likely to associate the words with men rather than women" (Metz, 2019). A rough probe of this effect is sketched below.

As with any machine learning system, it is possible that DALL-E 2 has biases built into its training data, which could lead to biased or problematic output. Additionally, because the model behind DALL-E 2 has not been made public, it is difficult to know exactly how it works or what biases it may contain. OpenAI has stated that it has taken steps to develop DALL-E ethically and responsibly, and it has implemented safeguards to prevent the generation of inappropriate or harmful content. Nonetheless, it is important to be aware of the limitations and potential biases of any AI tool and to approach its output with a critical eye.
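Returning to the BERT example: the kind of association test Metz describes can be approximated with an off-the-shelf masked language model. The sketch below uses the Hugging Face transformers library and the public bert-base-uncased checkpoint, with prompts of my own devising rather than the study's actual methodology.

```python
# Illustrative probe of gendered associations in BERT via fill-mask
# (prompts are my own; this is not the methodology from the cited article).
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

for occupation in ["doctor", "nurse", "programmer", "teacher"]:
    preds = fill(f"[MASK] works as a {occupation}.", top_k=5)
    tokens = [p["token_str"] for p in preds]
    # Inspect whether 'he' or 'she' appears, and in what order, per occupation.
    print(occupation, "->", tokens)
```

Running a probe like this makes the abstract claim concrete: if the model consistently ranks "he" above "she" for certain occupations, that pattern came from its training text, not from anything objective about the occupations themselves.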
Most AI systems currently used in education are created by private companies with little input from educators. This can leave teachers without the training and support they need to use AI effectively, potentially marginalizing them. As AI becomes more prevalent in education, it is important for teachers to be aware of its potential impacts on their students and to remain vigilant.
References
McMullan, T. (2015, July 23). What does the panopticon mean in the age of digital surveillance? The Guardian. https://www.theguardian.com/technology/2015/jul/23/panopticon-digital-surveillance-jeremy-bentham
Metz, C. (2019). We teach A.I. systems everything, including our biases. The New York Times. https://people.eou.edu/soctech/files/2020/12/BERTNYT.pdf