Tag Archives: EdTech Ethics

Principles of ethics in Ed Tech & AI (running list)

I’m going to use this post to note a few resources on ethical principles around educational technology that I haven’t yet discussed in the series I’ve been writing on ethics & ed tech. I will at some point get around to writing about these, or at least synthesizing them with others I’ve reviewed so far.

This post will be updated over time. It’s meant as a way for me to keep track of things I want to look into more carefully and/or collate with other principles. Eventually I’d like to map out which principles are common across these sets, and also pay attention to those that are not commonly included in existing sets of principles.

I also maintain a Zotero library on the ethics of educational technology and artificial intelligence, which I update over time.

Ethics in Ed Tech

Ethical Ed Tech Workshop at CUNY

Information and resources for a workshop on Ethical Approaches to Ed Tech, by Laurie Hurson and Talisa Feliciano, offered as part of the Teach@CUNY 2020 Summer Institute. The workshop web page includes a handout for participants that lists the following categories of questions to ask about ethics & ed tech:

  • Access
  • Control
  • Data
  • Inclusion
  • Intellectual Property & Copyright
  • Privacy
  • Source

See the handout for more details!

UTS Ed Tech Ethics Report

The University of Technology Sydney (UTS) went through a deliberative democracy process in 2021 to address the following question:

What principles should govern UTS use of analytics and artificial intelligence to improve teaching and learning for all, while minimising the possibility of harmful outcomes?

A report on the process and the draft principles was published in 2022. The categories of principles in that report are:

  • Accountability/Transparency
  • Bias/Fairness
  • Equity and Access
  • Safety and Security
  • Human Authority
  • Justifications/Evidence
  • Consent

Again, see the report for more details; the principles are in the Appendix.

Ethics in Artificial Intelligence

EU Ethical Guidelines on AI

In October 2022 the European Commission published a set of Ethical guidelines on the use of artificial intelligence and data in teaching and learning for educators.

The categories of these principles are:

  • Human agency and oversight
  • Transparency
  • Diversity, non-discrimination, and fairness
  • Societal and environmental wellbeing
  • Privacy and data governance
  • Technical robustness and safety
  • Accountability

See the PDF version of the report for more detail.

UNESCO Recommendation on the Ethics of AI

In 2022, UNESCO also published its Recommendation on the Ethics of Artificial Intelligence. The main categories of its ethical principles are:

  • Proportionality and do no harm
  • Safety and security
  • Fairness and non-discrimination
  • Sustainability
  • Right to privacy, and data protection
  • Human oversight and determination
  • Transparency and explainability
  • Responsibility and accountability
  • Awareness and literacy
  • Multi-stakeholder and adaptive governance and collaboration

Entangling Pedagogy and Technology (EdTechEthics Part 4)

I’m continuing a series of blog posts on ethics and educational technology, this time with a discussion of a recent open access paper by Tim Fawns called “An Entangled Pedagogy: Looking Beyond the Pedagogy—Technology Dichotomy.”

This paper doesn’t provide a framework for thinking about ethical considerations in educational technology, but rather talks about the importance of considering how technology and pedagogy are entangled with each other, and also with broader contexts and values, including ethical ones. It helps me think further about how ethics and other values are already embedded in educational technology decisions and uses, and it adds complexity to how I’ve been thinking about this topic. After reading and reflecting on the article, I am thinking even more about how the ethical evaluation of ed tech tools may differ across types of use, contexts, and pedagogical purposes.

I’m going to use this post to take some notes for myself on points from the article I’m finding particularly generative at the moment, and then reflect on implications for developing ethical principles, or a framework for an ethical approach to educational technology, at a post-secondary institution.

Also, I’m excited that there is a workshop about entangled pedagogy, led by Tim Fawns and Maha Bali, as part of MYFest 2022. I’m really looking forward to digging into these ideas further then!

Caveat: this is an incredibly rich article with some complexity that I’m still not sure I fully understand. And I am only going to be able to do a rough summary of many of the author’s very insightful arguments. If anyone reads things differently, or thinks something else is more prominent in the article than I’m indicating here, I’m happy to discuss further in comments!


Elements of Digital Ethics by Per Axbom (Ed Tech Ethics Part 3)

I have started a series of blog posts reviewing what others have done on the ethics of educational technology; see Ed Tech Ethics Part 1 and Ed Tech Ethics Part 2 so far.

Here in Part 3 I want to talk about a new resource I came across on Mastodon, a chart of Elements of Digital Ethics, by Per Axbom.

This chart is meant to cover ethical considerations and concerns related to work with digital technology generally, and much (if not all?) of it is also relevant to educational technology. There is a lot here, and I won’t go over every piece (Axbom’s website helpfully provides a summary of each area), but I do want to offer a few reflections to help me connect this work to educational technology specifically, and to what I’ve reviewed in previous posts.

The elements of the chart are not ethical principles or criteria so much as broad-ish areas in which ethical concerns and harms arise, and which should be considered when deciding things like what to purchase and how to use digital products and services.


Ethical questions about learning technology (Ed Tech Ethics Part 2)

As noted in the previous post on this blog, I’m reviewing some resources on the ethics of educational technology (aka learning technology). In that post I gave a short summary of, and some reflections on, the UK-based Association for Learning Technology’s Framework for Ethical Learning Technology. That framework is made up of fairly broad principles that can form a very useful foundation for self-reflection and discussion about ethical approaches to learning technology decisions and practices.

In this post, I’m going to consider a couple of sets of questions that can guide reviews of specific educational technology tools: (1) a rubric by Sean Michael Morris and Jesse Stommel that has been used and refined in several Digital Pedagogy Lab Institutes, and (2) a tool to help with analyzing the ethics of digital technology that Autumm Caines adapted from another source for an Ed Tech workshop at the 2020 Digital Pedagogy Lab Institute.

Morris & Stommel, Rubric for Critically Evaluating Digital Tools

This rubric comes from Morris & Stommel (2017), where they describe a “crap detection” exercise they have used in Digital Pedagogy Lab Institutes, asking participants to review and compare various learning technology tools using a particular set of questions.

Rubric for critically evaluating learning technology tools (questions included in the text below), by Morris and Stommel, licensed CC BY-NC 4.0

The slide above includes the following questions as ethical considerations one could use when reviewing one or a small number of specific learning technology tools:

  1. Who owns the tool? What is the name of the company, the CEO? What are their politics? What does the tool say it does? What does it actually do?
  2. What data are we required to provide in order to use the tool (login, e-mail, birthdate, etc.)? What flexibility do we have to be anonymous, or to protect our data? Where is data housed; who owns the data? What are the implications for in-class use? Will others be able to use/copy/own our work there?
  3. How does this tool act or not act as a mediator for our pedagogies? Does the tool attempt to dictate our pedagogies? How is its design pedagogical? Or exactly not pedagogical? Does the tool offer a way that “learning can most deeply and intimately begin”?

Morris and Stommel note in the article that they have also added another set of questions, around accessibility:

  1. How accessible is the tool? For a blind student? For a hearing-impaired student? For a student with a learning disability? For introverts? For extroverts? Etc. What statements does the company make about accessibility?

They also note that the point of using the rubric is not necessarily to do a takedown of specific tools but to encourage participants to think more deeply about the tools they use, or may consider using (and requiring students to use): it is “a critical thinking exercise aimed at asking critical questions, empowering critical relationships, encouraging new digital literacies” (Morris & Stommel 2017).


ALT’s Framework for Ethical Learning Technology (EdTechEthics Part 1)

Some context

Over the past couple of years I have been reflecting on the importance of ethical principles related to learning technology (LT), particularly as several ethical concerns surfaced related to the use of LT during the pandemic, at our institution and elsewhere.

For example, I was part of a working group that created guidelines for the use of online invigilation tools in 2020 (currently posted on the front page of the UBC Keep Teaching website), which included considerations of privacy and equity. But the institution still had and supported this kind of technology for a while (and did before the pandemic as well). It took work by many people, both through public advocacy and behind the scenes, but eventually the UBC Okanagan and UBC Vancouver Senates voted to “restrict the use of remote invigilation tools that involve automated recording and algorithmic analysis of data captured during invigilation to only cases explicitly requiring ‘remote proctoring software’ by external accreditation bodies” (from the UBC Vancouver Senate minutes of March 2021). Looking back, there are things I wish I had done differently, but I am happy that we have at least now reached the point where the institution no longer centrally pays for or supports this kind of online proctoring tool.

This was just one example where a focus on the ethics of learning technology came to the fore at the institution, and I had every intention of starting to dig more deeply into working on a possible set of ethical principles over the last year or so. But the pandemic, the ups and downs of continual changes in teaching and learning that have accompanied it, and a significantly increased workload for staff in our unit and for myself have meant this work kept getting pushed off. It’s long past time to get started, though, and I’m taking the first steps by reviewing what others have already done. I’ll be doing summaries and reflections in a set of posts on this blog over the next … well … however long it takes!

I’m starting with the Association for Learning Technology’s (ALT) Framework for Ethical Learning Technology (FELT), a project that I have been watching from the sidelines and following updates about. It’s a comprehensive project that I think is very promising.
