Unmuting the Hidden Costs: A Sustainability Reflection on Zoom in Corporate Training

Introduction

As a graduate student in the Master of Educational Technology (MET) program at UBC, and someone who has worked extensively in instructional support and corporate training environments, I’ve come to appreciate how indispensable tools like Zoom have become. This was especially true during my time as a Training Administrator and Graduate Admissions Coordinator, when Zoom was central to day-to-day communication, team development, and learner engagement. Its convenience and scalability are hard to overlook.

But my studies—and particularly this unit’s readings—have challenged me to reflect more critically. Inspired by Kate Crawford’s Atlas of AI (2021), I’m now considering Zoom not just as a tool, but as part of a much broader, materially entangled system with real environmental and social costs. In this reflection, I examine the sustainability implications of using Zoom specifically in corporate training settings—where the scale is often large, the infrastructure needs are high, and the frequency of use is continuous.

Environmental Cost

Cloud Infrastructure and Energy Use

Behind every seamless Zoom call lies an invisible, energy-intensive infrastructure. Corporate training programs, especially those designed for onboarding, compliance, or professional development, often require extended video sessions, breakout room collaboration, and frequent recordings. Each of these features depends on powerful cloud servers, often operated by major providers such as Amazon Web Services (AWS) or Microsoft Azure.

Although my experience as a training administrator focused on the learner-facing side, I now recognize how significant the backend operations are. Lohr (2020) acknowledges that cloud providers are becoming more energy-efficient, but Mills (2020) warns that our ever-expanding reliance on cloud computing still hinders progress toward a truly green future. It’s estimated that one hour of HD video conferencing can emit up to 1 kg of CO₂ per user, depending on infrastructure. When scaled across hundreds of learners and recurring sessions, this creates a sizable carbon footprint.
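To make that scaling concrete for myself, here is a rough back-of-envelope sketch in Python. The per-user figure simply reuses the upper-bound estimate above, and the cohort size, session length, and schedule are illustrative assumptions rather than data from any real program.

```python
# A rough back-of-envelope estimate of video-conferencing emissions for a
# recurring corporate training program. Every figure here is an illustrative
# assumption, not a measured value for Zoom or any specific provider.

KG_CO2_PER_USER_HOUR = 1.0   # upper-bound estimate cited above
LEARNERS = 300               # assumed cohort size
HOURS_PER_SESSION = 2        # assumed session length
SESSIONS_PER_YEAR = 24       # assumed twice-monthly schedule

annual_kg = KG_CO2_PER_USER_HOUR * LEARNERS * HOURS_PER_SESSION * SESSIONS_PER_YEAR
print(f"Estimated annual footprint: {annual_kg:,.0f} kg CO2e (~{annual_kg / 1000:.1f} tonnes)")
# -> Estimated annual footprint: 14,400 kg CO2e (~14.4 tonnes)
```

Even if the real per-user figure is several times lower, it is the multiplication across learners, hours, and recurring sessions that drives the footprint, which is exactly the scaling effect I am describing here.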

Hardware Lifecycles and E-Waste

As a practitioner, I’ve often had to recommend specific hardware or troubleshoot technical issues. Zoom, while accessible, still assumes participants have updated computers, webcams, and headsets. From an instructional design perspective, that’s manageable—but from a sustainability lens, each of these devices contributes to global e-waste. They also embody intensive resource extraction processes—something Crawford (2021) emphasizes through her metaphor of “technology as geological process.”

Each learner’s device is composed of metals and minerals mined from the Earth, and frequent upgrades create a cycle of consumption. This wasn’t something I considered deeply during my early work with educational technology—but it now feels urgent to account for.

Social Costs

Injustices in the Supply Chain

During my biology and physiology training, I was taught to trace systems and cycles—but not necessarily through a political or ethical lens. Crawford’s chapter and Buss (2018) on conflict minerals offer a stark reminder: the tools we rely on are embedded in extractive, unjust supply chains. From smartphones to MacBooks, the devices enabling Zoom sessions contain tungsten, tantalum, and cobalt, often mined under exploitative conditions in countries like the Democratic Republic of Congo.

As someone who values equitable access and ethical education, it’s sobering to realize how many of our tools depend on invisible labour and global inequalities. It’s a challenge to my own practice as an instructional designer and learning technologist—one that I’m beginning to actively engage with.

Platform Dependence and Marginalization

From my experience with LMS evaluations and open-source exploration, I’ve become aware of the limitations imposed by corporate platforms. Zoom’s proprietary nature restricts flexibility, and while it serves large institutions well, it’s less accessible for smaller organizations or learners in under-resourced regions. This is particularly relevant when developing training for diverse learners, including those outside North America.

I’ve been exploring open-source platforms like Moodle and BigBlueButton as part of ongoing LMS comparison work, and this assignment reinforces the value of decentralizing our dependence on dominant tools like Zoom.

Economic and Operational Costs

Subscription Fees and Infrastructure Demands

In my role managing training programs, Zoom’s enterprise features—like cloud storage, reporting, and breakout room capacity—came at a cost. While these are often absorbed by the institution, smaller organizations or NGOs may struggle to afford sustained access. Infrastructure upgrades (like better bandwidth, internal tech support, or data security systems) also add to the total cost of ownership.

Cost-benefit analyses in corporate environments rarely factor in environmental or social sustainability. There’s an opportunity here to broaden how we define “efficiency” or “ROI” in training contexts.

Hidden Labor

Behind every successful Zoom session are not only instructors and learners, but also IT support teams, moderators, and instructional designers—roles I’ve held or collaborated with. Crawford (2021) draws attention to the “ghost work” that supports AI and digital systems. In education and training, that ghost work includes troubleshooting, customizing learning materials, and offering tech support—efforts often unrecognized in budgeting or planning.

Gaps in Transparency

This part of the assignment was particularly frustrating. Despite my attempts to locate concrete data on Zoom’s energy consumption or supply chain impact, most of the information I found was vague or buried in corporate social responsibility (CSR) statements. Zoom’s annual reports and ESG statements provided general commitments but lacked specifics.

Similarly, device-specific lifecycle assessments were incomplete or outdated. This suggests an intentional obscuring of material costs—what Crawford might call “the aesthetic of immateriality” that dominates tech branding.

Conclusion

As a lifelong learner, instructional designer, and educator, I believe deeply in the potential of educational technology to transform lives. But as I deepen my understanding through the MET program, I’m also becoming more aware of the ecological and ethical trade-offs embedded in our tools.

Zoom has enabled powerful teaching and training experiences, especially in corporate learning environments where I’ve worked. However, it’s not a neutral platform. It depends on extractive industries, centralized cloud infrastructure, and invisible labor. As we move forward, I believe we must push for alternatives—open-source tools, hybrid models, and low-bandwidth solutions that are both inclusive and sustainable.

This reflection is just one step, but it’s reshaping how I choose, advocate for, and implement technology in educational spaces. And I know now that the best tools are not just functional—they’re also just.

 

References

Buss, D. (2018). Conflict minerals and sexual violence in Central Africa: Troubling research. Social Politics: International Studies in Gender, State & Society, 25(4), 545–567. https://doi.org/10.1093/sp/jxy038

Crawford, K. (2021). Atlas of AI: Power, politics, and the planetary costs of artificial intelligence. Yale University Press.

Lohr, S. (2020, September 23). Cloud computing is not the energy hog that had been feared. The New York Times. https://www.nytimes.com/2020/09/23/technology/cloud-computing-energy-usage.html

Mills, M. (2020, December 5). Our love of the cloud is making a green energy future impossible. TechCrunch. https://techcrunch.com/2020/12/05/our-love-of-the-cloud-is-making-a-green-energy-future-impossible/

 

 

Who Decides What We See?

Understanding Content Prioritization and Its Hidden Impact on Learning

Imagine walking into a school library and asking for resources on biology. But instead of curated content aligned with your curriculum, the librarian hands you materials based on how many students skimmed them or how long they stared at the cover. That’s the digital reality of content prioritization.

As we rely more on search engines and AI tools for knowledge, it’s important to ask:

Who decides what content appears first? And how is this shaping the way we teach, learn, and understand the world—especially in science and education?

What Is Content Prioritization?

Content prioritization is how digital platforms like Google, YouTube, or Instagram decide what you see first. Algorithms sort and rank content based on a mix of popularity, perceived relevance, user behaviour, and more. But these systems often reflect existing biases, and their decisions are far from neutral.

Safiya Umoja Noble (2018) explains that such algorithmic systems, especially Google Search, amplify dominant narratives while pushing marginalized perspectives into the background.

For instance:

When I searched for “cell biology animations” for a diverse high school class, the first videos often came from big EdTech publishers or YouTube influencers—but many lacked inclusive or culturally relevant analogies.

A search for “Indian scientists” returned mostly historical male figures like C.V. Raman or Jagadish Chandra Bose, while brilliant modern-day contributors—especially women and underrepresented researchers—were missing from top results.

Educational searches like “interactive tools for biology” often prioritize tools with the best SEO, not necessarily the best pedagogical value or accessibility for diverse classrooms.

These examples show that what gets seen isn’t always what’s most accurate, inclusive, or useful.

How These Algorithms Work (in Simple Terms)

Content prioritization algorithms weigh signals such as:

  • PageRank (how many sites link to a page, and how important those linking pages are themselves)
  • Engagement (likes, clicks, time spent)
  • Search history and location
  • Recency and updates

If a resource has been widely shared, linked, or liked, it’s pushed to the top—creating a visibility loop where the “rich get richer.” But this doesn’t always reflect quality, diversity, or educational value.
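Because that “rich get richer” dynamic can be hard to picture, here is a minimal sketch of the PageRank idea in Python (using NumPy). The four-page “web,” the link structure, and the damping factor are all invented for illustration; real search ranking combines PageRank-style link analysis with many other signals.

```python
import numpy as np

def pagerank(links, damping=0.85, iterations=50):
    """Simplified PageRank: links[i, j] = 1 means page j links to page i."""
    n = links.shape[0]
    out_degree = links.sum(axis=0)
    out_degree[out_degree == 0] = 1      # crude handling of pages with no outlinks
    transition = links / out_degree      # each page splits its "vote" across its outlinks
    scores = np.full(n, 1.0 / n)         # start with equal importance
    for _ in range(iterations):
        scores = (1 - damping) / n + damping * transition @ scores
    return scores

# Toy web of four pages: pages 1, 2, and 3 all link to page 0 (say, a large
# commercial site), while page 3 (a newer blog) is linked only by page 2.
links = np.array([
    [0, 1, 1, 1],
    [0, 0, 0, 0],
    [0, 0, 0, 0],
    [0, 0, 1, 0],
], dtype=float)

print(pagerank(links).round(3))
# Page 0 ends up with by far the highest score purely because of its inbound
# links -- nothing in the calculation asks whether its content is accurate,
# inclusive, or pedagogically sound.
```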

As Noble (2018) puts it, when companies like Google become the “largest digital repository in the world”, they effectively control access to knowledge (p. 157)—and that has consequences, especially for communities already underrepresented in science and education.

Why This Matters in Education

As a biology teacher and a student in educational technology, I see firsthand how students often rely on the first page of Google or top YouTube results when researching.

One example that stuck with me was when a student searched “Why is the sky blue?” and ended up watching a highly entertaining—but scientifically flawed—video because it ranked higher than actual physics-based explanations. Another searched for “GMO pros and cons” and was flooded with corporate-sponsored content that didn’t present a balanced view.

For learners in diverse classrooms—some of whom may be multilingual, neurodiverse, or coming from different learning backgrounds—this creates barriers. Algorithms are not designed to prioritize inclusive or adaptive pedagogy; they’re designed to increase clicks.

This matters not just in K–12 classrooms but also in higher education, where students conducting literature reviews or online research often miss out on open-access or Indigenous knowledge sources because those don’t rank well in standard search engines.

PageRank’s Quiet Influence on My Life

Google’s PageRank algorithm changed the way we navigate information. It assumes that the more links a page gets, the more “important” it is. But importance isn’t the same as accuracy, equity, or relevance.

In my personal and professional life, PageRank:

  • Prioritizes commercial tools over open educational resources when I search for LMS platforms—even though the open tools may better serve the communities I work with.
  • Bumps down newer blogs or collaborative projects like The Speaking Tree because they don’t yet have enough inbound links—even though the content is original, thoughtful, and inclusive.
  • Surfaces mainstream biology content for my students while suppressing local, decolonized science perspectives I actively try to incorporate.

Can I Impact PageRank?

Not directly—but yes, in small, meaningful ways.

Here’s what I’ve started doing:

  • Publishing blog posts and learning content under shared, high-authority domains like Medium or university repositories.
  • Encouraging my network of educators to cross-link resources from our platforms—boosting visibility for content that might otherwise stay hidden.
  • Teaching my students digital literacy by asking: “Why do you think this showed up first? Who benefits from you clicking this link?”

I also build community-focused projects like The Speaking Tree to promote collaborative knowledge sharing—a kind of counter-algorithmic move where we decide what’s important, not just what’s viral.

The Bigger Picture

As Noble (2018) powerfully argues, content prioritization algorithms are not neutral—they’re reflections of who holds power and whose knowledge is valued. And when we depend on systems like Google to shape what we learn, we risk deepening the very inequities we seek to challenge.

Educators, technologists, and learners must become more than passive consumers of search results. We need to be critical curators, actively amplifying underrepresented voices and challenging the way content is ranked, framed, and delivered.

Because the question isn’t just “What did you learn today?”—it’s “Who decided that was what you should learn?”

 

Appendix

  1. Algorithmic Bias in Educational Systems
    Tetteh, G. K. (2025). Algorithmic bias in educational systems. World Journal of Advanced Research and Reviews, 17(1), 236–240. https://journalwjarr.com/sites/default/files/fulltext_pdf/WJARR-2025-0253.pdf
  2. Impact on Minoritized Students
    Taylor, J. (2024, September 22). Algorithmic bias continues to impact minoritized students. Diverse: Issues In Higher Education. https://www.diverseeducation.com/reports-data/article/15679597/algorithmic-bias-continues-to-impact-minoritized-students
  3. Social Media Algorithms and Misinformation
    Rodriguez, L., & Ahmed, S. (2024). The influence of social media algorithms on racial and ethnic misinformation: Patterns and impacts. ResearchGate. https://www.researchgate.net/publication/387503351_The_Influence_of_Social_Media_Algorithms_on_Racial_and_Ethnic_Misinformation_Patterns_and_Impacts
  4. Digital Transformation and Marginalized Communities
    Alcaraz-Domínguez, S., Martín-García, A. V., & Moral-Rodríguez, M. E. (2025). Addressing the social impact of digital transformation: A project with marginalized communities. Frontiers in Education, 10, Article 1534104. https://doi.org/10.3389/feduc.2025.1534104
  5. Algorithmic Bias in Student Progress Monitoring
    Mohamed, A. A., & Naseem, A. (2024). Bias in AI student monitoring algorithms: An analysis of age, disability, and gender. Computers and Education: Artificial Intelligence, 5, 100144. https://doi.org/10.1016/j.caeai.2024.100144
  6. AI and Racial Justice
    Royster, R., & Mertens, J. (2024, August 1). AI and racial justice: Navigating the dual impact on marginalized communities. Nonprofit Quarterly. https://nonprofitquarterly.org/ai-and-racial-justice-navigating-the-dual-impact-on-marginalized-communities/

References

Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. NYU Press.
