Unmuting the Hidden Costs: A Sustainability Reflection on Zoom in Corporate Training

Introduction

As a graduate student in the Master of Educational Technology (MET) program at UBC, and someone who has worked extensively in instructional support and corporate training environments, I’ve come to appreciate how indispensable tools like Zoom have become. Especially during my time as a Training Administrator and Graduate Admissions Coordinator, Zoom played a central role in day-to-day communication, team development, and learner engagement. Its convenience and scalability are hard to overlook.

But my studies—and particularly this unit’s readings—have challenged me to reflect more critically. Inspired by Kate Crawford’s Atlas of AI (2021), I’m now considering Zoom not just as a tool, but as part of a much broader, materially entangled system with real environmental and social costs. In this reflection, I examine the sustainability implications of using Zoom specifically in corporate training settings—where the scale is often large, the infrastructure needs are high, and the frequency of use is continuous.

Environmental Cost

Cloud Infrastructure and Energy Use

Behind every seamless Zoom call lies an invisible, energy-intensive infrastructure. Corporate training programs—especially those designed for onboarding, compliance, or professional development—often require extended video sessions, breakout room collaboration, and frequent recordings. Each of these features depends on powerful cloud servers, often operated by tech giants like AWS or Azure.

Although my experience as a training administrator focused on the learner-facing side, I now recognize how significant the backend operations are. Lohr (2020) acknowledges that cloud providers are becoming more energy-efficient, but Mills (2020) warns that our ever-expanding reliance on cloud computing still hinders progress toward a truly green future. It’s estimated that one hour of HD video conferencing can emit up to 1 kg of CO₂ per user, depending on infrastructure. When scaled across hundreds of learners and recurring sessions, this creates a sizable carbon footprint.
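To make that scaling claim concrete, here is a minimal back-of-the-envelope sketch in Python. The per-hour emission figure is the upper-bound estimate cited above, and the cohort size, session length, and session frequency are illustrative assumptions, not measured values from any real program.

```python
# Rough estimate of the carbon footprint of a recurring Zoom-based training program.
# All inputs are illustrative assumptions; the 1.0 kg/hour figure is the upper bound
# cited above and varies widely with the data-centre energy mix behind each call.

CO2_PER_USER_HOUR_KG = 1.0    # assumed upper-bound emission per participant-hour of HD video
participants = 300            # assumed cohort size for a corporate onboarding program
session_hours = 1.5           # assumed length of each live session
sessions_per_year = 48        # assumed weekly sessions over a working year

participant_hours = participants * session_hours * sessions_per_year
total_co2_kg = participant_hours * CO2_PER_USER_HOUR_KG

print(f"Estimated participant-hours per year: {participant_hours:,.0f}")
print(f"Estimated upper-bound emissions: {total_co2_kg / 1000:.1f} tonnes CO2 per year")
```

Under these assumptions, a single recurring program already lands in the tens of tonnes of CO₂ per year, which is exactly the kind of figure that never appears in a training budget.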

Hardware Lifecycles and E-Waste

As a practitioner, I’ve often had to recommend specific hardware or troubleshoot technical issues. Zoom, while accessible, still assumes participants have updated computers, webcams, and headsets. From an instructional design perspective, that’s manageable—but from a sustainability lens, each of these devices contributes to global e-waste. They also embody intensive resource extraction processes—something Crawford (2021) emphasizes through her metaphor of “technology as geological process.”

Each learner’s device is composed of metals and minerals mined from the Earth, and frequent upgrades create a cycle of consumption. This wasn’t something I considered deeply during my early work with educational technology—but it now feels urgent to account for.

Social Costs

Injustices in the Supply Chain

During my biology and physiology training, I was taught to trace systems and cycles—but not necessarily through a political or ethical lens. Crawford’s chapter and Buss (2018) on conflict minerals offer a stark reminder: the tools we rely on are embedded in extractive, unjust supply chains. From smartphones to MacBooks, the devices enabling Zoom sessions contain tungsten, tantalum, and cobalt, often mined under exploitative conditions in countries like the Democratic Republic of Congo.

As someone who values equitable access and ethical education, it’s sobering to realize how many of our tools depend on invisible labour and global inequalities. It’s a challenge to my own practice as an instructional designer and learning technologist—one that I’m beginning to actively engage with.

Platform Dependence and Marginalization

From my experience with LMS evaluations and open-source exploration, I’ve become aware of the limitations imposed by corporate platforms. Zoom’s proprietary nature restricts flexibility, and while it serves large institutions well, it’s less accessible for smaller organizations or learners in under-resourced regions. This is particularly relevant when developing training for diverse learners, including those outside North America.

I’ve been exploring open-source platforms like Moodle and BigBlueButton as part of ongoing LMS comparison work, and this assignment reinforces the value of decentralizing our dependence on dominant tools like Zoom.

Economic and Operational Costs

Subscription Fees and Infrastructure Demands

In my role managing training programs, Zoom’s enterprise features—like cloud storage, reporting, and breakout room capacity—came at a cost. While these are often absorbed by the institution, smaller organizations or NGOs may struggle to afford sustained access. Infrastructure upgrades (like better bandwidth, internal tech support, or data security systems) also add to the total cost of ownership.

Cost-benefit analyses in corporate environments rarely factor in environmental or social sustainability. There’s an opportunity here to broaden how we define “efficiency” or “ROI” in training contexts.
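One way to broaden that definition is to fold an internal carbon price into the usual cost-per-learner calculation. The sketch below is only an illustration: the licensing, support, and carbon-price figures are assumptions I've chosen for the example, and the emissions figure simply reuses the rough estimate from earlier in this reflection.

```python
# Sketch of a broadened "cost per learner" metric that adds an internal carbon price
# to the usual subscription and support costs. All figures are illustrative assumptions.

learners = 300                      # assumed annual number of trainees
license_cost = 20_000               # assumed annual enterprise licensing (USD)
support_cost = 15_000               # assumed IT support and storage overhead (USD)
estimated_co2_tonnes = 21.6         # reuses the rough estimate sketched earlier
carbon_price_per_tonne = 50         # assumed internal carbon price (USD/tonne)

conventional_cost = license_cost + support_cost
broadened_cost = conventional_cost + estimated_co2_tonnes * carbon_price_per_tonne

print(f"Conventional cost per learner: ${conventional_cost / learners:,.2f}")
print(f"Broadened cost per learner:    ${broadened_cost / learners:,.2f}")
```

The point is not the specific numbers but the habit: once environmental cost has a line in the spreadsheet, it can be compared, reduced, and reported like any other operating expense.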

Hidden Labor

Behind every successful Zoom session are not only instructors and learners, but also IT support teams, moderators, and instructional designers—roles I’ve held or collaborated with. Crawford (2021) draws attention to the “ghost work” that supports AI and digital systems. In education and training, that ghost work includes troubleshooting, customizing learning materials, and offering tech support—efforts often unrecognized in budgeting or planning.

Gaps in Transparency

This part of the assignment was particularly frustrating. Despite my attempts to locate concrete data on Zoom’s energy consumption or supply chain impact, most of the information I found was vague or buried in corporate social responsibility (CSR) statements. Zoom’s annual reports and ESG statements provided general commitments but lacked specifics.

Similarly, device-specific lifecycle assessments were incomplete or outdated. This suggests an intentional obscuring of material costs—what Crawford might call “the aesthetic of immateriality” that dominates tech branding.

Conclusion

As a lifelong learner, instructional designer, and educator, I believe deeply in the potential of educational technology to transform lives. But as I deepen my understanding through the MET program, I’m also becoming more aware of the ecological and ethical trade-offs embedded in our tools.

Zoom has enabled powerful teaching and training experiences, especially in corporate learning environments where I’ve worked. However, it’s not a neutral platform. It depends on extractive industries, centralized cloud infrastructure, and invisible labor. As we move forward, I believe we must push for alternatives—open-source tools, hybrid models, and low-bandwidth solutions that are both inclusive and sustainable.

This reflection is just one step, but it’s reshaping how I choose, advocate for, and implement technology in educational spaces. And I know now that the best tools are not just functional—they’re also just.

 

References

Buss, D. (2018). Conflict minerals and sexual violence in Central Africa: Troubling research.
Social Politics: International Studies in Gender, State & Society, 25(4), 545–567.
https://doi.org/10.1093/sp/jxy038

Crawford, K. (2021). Atlas of AI: Power, politics, and the planetary costs of artificial intelligence.
Yale University Press.

Lohr, S. (2020, September 23). Cloud computing is not the energy hog that had been feared. The New York Times. https://www.nytimes.com/2020/09/23/technology/cloud-computing-energy-usage.html

Mills, M. (2020, December 5). Our love of the cloud is making a green energy future impossible.
TechCrunch. https://techcrunch.com/2020/12/05/our-love-of-the-cloud-is-making-a-green-energy-future-impossible/

 

 

Who Decides What We See?

Understanding Content Prioritization and Its Hidden Impact on Learning

Imagine walking into a school library and asking for resources on biology. But instead of curated content aligned with your curriculum, the librarian hands you materials based on how many students skimmed them or how long they stared at the cover. That’s the digital reality of content prioritization.

As we rely more on search engines and AI tools for knowledge, it’s important to ask:

Who decides what content appears first? And how is this shaping the way we teach, learn, and understand the world—especially in science and education?

What Is Content Prioritization?

Content prioritization is how digital platforms like Google, YouTube, or Instagram decide what you see first. Algorithms sort and rank content based on a mix of popularity, perceived relevance, user behaviour, and more. But these systems often reflect existing biases, and their decisions are far from neutral.

Safiya Umoja Noble (2018) explains that such algorithmic systems, especially Google Search, amplify dominant narratives while pushing marginalized perspectives into the background.

For instance:

When I searched for “cell biology animations” for a diverse high school class, the first videos often came from big EdTech publishers or YouTube influencers—but many lacked inclusive or culturally relevant analogies.

A search for “Indian scientists” returned mostly historical male figures like C.V. Raman or Jagadish Chandra Bose, while brilliant modern-day contributors—especially women and underrepresented researchers—were missing from top results.

Educational searches like “interactive tools for biology” often prioritize tools with the best SEO, not necessarily the best pedagogical value or accessibility for diverse classrooms.

These examples show that what gets seen isn’t always what’s most accurate, inclusive, or useful.

How These Algorithms Work (in Simple Terms)

Content prioritization algorithms weigh signals such as:

  • PageRank (how many sites link to a page)
  • Engagement (likes, clicks, time spent)
  • Search history and location
  • Recency and updates

If a resource has been widely shared, linked, or liked, it’s pushed to the top—creating a visibility loop where the “rich get richer.” But this doesn’t always reflect quality, diversity, or educational value.
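To see how that loop compounds, here is a toy PageRank calculation in Python. The pages and link structure are entirely made up for illustration, and the damping factor is the commonly cited default rather than anything Google discloses; the point is only to show how inbound links translate into visibility.

```python
# Toy PageRank over a tiny hypothetical link graph, to show how inbound links
# compound into visibility. Node names and link structure are made up for illustration.

damping = 0.85
links = {                       # page -> pages it links to
    "big_publisher": ["influencer"],
    "influencer": ["big_publisher"],
    "small_oer_site": ["big_publisher", "influencer"],  # links out, but nobody links back
}
pages = list(links)
rank = {p: 1 / len(pages) for p in pages}

for _ in range(50):             # power iteration until the ranks settle
    new_rank = {p: (1 - damping) / len(pages) for p in pages}
    for page, outlinks in links.items():
        share = rank[page] / len(outlinks)
        for target in outlinks:
            new_rank[target] += damping * share
    rank = new_rank

for page, score in sorted(rank.items(), key=lambda kv: -kv[1]):
    print(f"{page:15s} {score:.3f}")
```

In this toy graph, the open-education site that nobody links to settles near the minimum possible score no matter how good its content is, which is the "rich get richer" dynamic in miniature.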

As Noble (2018) puts it, when companies like Google become the “largest digital repository in the world”, they effectively control access to knowledge (p. 157)—and that has consequences, especially for communities already underrepresented in science and education.

Why This Matters in Education

As a biology teacher and a student in educational technology, I see firsthand how students often rely on the first page of Google or top YouTube results when researching.

One example that stuck with me was when a student searched “Why is the sky blue?” and ended up watching a highly entertaining—but scientifically flawed—video because it ranked higher than actual physics-based explanations. Another searched for “GMO pros and cons” and was flooded with corporate-sponsored content that didn’t present a balanced view.

For learners in diverse classrooms—some of whom may be multilingual, neurodiverse, or coming from different learning backgrounds—this creates barriers. Algorithms are not designed to prioritize inclusive or adaptive pedagogy; they’re designed to increase clicks.

This matters not just in K–12 classrooms but also in higher education, where students conducting literature reviews or online research often miss out on open-access or Indigenous knowledge sources because those don’t rank well in standard search engines.

PageRank’s Quiet Influence on My Life

Google’s PageRank algorithm changed the way we navigate information. It assumes that the more links a page gets, the more “important” it is. But importance isn’t the same as accuracy, equity, or relevance.

In my personal and professional life, PageRank:

  • Prioritizes commercial tools over open educational resources when I search for LMS platforms—even though the open tools may better serve the communities I work with.
  • Bumps down newer blogs or collaborative projects like The Speaking Tree because they don’t yet have enough inbound links—even though the content is original, thoughtful, and inclusive.
  • Surfaces mainstream biology content for my students while suppressing local, decolonized science perspectives I actively try to incorporate.

Can I Impact PageRank?

Not directly—but yes, in small, meaningful ways.

Here’s what I’ve started doing:

  • Publishing blog posts and learning content under shared, high-authority domains like Medium or university repositories.
  • Encouraging my network of educators to cross-link resources from our platforms—boosting visibility for content that might otherwise stay hidden.
  • Teaching my students digital literacy by asking: “Why do you think this showed up first? Who benefits from you clicking this link?”

I also build community-focused projects like The Speaking Tree to promote collaborative knowledge sharing—a kind of counter-algorithmic move where we decide what’s important, not just what’s viral.

The Bigger Picture

As Noble (2018) powerfully argues, content prioritization algorithms are not neutral—they’re reflections of who holds power and whose knowledge is valued. And when we depend on systems like Google to shape what we learn, we risk deepening the very inequities we seek to challenge.

Educators, technologists, and learners must become more than passive consumers of search results. We need to be critical curators, actively amplifying underrepresented voices and challenging the way content is ranked, framed, and delivered.

Because the question isn’t just “What did you learn today?”—it’s “Who decided that was what you should learn?”

 

Appendix

  1. Algorithmic Bias in Educational Systems
    Tetteh, G. K. (2025). Algorithmic bias in educational systems. World Journal of Advanced Research and Reviews, 17(1), 236–240. https://journalwjarr.com/sites/default/files/fulltext_pdf/WJARR-2025-0253.pdf
  2. Impact on Minoritized Students
    Taylor, J. (2024, September 22). Algorithmic bias continues to impact minoritized students. Diverse: Issues In Higher Education. https://www.diverseeducation.com/reports-data/article/15679597/algorithmic-bias-continues-to-impact-minoritized-students
  3. Social Media Algorithms and Misinformation
    Rodriguez, L., & Ahmed, S. (2024). The influence of social media algorithms on racial and ethnic misinformation: Patterns and impacts. ResearchGate. https://www.researchgate.net/publication/387503351_The_Influence_of_Social_Media_Algorithms_on_Racial_and_Ethnic_Misinformation_Patterns_and_Impacts
  4. Digital Transformation and Marginalized Communities
    Alcaraz-Domínguez, S., Martín-García, A. V., & Moral-Rodríguez, M. E. (2025). Addressing the social impact of digital transformation: A project with marginalized communities. Frontiers in Education, 10, Article 1534104. https://doi.org/10.3389/feduc.2025.1534104
  5. Algorithmic Bias in Student Progress Monitoring
    Mohamed, A. A., & Naseem, A. (2024). Bias in AI student monitoring algorithms: An analysis of age, disability, and gender. Computers and Education: Artificial Intelligence, 5, 100144. https://doi.org/10.1016/j.caeai.2024.100144
  6. AI and Racial Justice
    Royster, R., & Mertens, J. (2024, August 1). AI and racial justice: Navigating the dual impact on marginalized communities. Nonprofit Quarterly. https://nonprofitquarterly.org/ai-and-racial-justice-navigating-the-dual-impact-on-marginalized-communities/

References

Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. NYU Press.

Global Health and Education: What COVID-19 Taught Us

 

If there’s one thing the COVID-19 pandemic made crystal clear, it’s that education and global health are deeply connected. When the world went into lockdown, schools and universities had to pivot overnight, shifting from traditional classrooms to online platforms. It was a wild ride—some students thrived, some struggled, and everyone had to adapt to a new normal. This experience didn’t just highlight challenges; it also set the stage for how education will evolve in the future, with technology at the forefront.

The Pandemic’s Wake-Up Call for Education

COVID-19 threw education into chaos. Schools shut down, and students across the globe found themselves staring at screens instead of sitting in classrooms. The biggest issue? Not everyone had the same access to technology. Many students in low-income communities lacked reliable internet or devices, making learning incredibly difficult (Bennette, 2020). At the same time, teachers had to become tech experts overnight, learning to navigate platforms like Zoom, Google Classroom, and Microsoft Teams.

Education systems worldwide faced unprecedented disruptions. The immediate response varied from country to country—some nations quickly implemented national online learning programs, while others struggled to get students connected. In places with existing infrastructure for e-learning, students had a smoother transition, but in rural or underdeveloped areas, the lack of digital resources deepened educational inequality (Kuhfeld et al., 2020).

One major takeaway? Online learning isn’t a one-size-fits-all solution. While some students enjoyed the flexibility, others found it isolating and unengaging. The digital divide became more obvious than ever, pushing governments and schools to rethink how they can make education more inclusive (Burgess & Sievertsen, 2020). For example, many countries took steps to distribute devices, provide internet subsidies, and even broadcast educational content on television to reach students who had no internet access.

My Experience Leading the Digital Shift

I’d like to explore the rapid shift to online education during the COVID-19 lockdown, specifically focusing on my experience leading this transition at an institute where I was the Academic Head. The sudden move to platforms like Zoom and Microsoft Teams posed major challenges, especially in a city where students, parents, and teachers had little prior exposure to digital learning tools.

Navigating this shift required structured training and clear communication. I led the development of step-by-step training manuals, conducted sessions for teachers, students, and parents, and addressed technical challenges in real time. Initially, many struggled with basic aspects like logging in, using digital whiteboards, and submitting assignments online. Over time, through repeated guidance and hands-on support, we saw a significant improvement in digital literacy.

This experience highlighted the importance of usability and adaptation. It wasn’t just about having access to technology—it was about ensuring users could effectively engage with it. This transition also revealed broader trends in digital education: the necessity of digital readiness, the importance of clear instructional design, and the role of structured training in easing the adoption of new learning technologies.

EdTech: The New Foundation of Learning?

The pandemic accelerated the use of educational technology. Schools turned to digital tools to keep lessons going, and suddenly, AI-driven learning, adaptive platforms, and virtual classrooms became mainstream. Platforms like Coursera, Khan Academy, and Duolingo saw massive growth as people sought out ways to learn online (Burgess & Sievertsen, 2020). This shift highlighted the potential of online education but also exposed gaps in accessibility, effectiveness, and engagement.

EdTech has reshaped how we view learning. Many schools now see the benefits of blended learning models, where students engage with digital resources alongside traditional classroom instruction. AI-driven platforms, such as personalized learning assistants, became more widely used, allowing students to learn at their own pace. However, the pandemic also demonstrated that while technology can support learning, it cannot fully replace in-person instruction, particularly for young children or students with special learning needs.

Another challenge was the digital literacy of educators. While some teachers were already comfortable using technology, many had to rapidly learn new skills to create engaging online lessons. This highlighted the need for ongoing teacher training in digital pedagogy. Institutions that provided professional development for their staff saw better learning outcomes compared to those that left teachers to figure things out on their own (Canadian Commission for UNESCO, 2020).

Lessons We Shouldn’t Ignore

So, what are the biggest takeaways from this whole experience? Here are a few lessons that should stick with us:

  1. Make Tech Accessible to Everyone
    The digital divide was glaring during the pandemic. Schools need to invest in infrastructure that ensures every student, regardless of background, has access to devices and reliable internet (Kuhfeld et al., 2020). Governments and private organizations must work together to create sustainable solutions for equitable access to education technology.
  2. Train Teachers, Not Just Students
    Many educators struggled with the tech transition. Schools should prioritize teacher training so they can confidently use digital tools and create engaging online experiences (Bennette, 2020). Even post-pandemic, ongoing professional development in digital literacy is crucial.
  3. Blended Learning is the Future
    The pandemic showed us that a mix of online and in-person learning can be beneficial. Schools should explore models that offer flexibility while maintaining engagement (Burgess & Sievertsen, 2020). Hybrid learning can allow students to access high-quality resources while still benefiting from in-person interactions.
  4. Mental Health Matters
    The emotional impact of the pandemic was massive. Schools need to prioritize mental health resources for students and teachers, ensuring well-being is just as important as academics (Canadian Commission for UNESCO, 2020). Support systems such as school counselors, mental health days, and social-emotional learning programs should be integrated into curricula.
  5. Be Ready for the Next Crisis
    If there’s one thing we’ve learned, it’s that the unexpected can happen anytime. Schools need crisis plans in place to ensure learning doesn’t come to a halt the next time a major disruption occurs (COVID Education Alliance, 2020). Institutions should develop emergency remote learning strategies that can be activated quickly.

What’s Next?

Looking ahead, education will likely continue evolving in response to what we learned during the pandemic. Policymakers and educators are now discussing how to create resilient education systems that can withstand future global health crises. Some schools are incorporating hybrid learning models permanently, while others are working on improving student support systems.

The key takeaway? Education must be flexible, inclusive, and technology-driven—but without forgetting the human element that makes learning meaningful. The lessons of COVID-19 should push us toward a future where education is not just reactive but proactive in meeting the challenges ahead.

 

References

Bennette, P. W. (2020, July 20). The educational experience has been substandard for students during COVID-19. Policy Options.

Burgess, S., & Sievertsen, H. H. (2020, April 1). Schools, skills, and learning: The impact of COVID-19 on education. Vox.

Canadian Commission for UNESCO. (2020, April 20). COVID-19 is creating a world crisis in education.

COVID Education Alliance. (2020). Primer.

Kuhfeld, M., Soland, J., Tarasawa, B., Johnson, A., Ruzek, E., & Lewis, K. (2020, December 3). How is COVID-19 affecting student learning? Brookings.

 

AI and VR in Education

Artificial Intelligence (AI) and Virtual Reality (VR) are being hyped as the superheroes of education—ready to swoop in and save students from the dreaded boredom of traditional learning. And you know what? I think they might just have what it takes, but only if we use them wisely.

AI can be that ever-patient tutor who doesn’t roll its eyes when you ask the same question five times. Personalized learning? Check. Immediate feedback? Check. But let’s be real—AI can’t replace the warmth of a teacher who genuinely cares whether you pass or just barely survive. Sure, AI can tailor learning experiences like a digital Marie Kondo, but without human interaction, education could start feeling a little… robotic (pun intended).

Now, VR—oh, what a dream! Imagine history class as an actual journey through ancient Rome or science class where you’re dissecting a virtual frog instead of traumatizing yourself with a real one. VR taps into experiential learning, making lessons more immersive and, dare I say, fun. But let’s not kid ourselves—headsets aren’t cheap, and let’s not even start on motion sickness. Plus, if we put too much faith in VR, we might just end up with students who can navigate a digital medieval castle but can’t find their way to the school library.

At the end of the day, educators are still the captains of this ship. AI and VR should be their trusty sidekicks, not replacements. The real magic happens when technology enhances human teaching, not when it tries to replace it. So yes, AI and VR are game-changers—but only if we play the game right.

Reference:
EDUCAUSE. (2017). Horizon report: K-12 edition (2009-2017).

 

Artificial Intelligence

Artificial intelligence is a cornerstone of educational technology, making it crucial to critically evaluate its scope and limitations. Understanding the impact of machine intelligence on both our own and our students’ learning and educational development is vital. In this post, we explore the differences in responses between humans and ChatGPT to gain insights and reflect on its implications.


While both my responses and ChatGPT’s answers offer an informative explanation of AI-related topics, my responses are more personalized and reflect a deeper connection to the subjects. I try to humanize the technical explanations, emphasizing not just the facts but also offering my perspective on what makes these concepts important in practical, real-world contexts. For instance, when discussing Alan Turing, I highlight his contribution to challenging the boundary between human and machine thought, which mirrors the broader implications for AI today. I bring an emotional and philosophical dimension to the discussion, which goes beyond merely summarizing his work.

In contrast, ChatGPT’s responses are more neutral and concise. It excels in providing factual summaries that are direct and clear. However, ChatGPT’s responses can sometimes lack a human touch in terms of the emotional depth or personal insights that I attempt to bring out, such as the deeper significance of concepts like creativity and ethical reasoning in human intelligence versus machine intelligence.

When addressing more technical aspects, such as machine languages versus human languages, ChatGPT is highly efficient at providing structured, clear contrasts. Yet, I try to incorporate broader societal and emotional implications—how programming languages prioritize logical clarity, while human languages enable rich, empathetic communication.

Regarding machine learning versus human learning, ChatGPT’s version is very factual and precise, whereas I focus on how human learning integrates intuition, culture, and personal motivations, offering a more holistic understanding. In summary, ChatGPT’s responses are technically sound but lack the nuance, reflection, and personal insights that a human like me might bring to these topics.

References:

Turing, A. M. (1950). Computing machinery and intelligence. Mind, 59(236), 433-460. https://doi.org/10.1093/mind/LIX.236.433

McCarthy, J. (2007). What is artificial intelligence? Stanford University. https://www-formal.stanford.edu/jmc/whatisai/whatisai.html

UBS. (n.d.). Meet the Nobel laureates in economics: Do we understand human behaviour? UBS. https://www.ubs.com/global/en/our-firm/people-and-culture/nobel-laureates-in-economics/understanding-human-behaviour.html

BBC News. (2016, January 24). AI pioneer Marvin Minsky dies aged 88. BBC News. https://www.bbc.com/news/technology-35301023

Hao, K. (2020, December 4). We read the paper that forced Timnit Gebru out of Google. Here’s what it says. MIT Technology Review. https://www.technologyreview.com/2020/12/04/1013359/timnit-gebru-google-ai-diversity-paper/

Harris, J. (2018, September 27). Languages vs. programming languages. Medium. https://medium.com/@jackharris_/languages-vs-programming-languages-35b582b4d6e9

Chollet, F. (2019). On the measure of intelligence. arXiv. https://arxiv.org/abs/1911.01547

Heilweil, R. (2020, March 5). Why algorithms can be racist and sexist. A computer can make a decision faster. That doesn’t make it fair. Vox. https://www.vox.com/recode/2020/3/5/21165938/algorithm-bias-ai-discrimination-racism-sexism

Buolamwini, J. (2019). Artificial intelligence has a problem with gender and racial bias. Here’s how to solve it. Wired. https://www.wired.com/story/artificial-intelligence-problem-with-bias/

Usability

What Makes Technology Truly Usable? A Deep Dive into Usability

Usability is all about making technology easy and enjoyable for people to use. It’s more than just how quickly someone can figure out a system or how few mistakes they make—it’s about the whole experience. From reading Issa and Isaias (2015), I’ve come to see usability as a journey. It’s about creating a bridge between what technology can do and what people actually need it to do. The key? Put people at the centre of the process. Listen to their feedback, make adjustments, and keep improving. Usability is never a one-and-done deal; it’s an ongoing evolution.

But here’s the catch: in education, usability has to go further. Learning tools can’t just be easy to use—they have to help people learn effectively. Educational usability means designing tools that support learning by managing cognitive load, offering step-by-step guidance, and providing clear feedback. For example, a great tool doesn’t overwhelm users with too much information at once; instead, it guides them along in a way that’s engaging and meaningful. Accessibility is also critical, ensuring that learners of all abilities can use the tool. So, while general usability is about simplicity and efficiency, educational usability is about creating an environment where people can grow and succeed.

When Usability Goes Wrong: Lessons from Woolgar

Woolgar’s work is a bit of a reality check. I’ll admit it was not an easy read, but it was cynically humorous and engaging. It shows how, even with the best intentions, things can go wrong. He uses the term “configuring the user,” which refers to the assumptions designers make about who will use a system and how. Sometimes those assumptions are way off, and that’s where the problems start.

Take one example Woolgar discusses: designers assumed that all users were tech-savvy. They built an interface that required users to know shortcuts and technical terms. But when real users tried it, they struggled because they didn’t have that knowledge. It’s like handing someone a manual car without teaching them how to drive stick shift. Another example is how designers simplified tasks for usability trials. They broke tasks into neat little steps to make testing easier, but real-life tasks are messier and interconnected. The system worked fine in a lab, but when people tried to use it in the real world, it didn’t meet their needs. These examples are a reminder: you can’t design in a bubble. Real-world testing with real users is essential.

Additionally, while I was reading both texts, I couldn’t help but think back to my own experience with my work team during the recent transition from our existing system to Workday. It was a tough change, and I remember asking why the program admin staff—the ones using the system daily—weren’t consulted more. Apparently, the implementation team did consult users, but they focused on higher-level graduate program advisors rather than the admin staff who dealt with the system every day. To make things worse, they didn’t even include students as users, even though they’re a key group. That experience really drove home the lesson for me: systems need to be designed for the actual users, not just a select group. The user shouldn’t have to adapt to the system—the system should adapt to them.

Two Ways to Think About Usability

This brings me to two very different ways people think about usability. One sees usability as a technical problem to solve with metrics and testing. It’s about speed, accuracy, and reducing errors. The other sees usability as a human-centered journey, focusing on how technology fits into people’s lives and adapts to their needs. It’s less about numbers and more about a holistic and compassionate approach to understanding one’s experiences.

Both approaches have their strengths. The first is great for ensuring a system is functional and efficient, while the second makes sure it’s meaningful and adaptable. The best designs, I think, strike a balance between these two views: they deliver solid performance while also feeling natural and intuitive to use.

Why It Matters

After much exploration, one thing is clear: usability isn’t just a box to check off. It’s about creating experiences that truly meet people’s needs. For educational tools, that means going beyond the basics to support learning in a way that’s engaging and effective. Woolgar’s examples highlight how easy it is to make wrong assumptions, and my own experience with Workday reinforces the importance of consulting the right users. The two views on usability remind us that good design needs both efficiency and a holistic and compassionate approach to understanding one’s experiences. At the end of the day, the best systems meet people where they are and help them get where they want to go. That’s what makes technology truly usable.

 

References:

Issa, T., & Isaias, P. (2015). Usability and human computer interaction (HCI). In Sustainable design (pp. 19–35). Springer.

Woolgar, S. (1990). Configuring the user: The case of usability trials. The Sociological Review, 38(1, Suppl.), S58–S99.
