
Individual Report – Project Retrospective

For our project, our group was thorough in considering usability as both a process and an outcome. We were careful to evaluate our choice of tools based on how well each would let us build what we envisioned. Before selecting our final set of tools (JavaScript, HTML, and CSS, with GatsbyJS as the host and infrastructure), we looked at more drag-and-drop style tools, including Google Sites and WordPress. Ultimately, we decided that even though our chosen infrastructure was more challenging, and could not be as easily divided between team members, it best fit what we intended.

We played to our individual strengths and shared knowledge through regular group meetings. In these meetings we would share findings and what we had learned, and gave the occasional “tutorial” so that even though the technical aspects were largely done by me, we all had a clear picture of the group’s progress. One of the reasons we chose to include a thorough project summary was to highlight each member’s main contributions. My main contribution was to actually build the tool, but I was only able to do so iteratively, and with feedback and testing from my group members.

You can see the code here: https://github.com/alisonmyers/resource-management-tool. Because we used GitHub for our repository, I was happy to see its insights into code changes over time:

Personally, I wrote the more dynamic functions that allowed the ingestion of data from multiple sources (Google Sheets being an attempted new addition) and allowed different kinds of information to appear in each resource card. This was only my second JavaScript project, and I learned a lot by revisiting my old code! The trickiest bit of programming was making the search functionality work, and I am glad I persevered on this. If I were to do one thing differently, it would be to start earlier on the parts of the code that were new to me, so that the presentation to the class would have been more complete.
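The actual search implementation lives in the repository linked above; as a rough illustration of the kind of card filtering involved, a minimal sketch (with an invented card shape, not the project’s real schema) might look like:

```javascript
// Minimal sketch of a resource-card search: keep the cards whose
// searchable fields contain every word of the query, case-insensitively.
// The card fields (title, description, tags) are illustrative only.
function searchCards(cards, query) {
  const terms = query.toLowerCase().split(/\s+/).filter(Boolean);
  return cards.filter((card) => {
    const haystack = [card.title, card.description, ...(card.tags || [])]
      .join(" ")
      .toLowerCase();
    // Every query term must appear somewhere in the card's text.
    return terms.every((term) => haystack.includes(term));
  });
}
```

For example, `searchCards(cards, "apa citation")` would return only the cards whose title, description, or tags contain both words, and an empty query would return every card.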

See how the search functions (left) alongside the code that was written (right).

Our team was able to assess and demonstrate the usability of our tool because we ensured that our decisions were grounded in the literature and were relevant to our original proposal and the initial reasons we decided that LOCR needed a facelift.


Digital Labour

Welcome to a day in the life of Patty P. (Note: the video has chapter timestamps, so you can pause and move through it as needed if it is too fast or too slow.)

A day in the life of Patty P.

References

Crawford, K. (2021). Atlas of AI. Yale University Press.

Duffy, B. E. (2017). (Not) Getting Paid to Do What You Love: Gender, Social Media, and Aspirational Work. Yale University Press. http://www.jstor.org/stable/j.ctt1q31skt

Duffy, B. E., & Sawey, S. (2022). In/Visibility in Social Media Work: The Hidden Labor Behind the Brands. Media and Communication, 10(1), 77–87. https://ideas.repec.org/a/cog/meanco/v10y2022i1p77-87.html


Have Your Cake and Page Rank it Too

Dear Reader, this piece has intentionally been written in the style of a blog post, but don’t worry: I don’t have any affiliate links.

“That Chocolate Cake” by SliceOfChic is licensed under CC BY-NC-ND 2.0

PageRank, Algorithms, and Corporations (oh my)

PageRank makes finding a popular cake recipe website really easy, but getting to that cake recipe really frustrating. You have to read through someone and their dog’s entire life history and numerous links to other recipes, all as a way to maintain engagement and show you as many ads as possible.

Let’s try to get to that cake recipe …

My Google Search for a cake recipe, followed by one minute of scrolling to get to a recipe, sped up to 10 seconds. This video emphasizes the scale, volume and specificity of advertisements shown to me by Google's algorithm.

Now, this seems to fall into the category of an annoying but innocuous part of daily life. Maybe the blogger is making some amount of revenue (probably not) from their digital labour, the placement of those ads (via Google AdSense), and your engagement with them. More likely, however, Google is profiting off of the work and associated advertisements: profiting financially, but also through the collection of your data, which is now seen as a commodity or source of capital (Crawford, 2021).

Page What? 

But why is this happening? PageRank (along with the other algorithms involved in Search Engine Optimization, SEO) and content prioritization. The higher the score of the page you’ve come from, the higher the score you get. If I’m an advertiser, I want the most traffic to my site, and I want to improve my PageRank as I do it. So, I want my ads placed on as many high-traffic websites as possible, because I also want to share in the PageRank of those sites. This will increase my website’s score and, hopefully, push it further toward the top of a search result. There is probably not a young marketing guru sitting and deciding which websites I should work with to make careful and thoughtful placements of those ads; it is more likely an algorithm. The more resources we interact with on the internet, the more likely we are to be shown related ads, and often those ads have been promoted or placed by some algorithm. After all, algorithms are fast and cheap (Neyland, 2019). Advertisement and its proliferation is an important part of PageRank and content prioritization; as Noble (2018) states, “Google is an advertising company” (p. 5).
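The core idea, that a page inherits score from the pages linking to it, can be sketched as a short power iteration. This is a toy version with an invented three-page graph, not Google’s actual implementation (which handles dangling pages, personalization, and enormous scale):

```javascript
// Toy PageRank via power iteration. links[i] lists the pages that page i
// links to; every page here has at least one outgoing link, so dangling
// pages are not handled. damping = 0.85 as in the original formulation.
function pageRank(links, damping = 0.85, iterations = 50) {
  const n = links.length;
  let rank = new Array(n).fill(1 / n);
  for (let step = 0; step < iterations; step++) {
    const next = new Array(n).fill((1 - damping) / n);
    links.forEach((outLinks, i) => {
      // Each page splits its current rank evenly among the pages it links to.
      for (const j of outLinks) next[j] += (damping * rank[i]) / outLinks.length;
    });
    rank = next;
  }
  return rank;
}

// Pages 0 and 1 both link to page 2; page 2 links back to page 0.
const rank = pageRank([[2], [2], [0]]);
```

Page 2, with two inbound links, ends up with the highest score, and page 1, with none, the lowest: inbound links from high-scoring pages raise your own score, which is exactly what an advertiser placing ads on high-traffic sites is chasing.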

My search for cake didn’t directly bring me to an advertisement, although in some cases affiliate companies will be the “top search result”. While Google Search might mark ads in the search results, it will still “want” to show you pages that don’t seem like ads, pages you are likely to interact with and that will ultimately benefit Google.

The public generally trusts information found in search engines. Yet much of the content surfaced in a web search in a commercial search engine is linked to paid advertising, which in part helps drive it to the top of the page rank, and searchers are not typically clear about the distinctions between “real” information and advertising. (Noble, 2018, p. 38)

A Consumer Watchdog report showed evidence of Google prioritizing its own subsidiaries and partners over the competition (Noble, 2018, p. 56). So, PageRank brings me to my cake, which may also have affiliations with Google partners, all of whom are waiting for my clicks, and the invention of PageRank interrupts me all the way to my recipe.

Slavin (2011) introduces the “big red STOP button” as the only form of human interaction in some algorithmically controlled systems. He provides examples of elevators designed to group you to your destination, and financial algorithms that exist in a black box, unguided and unsupervised, and that will run until the big red button is pushed. However, the button is only included in systems deemed to need a failsafe. But who decides on the inclusion of that failsafe? Why would I need a failsafe on my journey to cake?

The risk comes when we forget that Google is a multi-billion-dollar company that just so happens to be seen as a reliable source of information, and whose prominence as a “portal to the Internet” (Noble, 2018, p. 153) overshadows other public access points (which cannot afford to compete; after all, they aren’t making any money from my desire for cake). Google’s algorithms are tuned to bring you to advertisements. Google Search uses information about you, your previous interactions, and its advertising partners and their information, and in combination with this data from every other user it decides which content to show you and in what order. Part of that order may be useful, like the relevance of your search keywords (based on how many other people searched “cake” and ended up clicking the link). However, there are still biases and systemic issues in the algorithms that prioritize content. For example, marginalized and oppressed groups may have “keywords” negatively associated with them, associations that come from public (as opposed to digital) racism and have been incorporated into a black-box algorithm. Noble (2018) gives the example of her search term “black girls” and how she was immediately shown websites containing pornographic or racist content.

We must ask ourselves how the things we want to share are found and how the things we find have appeared (Noble, 2018, p. 155)

I Google every day, in a mix of personal, academic, and professional settings that all ultimately influence each other, and me. My life in all of these areas is affected by the information I interact with. It creates a web of information that the algorithm uses to “decide” what else to show me, and it does so in a way that seems reliable if not carefully interrogated. The advertisements shown to me on my “journey for cake” are clearly attuned to this: Google sees me as a tech-savvy (ad: Dell), mattress-needing (ad: Endy) individual who hasn’t done her taxes (ad: Blackbaud).

What now?

Ultimately, understanding algorithms and recognizing that they are not objective or neutral, but actually shape the world we are in, is an important step in interacting with Google and its information. I can use the privilege of institutional access to various databases to avoid PageRank in some instances. Alternatively, I could influence PageRank by rallying my mass of social media followers (hi, Mom) to search a particular phrase and then always select the same result, as in popular instances of Google bombing (see “Idiot”: https://fortune.com/2018/07/19/donald-trump-idiot-google-bombing/). But first, I’ll have to post my own cake recipe and get it to the top of a Google Search…

References

Crawford, K. (2021). Atlas of AI. Yale University Press.

Meyer, D. (2018, July 19). Reddit users are manipulating Google images to associate ‘idiot’ with Donald Trump. Fortune. https://fortune.com/2018/07/19/donald-trump-idiot-google-bombing/

Neyland, D. (2019). The everyday life of an algorithm. Springer International Publishing. https://doi.org/10.1007/978-3-030-00578-8

Noble, S. U. (2018). Algorithms of Oppression : How Search Engines Reinforce Racism. New York University Press.

Slavin, K. (2011). How algorithms shape our world [Video]. TEDGlobal. https://www.ted.com/talks/kevin_slavin_how_algorithms_shape_our_world?language=en#t-896320

Varagouli, E. (2020, December 23). Everything you need to know about Google PageRank (and why it still matters). Semrush Blog. https://www.semrush.com/blog/pagerank/


My Patterns of Attention

Data Collection

Legend – categories of intentional task (where I intended to focus my attention).

The day in my life that I chose was one where I knew I had specific tasks to focus on, but where I also knew I was likely to be distracted. I chose the second day of a virtual conference I was attending. The conference mostly involved video presentations, included one social virtual activity, and was held live, with recorded content also made available. It took place on a day when I also had some work to do, as well as a group project meeting (school). I chose a day that was heavily virtual, since school, work, and the conference are all activities I participate in virtually. I did so because throughout Covid I noticed a growing inability to focus after hours on the computer, and I wanted to see how this would show in the data, especially when attending a heavily engaging event like a conference. The unique part of this conference is that all of the seminars are recorded and made available very quickly. After day 1 of the conference, I knew I would need more breaks, so for parts of the conference I deliberately listened to the recordings while purposefully multitasking (I also had many chores to do that day!).

I began by creating a spreadsheet to help me track where my attention was. I divided the day into 30-minute segments and took notes about what I wanted to focus on (“intentional focus”), what I was actually focused on, and the percentage of attention I think I was paying to the intentional focus activity. I also categorized whether the intentional task required deliberate attention, whether I was attempting to multitask, and the “area of attention”.

My data collection (cleaned up version).

Data Analysis

I developed a dashboard to help me explore my day and investigate any patterns in my activity. I used the exploration and building of the dashboard to find the insights discussed further in this blog post. Here is a quick walkthrough (the dashboard and video are meant to be self-contained, so they may repeat some components discussed here). The walkthrough shows the full-scale version of the dashboard, which can be found here: Attention Dashboard

A quick description of the dashboard and interaction features.
A short video walkthrough of the project and dashboard.

A smaller version (that “fits” in WordPress) can be interacted with here:

Key Findings

Device Overload & A Few Stats

I spent my day interacting with screens! Of the 13 hours of recording, I spent only 4 hours not looking at a device. However, even in those 4 hours I was listening to either a conference recording or an audio book.

Using the categories of “Virtual” and “Physical Environment” (the latter being anything not device-related), I spent just over 65% of my day paying attention to virtual devices. I purposefully multitasked for about 22% of the day, and attempted to focus on tasks that require deliberate attention for about 61% of the day.
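Percentages like these fall out directly from the 30-minute segments. As a hypothetical sketch (the field names are invented and do not match the actual spreadsheet columns), the summary could be computed like this:

```javascript
// Summarize 30-minute attention segments. Each segment records the
// self-rated % attention on the intended task, plus two flags.
// Field names are illustrative only.
function summarize(segments) {
  const total = segments.length;
  const share = (flag) => segments.filter(flag).length / total;
  return {
    // Average self-rated attention across the day.
    meanAttention: segments.reduce((sum, s) => sum + s.attentionPct, 0) / total,
    // Fraction of segments spent deliberately multitasking.
    multitaskShare: share((s) => s.multitasking),
    // Fraction of segments whose intended task needed deliberate attention.
    deliberateShare: share((s) => s.deliberate),
  };
}

const stats = summarize([
  { attentionPct: 80, multitasking: false, deliberate: true },
  { attentionPct: 40, multitasking: true, deliberate: false },
]);
```

With a full day of rows, the multitasking and deliberate-attention shares correspond to the 22% and 61% figures reported above.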

Multitasking

One pattern I find interesting emerges when I look at the data based on whether or not I was attempting to multitask. In these scenarios the “distraction” is really a secondary intentional task. My first instinct was that these deliberate multitasking sessions were followed by improved focused attention; however, this may be a misleading interpretation. The multitasking sessions involved lower focused attention precisely because I was deliberately splitting it. So, the upswings of attention after a multitasking event may just be a data collection issue. Reconsidering the data, then, perhaps I was having difficulty with focused attention, which is why I had so many multitasking events in the middle of my day.

Attention by hour, coded as whether or not I was attempting to multitask. Notice the arrows identifying deliberate multitasking followed by peaks of focus.

When I look further into the data, I can see that I only really attempted to multitask during life and conference events, and that multitasking during conference events brought my attention down to 40%.

Concluding Thoughts

De Castell and Jenson (2004) point to examples of asymmetrical attentional relations, where attention can be paid unidirectionally rather than reciprocally, in an argument against Goldhaber’s proposition that attention must be “paid back”. While I spent some of my day being paid attention to (in a group meeting, and in an online chat with friends), the majority of my day was spent simply ingesting information. Even then my attention was often divided, whether purposefully or not. My experience of multitasking is closer to that of the “youth” than the “elder”: I do multitask; however, my thinking about attention is closer to traditional views of the attention economy.

The data I have shared and the visualizations I created are somewhat cleaned-up versions of an originally “messy” data collection. Reflecting on the data cleaning process, I recognize that I developed a data collection and categorization framework that seemingly attached emotional value to “paying attention”. In the details about what distracted me, I have a negative association when writing and reviewing: “mind wandering, got distracted by phone, thinking about other”. I felt bad that I was distracted when I was not intending to be; when I was purposefully multitasking, less so.

De Castell and Jenson (2004) propose a new way to consider attention in an educational context, with a need to “better identify and develop forms of productive engagement in which dynamic, multimodal learning environments are animated by students’ deliberate and sustained attention” (p. 18). If I reframe my day toward learning: I was engaging with various kinds of media (audio, video) and communications (text, video conference), and I was making connections between conference materials and my day-to-day work. If I look at what I was distracted by (when it wasn’t TikTok), when I was working I was distracted by the conference, and vice versa. These could be considered distractions, or they could be considered opportunities for re-engaging with material in a new way, for contextualizing new information, or for being creative in my choices of digesting various forms of media. Indeed, what I was taking part in could be the organic convergence of media (Jenkins, 2001), where the context can be considered an educational one. Reconceptualizing the digestion of media in this way makes it seem a standard part of 21st-century information consumption, rather than a negative action.

References

De Castell, S. and Jenson, J. (2004), Paying attention to attention: New economies for learning. Educational Theory, 54: 381-397. https://doi.org/10.1111/j.0013-2004.2004.00026.x

Jenkins, H. (2001). Convergence? I diverge. MIT Technology Review.


IP2 – Artificial Intelligence

Key Figures of AI

Alan Turing was a mathematician, now most famously known for his role in code breaking during WWII. The “Turing Test”, since named for him, was a thought experiment that asked whether humans could distinguish the responses of a computer from those of a human. Turing thought that machines could indeed “think”, and this idea has been foundational to AI as we know it (Salecha, 2016). For Turing, anything could be intelligent if it can “think”, and this can be demonstrated by its responses to questions.

John McCarthy coined the term Artificial Intelligence in 1955 and created the early programming language Lisp in 1958 (Computer History Museum, 2021). Lisp introduced operators, notation, and functions that have led to algorithm creation and contributed to modern AI research (Lisp, 2022). McCarthy might say that intelligence depends on the use of common-sense knowledge (Allganize, 2020), and that it is simply the completion of simple to complex tasks (Crawford, 2021).

Herbert Simon was a social scientist who, along with Allen Newell, developed the Logic Theory Machine (Newell & Simon, 1956). This machine has been called the first artificially intelligent machine; it was able to use logic to solve problems once thought unsolvable by machines (Britannica, 2021). Simon might define intelligence in the same way: the ability to apply appropriate logic to solve given problems.

Marvin Minsky built the first neural network simulator (Dennis, 2022). Neural networks are models for machine learning that mimic the neural networks of the human brain. Data is fed as input through neural network layers to create an output; the network can be “trained” on whether the output is correct or incorrect, and improve its accuracy over time. Minsky would define intelligence as the ability to come to a correct answer given appropriate input.
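That train-on-error loop can be shown with the smallest possible network: a single artificial neuron learning logical AND. This is an illustrative sketch of the general idea, not Minsky’s actual simulator:

```javascript
// One artificial neuron trained with the perceptron rule to learn
// logical AND: inputs flow through weights to an output, and the
// error on each example nudges the weights toward correct answers.
function trainPerceptron(samples, epochs = 50, lr = 0.1) {
  let w = [0, 0];
  let b = 0;
  const predict = (x) => (w[0] * x[0] + w[1] * x[1] + b > 0 ? 1 : 0);
  for (let epoch = 0; epoch < epochs; epoch++) {
    for (const { x, y } of samples) {
      const error = y - predict(x); // 0 when correct; +/-1 when wrong
      w = [w[0] + lr * error * x[0], w[1] + lr * error * x[1]];
      b += lr * error;
    }
  }
  return predict;
}

const andGate = trainPerceptron([
  { x: [0, 0], y: 0 },
  { x: [0, 1], y: 0 },
  { x: [1, 0], y: 0 },
  { x: [1, 1], y: 1 },
]);
```

The network is never told the rule for AND; it is only corrected until its outputs match the examples, which is the sense in which such a system “improves accuracy over time”.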

Timnit Gebru is an AI researcher. Gebru is known for her questioning of AI models; for example, she left Google after being censored for work related to racist algorithms (Perrigo, 2022). For all of the benefits of AI, there are social issues of algorithmic bias that need to be considered more thoughtfully as the technology advances. Gebru might argue that an important distinction between human and artificial intelligence is that the latter must be evaluated from social and ethical constructs.

Language

Programming languages are predefined and do not change over time without human intervention, and they do not “share” vocabulary between languages, although they may interpret data similarly (i.e., “meta-data” in the Semantic Web) (Jones, 2020). Human languages evolve and change over time, which includes the adoption of phrases and words from other languages. The largest difference between these kinds of language is how they are expressed and interpreted: human language can be expressed by voice, sign, expression, etc., and is similarly interpreted by humans, while programming languages are designed to express outputs defined in the code itself, which are executed by computers while being simply “read” by humans.

Learning

Machine learning (ML) is an area of AI concerned with the development of algorithms that allow computers to detect patterns in provided data. ML provides only a model or an algorithm, which allows the machine to determine how to complete a given task. In this sense, machine “learning” might better be described as training, as it is not spontaneous, autonomous, or rational (Crawford, 2021), as we would describe learning in humans. A danger in this is the potential for biased or unethical algorithms that come from skewed or biased data when we ignore the social, political, and technical infrastructures that ML has arisen from. Human learning is a cognitive process that occurs in any environment, with any input, and can be demonstrated through a range of actions; there is no inherent danger in human learning.

Intelligence

In psychometrics, cognitive abilities can be described as a hierarchy from task-specific skill to generalized intelligence (Chollet, 2019). Machine intelligence occurs when a task can be completed without prescriptive orders. We would be remiss to make the same connections about human intelligence: how/what/that a human learns does not necessarily indicate a level of intelligence. For example, a human who plays chess might be deemed “intelligent”, but a machine that does so may not be, as it would be solving a specific task (Chollet, 2019). Additionally, human intelligence can be said to be more complex than simply solving a problem, and may include emotion, and self and social awareness.

Turing Test

I don’t think the answers to these questions, in their original format, greatly differ from what a machine could generate, given the appropriate input to that machine. The generation of my answers does include more contextual information than what could be input into a machine: I have a more nuanced sense of what the expectations are and how these questions fit into the context of the ETEC course. Perhaps my answers, as a person with a certain style of writing and knowledge in the area, might be distinguishable by a human who knows my writing. However, given appropriate models and time, I think there are enough advancements in artificial intelligence to generate responses indistinguishable to a human, and perhaps even this human.

There are two versions of this piece in its original form – A and B. One was written by me (a bona fide human), another was generated using some free online AI systems (https://app.inferkit.com/demo, https://narrative-device.herokuapp.com/createstory)

I had the opportunity to revise this piece, which I hope has created some distinguishing features that make the answers less comparable to a machine’s! In its original format the responses were basically definitions (easy for a machine to recreate). This revision (hopefully) incorporates important information about the social and human constructs surrounding AI, which makes this a more “human” piece of writing that is not machine-replicable.

References

Allganize. (2020, September 4). How John McCarthy Shaped the Future of AI. Retrieved from https://blog.allganize.ai/john-mccarthy/

Britannica, T. Editors of Encyclopaedia. (2021, June 11). Herbert A. Simon. Encyclopedia Britannica. https://www.britannica.com/biography/Herbert-A-Simon

Chollet, F. (2019). On the Measure of Intelligence. ArXiv, abs/1911.01547.

Computer History Museum (2021). John McCarthy. Retrieved from: https://computerhistory.org/profile/john-mccarthy/

Crawford, K. (2021). Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. New Haven: Yale University Press. https://doi.org/10.12987/9780300252392

Dennis, M. Aaron (2022, January 20). Marvin Minsky. Encyclopedia Britannica. https://www.britannica.com/biography/Marvin-Lee-Minsky

Jones, R. H. (2020). The rise of the Pragmatic Web: Implications for rethinking meaning and interaction. In Message and Medium (pp. 17-37). De Gruyter Mouton.

Lisp (programming language). (2022). Retrieved February 11, 2022 from https://en.wikipedia.org/wiki/Lisp_(programming_language)

Newell, A., & Simon, H. (1956). The logic theory machine: A complex information processing system. IRE Transactions on Information Theory, 2(3), 61–79. https://doi.org/10.1109/TIT.1956.1056797

Perrigo B. (January 18, 2022). Why Timnit Gebru isn’t waiting for big tech to fix AI’s problems. Time. https://time.com/6132399/timnit-gebru-ai-google/

Salecha, M. (2016, June 30). Turing Test: A key contribution to the field of Artificial Intelligence. Analytics India Mag. Retrieved from https://analyticsindiamag.com/turing-test-key-contribution-field-artificial-intelligence/


IP2 – Artificial Intelligence – Version A

Language

Most people know computers as the devices that they use every day to surf the internet, write emails, and play games. But behind the scenes, these machines are packed with code. The programs that make them work are written in programming languages like Python, Java, and C++. Programming languages are powerful tools that allow humans to create complex systems. They allow us to express our thoughts in a way that can be easily executed by a computer.

Humans are capable of a vast array of complex language. While some languages are more difficult to learn than others, all languages have their own unique set of rules and grammar. Some people may find learning a new language daunting, but with a little dedication, anyone can learn to speak it fluently.

Intelligence

Human intelligence can be compared to artificial intelligence (AI) in that it is a system that functions with high level of intelligence, but lacks the ability to gain actionable knowledge and response to complex problems. Human intelligence can be compared to AI in that it is a system that functions with high level of intelligence, but lacks the ability to gain actionable knowledge and response to complex problems. AI systems have gained tremendous understanding in image recognition, natural language processing, digital pattern recognition and decision-making. These technological advances, alongside powerful cognitive models and algorithms, have enabled AI to perform on a wide range of tasks. On the other hand, human intelligence can form intricate relationships with the environment to manipulate or communicate. This is different from computers’ inability to comprehend an infinite amount of data and take action accordingly.

Learning

Machine learning has been gaining more and more prominence in recent years, as researchers have begun to find ways to make machines smarter by teaching them how to learn on their own. While this technology is still in its early stages, it has the potential to revolutionize a variety of industries, from healthcare to finance. Human learning, on the other hand, is a centuries-old phenomenon that has been used by educators to teach students how to learn new information. While there is still a lot of research to be done in this area, human learning is thought to be more effective than machine learning in some cases.

Turing Test

I don’t think the answers to these questions greatly differ from what a machine could generate, given the appropriate input to that machine. The generation of my answers do include more contextual information than what could be input into a machine – I have a more nuanced sense of what the expectations are and how these questions fit into the context of the ETEC course. Perhaps my answers, as a person with a certain style of writing and knowledge in the area might be distinguishable by a human who knows my writing. However, given appropriate models and time, I think there are enough advancements in artificial intelligence that could generate responses indistinguishable to a human, and perhaps even this human.

In fact, there are two versions of this piece – A and B. One was written by me (a bona fide human), another was generated using some free online AI systems … can you tell the difference?

References

The AI systems used:

  • https://app.inferkit.com/demo
  • https://narrative-device.herokuapp.com/createstory

IP2 – Artificial Intelligence – Version B

Language

Programming languages are predefined and do not change without human intervention over time, they do not “share” vocabulary between languages. Human languages evolve and change over time, which includes the adoption of phrases and words from various languages. The largest difference between these kinds of language is how they are expressed and interpreted: human language can be expressed by voice, sign, expression, etc, and are similarly interpreted by humans. While programming languages are designed to express any variety of outputs which are defined in the code itself and can be expressed as such by computers, while being simply “read” by humans.

Intelligence

Artificial intelligence (AI) and learning are more closely related, and less subjective than human intelligence and learning. If we ascribe intelligence to a machine we can say that it completes a given task without prescriptive orders, and does so through machine learning. We would be remiss to make the same connections about human intelligence – how/what/that a human learns does not necessarily indicate a level of intelligence. Additionally, human intelligence can be said to be more complex than simply solving a problem and may include emotion, and self and social awareness.

Learning

Machines don’t know that they learn (yet). Machine learning (ML) is an area of AI regarding the development of algorithms that allow computers to detect patterns in provided data. While classic computer programming involves writing code to tell a computer how to perform a task, ML provides only a model or an algorithm which allows the computer to determine how to complete said task. Human learning can also be described as relating to pattern detection and data input for task completion, but I think very few would limit its understanding and complexity by using this kind of language.

Turing Test

I don’t think the answers to these questions greatly differ from what a machine could generate, given the appropriate input to that machine. The generation of my answers do include more contextual information than what could be input into a machine – I have a more nuanced sense of what the expectations are and how these questions fit into the context of the ETEC course. Perhaps my answers, as a person with a certain style of writing and knowledge in the area might be distinguishable by a human who knows my writing. However, given appropriate models and time, I think there are enough advancements in artificial intelligence that could generate responses indistinguishable to a human, and perhaps even this human.

In fact, there are two versions of this piece – A and B. One was written by me (a bona fide human), another was generated using some free online AI systems … can you tell the difference?

References

The AI systems used:

  • https://app.inferkit.com/demo
  • https://narrative-device.herokuapp.com/createstory
Uncategorized

IP1 – Humans, Computers and Usability

Word Count: 741

Human Computer Interaction (HCI) is a discipline concerned with the interaction between humans and computers, the principles of which can be applied to the development or design of computer systems and their software. Users (“humans”) interact with interfaces that are part of computer systems, which make changes to the virtual world depending on the programmed functionality. Usability in the context of HCI is both an attribute and a process – an application has “usability” if it is easy and enjoyable to use, and “usability” can also refer to methods in a design process that improve the user experience, or that ensure a design is functional, usable, and useful.

To conceptualize usability, it is important to first establish an understanding of related words – functional, usable, and useful. If something is functional, it works as intended. If something is usable, the function creates a desired reaction. Finally, if something is useful, the usable function can be said to be positive. For example, consider a light switch: it is functional if it moves up and down, usable if flipping it changes the state of the light, and useful when we can claim the experience and outcome were positive. In this example, we are talking about interaction with objects in the physical world that change the physical world – but we can also interact with objects in the physical world that make changes in a virtual world, as when we interact with computer systems. Jakob Nielsen (2018) makes the distinction between the physical world and the virtual one with regard to usability, using the example of a complicated coffee menu. In the physical world we are willing to tolerate less usability than in the virtual world, one reason being the effort involved in “course correction” (i.e. the effort of leaving a coffee shop for another vs. exiting a website deemed unusable). This distinction between the physical and virtual worlds is important when we consider that usability relates to physical interaction resulting in functionality that has been designed and engineered.

Woolgar (1990) describes usability testing “gone wrong”: when the design failed, instead of treating this as demonstrating a lack of usability, the testers intervened on behalf of the user (assuming real-world users would somehow be different) or failed to acknowledge that the failure scenario might be representative of real-world experience. In one case the user was asked to set up a printer; however, it turned out that the ports were not functional and the task was impossible. In another scenario, the tester identified a hardware issue (“a possible loose connection”) and pointed this out to the user. As previously mentioned, there is an important distinction between usability in the physical and virtual worlds – namely in our expectations of each and in the user’s alternative options, which can influence their experience. Interestingly, in the examples I’ve mentioned the challenges were “explained away” as physical issues, and we can assume that the testers did not frame these challenges as impediments to usability.

Issa and Isaias’s (2015) position on usability centres on the process of evaluating software in order to iterate on design. Woolgar (1990) recognizes that constraints are placed on the user when we ignore that a machine exists in context, and that an essential part of that context is the user. Both perspectives relate to a “perfect environment”: the HCI perspective aims to create a perfect environment for testing and understanding the user’s interaction with a prototype, while Woolgar (1990) holds that such perfection means ignoring that a user is a human in some environment outside of the usability testing. Woolgar (1990) does not take for granted that machines are new entities whose users’ actions may be set by the parameters of the designer.

From these perspectives on usability, we have learned that the user is often prescribed, or given, parameters in order to consider the usability of a computer system for a specific task. In the case of education, it is not so simple to ascribe such parameters, in particular because we cannot say there is always a defined task for a student. Given that learning is not as simple as “completing a task”, we must incorporate learning and cognition into the concept; the physical–virtual interaction discussed above also needs to incorporate the mind and intention of the student. Finally, the educational environment is not constrained to a single user – usability therefore needs to consider both the teacher and the student.

Issa T., Isaias P. (2015) Usability and Human Computer Interaction (HCI). In: Sustainable Design. Springer, London. https://doi.org/10.1007/978-1-4471-6753-2_2

Nielsen, J. (2018, September 21). Usability in the physical world. Nielsen Norman Group. https://www.nngroup.com/videos/usability-physical-world-vs-web/

Woolgar, S. (1990). Configuring the user: The case of usability trials. The Sociological Review, 38(1, Suppl.), 58–99.

ETEC524, Flight Path

What connections can I make to my previous ETEC courses?

I am pleased to be taking this course later in my MET journey, because the previous courses have provided an infrastructure that the technology we will learn about can become a part of. I see one of my goals for this course as making connections between the technology and what I have learned so far. I will challenge myself to reference what I have learned in previous courses, as well as specific articles or readings from those courses, where relevant to the current module in ETEC 524.

June 6, 2021: ETEC 520 – the assignments in this course had us reflecting upon and analyzing institutional “readiness” for e-learning. For assignment 1 in this course, I found that the technical analysis could have been balanced well by some of the information I learned previously (had there been more “space” available in our report). #ETEC520

Throughout this course, I have identified many connections to previous courses. I have been reviewing assigned readings as well as literature I have found throughout my MET journey, and hope to find a way to categorize these readings based on some of these connections. I am in the final stretch of the MET program, and asking this question in this course has led me to realize that I would like to complete the Graduating Project (ETEC 590) in order to make connections between the courses I have taken, and to formally synthesize what I have learned.

ETEC524, Flight Path

What do I need to know about “digital” vs. “mobile” technologies?

Looking ahead in the course, there is a module regarding mobile technologies which states that mobile technologies are not delivery platforms. I hope to investigate what that difference means today. My laptop is mobile, and my cell phone has the Canvas app – what are the key distinctions that matter when discussing the differences or similarities between these technologies? Do educators see a distinction where students may no longer? Is there a more important semantic difference that I am unaware of, or have I created a distinction that I need to break down?

In my initial formulation of this question I was focused on my personal experience of “mobility” when it came to technology – the technology I have access to is fairly mobile, and I have the luxury of choosing among multiple devices to engage with and work on. Grant et al. (2015) examined how mobile computing devices were used in K-12 classrooms, and noted that while some teachers allowed the use of devices that were truly mobile (remaining with the student throughout the day), others used the devices as substitutes for stationary computers. Upon reflection, I think my initial inquiry into the difference between digital and mobile was too narrow. The class discussion “If I build a house, will they come?” allowed me to expand this narrow viewpoint and consider communities of people who do not have the same access to infrastructure and technology – a consideration that blurs the distinction between mobile and digital that I initially had in my mind.
