Task 12: Speculative Futures

PHASE I

I am an old man, a sick man, past retirement age and still working in a macro-economy of knowledge workers creating micro-learning courses with micro-learning credentials for the ever-changing human strategists who need to consume learning objects at a rapid pace to maintain viable working status in a nearly fully automated, manufactured Ontario. It doesn't feel like it was long ago that Ernesto Peña recommended reading Harari's 2017 article, Reboot for the AI revolution. Maybe it feels that way because it is gloomy and rainy, just as it was in April 2021. The article was not exactly dystopian, but an honest projection of change in the face of beneficial automation: automation that secured health and safety on the roads with AI-driven automobiles, and advanced machine-learning tools that could predict the questions that I no longer could from large data sets. AI put many of us out of physical work and we hailed its advance. We forged new economic, social, and educational systems, precisely the way Harari suggested we should.

Formal traditional education institutions were challenged by their network of employment partners to give up on non-employability skills in academia for the greater economic good. Here are the historical artifacts, news outputs, and policy papers that were considered beneficial documents, meant to improve society by promoting learning for all and rapid economic recovery after the first pandemic:

When we were all sent home in 2020, the government of Ontario first invested $50 million to create virtual learning for everyone. High school education was funded only in areas that supported future training and development provisions supported by the FinTech and physical object corporations. I was as guilty as anyone else for the outcome, clicking on the thumbs-up icon every time I saw some future-supporting digital education initiative that eCampusOntario announced. Why wouldn't we support the building of a connected micro-credential ecosystem? Access and Empowerment were the catch terms of the 2020-2021 re-adjustment. "eCampusOntario is a not-for-profit centre of excellence and global leader in the evolution of teaching and learning through technology." The advent of standardized micro-credentials after the first virus ended aided the deployment of seemingly benign propaganda. We learned new skills in small bytes, old undesirable jobs were made obsolete, and we moved along. After the fourth pandemic, it was clear that education technology supported by government funding was merely a marketing mechanism to encourage employees to spend all of their earnings on government-sponsored products like solar panels, the energy from which is diverted to the networks that deploy AR learning objects in the millions every day. Augmented indeed.

 

PHASE II

In a similar alternate future, I lost my money in a poor investment strategy with education technology and was converted to the Graphic Debtor’s Prison file format and deployed as a pop-up ad to educate learners about the importance of critically assessing the materials always available in front of them. This is that future:

CREATE YOUR OWN: https://thrilling-tales.webomator.com/derange-o-lab/pulp-o-mizer/pulp-o-mizer.html

 

Harari, Y. N. (2017). Reboot for the AI revolution. Nature, 550(7676), 324-327.

Final Project: AI Tutors Affect Original Authorship. Should we care?

Introduction

Artificial Intelligence (AI) tutors affect authors and their creations. They influence teaching and learning and are considered both boon and bane. At issue with machine-learning algorithms is their role in teaching and learning, their value for education and the workforce, and how they will indelibly (digitally) leave their mark on intellectual and creative writings. This is a meandering excursion more than it is a research paper that defines a problem and draws conclusions from testing and literature review. We will consider a few examples of writing with AI tools before leaving with something interesting on which to reflect.

Bane

Universities have become increasingly dependent on education technologies like Grammarly (https://www.grammarly.com/edu) and Turnitin (https://www.turnitin.com/) plagiarism detection software to detect and address problems with missing authorship attribution and low language proficiency in student writing. They are supplementing hands-on teaching support with AI-driven technology coaches to help address foundational issues of grammar and spelling. Herrington and Moran (2001) outline some of the problems with arguments claiming that technology is faster, better, and less expensive, citing numerous concerns for the profession of English. They claim that machine-reading software has a negative effect on student learning, and that large-scale use of technology, combined with attractively rising stock prices for the companies leading the software industry, lends an artificial credibility to the tools, even suggesting that education should follow practices more like business functions (p. 495). At best, Herrington and Moran offer that technology companies provide good marketing campaigns to combat large class sizes, high student-faculty ratios, and the burden on instructors who must read large quantities of student writing samples. They believe the adoption of technology will permanently embed those structural problems into the institutions that use them.

Boon

McKee and Porter (2018) take a far less Luddite approach to engagement with AI in writing. They address three areas of concern that we should consider about the current and future state of AI technology development:

  1. AI chatbots are now operating in the workplace.
  2. AI writing bots, aka “smart writers,” will soon do our writing for us.
  3. AI-based teachers (aka "smart teachers"), or at least teaching assistants, are now in use at some universities.

Addressing the concern that teachers are being driven out of institutions by technology, McKee and Porter suggest this was already happening in introductory writing programs with the introduction of advanced placement writing tests, before advanced technology entered the scene. The researchers take a healthier look at the technological and cultural shifts that are happening with the increasing use of AI. They ask serious and important questions, like when and how AI-driven chatbots should be used as professional writers, whether there should be restrictions on their use, whether there are contexts where their use is inappropriate, and whether they should be used transparently.

AI tutors provide support for writing. How does this affect originality, and does it matter? IBM Watson Natural Language Understanding (NLU) is being used in experiential learning simulations in formative higher education and professional learning contexts. NLU is a step up from the type of natural language processing (NLP) that many of us encounter daily with Alexa and Siri, two voice assistants with NLP, aural capabilities, and oral responsiveness. If an AI tutor steers student written and verbal comments with informed feedback and fills in gaps to improve their understanding, what part of student reasoning may be eroded, and what part of the work is considered their authorship? What constitutes original work, and does it matter as much now if the goal is to support understanding? Does NLU challenge original authorship, enhance it, 'remediate' it? Let's take a look at an example.

Experiential learning use case

A Canadian start-up company, Ametros (https://ametroslearning.com/), was founded by university educators who wanted to support experiential learning in a digital environment with smart tutors under watchful supervision. The Ametros application uses IBM Watson NLU to gain powerful insights into the language habits and learning needs of students, and the product claims suggest that AI helps learners develop key skills in communication, among others. NLU is used to recognize problems with student decision-making and appropriate workplace communication, processing email-like written interactions from students who engage with AI chatbots that may be co-workers, employers, or customers. If a student who is meant to practice empathy in communication presents aggressive comments or uses an unsympathetic tone, the AI trainer corrects the behaviour with continuous adaptive feedback until the student demonstrates a capacity for empathy, as sketched below. [Ametros is the application that I have recently been working with professionally to see if it is a good fit. Unfortunately, I was unable to record a walkthrough of the application in use and so turned this reflection elsewhere.]
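To make the mechanics of that loop concrete, here is a minimal, hypothetical sketch in Python of the kind of adaptive tone-feedback cycle described above. It is not Ametros' or Watson's implementation; score_tone() is an invented placeholder standing in for whatever the NLU service actually returns.

def score_tone(message):
    """Placeholder empathy score between 0 (hostile) and 1 (empathetic).
    A real system would call an NLU sentiment/tone service instead."""
    hostile_markers = ["immediately", "unacceptable", "your fault", "obviously"]
    hits = sum(marker in message.lower() for marker in hostile_markers)
    return max(0.0, 1.0 - 0.25 * hits)

def adaptive_feedback(drafts, threshold=0.75):
    """Step through successive student drafts until the tone clears the bar,
    recording the feedback a tutor bot might give at each attempt."""
    history = []
    for draft in drafts:
        score = score_tone(draft)
        if score >= threshold:
            history.append((score, "Tone accepted; send it to the client."))
            break
        history.append((score, "Rework the message with a more empathetic tone."))
    return history

attempts = [
    "This is unacceptable. Fix it immediately.",
    "I understand the delay is frustrating; here is how we can resolve it together.",
]
for score, feedback in adaptive_feedback(attempts):
    print(f"{score:.2f}  {feedback}")

The interesting design question sits in the loop, not the scoring: the tutor withholds approval until the behaviour changes, which is exactly where the authorship question below begins.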

Courses with writing components like memos and informal reports can be supported by chatbots that improve engagement and provide feedback with a timeliness and level of professionalism (because they are search-engine enabled) that is not possible for educators who manage large classes. Moving a human-led, in-person class to an online environment does not necessarily enhance the quality of learning unless innovative methods are used. Education technology with NLU components might meet the same quality of teaching and learning standards as those used in the classroom. If a chatbot uses the Socratic method to support student learning while making minor corrections for spelling and grammar, what part of the student's written submission could be considered authentic? Original work may not always be a requirement in an experiential learning context. In a co-operative work-integrated-learning context, an employer may have partnered with a university network to find students who need real-world, authentic learning experiences that benefit the educational institution, the employer, and the learner. How a person acquires business communication skills or the basics of Microsoft Excel is less important than their correct application to support business purposes. Is that such a threat to academic integrity? Weitekamp, Harpstead, and Koedinger's (2020) effort to define a machine learning engine that is user friendly and employs a "show-and-correct" process is an interesting way to show teachers rather than programmers how to use these powerful tools for the benefit of learners, and it would be a supportive approach for institutions that embrace education technology solutions.

McKee and Porter (2020) offer some support with the growing number of unanswered questions here. They construct supporting arguments for two ethical principles that may be used to guide the design of AI writing systems:

  1. First, that there be transparency about machine presence and critical data awareness
  2. Second, that there be methodological reflexivity about rhetorical context and omissions in the data that need to be provided by a human agent or accounted for in machine learning

The learning goal of AI tutors is to highlight strengths and areas for improvement within an interactive writing experience and to provide research-based feedback on the correctness of natural language using formal rules. Does working through an AI interactive improve a learner's understanding? Does it make the writing feel less abstract or more? Will exposure to writing tutors help later when solving problems using critical thinking that require opinions and idiomatic expressions? Questions like who bears responsibility when your super-charged grammar checker makes a mistake are no different with AI supports than with a common word processor. Popenici and Kerr (2017) and Rouhiainen (2019), like McKee and Porter (2020) above, recognize the importance of advances in the use of AI technologies in higher education writing services for learners, and focus more on the ethical matters related to data privacy and data ownership that accompany the benefits of personalized learning.

Further exploration of these accumulating questions may be accomplished by working with design considerations when creating an AI-tutored session for writing. Instructors, employers, and learners are looking for real-life examples. AI can help differentiate high quality from low, and originality from plagiarism. When prompted to rethink a writing sample, learners may not notice the changes recommended. What are the AI prompts that highlight essential words or structural changes in the instructional sentence to make the writing and idea-generation (and thus reading) easier? Increasing interactivity using Chi's ICAP framework (2009) could be one solution. Considering the learning design principle of cognitive load, Chi's ICAP framework and its supporting hypothesis demonstrate that the more interactive the learning element, the greater the student engagement and learning. The four levels of interactivity in the framework are:

  1. Interactive activities that involve social interaction. The Ametros AI simulation provides a character bot to engage with.
  2. Constructive activities that involve writing or creating. Writing a business communication to a client with the support of the chat ‘boss.’
  3. Active activities that involve manipulating media. Perhaps clicking through a simulation and answering multiple choice questions.
  4. Passive activities that involve reading text or viewing images/videos.

Chi's research suggests that adding a constructive piece to the experience would positively impact learning outcomes. Focusing on the outcomes and not the originality of the work may be the right approach for some learning contexts.

A bright or bleak future?

Poetry chapbooks do not often sell well during a poet's lifetime. An AI writer may be an intelligent and economical alternative to expert human cultural tutors who fail to earn an adequate living as creators. Could there be a social responsibility to promote AI in this case, to decrease the unhappiness of human artists? While this excursion meandered more than it should have for its brevity, I have wondered how much these arguments for and against the use of AI in teaching, learning, and writing composition will matter in the not-very-distant future. What is all the fuss? In his 2015 TED Talk, "Can a computer write poetry?", Oscar Schwartz walks through algorithms that compose poetry by scraping his Facebook feed for muse and vocabulary, compares the result to a poem by William Blake, and asks the audience to guess which composition is human-created and which machine-generated. His final words fit well into the question about the changing spaces of reading and writing.

“But what we’ve seen just now is that the human is not a scientific fact, that it’s an ever-shifting, concatenating idea and one that changes over time. So that when we begin to grapple with the ideas of artificial intelligence in the future, we shouldn’t only be asking ourselves, “Can we build it?” But we should also be asking ourselves, “What idea of the human do we want to have reflected back to us?” This is an essentially philosophical idea, and it’s one that can’t be answered with software alone, but I think requires a moment of species-wide, existential reflection.” (Schwartz, 2015)

 

References

Chi, M. T. H. (2009). Active-Constructive-Interactive: A Conceptual Framework for Differentiating Learning Activities. Topics in Cognitive Science, 1(1), 73–105. http://doi.org/10.1111/j.1756-8765.2008.01005.x

Herrington, A., & Moran, C. (2001). What happens when machines read our students’ writing? College English, 63(4), 480-499. https://doi.org/10.2307/378891

McKee, H., Porter, J. (2018). The Impact of AI on Writing and Writing Instruction. Digital Rhetoric Collaborative. April 25, 2018. https://www.digitalrhetoriccollaborative.org/2018/04/25/ai-on-writing/

McKee, H., Porter, J. (2020). “Ethics for AI Writing: The Importance of Rhetorical Context.” AIES ’20: Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society. February 2020, pp 110–116. https://doi.org/10.1145/3375627.3375811

Popenici, S., & Kerr, S. (2017). Exploring the impact of artificial intelligence on teaching and learning in higher education. Research and Practice in Technology Enhanced Learning. https://doi.org/10.1186/s41039-017-0062-8

Rouhiainen, L. (2019). How AI and Data Could Personalize Higher Education. Harvard Business Review. October 14, 2019. https://hbr.org/2019/10/how-ai-and-data-could-personalize-higher-education

Schwartz, O. (2015, May). Can a computer write poetry? [Video]. TED Conferences. https://www.ted.com/talks/oscar_schwartz_can_a_computer_write_poetry

Weitekamp, D., Harpstead, E., and Koedinger K.R. (2020). “An Interaction Design for Machine Teaching to Develop AI Tutors.” CHI ’20: Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. April 2020, pp 1–11. https://doi.org/10.1145/3313831.3376226

Concatenation

There shall be no string theory in this concatenation! These characters, also MET students, may form a snowball, but which one of us is the snow, and which the ball? Cold articulation of the scenario with some juggling may judge. What is offered is more a chain-link fence than a nicely designed set of media-rich connections.

Task 03: Voice to Text — Ben | Deirdre D.

I was drawn to the visual aspect of Deirdre's assignment before the argument itself when I saw the annotations on the image of the text, simply because I was inclined to mark up my own text using former editing skills (though I did not); however, Deirdre offers an interesting comparison of speech-to-text options, first with an iPhone and again with a Microsoft Surface. Because she took that deeper level of interest in the mechanical comparison, I became interested. I merely used the recommended Speechnotes application because I had not tried it before.

Both the iPhone and the Surface seemed to offer a larger range of colloquialisms than Speechnotes, and fared no better or worse with spelling. Interestingly, all three technologies generated text in a stream-of-consciousness format with very little punctuation, and the mechanical errors were equally poor across them. Deirdre mentions that she thinks this is due to the nature of spontaneous speech and to errors in speech-to-text technology. I tend to agree, and remarked on my stammering or mumbling and the technology's need to make sense of it.

Task 04: Manual Scripts — Ben | Allison J.

Allison makes the link for no special reason other than that she likes a "good old Bic pen" for taking notes. Nostalgia is sufficient reason for me to linger. An important difference between our reflections is that I did not connect my thoughts to the documents for the week, for no particular reason other than that I preferred to enjoy a reflection on Wittgenstein's paradox of following a rule in his Philosophical Investigations while actually reflecting on a reading of Ian Bogost's book Play Anything.

Connecting Harris' (2017) illustrative example of preparing a podcast and how mistakes may be rectified, Allison's reflection on what constitutes an error and how such failings were not permitted in monk-copied biblical manuscripts was a nice distraction from the mechanical preparation of content creation and publishing. There are some comical images in the marginalia from some of the less orthodox scriveners. Had Allison, a teacher reflecting on the effect of COVID-related changes to education and family life, mentioned anything about connections to Aristophanic or Menandrian Old and Middle Greek comedy and how Bible copyists would sometimes draw obscene scenes from comic plays in the marginalia, I think there would have been a stronger connection worth reflecting on; however, her reflections about how learning "to type efficiently and properly on a 'QWERTY' keyboard" can make dissemination faster than the Bic pen were succinct. Wouldn't it be nice to take a deeper look at why QWERTY is a "proper" approach?

Task 06: An Emoji Story — Ben | Kirsten M.

I cheated! I do not watch much TV these days and would never have guessed The Queen's Gambit without clues. I am grateful for the hint, and I am sympathetic to Kirsten's concern that we were restricted to a non-text format. I did not quite capture an idea tied to the readings as well as Kirsten did when she wrote about "Kress' (2005) notion of collaboration between maker, receiver and/or remaker. The co-creation of communication is far more compelling (and realistic) than trying to control the message and its meaning as it is received." Bolter offered a similar argument: icons, in our case emojis, are not designed to fix authorial intent. Affixing a standard structure and meaning is what gives us comfort in plain language speech and writing, but the lack of flexibility makes for more boring poetry and suppresses the evolution of language.

More interesting to me was the way Kirsten redefines the steps in her approach at the end of her assignment:

  1. recalling a memorable moment
  2. recalling its emotional experience
  3. conjuring words associated with the emotional experience
  4. using those words to find symbols to tell the story

Those four steps are rather similar to the foundations of the oral tradition that we read, watched, and listened to in module three.

Task 09: Network Assignment Using Golden Record Curation Quiz Data — Ben | Melissa D.

Melissa's analysis was not similar to my own production. I enjoyed the discovery of filters and their uses, but moved on rather quickly to the limitations of the data set. While I thought that there could be some random connections between the selection methods of the group members who were tied more closely, knowing that the starting point of the process was indeed random, I moved away from seeking meaning in the connections and merely observed what groupings were formed and how.

Where I stopped, Melissa started. She not only proved that she learned how to use Palladio by showing how she used the tool to parse the data, she then read the observations and selection processes from the peers in her group to determine whether there were any significant qualitative or quantitative connections. Knowing that a 6/10 match was derived from an apparent interest in joyful music is interesting, but the argument for anything deeper gets weaker thereafter. What is interesting here is that Melissa completed the quantitative analysis even after recognizing its limited value. Had we both been working with the complete data set in Palladio, including the orphaned songs, I think that we would present very different research questions and conclusions. Melissa's data set would include the qualitative data from the blog posts, which could be coded in a manner that might be added to the selections in Palladio, creating what would appear to be a richer connection. Without an established set of metrics for selection, I undervalued the viability of that qualitative data for my interpretation and might continue with that type of analysis.

Task 10: Attention Economy — Ben | Nathan B.

Nathan's reflection on the User Inyerface UI experiment was methodical, screen by screen. Our analyses were conducted differently, but the results were similar. I looked at the HTML page source to find the offending culprits that were preventing my progress through the site, while Nathan reflected on how his experience matched many of the dark patterns described in the readings for the week. The site design had a clear purpose, to prevent users from progressing, suggesting a darker aim of obstructing a user from leaving the site. I agree with Nathan that it is interesting to see so many of the dark practices active in one place, showing the obfuscation that occurs with technology and digital services. We could both have offered additional attention to Tufekci's TED Talk and its implications for society, security, buying, and buy-in patterns. In short reflections like these two, where the analysis is on the site itself, it is difficult to assess how deeply the authors connected with the course materials to aid their responses.

Task 11: Algorithms of Predictive Text — Ben | Lori J.

B: Education is not about the problem of being able to remember facts about how an elearning system works best with your computer skills or similar systems that can crunch your time using software development technology, but it’s important to understand how you work and create

L: Education is not about the risk of being able to make the necessary money for a non governmental organization. 

Ok, so where's the connection? Lori's prediction simply reminded me that I work with a for-profit education company, and I wondered what that perspective on risk, from the user's point of view, could mean in my own professional context. I immediately and hopefully imagined a college or university instructor musing about how great publisher-provided (at a cost) content is for supporting teaching and learning, and how, if they do not offer praise for its value or connect their colleagues to the publisher in a value-driven manner, the content co-creation well will dry up and students will ultimately suffer from poorly vetted, low-quality materials that will eventually fade along with the current technology they are using to suffer through their exams. But that thought departs from Lori's purpose.

Comparing predictive text from her iPhone, used for professional purposes, and Gmail, used for personal conversations, she discovered an interesting difference between the systems. The iPhone offered a more professional representation of words and ideas, properly reflective of her growing vocabulary database of workplace conversations. Gmail was trained to express itself informally, in alignment with communications among friends and family. I think Lori's experiential connection to O'Neil's notion of a context-oriented world grappled with the weekly reading at a deeper level than my analysis did, and I appreciate the second thought about contextual weapons of math destruction that O'Neil raises.


Overall, I found the thoughts, presentations, and comparatist approach an interesting way to spend some time with peers. I did not particularly excel with my mainly text-based responses in meeting the pedagogical underpinnings of this course, where media-based stimulations would be more welcome. Packaging, even for chain-link fences, can be an important selling tool.

Task 04: Manual Scripts

These are some thoughts generated from a reading of Ian Bogost’s 2016 book Play Anything: The Pleasure of Limits, The Uses of Boredom, & The Secret of Games.

I rarely write by hand, because most of my writing activities are generally consumed quickly by others and so must be immediately shared. No, I am not suggesting that my email messages to my mother about the wonderful progress of her garden seedlings are important and must be shared, but I do write a lot of process documentation for my work. Writing by hand is fast and simple, and clearly messy = less consumable by others. What makes this writing task difficult for me is that the idea generation is destined for the recycling bin nearly immediately. Had I typed the thoughts and saved them in a document, they would have a longer life and could at some point become something. Like seventeen of the songs on the Golden Record, I will summarily kill these thoughts for the salvation of others.

There were mistakes. Does the use of shorthand constitute a mistake in handwritten text? I often use an alpha, the Greek letter α, to represent "and." I could just as easily type an ampersand (&). But I do not use alpha or "and" consistently. Typos receive a strikeout. I did not start over, just scribbled. These types of character assassinations make manuscripts from interesting people interesting, but in my case the erasure by desktop software makes it easier on the reader, who will understand there were no lost gems. That makes the act of editing rather simple with this method. A line or scribble is much faster and simpler than reflecting on the spelling and grammar accuracy of mechanical editors before taking action of any sort.

Perhaps the most significant difference between writing by hand and typing is that I write very quickly by hand with ideas flowing easily and with fewer regrets about malformed sentences or inadequate clauses to support an argument. I prefer to type thoughts, mainly because I am slower, take the time to reflect, and can wipe out offending articles without thinking about them again. With handwriting, the failed thoughts sit on the page as a reminder and that is distracting.

Task 03: Voice to Text

For this voice-to-text experiment I used Speechnotes to record a slowly delivered reflection about network effects, or the human effects of peer endorsements in professional development, used in theory to help connect knowledge workers to new gigs in a post-COVID world after their livelihoods were lost to process automation. This topic has a growing life of its own, and working in the higher education courseware space I tend to think a lot about what it means to represent something with veracity. Are you reliable, are your endorsements valuable, are your skills verifiable, does an experiential learning module represent authentic learning? I attempted to describe the current state of things from my perspective and ask questions as if in a conversation, and what is immediately clear from the voice-to-text paragraph below is that verbal imprecision leads to lost meaning as early as the beginning, in line one.

First, the artifact:

This brief reflection began as a two-part 1 professional one academic about Network effects in the knowledge economy and was intended to be a podcast sinking in their capacity decision making brain power creating a podcast about this would have been professionally inauthentic content for this prolific email or digital workload to Mentor so let’s frame the problem simply in text we are not actively contributing to the knowledge economy by ranking rating creating sharing and Co validating the skills and experiences of colleagues instead of merely considering the product we are out of the future employee ability games and more likely to be subsumed and dissolved by process what worries me about this is I’m not yet sure on which side of the date on most comfortable I trust my colleagues who Windows me do not accept reviews from people that I have not worked with and do not fully understand which and why I appear in some that are less relevant pure effects of term commonly acacian literature refers to external social interaction effects that affect affect our behaviours and possible outcomes I wonder whether the professional networking sites have truly transformative power 2021 marks my Thirteenth Year working in e-learning with higher education group in Canada cover learning strategy learning technology and delivery crop cross-functional leadership Innovation and design thinking and there are a handful of skills in my workplace I said on an innovation team to foster a culture of innovation march 10th I attended business strategy strategyzer demystifying Innovation theatre the point of the webinar was to help readers understand Innovation Innovation management from the sea levels through the come through a company exploring the point of the webinar with help readers understand Innovation management from the sea level throughout the company practices Associated tweet about it and there is no digital certificate for participation in profile while I discuss the importance to be avoided with my local Innovation team that would help guide the conversation using Post-it notes breaking something with value I did not receive any sort of digital artifact or pure Bell my contribution to the company’s organizational experience knowledge gained or shared results into my access to Future opportunities within my company and Beyond according to LinkedIn learning 5th annual workplace learning report resilience digital fluency number one and number two most important skills qualifying the value of learning or resilience is not exactly straightforward open is this This activity obscure perceived value indicates that someone has belly to contribution or even better has actually witnessed some effort in the back it with their own credibility the power in The credibility to help build value in what part of those. 
Multiplier rule order by their experiences should give yours a boost machine learning algorithms work the same way they’re not just making choices for you and your social networks in the sympathetic credible many jobs can be automated in those in transition may require human relatability what will be the recommend headlines from your network appears or will this be left in metadata scrapers that check validation scripts and digital badges 2 years before the outbreak of covid-19 current and future knowledge workers home from University and workplaces to Lloyd’s feature work-study how tomorrow’s job Seekers would increasingly need to find others who helped them get better faster small working groups organizations and broader and more diverse what is the process has not found it slower and Reviving would that be better evidence of this shift to control substance and peer Network verifiability on a smaller and focussed and Innovative easy company that currently Fosters a closed community a certified dentist tool share. Using the power of local and Global professional networks and platform analytics to help individuals construct a professional reputation Gather continuing education credentials and benefits from industry-specific professional networking that can be relayed to certification bodies that seems like a worthwhile voices across Industries with upscaling and automation change the focus of the community covid related expansion of Online skills develop and employment will continue in the post covid-19 linkedIn learning report mentioned above clearly outlines examples from large well-funded come place value on Learning and Development the state of the job seeking industry relies not only on individual resilience and adaptability but also on the building. Active networks of digitally archived experience has the potential impact factor for opportunities is clear a mentoring connector database an active online community online community has developed credit for the value that they are currently this could become the norm is the bar value when there’s no permanent and where individual values temporary is it another challenge that requires opening mobile tech an interesting facet flexibility in mermaid for companies presented as a perfect day workers value with options for working hours to be mindful of what’s present in the non knowledge labour market by comparison schedules and payment like working for Fedora of working hours at design and control policy Dora app

On the turning away:

What was returned to me in text form was inauthentic authorship. Neither the ideas intended to be conveyed nor the voice are distinctly mine. I tried to speak slowly and clearly, with rising intonation for questions and pauses between sentences, and yet Haas' leading question becomes relevant faster than I anticipated: "What is the nature of computer technologies, and what is their impact on writing?" (2013, p. 3). In this case, perhaps due to mumbling on my part, there are words missing or misplaced in nearly every sentence. I was deliberately careful to time pauses in a manner to indicate pause versus full stop, but punctuation was not captured. Another technology editor would be needed to clean up this text block, perhaps something using IBM Watson Natural Language Understanding.

This sample could reasonably be considered the written word of an inebriated person writing about their professional context. That is not the case, though the result is similar. How does the application select capital letters for some nouns and verbs and not others? How, even with emphasis, does the software not capture the ends of sentences and add periods consistently? There are at least three ideas that would normally be formed into distinct paragraphs, and this text conversion loosely elongates them into one long paragraph with poor, run-on sentence structure and too few conjunctions to join clauses.

There are obvious lacunae in the text where some form of emendation is needed to complete an idea or sentence. Correcting punctuation alone would not repair the lost meaning. Ways of knowing are clearly impeded with this technology. Some words are missing, parts of sentences are missing, and some words have been recorded as others. For example, "digital workload to Mentor" should be "digital workflow manager." It is not all bad. With some simple editing, the text could be repaired into something meaningful. Spelling is correct for the most part, including Canadian spelling.

In what ways does oral storytelling differ from written storytelling? Oral storytelling is more likely to contain mnemonic devices like formulaic phraseology, standardized epithets, and metre to help the orator recite the message multiple times. Gnanadesikan (p. 2) reminds us that in oral traditions, information only exists if someone can remember it. I like Foley's comparatist papers on oral epic poetry in South Slavic cultures and Homeric epic. His brief 2005 article hints at his lifelong argument that "Oral traditions work like language, only more so, …. In order to escape a mechanistic conception of oral traditional language, we must inquire into the idiomatic implications of the registers involved. Beyond philological description, in other words, lies traditional referentiality …. that we need to become aware of its entire spectrum rather than attempting a false, unrealistic reduction to some primal concept that field research does not support." There is a lot to unpack there, but this voice-to-text exercise, by poor analogy, could show a representation of an undefined idiom that we are simply not aware of. It may not be as simple as breaking down the text mechanically, though I would be surprised if there were any more going on here than mumbling, poor mic quality, and a rudimentary application trying its level best. I reread a couple of lines back in the manner that I intended them. There was no significant difference in the quality of the revised text, and no improvement at all in securing the meaning.

 

Foley, J. M. (2005). South Slavic oral epic and the Homeric question. Acta Poética, 26(1-2). Retrieved from http://www.scielo.org.mx/scielo.php?script=sci_arttext&pid=S0185-30822005000100004

Gnanadesikan, A. E. (2009). The first IT revolution. In The writing revolution: Cuneiform to the internet (Vol. 25, pp. 1-12). John Wiley & Sons.

Haas, C. (2013). “The Technology Question.” In Writing technology: Studies on the materiality of literacy. Routledge. (pp. 3-23).

Task 10: Attention Economy — A worst-practice UI experiment

On the surface, User Inyerface is a normal website that challenges user interactions and website design.

It is not normal. Right-click on the blue background, select View page source, and you will find the following title and site description in the HTML code:

<title>User Inyerface – A worst-practice UI experiment</title>
<meta name="description" content="User Inyerface – A worst-practice UI experiment">

The site is a modern web design experiment meant to challenge designers and web users in a Web 1.0 manner, intended to uncover multimodal challenges and to encourage the implementation of Web 2.0 standards. Or is it? The site is tracked with Google Analytics. Here's a deliberately, annoyingly small image with the GA tag highlighted in light grey for obscured viewing:

And here is the code snippet:

What are the behaviours that are tracked, and why are they desirable to track? Grit, perseverance, critical thinking? At least anonymous tracking is set to true; otherwise I might have been inclined to spend less time testing accessibility and code and more time trying to complete the test faster, for an anonymous data collector to ignore my improved timeliness of completion.

Did you test the site for web accessibility? I did. Buttons are properly identified. Images have alternative text. If a web user is aware of keyboard shortcuts and screen readers, this site is (sometimes) more navigable with closed eyes than it is for a sighted user. A web accessibility colour contrast checker shows the site fails for people with colour-blindness. Foreground colour #0C58DA on background colour #29C566 has a contrast ratio of 2.7:1. Follow that permalink and you will see that normal text, large text, and graphical objects all fail the WCAG AA standards that have become the legal and desirable minimum norm in Canada for web accessibility.
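For anyone who wants to double-check that ratio, the WCAG 2.x relative-luminance math is short enough to run yourself. A minimal sketch in Python, using the two hex values quoted above:

def channel(c8):
    # sRGB gamma expansion for one 8-bit channel, per WCAG 2.x
    c = c8 / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def luminance(hex_colour):
    r, g, b = (int(hex_colour.lstrip("#")[i:i + 2], 16) for i in (0, 2, 4))
    return 0.2126 * channel(r) + 0.7152 * channel(g) + 0.0722 * channel(b)

def contrast_ratio(fg, bg):
    lighter, darker = sorted((luminance(fg), luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

print(round(contrast_ratio("#0C58DA", "#29C566"), 1))   # 2.7

Rounded, it comes out to the same 2.7:1, well short of the 4.5:1 required for normal text at level AA.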

Should we be happy that the click path in this annoying but benign website is not an ad generator and is not linked to other information-feeding mechanisms leading to the dystopia outlined in Tufekci's (2017) TED Talk? Yes, quite certainly, because the challenge of completing the exercise exposes our critical thinking abilities and prior knowledge of applications used in ways that make targeted information funnels easier to manipulate. And yet I am not completely sure how benign it is. If you disable cookies and JavaScript, do you experience something less confusing (better)? Did you try it on a mobile device and see a message that there isn't an app version?

Brignull and the darkpatterns.org community have created an interesting pattern library intended to name and shame the use of manipulative designs that make people sign up for or pay for things they do not need, want, or intend to consume. If you are not interested, simply make your way through the labyrinth to the end, take a screenshot, and never return. What is gained from a little perseverance is an understanding of the underlying structure and the deliberate means of attention seeking for beneficial aims:

 

Brignull, H. (2011). Dark patterns: Deception vs. honesty in UI design. A List Apart, 338. Retrieved from https://alistapart.com/article/dark-patterns-deception-vs-honesty-in-ui-design/

Tufekci, Z. (2017). We’re building a dystopia just to make people click on ads. Retrieved from https://www.ted.com/talks/zeynep_tufekci_we_re_building_a_dystopia_just_to_make_people_click_on_ads?language=en

 

Task 06: A 90s emoji story

Emojis are a shortcut to informal expressions of thoughts and feelings, but be careful how you use them because the result may have unintended consequences! This is an attempt to use emojis in a context for which they were not intended, but also without the familiar, standard decontextualized connotations of fruits and vegetables.

TITLE

PLOT

Combining knowledge of a movie from the 1990s and emoji literacy is not a simple task for a person who conventionally avoids the use of emojis and who is generally aware of their subverted uses and subsequent pitfalls. This reflection would not be complete if I left out the feeling of dread at attempting something that I openly resist in day-to-day personal and business communications; and that is why I accepted the challenge.

I started with the title. It seemed like it would be an easy win for a tyro. Relying on the concept of the film, on character, and on plot became immediately necessary to create something descriptive enough to hint at the title. Why immediately? Proper nouns and pronouns do not exist in emoji form. What, then, is possible? What material is available to work with? Buchholz (2020) writes that there were 3,136 emojis available on devices in 2020 and that we can expect 3,353 by the end of 2021. That seems sufficient.

Bolter (2001, p. 73) describes a scenario where writers of prose exploit decontextualization and produce polyvocal texts in order to manipulate their readers' perspectives. My aim was to produce as literal a 1:1 representation of the title and plot as I could muster, not to interpret language unless it became necessary. Can emojis help to create the univocality that I seek, and under the same conditions that Bolter writes about regarding icons? I started with the assumption that the standardization of emojis would help, but what I found even with the creation of the title is the potential emergence of idiolects, as people and cultures create new interpretations and meanings derived from original ideas and transformed into something else. I preferred to avoid the consequences of the produce section, and simply tried to standardize my own use of emojis throughout the plot description.

These icons (emojis) are quite flexible and are not designed to fix authorial intent, as Bolter describes. One or two emojis combined can create complete sentences. With text-based emoticons, meaning is entirely a function of the shape (think semi-colon plus hyphen plus closing bracket to represent wink-nose-smile). Emojis are pictograms and seem to transform the movie plot into visual poetry. While the original media was a form of visual representation, the expressed ideas were not intended to be reduced to poetry. Not relying on syllables, I attempted to reduce the plot to the simplest set of icons and, while working out the structure, nearly created a haiku (too many parts and images).

What are the affordances of emojis? They are a convenient shorthand for people who are using mobile devices, are in transit, or are short on time. Can they be used as features in a revised grammar the way Kress describes pitch variation, syntax, vowel quality, energy variations, lexis, or textual organization (2005, p. 12)? They can share common features or stand alone, but are they as diverse and complex as a language? I did not get the impression that I was able to break the mould of literary convention and express with images exactly what I wanted, contrary to Kress's remark (p. 15) that this can be so. My avoidance of emojis in chat and email leaves the impression that this writer is conventional, and I would agree. I leaned heavily on an ability to read a poem more than an ability to represent the movie visually with emojis: meaning and subjectivity, tone, mood, pace, syntax, rhythm, metre, and then an exploration of images that might be used to describe the main characters and rewrite the plot. Emoji-mediated language use in this context seems less concise to me than the written or spoken word.

 

Bolter, J. D. (2001). The Breakout of the Visual. In Writing space: Computers, hypertext, and the remediation of print (2nd ed.). Mahwah, N.J: Lawrence Erlbaum Associates. doi:10.4324/9781410600110

Buchholz, K. (2020). In 2021, Global Emoji Count Will Grow to 3,353. Statista, September 25, 2020. Retrieved from https://www.statista.com/chart/17275/number-of-emojis-from-1995-bis-2019/

Kress, G. (2005). Gains and losses: New forms of texts, knowledge, and learning. Computers and Composition, 22(1), 5-22. https://doi.org/10.1016/j.compcom.2004.12.004

Task 11: Algorithms of Predictive Text — Education is not about…

Auto-complete predicts many possible paths for sentence construction and contrived meaning. Except for the brain emoji that I deliberately selected at the end of the micro-blog, how much of the decision-making process was mine? How much of the auto-complete was predicted based on my past activity texting, writing email, searching with Google in the Chrome mobile app, and across other mobile apps that share data? Would that make this auto-complete micro-blog more my creation, based on selection over time (Darwinian?), or is it still more a machine application based on hints that it fed to me according to my LinkedIn profile (Connect with me!) and ad opt-ins? Am I an autonomous agent, or might there be agentive and biased algorithms at work here?

Education is not about the problem of being able to remember facts about how an elearning system works best with your computer skills or similar systems that can crunch your time using software development technology, but it’s important to understand how you work and create 

At first glance this seems benign, like a poor browser-based translation, and if there are agentive algorithms at play they are clearly appealing appropriately to my educational, personal, and professional interests, as evidenced by hundreds of searches, responses, posts, opt-ins, and adjacent hyperlinked explorations. It is not my voice, but not so far off as to be alarming. I recently posted a data set with Google Data Studio and received an AI prompt to translate my work into Danish. Perhaps a translation to Danish and back would generate this auto-complete sentence. Algorithms originally developed with human intent are now crunching big data sets and are no longer managed with human understanding. There is something not quite right. Does opting in blindly to all app services to allow full surveillance make the loss of authorship ethical? Are my texts to colleagues, friends, and family authentically mine?
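As a rough illustration of why past activity matters so much, here is a minimal sketch in Python of frequency-based next-word prediction. It is not how the iPhone or Gmail models actually work, and the training text is invented, but it shows how a predictor trained only on your own messages can do nothing but echo what you have already typed:

from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count which word follows which in the training text."""
    words = corpus.lower().split()
    model = defaultdict(Counter)
    for current, following in zip(words, words[1:]):
        model[current][following] += 1
    return model

def predict_next(model, word, k=3):
    """Return the k most frequent continuations seen after `word`."""
    return [w for w, _ in model[word.lower()].most_common(k)]

# Invented stand-in for "my past activity": texts, email, searches.
past_activity = (
    "education is not about the problem of remembering facts "
    "education is not about the risk of losing money "
    "education is about how you work and create"
)
model = train_bigrams(past_activity)
print(predict_next(model, "about"))   # ['the', 'how'] -- it can only echo me

The real systems layer far more context on top of this, which is exactly where the bias question begins.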

My online activity may be measured and counted, but I am not a victim of algorithm bias as described by Cathy O’Neil (2017). O’Neil describes a scenario whereby a useful software tool is intended to help solve crime, and whilst performing as intended it targeted neighbourhoods with prior cases of physical crime and avoided crimes like financial fraud in nearby privileged communities. O’Neil calls this predictive tool a weapon of math destruction, “math-powered applications that encode human prejudice, misunderstanding and bias into their systems.”

Just how biased is unjust? Do you know what a CEO looks like? What is your first impression? Using Google Search in a Chrome browser, type CEO and look at the images. Internet algorithms search personal profiles, job descriptions, and images. The majority of the top 100 images for CEO are white males aged 45-55 with short hair, wearing blue suits. Are search results similar in more populous and technologically advanced countries like India and China, and can women become chief executive officers?

Auto-complete favours my browser search and professional networking preferences but not my syntax, most-used vocabulary, or key terms in related email, text, and LinkedIn activities. It is certainly networked, but that network is not quite right. For example, if you paste the auto-complete statement into Google Search in Chrome you will find a host of elearning systems, ads for Learning Management Systems (LMS), and related blogs and technology news. And because I am enrolled at UBC, I now often find posts from Stella Lee associated with my elearning, LMS, and AI searches. I do not mind: Stella's writing style is succinct and the information is relevant (for me). Timely and opportunistic is that search: a relevant 2020 Inside Higher Ed article by Peter Herman about the future of online learning. Rather, that Online Learning Is Not the Future. Search alternatives in Chrome are very much like ads on LinkedIn for an LMS, where strengths and weaknesses for compliance training are outlined in pro and con statements. And in 2021 I have regularly 'liked' posts by Robert Luke from eCampusOntario, where a focus on LMS, micro-learning, and digital credentials are core components of their networked services. There is no coincidence, only design. My contributions, searches, and likes will be selected to support online learning in Ontario and in turn offer me industry-related information that I seek and will validate with my professional opinions, rankings, and ratings, potentially leading to my next job.

 

 

Herman, P. C. (2020). Online Learning Is Not the Future, Inside Higher Ed June 10, 2020. Retrieved from https://www.insidehighered.com/digital-learning/views/2020/06/10/online-learning-not-future-higher-education-opinion

O’Neil, C. (2017). Justice in the age of big data. Ideas.Ted.Com, April 6, 2017. Retrieved from https://ideas.ted.com/justice-in-the-age-of-big-data/

Task 09: Networked Golden Records

The original Golden Record networked data set from the class is initially an interesting visualization, but it shows only weak potential, with a display of random connections between nodes based on personal choices that were completely free. Larger nodes indicate busier traffic intersections where songs were selected more frequently, but why those bottlenecks occurred and what the traffic rules were remain largely unknown until we read through our peers' rationales for musical salvation.

Palladio was designed as a tool for reflective practice, meant to visualize complex historical data. As far as data sets go, this one is not very complex. We know that 23 students each selected 10 songs from a set of 27 tracks. Unfortunately, we do not know what methods were used to determine inclusion and exclusion. Palladio does allow us to see each target and source and that they connect. That is not uninteresting, but the tool cannot tell us why a person made a selection or how they felt about erasing part of the historical record when doing so. Had other standard dimensions been set up at the start, we might assume the graph was intended to do something.

Sizing the nodes makes the visualization more interesting, showing which songs are favoured by a greater number of students.

Selecting “Sum of modularity_class” seems to show that Track 4 is an orphan, meaning it has the weakest (or no) connection in the group.
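The node sizing and the orphan-spotting above amount to simple counting. A minimal sketch in Python, with an invented toy subset standing in for the class data set of 23 curators and 27 tracks:

from collections import Counter

# Invented toy subset in the same shape as the class data: curator -> chosen tracks.
selections = {
    "Curator A": {"Track 3", "Track 7", "Track 14"},
    "Curator B": {"Track 3", "Track 12", "Track 25"},
    "Curator C": {"Track 7", "Track 12", "Track 18"},
}
all_tracks = {f"Track {n}" for n in range(1, 28)}   # the 27 tracks on the record

counts = Counter(track for chosen in selections.values() for track in chosen)
orphans = all_tracks - counts.keys()

for track, n in counts.most_common():
    print(f"{track}: chosen by {n} curators")       # node size grows with n
print(f"{len(orphans)} tracks never chosen (orphans)")

Counting is all the graph can really tell us without the rationales; the why still lives in the blog posts.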

I use a similar graphing tool at work to visualize knowledge graphs. In my professional context, knowledge graphs are meant to do something; they support student learning by providing a personalized learning path based on mastery. By connecting one node to another, or pre-requisite and post-requisite knowledge (learning objectives), with a Strong or Weak connection and including a Justification, we see a different type of picture:

Edges are a relation of some sort between two nodes (a nice description by Systems Innovation, 2015 April 18 and April 19). The visualization above serves a few purposes. It shows that one chapter.section.objective node may be connected with other chapter.section.objective nodes with a Weak justification, shown by narrow lines (e.g., mastery of the objective is not completely dependent on having prior knowledge of another node, or pre-requisite edge), or a Strong justification, shown by thick lines (e.g., mastery is dependent on understanding a prior objective/node, and success on the following objective/node similarly relies on that knowledge). It is an algorithm based on a variation of item response theory that is eerily similar to Amazon's recommendation engine, where a weak link between nodes is where we see that other people were also interested in X, while with a strong link we see that people who purchased that book on Stoicism also bought Y.
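A stripped-down sketch of that kind of graph, with invented node names and justifications, might look like this in Python; the Strong edges are the ones a personalized learning path has to respect, while the Weak edges behave more like the "people also looked at" recommendations:

from dataclasses import dataclass

@dataclass
class Edge:
    prerequisite: str     # e.g. "1.2.1" = chapter.section.objective
    postrequisite: str
    strength: str         # "Strong" or "Weak"
    justification: str

# Invented nodes and justifications, for illustration only.
graph = [
    Edge("1.1.1", "1.2.1", "Strong", "Mastery of 1.2.1 depends on 1.1.1."),
    Edge("1.1.2", "2.1.1", "Weak",   "Helpful background, not strictly required."),
    Edge("1.2.1", "2.1.1", "Strong", "2.1.1 builds directly on 1.2.1."),
]

def prerequisites(node, minimum="Weak"):
    """List what should be mastered before `node`; minimum='Strong' keeps only
    the edges a personalized learning path must respect."""
    keep = {"Weak": {"Weak", "Strong"}, "Strong": {"Strong"}}[minimum]
    return [e.prerequisite for e in graph if e.postrequisite == node and e.strength in keep]

print(prerequisites("2.1.1"))             # ['1.1.2', '1.2.1']
print(prerequisites("2.1.1", "Strong"))   # ['1.2.1']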

The Palladio experiment does indeed enable reflective practice; however, it is missing the power of nodes. What might this activity look like if we added Strong and Weak dimensions with clearly defined categories for justifying one over the other link, and what if we left in the remaining 17 songs to visualize which are orphaned? We might get a closer glimpse into rational decision making about the historical record (or trash bin) if we justify links between nodes by indicating dimensions like key signature, tempo, time signature, and type (classical, jazz, rock, blues, percussion, string, orchestral, instrumental).

 

Systems Innovation. (2015, April 18). Graph Theory Overview. Retrieved from https://youtu.be/82zlRaRUsaY

Systems Innovation. (2015, April 10). Network Connections [Video file]. Retrieved from https://youtu.be/2iViaEAytxw

Task 08: Golden Record Curation

The archive of ten songs, or parts of songs, from the Voyager record that I chose to keep, while other artifacts were erased in favour of their salvation, is the following:

  1. Java, court gamelan, “Kinds of Flowers,” recorded by Robert Brown. 4:43
  2. Senegal, percussion, recorded by Charles Duvelle. 2:08
  3. “Johnny B. Goode,” written and performed by Chuck Berry. 2:38
  4. Bach, “Gavotte en rondeaux” from the Partita No. 3 in E major for Violin, performed by Arthur Grumiaux. 2:55
  5. Peru, panpipes and drum, collected by Casa de la Cultura, Lima. 0:52
  6. Azerbaijan S.S.R., bagpipes, recorded by Radio Moscow. 2:30
  7. Holborne, Paueans, Galliards, Almains and Other Short Aeirs, “The Fairie Round,” performed by David Munrow and the Early Music Consort of London. 1:17
  8. India, raga, “Jaat Kahan Ho,” sung by Surshri Kesar Bai Kerkar. 3:30
  9. “Dark Was the Night,” written and performed by Blind Willie Johnson. 3:15
  10. Beethoven, String Quartet No. 13 in B flat, Opus 130, Cavatina, performed by Budapest String Quartet. 6:37

Their unique survival advantage in this scenario was my interest in demonstrating how connected they are in the greater human context, even when there is very little to no actual connection among them locally or temporally. The songs fall into one or more of three categories:

  1. Musical drone and repetition
  2. Interesting changes in tempo
  3. Representation of beginnings/greetings

One example of cross-categorical connections is how the morning raga from India is a greeting of the day with similar drone and tonality to the Peruvian panpipes, Azerbaijan bagpipes, percussion from Senegal, and the gamelan music from Java. Ragas and gamelan music change rhythm and tempo in ways dissimilar to western classical music, but not so dissimilar to the drone-like music of bagpipes or percussive music elsewhere. We did not agree on the types of categories that might be used to draw strong connections between the musical selections of peers, which will be a limiting factor in the networking exercise. I think that we will see interesting, though weak, connections, and I will demonstrate how this may be so in Task 9.

The rationale for salvation may be personal, and maybe the inscriptions in gold will linger on for many generations. But as with a digital copy, Smith (1999) interestingly explains that while the record will have value, it is not actually preservation. The durability of the medium of preservation is important, as is the means to read it. Solar flares and dust particles may wear away the record and the instructions for its use. Smith (2017) rightly remarks that plans for data storage are under immense pressure, and we need to figure out what users want and how to stabilize that in a platform that will preserve it.

Five songs were more problematic to cut because I could readily fit them into one or more of the three categories; however, they all distort one or more of the messages conveyed by each category….

  1. Stravinsky, Rite of Spring, Sacrificial Dance, Columbia Symphony Orchestra, Igor Stravinsky, conductor. 4:35 [This anxiety-driving conundrum does not match the cadence, tone, and message of the top ten.]
  2. Bach, The Well-Tempered Clavier, Book 2, Prelude and Fugue in C, No.1. Glenn Gould, piano. 4:48 [Because a Fugue pulses repetitively, it belongs in SPACE!]
  3. Bulgaria, “Izlel je Delyo Hagdutin,” sung by Valya Balkanska. 4:59 [It is reminiscent of the saxophone solos in Baker Street by Gerry Rafferty, which for popular similarity could have been retained.]
  4. China, ch'in, "Flowing Streams," performed by Kuan P'ing-hu. 7:37 [Similar to the Azerbaijan bagpipes and the raga from India, I would like to have preserved this one because it is similar to ancient Greek kithara string music and represents a dynamic lyric cultural context; there is something to it that should live on.]
  5. “Melancholy Blues,” performed by Louis Armstrong and his Hot Seven. 3:05 [Felt bad about this one. Ain’t no cure for the Summertime blues.]

…. and so they were all put down in the end!

 

Smith, A. (1999). Why digitize? Retrieved June 15, 2019, from Council on Library and Information Resources website: https://www.clir.org/pubs/reports/pub80-smith/pub80-2/

Smith, A. (2017). “Digital Memory: What Can We Afford to Lose?” Brown University. Retrieved from https://youtu.be/FBrahqg9ZMc