Please enjoy my final project for ETEC 540! In this short video I examine the history… and the future of the Portable Document Format (PDF) file type.
Archives: November 2019
Task 12: Speculative Futures
This week we are asked to create two different speculative narratives that demonstrate what education, technology, and literacy will be like in 30 years. As we also have freedom to select the medium of the narrative, I have selected one that I think is very fitting for what the future looks like: advertisements. If there is one thing I do not think will disappear easily, it is social media and the advertising that runs on it. We may change how we use social media, but given that it is such a significant element of the marketing, current-events, and information streams, and generates huge revenue through advertisements, I think it will remain present in our lives for at least the next 30 years.
My two narratives take the form of digital advertisements for a university, much like what we may see on LinkedIn or Facebook. For each narrative, I have created two advertisements.
Narrative 1
In this first narrative, the advertisements appeal to how technology can be used to increase personal effectiveness in the service of social good. While there is the possibility that we will not recognize society 30 years from now because technology is changing so rapidly, I personally think the changes will be more gradual, much like the last 30 years have been. As such, these ads may seem familiar, with content much like what advertisements use today.
While technological change may be gradual, what I do think will happen is that we will become more sophisticated in our use of technology. Instead of seeing technology and self as separate, I think we will look for ways to become more effective professionally, socially, and personally. Similarly, environmental concerns are very present now, and this will continue into the future. The literacies of education will shift to include more environmental and psychological aspects. Education will be less about preparing for a specific job and more about learning how to learn, adapt, and use your own talents.
Narrative 2
In the second narrative, I focus on another aspect of the future of technology: robotics, automation, and artificial intelligence. We are already seeing these technologies integrated into many industries. Oftentimes they are approached with fear, as in some industries they have the potential to eliminate jobs. Playing with this notion, I thought about what professional and career trajectories and education programs may look like in 30 years. If working with robots becomes a reality for many people, what business skills might career colleges advertise?
In a world that normalizes working with robots, there would need to be some sort of robot literacy. One would need to be able to communicate with robots, be it through computer programs, a particular verbal or visual code, or some other operational channel. Working with and managing robots would then become a skillset that businesses look for, and career colleges would appeal to this demand.
While these two narratives are different, I do not think they are necessarily contradictory. Rather, they could very well occur at the same time and at the same school, as they outline different focuses in the education of the future. A great example is the advertisements you see for our local colleges: sometimes they focus on the social or personal-development aspects of education, while others focus on tangible, career-focused education. I did this intentionally, so as not to paint a picture of a one-dimensional education system in the future. If anything, education will become more diverse, and we will have more fields of inquiry to contend with. Additionally, as the medium is advertisement, I certainly think ads will become even more dynamic and tailored to user preferences in the future. In this way, simultaneous, and sometimes even competing, narratives will exist for most businesses.
Task 11: Predictive Text
For this week, we are asked to develop a microblog using a sentence starter. In exploring predictive text apps online, I discovered https://botnik.org/apps/writer/. This website lets you generate predictive text based on different text sets; for example, you can generate predictive text in the voice of John Keats. You can also upload a text file to generate your own voice. So, in the spirit of examining predictive text, I have generated several microblogs to examine the differences.
John Keats “voice” microblog:
My idea of technology is wither’d from a wayward midnight palely draught with pleasant fans no heat in language.
Bob Marley lyric microblog:
Every time I think about our future takin yoy youths want every chorus be jammin right out.
Beauty advertisement microblog:
Education is not about diapers your skin will feel perfect with vitamin enriched curves
Jeff Bezos quotes microblog:
As a society, we are now choosing creativity more than any other cause on amazon.com
Generated from the text of my task submissions 1-10 (aka my voice):
This is not my idea of the algorithms that most things really need to communicate with our dog
This experiment demonstrates that the text sources predictive text algorithms use matter! As you can see in the microblogs, as the text source changes, so too do the predictive text options. While the John Keats voice is eloquent, moving to the beauty advertisement led to a nonsensical microblog. I will admit that I definitely had fun generating the microblogs… and I may have been attracted to the most ridiculous option in the predictive texts, so they might be slightly skewed. However, this also has to do with the options themselves. Some of the generators were so far from how I think and speak normally that I felt I might as well just pick something at random. For me, the beauty advertisement and Bob Marley texts were the most difficult, while in the John Keats and Jeff Bezos texts it seemed easier to find a voice. While I would love to think this is because I am both eloquent and an intelligent business mind, the more probable answer is cultural influence, as it seems to be the common thread that binds me to the two figures. Both Keats and Bezos have influenced contemporary culture in the West in different ways, so there is some familiarity with their texts.
Interestingly enough, I do not feel the result of the generator using my own texts is completely my voice. They are all words I use fairly regularly, yes, but the actual result is not something I would ever say. I played around further with the tool, trying to generate something I would say or write, but it was always off in some way.
Why is it that these generated statements feel awkward, while predictive text on my smartphone is often accurate?
The most obvious answer is the algorithms: they could simply be different. It is likely that the predictive text on my Samsung is more sophisticated than this free web application. However, I think blaming the algorithm might be too simplistic an answer. After all, an algorithm is just math. Anyone who enjoys the arts, language, or music will hopefully agree that it is more than just math, or the sticking together of different elements, that makes great works great.
One of the major differences is the quality of text. When you are texting and the predictive text seems accurate, you are likely communicating short statements like “I’m running late”. If you are engaging in a philosophical debate via text, it becomes less accurate (I know, as I do this often). The statements we used to start the predictive text require a deeper engagement with language and ideas, so it is unlikely predictive text will produce the quality of language needed.
This is very similar to the Crime Story podcasts we listened to this week. The machine directed police officers to target specific activities and people to pad numbers in a particular way. While the summons and arrest numbers went up, the quality of police activity was suspect. The machine only looked at ‘how many’, not ‘why’. Similarly, predictive text algorithms look for frequent word combinations to present options, but they do not read the content of the text.
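To illustrate what I mean by “frequent word combinations”, here is a minimal sketch of frequency-based next-word prediction in Python. It assumes a plain-text corpus file named corpus.txt (a name I made up), and real keyboards use far more sophisticated models, but the core idea is the same: count which words most often follow which, then suggest the top few.

```python
# A minimal sketch of bigram-based next-word prediction.
# corpus.txt is a hypothetical plain-text file: Keats poems,
# Marley lyrics, or your own blog posts.
from collections import Counter, defaultdict

def build_bigram_model(text):
    """Map each word to a Counter of the words that follow it."""
    words = text.lower().split()
    model = defaultdict(Counter)
    for current_word, next_word in zip(words, words[1:]):
        model[current_word][next_word] += 1
    return model

def suggest(model, word, n=3):
    """Return the n words most frequently seen after `word`."""
    return [w for w, _ in model[word.lower()].most_common(n)]

with open("corpus.txt") as f:
    model = build_bigram_model(f.read())

print(suggest(model, "education"))  # output depends on the corpus, e.g. ['is', 'will', 'and']
```

Nothing in this sketch reads for meaning: swap the corpus from Keats to a beauty advertisement and the counts change, so the suggestions change, which is exactly the source-dependence the microblogs above demonstrate.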
This becomes problematic when we start to investigate what algorithms include and exclude. O’Neil and the Age of the Algorithm podcast offer a few great examples of this. I also attended a talk by Meredith Broussard this summer. As a data journalist, she investigates the ways artificial intelligence and algorithms can go wrong. One of the examples she gives is how some automated soap dispensers do not work for people with darker skin tones. Her argument is that the lack of diversity in the tech sector creates these blind spots in technology. What’s worse, instead of improving the technology to be more inclusive, inventions are pushed out in the spirit of innovation and few ever circle back to fix the blind spots.
Task 10: User Inyerface
So this week, our task is to play the User Inyerface game. Above is a screenshot of my completion (with a horrible time).
So that was irritating!
For my first attempt, I tried the game on my smartphone. It is next to impossible to get past even the first screen on a smartphone! The chat window frequently dominates the screen, such that you only get a few seconds between it and the timer popup. I eventually had to give up on the second screen, as the upload link did not open any file-selection option.
While it was easier to finish on a larger screen, it still took me forever. As I had figured out the main tricks of the first and second screens on my smartphone, it was the captcha that tripped me up.
I wish I could say these experiences were unique to this game, but these are dirty tricks we put up with on a daily basis, albeit in lower doses. The game reminded me of browsing the web 10 or 20 years ago, when there were no tools like Wix or Weebly and anyone who wanted a website had to write raw HTML. The results were so mixed that when you visited a new site, you would need to spend some time orienting yourself to its design. Now the standard is to create intuitive, user-friendly websites, so the experience is less User Inyerface-like and the annoyance is subtle. Unless you work in my organization… then every department wants to be different, leading to the most confusing internal website ever!
Swinging back to the attention economy, it is pretty scary that most of us will fill out internet forms almost without thinking. Each field we fill out creates data points, which strengthen the segments advertisers and researchers can use to manipulate our behaviour. Even if you do not fill out forms like this, they can still get information about you through your friends: almost any time you grant an application access to your profile, it gains access to information about your connections.
When we talk about literacy, I think this is one area where we are behind. Data, security, digital, and even design concepts are all becoming so important that they should be included in literacy training. In my teaching context as an instructional designer for a non-profit, more and more of my time is spent designing courses on data and digital literacy topics, as a lack of knowledge in these areas poses a substantial risk to the organization. Most studies show that over 90% of data and security breaches are a result of employee error. And as best practices in these areas change so quickly, one really needs an understanding of network and data architecture to be critical of new practices and make smart technical decisions.
Another question I think about a lot is whether advertising and internet data use should be regulated. I can understand both sides of the argument, but I am starting to lean towards pro-regulation. The only people who seem to be getting any value out of advertisements and internet data are businesses; to the rest of us, they are just background annoyances we put up with.
Task 9: Network of Texts
Above is a screenshot of the visualization of the network created by the class’s Golden Record selections. The colours indicate communities of selections.
My selections can be found in the red community. There are two other ‘reds’ in the class. Interestingly, the criteria the other ‘reds’ used to make their choices were very different from mine. Both indicated an interest in representing the diversity and musicality found around the world. One of the other ‘reds’ even indicated that they perceived the list as inclusive of cultures around the world, the opposite of the Eurocentric, male-driven list I perceived. My main objective in making my selections was to attempt to balance the bias in the list. It is so interesting how our criteria were different, even opposite, yet our selections were similar.
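This makes sense once you notice that the community colouring is purely structural: it groups us by overlapping selections, not by the reasoning behind them. Below is a minimal sketch of how such communities can be computed, assuming the networkx library and invented student and track names.

```python
# A minimal sketch of community detection on a selection network,
# using networkx; the student and track names here are invented.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# An edge links each student to each Golden Record track they chose.
selections = {
    "student_a": ["Track 3", "Track 7", "Track 12"],
    "student_b": ["Track 3", "Track 7", "Track 18"],   # overlaps with a
    "student_c": ["Track 1", "Track 24", "Track 25"],  # no overlap
}
G = nx.Graph()
for student, tracks in selections.items():
    G.add_edges_from((student, track) for track in tracks)

# Modularity-based detection groups nodes that share many edges, so
# students with overlapping selections land in the same community.
for i, community in enumerate(greedy_modularity_communities(G)):
    print(f"community {i}: {sorted(community)}")
```

The algorithm never sees why a track was chosen, only that it was chosen, which is exactly why opposite criteria can still land in the same colour.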
When I think about what this means, I am reminded of how important it is to balance quantitative information with qualitative information. If someone unfamiliar with our criteria were making decisions based purely on the visualization, they would be prone to misinterpretation and error. In my teaching context, I see this a lot. As I work in a corporate environment, there is almost an over-reliance on data and visualizations. We spend hours each month preparing different ways to visually depict progress and whether expectations are being met. In some cases, I think this leads to doing only that which is measurable, instead of pursuing objectives that might lead to real growth. For example, in corporate training, some of the popular metrics are training hours, number of courses or events, and training evaluation scores. Each of these metrics has its own merits, but none represents the full scope of what a trainer or instructional designer actually does, or how the work connects to broader organizational goals. While visualizations can be handy for interpreting connections between data, particularly in novel ways, in practice the way they are used is quite arbitrary, and they are only shared when they make the objective look good.
Moving this back to networks…
I think the way we develop networks and connections between most things on the web is problematic. Items that have more connections end up valued over items that have fewer connections. As we are generating more and more data each day, this puts us in a precarious situation where we could bury important cultural artefacts deep in a web of near-nothingness. To think of this in non-internet-era terms, imagine if connections to others were the basis of selection for literature or philosophy. There is a good chance we would not have the works of Emily Dickinson (a recluse) or Jean-Jacques Rousseau (who made an enemy of nearly everyone) today.
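To make this concrete, here is a minimal sketch of pure connection-count ranking, again assuming networkx and an invented link graph; real search engines use more elaborate measures such as PageRank, but degree alone makes the point.

```python
# A minimal sketch of ranking items purely by connection count,
# using networkx on an invented link graph.
import networkx as nx

G = nx.Graph()
# A heavily linked item accumulates 50 connections...
G.add_edges_from(("popular_work", f"site_{i}") for i in range(50))
# ...while a reclusive author's work gets a single link.
G.add_edge("dickinson_poems", "site_0")

# Sorting by degree buries the sparsely connected work at the bottom,
# regardless of its cultural value.
ranking = sorted(G.nodes, key=G.degree, reverse=True)
print(ranking[0])   # popular_work
print(ranking[-1])  # dickinson_poems
```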
To me this raises many ethical and political questions. What is the best way to rank items on the web? Who decides this? Right now, most of these decisions are being made by for-profit tech companies; should our governments regulate this?
And even if we could get past those ethical questions, more pop up when we examine the algorithms themselves. Many theorists are quick to point out how much of our algorithms and artificial intelligence is biased. I had the pleasure of attending a keynote delivered by Meredith Broussard this summer. She is a data journalist who has done extensive research in the areas of race, gender, and artificial intelligence. In her speech, she emphasized how most of the algorithms and machine learning used today can still be connected to a small number of white, middle-class, Ivy League-educated men. The lack of diversity in technology design creates blind spots where groups of people are excluded or forgotten in new technology. Similarly, I think we can connect this back to the concept of networks: works and items by the dominant class are likely to have more connections and therefore more value in the network. As the digital divide is something we still struggle with, it is unlikely that we will see a balanced web anytime soon.