Above is a screenshot visualization of the network created by the class’s Golden Record selections. The colours indicate communities of selections.
My selections can be found in the red community. There are two other ‘reds’ in the class. Interestingly, the criteria the other ‘reds’ used to select their choices were very different from mine. Both indicated an interest in representing the diversity and musicality that can be found around the world. One of the other ‘reds’ even indicated that they perceived the list as inclusive of cultures around the world, which is the opposite of the Eurocentric and male-driven list I perceived. My main objective in making my selections was to attempt to balance the bias in the list. It is so interesting how our criteria were different, even opposite, yet our selections were similar.
When I think about what this means, I am reminded of how important it is to balance quantitative information with qualitative information. If someone unfamiliar with our criteria were making decisions based purely on the visualization, they would be prone to misinterpretation and error. In my teaching context, I see this a lot. As I work in a corporate environment, there is almost an over-reliance on data and visualizations. We spend hours each month preparing different ways to visually depict progress and whether we are meeting expectations. In some cases, I think this leads to doing only that which is measurable, instead of pursuing objectives that might lead to real growth. For example, in corporate training, some of the popular metrics are training hours, number of courses/events, or training evaluation scores. Each of these metrics has its own merits, but none represents the full scope of what a trainer or instructional designer actually does, or how the work connects to broader organizational goals. While visualizations can be handy for interpreting connections between data, particularly in a novel way, in practice they are often used quite arbitrarily and shared only when they make the objective look good.
Moving this back to networks…
I think the way we develop networks and connections between most things on the web is problematic. Items that have more connections end up valued over items that have fewer connections. As we generate more and more data each day, we are in a precarious situation where important cultural artefacts could be buried deep within a web of near-nothingness. To think of this in non-internet-era terms, imagine if connections to others were the basis of selection for literature or philosophy. There is a good chance we would not have the works of Emily Dickinson (a recluse) or Jean-Jacques Rousseau (who made an enemy of nearly everyone) today.
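To make that concrete, here is a minimal sketch (in Python, using an invented toy graph rather than data from our class network) of how ranking purely by connection count behaves: whatever is linked to most rises to the top, and anything unlinked effectively disappears.

```python
from collections import Counter

# A toy "web" of items and the links pointing to them.
# The item names and link counts are invented purely for illustration.
links = [
    ("blog_A", "viral_meme"),
    ("blog_B", "viral_meme"),
    ("news_site", "viral_meme"),
    ("forum_post", "viral_meme"),
    ("blog_A", "dickinson_poems"),  # a single, lonely link to an overlooked work
]

# Rank items purely by how many connections point at them.
incoming = Counter(target for _, target in links)

for item, count in incoming.most_common():
    print(f"{item}: {count} connection(s)")

# Prints:
#   viral_meme: 4 connection(s)
#   dickinson_poems: 1 connection(s)
# Anything with zero connections never appears in the ranking at all,
# which is how an artefact can sink into "near-nothingness".
```

This is obviously a simplification of how real ranking systems work, but the underlying incentive is the same: connections become the proxy for value.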
To me this raises many ethical and political questions. What is the best way to rank items on the web? Who decides this? Right now, most of these decisions are being made by for-profit tech companies. Should our governments regulate this?
And even if we could get past those ethical questions, more pop up when we examine the algorithms. Many theorists are quick to point out how biased our algorithms and artificial intelligence are. I had the pleasure of attending a keynote delivered by Meredith Broussard this summer. She is a data journalist who has done extensive research in the area of race, gender, and artificial intelligence. In her speech, she emphasized how most of the algorithms and machine learning used today can still be traced back to a small number of white, middle-class, Ivy League-educated men. The lack of diversity in technology design creates blind spots where groups of people are excluded or forgotten in new technology. Similarly, I think we can connect this back to the concept of networks: works and items produced by the dominant class are likely to have more connections and therefore more value in the network. As the digital divide is something we still struggle with, it is unlikely that we will see a web that is balanced anytime soon.