At first glance, I would argue that the visualizations are almost useless for any broader analysis of the communities they generated; raw numerical data aside, there is no justification, no reasoning, behind the initial choices expressed in the data set from which these visualizations are derived. For all intents and purposes, the choices made in the initial quiz could have been made at random by the users. Without the justifications, or some level of reasoning behind the choices, the only information we can extract from these visualizations is the choices individuals made and the correlations between the choices of individuals with similar or identical selections. Speaking from my own choices: I selected music that represented as wide a range of instrumentations and vocalizations as possible, despite greatly disliking some of the songs I chose. By grouping me with individuals who made the same selections for different reasons, the visualizations become skewed and possibly even misleading.
A similar issue appears, perhaps surprisingly, in policing practices: the use of uninterpreted, misleading, or outright biased data leads to ill-informed processes and frameworks being built on incorrect or misconstrued foundations. If this data is fed to an algorithm, to suggest music, for example, or, in the policing case, to allocate resources to a specific region, it does not matter that the input is biased; we continue down a feedback loop of self-fulfilling predictions that discriminates against more and more communities, because the data driving the system is produced by the system itself. If the algorithm states that a region is more likely to be crime-ridden, more police are dispatched to that area, which leads to more arrests, which leads to more data “confirming” that the area is indeed crime-prone, so more officers are sent there, and the cycle continues to the point of collapse. In this situation, “the algorithm is presented as a new actor in these forms and relations of power” (Neyland, 2019, p. 7).

The same applies to the groupings of the musical choices. If an algorithm sees that I made the choices I did, regardless of my justification, it will continue to funnel similar music into my feed, further constricting what I am exposed to, limiting my future choices, and creating an inwardly spiraling, restrictive outcome. Alternatively, if it groups individuals by their choices within the same data set, it will group me with individuals with whom I potentially have nothing in common beyond the fact that we both thought one of the same songs belonged on the curated list; my justification, that the song is a good representation of percussion instrumentation, may be completely different from theirs, which might simply be a deep love of Mozart. The data that is missing from this visualization or quiz is just as important as the data that is displayed; the reasoning behind these choices, and the factors surrounding them, are what inform the connections between the choices themselves.
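To make that feedback loop concrete, here is a rough sketch in Python. Everything in it is hypothetical (the regions, the rates, the allocation rule are all invented for illustration), but it reproduces the dynamic described above: patrols follow predictions, and the predictions are then retrained on arrests that could only have been recorded where the patrols were sent.

```python
# A rough, hypothetical sketch of the policing feedback loop described above.
# The regions, rates, and allocation rule are invented; only the dynamic
# matters: patrols follow predictions, and predictions are retrained on
# arrests that could only be recorded where patrols were present.

true_rate = {"A": 0.5, "B": 0.5}    # identical underlying crime rates
risk_score = {"A": 0.6, "B": 0.4}   # a small initial bias in the predictions
recorded = {"A": 0.0, "B": 0.0}     # cumulative recorded arrests
TOTAL_PATROLS = 100

for step in range(5):
    total_risk = sum(risk_score.values())
    for region in true_rate:
        # patrols are allocated in proportion to predicted risk...
        patrols = TOTAL_PATROLS * risk_score[region] / total_risk
        # ...and arrests can only be recorded where patrols are sent
        recorded[region] += patrols * true_rate[region]
    # the system "learns" from its own output: recorded arrests become risk
    risk_score = dict(recorded)
    print(f"step {step}: recorded arrests A={recorded['A']:.0f}, "
          f"B={recorded['B']:.0f}")
```

Even though the two regions are identical on the ground, the gap in the recorded “evidence” widens at every step, and the data appears to confirm, ever more strongly, nothing but the initial bias.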
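The grouping problem can be sketched just as simply. Again, the users, the song, and the reasons below are invented; what matters is that the justification never enters the grouping key.

```python
# A hypothetical sketch of grouping listeners purely by overlapping picks.
# The names, song, and reasons are invented; the point is that the "reason"
# field never enters the grouping, so unrelated listeners form one community.

from collections import defaultdict

quiz_responses = [
    {"user": "me",       "pick": "Mozart piece",
     "reason": "a good representation of percussion instrumentation"},
    {"user": "stranger", "pick": "Mozart piece",
     "reason": "really loves Mozart"},
]

groups = defaultdict(list)
for response in quiz_responses:
    # the grouping key is the pick alone; the justification is discarded
    groups[response["pick"]].append(response["user"])

print(dict(groups))
# {'Mozart piece': ['me', 'stranger']}
```

A recommender built on these groups would treat the two of us as interchangeable tastes, even though the identical selection was made for entirely different reasons.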