
Polarization and profits (#engageMOOC)

For topic 2 of #engageMOOC (Engagement in a Time of Polarization), we read an article by Chris Gilliard called “Power, Polarization and Tech,” and Chris was also part of a live conversation for the course (I couldn’t join it live but watched the video recording). We also watched a couple of videos, and some of what Zeynep Tufekci had to say in a TED talk from September 2017 really stood out to me. Here I’m going to present some somewhat random reflections on both of these–things that really made me think.

Gilliard

A few others and I annotated his article “Power, Polarization and Tech” through hypothes.is. I noted there that, while I’m embarrassed to admit it, I hadn’t fully grasped how social media, and perhaps other parts of the web that make money by keeping our attention, are designed in ways that increase polarization. “Polarization is by design, for profit,” Gilliard notes, because it keeps our attention on the platforms that drive it (I mentioned this in my previous post for #engageMOOC as well).

It’s not just that Facebook and Twitter (for example) attract people who get enraged and abuse each other, nor that they don’t do enough to stop abuse (though they don’t); it’s also that people getting angry and outraged, and posting about the latest horrible thing the other side did, is what these platforms require in order to remain financially viable…it’s what makes them tick. Polarization is built into their profit model, so it’s not going to go away. At least not so long as those who create and run the platforms make their money through our attention and our data.

In the recording of the live discussion with Chris for the course, he points out how many of the apps and social platforms we use suck up our data in ways we don’t realize, and do things with it that we don’t know about. He also notes that when you update apps, you should re-do your privacy settings, which I hadn’t thought about before. The problem here is not just “do you have anything to hide”; it’s also that you have lost agency if you don’t know what’s happening. You can read the Terms of Service, of course, but they are often vague and don’t really tell you what is happening with your data. And your data can end up, through being sold to others, affecting what kind of insurance you’re able to get (for example). Again, the issue here is partly about agency, about being in control, and we’re losing that with regard to our data.

Which is why, in my previous post, I wondered if one way to help address this issue would be to rethink how we engage on social media and in other apps. We have gotten used to the idea that the web is free (free of monetary cost, that is), and so all of these wonderful free services seem like just the way things should be. But of course we are paying in other ways, and not just with our data; we are paying with divides between people, built on the outrage that is part of the bread and butter of our free services. And as we’ve been hearing lately, it’s all too easy for people to create bots that stir up that outrage for political (or other) gains.

I have started to make a point of finding online apps and platforms I think are useful and paying for them. Partly this is to support those who I think are providing good things in the world, and partly because I think this is one small way forward: if the people who create such things can make money in other ways, there will be less need for us to pay in data and attention (at least, I hope so). I realize I’m privileged in this regard; not everyone can pay for such things. And in the live discussion, Gilliard notes the limitations of individual actions–just because I take shorter showers doesn’t mean things are going to change. I agree that bigger efforts at a larger structural level are required too. But smaller efforts aimed at what one wants to see are at least something (and Gilliard notes they aren’t a problem, just usually not enough).

That’s one of the many reasons I prefer Mastodon to Twitter: I pay with money, not my data. And there are actually enforced rules against abuse (and a specific no-Nazi policy, as the instance I’m on is based in Germany). There’s no emphasis on the “freedom of speech is always good and we just need more of it to drown out the Nazis” kind of rhetoric on the instance I’ve joined. Find me at clhendricksbc@mastodon.social. I’m also at chendricks@scholar.social, but I post less there.


Tufekci

[Image: Zeynep Tufekci, photo by Bengt Oberger, licensed CC BY-SA 4.0 on Wikimedia Commons]

I found her September 2017 TED talk quite powerful. I don’t have a lot of time, so I’ll just mention one or two things in particular. Tufekci was talking about machine-learning algorithms and how the mountains of data we provide through our interactions with platforms and apps can lead to personalization of content. Some of it seems innocuous, like when you look at a product online and then ads for that product follow you around in other apps and platforms. Some of it even seems beneficial, like getting discounts on something you want, such as tickets to Vegas. But it can be dangerous too: the algorithms may figure out that the people most likely to buy tickets to Vegas are those addicted to gambling, and since the algorithms have no ethics, they will target exactly those people. Further, they can serve you more and more of worse and worse content once you start watching, say, something a little bit fringe or violent on YouTube–the suggestions on the right are poised to take you further and further down that path (which, as a parent of a pre-teen boy, I really paid attention to).

One thing that hit me in particular was that the personalization these algorithms can do can lead to us getting different content in our social and news feeds–that’s not news to me, but Tufekci pointed out something I hadn’t really focused on before: “As a public and as citizens, we no longer know if we’re seeing the same information or what anybody else is seeing, and without a common basis of information, little by little, public debate is becoming impossible ….”

If the algorithms are showing us different news stories (e.g. on Facebook) and posts from people with very different political leanings (because the algorithms think you will like one kind of post and I another, and we don’t see the other posts even when we follow the same people), then no wonder we end up unable to have effective public discussions.

I guess I have always held hope in the idea that people who genuinely want to come together and find solutions will do so. There are many people who really want to consider various sides carefully, who want to listen to the “other side” and consider whether there is anything there they should be paying attention to. But people like that are going to have a really hard time coming together if they don’t even have a shared basis of information, or if the “other side” they see has been filtered through lenses that demonize it, because that is what the algorithms think will keep their attention.


Awareness as a first step?

This is all very depressing, and all I can hope right now is that helping people see what is going on will encourage us to change the structures that continue to support it. Gilliard talks about looking at the EU as a start, where privacy regulations around data collection by companies like Google and Facebook are much more stringent than in the US. It may take governmental regulation to help us move in the right direction. But it’s also going to take awareness on many people’s part to even see the problem.