Category Archives: engageMOOC

Polarization and profits (#engageMOOC)

For topic 2 of #engageMOOC (Engagement in a Time of Polarization), we read an article by Chris Gilliard called “Power, Polarization and Tech,” and Chris was also part of a live conversation for the course (I couldn’t join it live but watched the video recording). We also watched a couple of videos, and some of what Zeynep Tufekci had to say in a TED talk from September 2017 really stood out to me. Here I’m going to present some somewhat random reflections on both of these–things that really made me think.

Gilliard

I and a few others engaged in some annotations on his article “Power, Polarization and Tech” through hypothes.is. I noted there that, while I’m embarrassed to admit it, I hadn’t really fully grasped how social media, and perhaps other aspects of the web based on making money through keeping our attention, are designed in order to increase polarization. “Polarization is by design, for profit,” Gilliard notes, because it keeps our attention on the platforms that drive it (I mentioned this in my previous post for #engageMOOC as well).

It’s not just that Facebook and Twitter (for example) attract people who get enraged and abuse each other, nor that they don’t do enough to stop abuse (though they don’t); it’s also that people getting angry and outraged and posting about the latest horrible thing the other side did is what these platforms require in order to remain financially viable…it’s what makes them tick. It’s built into their profit model, so it’s not going to go away–at least not so long as those who create and run the platforms make their money through our attention and our data.

In the recording of the live discussion with Chris for the course, he points out how many of the apps and social platforms we use suck up our data in ways we don’t realize, and do things with it that we don’t know about. He noted that when you update apps, you should re-check your privacy settings, which I hadn’t thought about before. The problem with this is not just “do you have anything to hide” but also that you have lost agency if you don’t know what’s happening. You can read the Terms of Service, of course, but they are often vague and don’t really tell you what is happening with your data. And your data, once sold to others, can end up affecting what kind of insurance you’re able to get (for example). Again, the issue here is in part about agency, about being in control, and we’re losing that with regard to our data.

Which is why, in my previous post, I wondered if one way to help address this issue would be to rethink how we engage on social media and in other apps. We have gotten used to the idea that the web is free (of cost, in the sense of money) and so all of these wonderful free services seem like just the way things should be. But of course we are paying in other ways, and not just with our data; we are paying with divides between people built on outrage, which is part of the bread and butter of our free services. And as we’ve been hearing lately, it’s all too easy for people to create bots that will stir up that outrage for political (or other) gains.

I have started to make a point of finding online apps and platforms I think are useful and paying for them. Partly this is to support those who I think are providing good things in the world, and partly because I think that this is one small way forward: if the people who create such things can make money in other ways, there will be less need for us to pay in data and attention (at least, I hope so). I realize I’m privileged in this regard; not everyone can pay for such things. And Gilliard notes in the live discussion the limitations of individual actions–just because I take shorter showers doesn’t mean things are going to change. I agree that bigger efforts on a larger structural level are required too. But smaller efforts aiming towards what one wants to see are at least something (and Gilliard notes they aren’t a problem, just not enough usually).

That’s one of the many reasons I prefer Mastodon to Twitter: I pay with money, not my data. And there are actually enforced rules against abuse (and a specific no-Nazi policy, as the instance I’m on is based in Germany). No emphasis on “freedom of speech is always good and we just need more of it to drown out the Nazis” kind of rhetoric on the instance I’ve joined. Find me at clhendricksbc@mastodon.social. I’m also at chendricks@scholar.social, but I post less there.

 

Tufekci

Zeynep Tufekci, by Bengt Oberger, licensed CC BY-SA 4.0 on Wikimedia Commons

I really found her September 2017 TED talk quite powerful. I don’t have a lot of time so I’ll just mention one or two things in particular. Tufekci was talking about machine-learning algorithms and how the mountains of data being collected about us through our interactions with platforms and apps can lead to personalization of content. Some of it seems innocuous, like when you look at some product online and then ads for that product follow you around in other apps and platforms. Some of it even seems beneficial, like how you might get discounts on something you want, like tickets to Vegas. But it can be dangerous too, because the algorithms may figure out that the people most likely to buy tickets to Vegas are those addicted to gambling, and since algorithms have no ethics, they will target such people. And further, they can work to feed you more and more of worse and worse content once you start, e.g., watching something a little bit fringe or violent on YouTube–the suggestions on the right are poised to take you further and further down that path (which, as a parent of a pre-teen boy, I really paid attention to).

One thing that hit me in particular was that the personalization these algorithms can do can lead to us getting different content in our social and news feeds–that’s not news to me, but Tufekci pointed out something I hadn’t really focused on before: “As a public and as citizens, we no longer know if we’re seeing the same information or what anybody else is seeing, and without a common basis of information, little by little, public debate is becoming impossible ….”

If the algorithms are showing us different news stories (e.g. on Facebook) and posts from different people with very different political leanings (because they think you will like one kind of post and I another, and we don’t see the other posts even when we’re following the same people), then no wonder we end up unable to have effective public discussions.

I guess I have always held hope in the idea that people who genuinely want to come together and find solutions will do so. There are many people who really want to consider various sides carefully, who want to listen to the “other side” and consider whether there is anything there they should be paying attention to. But people like that are going to have a really hard time coming together if they don’t even have a shared basis of information, or if the “other side” they see is interpreted through lenses that demonize them, because that is what the algorithms think will keep your attention.

 

Awareness as a first step?

This is all very depressing and all I can hope right now is that helping people see what is going on will encourage us to change the structures that continue to support it. Gilliard talks about looking at the EU as a start, where some of the privacy regulations are much more stringent than in the US as regards companies like Google and Facebook collecting data. It may take governmental regulation to help us move in the right direction. But it’s also going to take awareness on many people’s part to even see the problem.

Creating meaningful communities (#engageMOOC)

I am participating in a two-week long MOOC called “Engagement in a Time of Polarization,” hosted on EdX. It’s related to the Antigonish 2.0 project (you can read Bonnie Stewart‘s May 2017 Educause article for more: “Antigonish 2.0: A Way for Higher Ed to Save the Web”).

The course is broadly about what the title says: how to engage people together when the context around us emphasizes polarization. At least, that’s what I’m getting from the bit I’ve done of the course so far. As Chris Gilliard notes in his post “Power, Polarization, and Tech,” polarization is profitable for social media and many other parts of the web: “Polarization keys engagement, and engagement/attention are what keep us on platforms.” So how do we engage with one another, create meaningful communities, and work together to address issues we face in our local contexts, when our digital lives are shaped by moves towards polarization?

Meaningful participatory communities

Today I watched three videos in the course, from people working on different aspects of creating meaningful communities, of helping to create “participatory public engagement” (in the words of the introductory video for Topic 1 of the course). The Highlander Center in Tennessee, USA, is one group working on such efforts. According to a video interview with two people from the Center, they (among other things) bring people together to learn from each other about issues in their own communities, plan actions to address them, and carry those out. The emphasis here is on people bringing their own experiences, their knowledges and cultural practices, and learning from each other through those.

This sounded like an amazing group (they’ve been at it since the 1930s), and I couldn’t help thinking: okay, so the people involved learn and take action, but what about the next generation? And the next? How do we build something like this into just part of what it means to be living in a society (or at least, a democratic one)? I suppose one can do so through the three levels of the Antigonish movement: local groups, K-12 and higher education, and international networks.

Meaningful participatory communities on the web?

How might online platforms or other aspects of digital life contribute to such communities? Right now, often they don’t. As noted above, much of social media is based on profits that can be made through engagement, for which polarization is an important driver. What other opportunities exist?

I like to think blogs are one option here: they can encourage longer writing, longer reflections, deeper engagement. And then through comments people can connect together. But I’m finding there aren’t that many people commenting lately, whether on my blog or others’ (this is purely anecdotal!), and I wonder if social media is taking its place. Just do a quote-tweet instead and that’s your comment! And maybe doing so can connect more people (the message about the post gets amplified beyond one’s own original message on social media, e.g.). Still, social media posts, while permanent in some ways, can often be hard to find later, whereas blog posts and comments are a bit easier I think. Not to mention that social media platforms can filter what you see, or when you see it.

[Image: drawing of an elephant with an “m” and “joinmastodon.org” underneath it]

Find Mastodon logos and stickers through their press kit at http://joinmastodon.org

Another interesting option is social media that isn’t driven by profit. I posted about Mastodon on this course’s discussion board as one such option. The discussion question was about this quote and whether it is correct:

“It’s not that there’s anything particularly healthy about cyberspace in itself, but the way in which cyberspace breaks down barriers. Cyberspace makes person-to-person interaction much more likely in an already fragmented society. The thing that people need desperately is random encounter. That’s what community has.”

– John Perry Barlow, 1995, from http://www.lionsroar.com/bell-hooks-talks-to-john-perry-barlow/

I posted on the discussion board and then realized that that is not visible to people outside the course, and also that it will disappear once the course is finished in two weeks.

So I’m reproducing my post below.


There are probably never any truly random encounters in online spaces, because where you go depends on your past, your current experiences, your desires, etc., and others end up there too because of their context…but I have found that I have been able to reach out beyond my fairly small social bubble online through a new-ish social media platform, Mastodon (http://joinmastodon.org). When I joined in the fall of 2016 it was a small space with a small number of users, few of whom were from my own, already-established online circles. As soon as I made my first public post someone replied to welcome me, and for a while there was a culture of welcoming new people with a friendly reply, which I joined in as well. I got to know a number of people in those early days, some of whom are still there and some of whom have moved on. It felt like a small community of friendly folks: I didn’t meet them randomly, but it led to connections I would not have made in my usual social circles online.

Now Mastodon has gotten much bigger, but the nature of how it works still fosters small communities if one wants them. It is a federated social network, which means no single person or company owns all of it. You join an “instance,” which can be big or small, general or focused on particular interests, and run by one person or a group of people. The code is open source, and anyone with the tech know-how can spin up an instance (the helpful thing to do is to contribute to your instance host’s expenses and time with a regular donation, such as through Patreon). Each instance has its own rules and policies; many are much stricter on hate speech and harassment, for example, than what you’ll find on Twitter.

But the beauty of federation is that you can still talk with people on other instances. You can choose to see just the posts from people on your own instance, or posts from all the instances yours federates with (usually, if one person follows someone on another instance, the two instances connect to each other; but this may differ according to different instances’ rules and practices).

Not random encounters, but expanding circles: the ability to stay in a small community or branch outwards, the ability to be on an instance with policies you agree with and people you want to spend time with. And while there are disagreements and some ugliness at times, it is nowhere near the deep polarization and horror I sometimes see on places like Twitter.


I should also mention that there is even a co-op instance on Mastodon, where the members collectively run the instance, decide on policies, etc.: social.coop
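To make the federation idea above concrete, here is a toy model in Python. This is only an illustrative sketch of the behaviour described in my post (one cross-instance follow connects two instances), not Mastodon’s actual implementation, which uses the ActivityPub protocol; the class and method names are my own invention.

```python
# Toy model of federation: NOT Mastodon's real implementation,
# just a sketch of "a cross-instance follow links the instances."

class Instance:
    def __init__(self, name):
        self.name = name
        self.users = []     # accounts that live on this instance
        self.peers = set()  # instances this one federates with

class User:
    def __init__(self, handle, instance):
        self.handle = handle
        self.instance = instance
        self.following = []
        instance.users.append(self)

    def follow(self, other):
        """Following someone on another instance connects the instances."""
        self.following.append(other)
        if other.instance is not self.instance:
            self.instance.peers.add(other.instance)
            other.instance.peers.add(self.instance)

    def federated_timeline_sources(self):
        """One's own instance plus every instance it federates with."""
        return {self.instance} | self.instance.peers

# Two small instances, initially unconnected.
a = Instance("scholar.social")
b = Instance("mastodon.social")
alice = User("alice", a)
bob = User("bob", b)
assert b not in a.peers

# A single cross-instance follow is enough to link them.
alice.follow(bob)
assert b in alice.federated_timeline_sources()
```

In the real network the connection is per-account and mediated by server-to-server messages (and instance blocklists can prevent it), but the sketch captures the “expanding circles” dynamic: communities start small and link up only as their members reach outward.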

I am not claiming that Mastodon will solve all our social ills…far from it. But I think it’s a move in the right direction because the way it’s structured has the capacity for instances to focus on engagement rather than attention, connection rather than polarization … in a way I don’t see current mainstream social media platforms doing.

And if you don’t want your attention and your data to be the product, be willing to put forward a little money to support your instance host! :)