Why I Am Not Quitting Meta (Yet)

Social media is a tool. And it’s a power tool, not a simple tool like a hammer. Social media should be used with care, finesse, and preferably not under the influence.

Social media platforms are also companies: huge, multi-billion-dollar companies with the goal of making their shareholders wealthy. Furthermore, they don’t only run social media apps. Deleting an app is a step and it makes a statement, but in all likelihood, you’re still supporting the company through other things you consume. As with anything else, we should be educated consumers.

Being an educated consumer is difficult. Instructions (to continue the power tool analogy) come in the form of complex legal agreements written in language inscrutable to anyone lacking a law degree. We click “I Agree” and get to scrolling.

In addition to complex user agreements that hide the company’s intention to share user data for the purposes of marketing and even surveillance, social media companies rely on algorithms that compile a list of things we’ll likely be drawn to and feed us what we want to see. This keeps us engaged (addicted) so that we’ll spend more time on the platform, keeping advertisers (and thus shareholders) happy. Because algorithms don’t have ethics, they view cute cat videos and white supremacist propaganda equally. Once you click (or even if you pause scrolling too long), the software will keep feeding you similar content.

In addition to this complicated stuff (which I am simplifying considerably), there are the shareholders, CEOs, presidents, and owners to consider. What are their ethics? As humans are they dedicated to well-being, flourishing, and the pursuit of truth and justice? Do they donate to causes that support those values? Can we even know for sure?

I try to be educated in all my consumption, not just social media. I try to support companies that are working for good, that are against intolerance, and that believe science is real. However, companies consist of individuals whose politics vary widely. The cars and motorcycles in my garage, the food in my cupboards, the clothes keeping me warm, and the MacBook I’m writing this on: all are produced by companies.

It may be that the owner(s) of social media companies should be held to different standards than car companies. After all, a car company doesn’t influence public opinion, and driving a particular type of vehicle doesn’t typically signal Nazi sympathies. These are important considerations that must be wrestled with.

An additional factor is free speech, and particularly how it is defined by federal law. We can absolutely have a conversation about whether the current legal definitions of free speech are just; however, what’s at issue is what they currently ARE. Again, this is complicated enough for a whole book (and books have been written ad nauseam), but the short version is that hate speech isn’t against the law unless it’s directed against individuals. So I am protected by free speech if I say “Punch Nazis” but not protected if I say, “You’re a Nazi, I’m gonna punch you.” What’s interesting about the latter statement is that the first clause – “you’re a Nazi” – is protected speech. It’s the second part that’s against the law.

It gets even more complicated due to state laws. The state where I live protects speech directed at protected classes of citizens. These classes include race, color, religion, ancestry, national origin, disability, sexual orientation, and gender identity. However, there is nothing on the law books differentiating between someone from a traditionally marginalized class and someone who is not. While it is true that bigoted words are typically aimed at traditionally marginalized identities, “honky” and “nigger” are equal before the law.

Let’s turn to the question of fact-checking. Algorithms aren’t good at it! As much as it pains me to say this as an academic, facts can be difficult to define. While it’s pretty easy to determine that the planet is a roughly spherical object (at least if you have a moderate grasp of physics, have been high enough in an airplane to observe the curvature of the earth, or can read a map), it is much more difficult to determine factual information in an unfolding terrorist attack or natural disaster. And don’t get me started on scientific research! The whole point of science is to discover new facts and refine old ones. Therefore, scientific facts change all the time. As they should!

And the whole thing is further complicated because many people cannot reliably differentiate between a fact and an opinion. (Look it up: it’s a fact.) Add in satire (thank you, The Onion) and humor, and it becomes a nearly impossible task for machines.

Social media (and regular media) companies have to parse all of this carefully. They have users from all over the world, and the world has lots of individualized laws regulating speech and expression. The gesture (Nazi salute) Elon Musk made (twice) at the inauguration of Donald Trump would get him arrested in Germany. But in the United States, where he made the gesture, it’s protected freedom of expression. Companies usually err on the side of caution, which is often too conservative a position for most of us (no matter what our politics happen to be).

Now, given all of that complexity (of which we have only barely scratched the surface), how is a social media company to go about writing an algorithm that will reliably catch hate speech and fact-check? Facebook implemented a program in 2018 and recently turned it off. A lot of progressives took this to indicate that Mark Zuckerberg doesn’t care if there’s hate speech on his platform. And it’s true that he specifically turned off the algorithm that looked for hate speech against gendered identities and immigrants. The platform continues to remove expressions of Blackface and Holocaust denial. Zuckerberg went on to say that he didn’t think the algorithm was working, and that such systems were contributing to the undermining of trust, especially in the U.S. Here’s the bottom line: I see his point. I’m not sure I agree with it, but I see it.

We may be entering an era when artificial intelligence can start to correctly identify hate speech and violence. But it will be a while before AI can identify a fact. There may even come a day when AI will be able to identify the region someone is posting from and apply local laws. We are not there yet. Until then, our humanity will have to serve.

I will never judge anyone’s decisions regarding social media consumption, provided they’re making careful, educated choices. Everyone needs to figure out what this looks like for themselves, but here are things I do:

  • I “unfollow” and “block” with a vengeance. If there’s someone I want to stay connected with but whose content I don’t want to see, I unfollow or hide them. If someone is an offensive bigot I want permanently out of my life, I block.

  • I use settings in my phone to limit my time on certain apps. Typically, I limit to 45 minutes per day.

  • I “hide” ads. All of them. This confuses the algorithm because it can’t figure out what to show me. It also results in long periods where I only see my friends’ posts.

  • I never purchase anything through a social media app. If I do see an ad for something I’m interested in, I open my web browser and find the company.

  • I set aside an hour to go through every single privacy setting. I turn off all the data sharing I can, lock down my personal information, and lie about as much personal info as possible: name, DOB, etc.

  • I never participate in “then and now” prompts; they’re training facial recognition software.

  • My profile picture is always unrecognizable as me. To find me, you gotta know me.

  • I always apply healthy skepticism. If it seems too good to be true, it probably is. If it looks like AI, it probably is. Don’t share that shit: block it.

Now, let’s talk Meta specifically. I have had a Facebook account for over a decade, and I’m connected with many friends and loved ones across the world. While other platforms have tried to replace it, I’d lose a huge and valuable community if I left. I appreciate how it lets me control who sees my content; other platforms are a public free-for-all. Like any multinational, billion-dollar company, Meta is ethically problematic: there are upsides and downsides, both in how the company functions and in the behavior of its leadership. For the time being, I will continue using it while adhering to the practices listed above.

Twitter – excuse me, X – can go fuck itself.

Catlyn Keenan