Is Facebook Promoting a Political Agenda?

The year 2020 has been a political battleground like no other. This year, political bias has crept into many areas of public interest previously left untouched. From healthcare issues to schools, people have begun to slide into opposing camps, often bringing politics into completely unrelated arenas of discussion. Partly to blame for this is the fact that social media has wormed its way into nearly every aspect of our lives.

With so much influence and reach over people’s access to information, social media platforms are under more scrutiny than ever regarding political bias. Of course, the political sides disagree on who exactly is benefitting from this bias. This year alone, both Twitter and Facebook have come under fire for apparent bias, and politicians, users of the platforms, and critics alike wonder what responsibility these companies owe to the public.

Do they have to represent all sides equally and with the same level of access to users, or can the company itself choose to promote one side over the other?

Employees of Facebook have spoken out against what they consider to be political bias. They accuse the company of unequally applying fact-checking resources, and penalizing some pages while showing preferential treatment and forgiveness to others. Most recently, a Facebook employee was fired for apparently attempting to address this issue on an internal website, Workplace, where Facebook employees can communicate and discuss work-related concerns and issues.

With 2020’s presidential election looming, the discussion has turned again to election integrity and concerns that outside forces may attempt to influence or sway the election results. President Donald J. Trump himself has stoked those fears, refusing to say whether or not he would accept the results because of what he claims are concerns about election fraud, specifically related to mail-in voting. With the stage set for the results to be called into question on nebulous claims and accusations, the involvement of social media has become more pressing than ever.

In 2016 and the years prior, Russia launched a massive disinformation campaign on social media platforms. Individual operatives, posing as like-minded Americans, created groups and posts intended to sway public opinion in Trump’s favor at times, discrediting Hillary Clinton at others. According to investigators who uncovered the effort, it is highly likely that this campaign did affect the 2016 election outcome.

Despite evidence that right-wing pages have been receiving preferential treatment and that campaigns of misinformation have worked in their favor, right-wing influencers and big-name platform users often make the loudest stink about being censored. Donald Trump famously went after Twitter in June, accusing them of censorship after they slapped a fact-checking warning over a post of his. While Trump may feel that he is being unfairly targeted, critics have made waves for years about the President’s unfettered use of Twitter, accusing the platform of allowing him to spread lies and misinformation.

Trump’s feud with Twitter has escalated, and again this week, Twitter penalized Trump for spreading inaccurate information, this time about the COVID-19 pandemic. The President tweeted a video suggesting that children are “almost immune” to COVID-19, and the platform removed it, citing a violation of its policy against the dissemination of potentially harmful COVID-19 information.

Facebook also removed the same video from their platform, with the same cited concerns. This has led to renewed cries from the right that they are being unfairly censored. But, aside from these very visible and public events, is this really what’s happening in the day-to-day with right-wing groups?

Employees say no. They claim that the opposite is occurring, and have renewed concerns that Facebook is not doing enough to ensure that information is being honestly and factually disseminated to platform users.

Buzzfeed reports that one employee said on the Workplace platform, “I do think we’re headed for a problematic scenario where Facebook is going to be used to aggressively undermine the legitimacy of the US elections, in a way that has never been possible in history.”

This accusation comes after a week of boiling internal disagreement among Facebook employees who feel that Facebook is not only failing to address the need for unbiased representation on the platform, but that its negligent fact-checking policies are allowing dangerous or election-swaying information to spread, much as the Russian campaign was able to sway public opinion.

Zuckerberg responded in a half-hearted manner to employee concerns, acknowledging that he does feel the company is in “an unprecedented position.” While this is obvious, what is less obvious is what Facebook will do to ensure it is not part of the problem.

Buzzfeed reports, “On Facebook’s internal message boards, discussion about the Trump election question remained civil prior to Thursday’s all-hands meeting. Employees debated the merits of censoring a sitting president’s potentially false statements about election results, with one person noting that ‘it would be a really troubling policy to apply globally.’”

Jesse Lehrich, former foreign policy spokesperson to Hillary Clinton and co-founder of Accountable Tech, says, “America can’t afford for Facebook to take a wait-and-see approach when it comes to the integrity of our democracy. Unless they proactively outline clear policies and enforcement mechanisms to safeguard the election, the platform will be weaponized to undermine it.”

At the Thursday meeting, Zuckerberg tried to get ahead of the curve and informed employees that they would need to ready themselves for the possibility that the election results would not be known for days or weeks after election day. He cautioned that some political figures would attempt to call the election results early, and that the platform should be prepared to slap a warning on such posts to explain that the results are not final yet. But he did not address concerns about what the company would do should Trump declare the results invalid.

Zuckerberg said, “This is where we’re in unprecedented territory with the president saying some of the things that he’s saying that I find quite troubling. We’re thinking through what policy may be appropriate here. This is obviously going to be a sensitive thing to work through.”

But employees were not overly reassured by Zuckerberg’s half-in, half-out promise to address the upcoming election results. Despite their willingness to stand up to some of Trump’s more egregiously inaccurate posts, Facebook employees claim that right-wing groups frequently circumvent fact-checking policies and that the high-profile battle for facts does not extend to the day-to-day posts of some of the right’s more vocal, and therefore financially lucrative, pages and groups.

Employees gathered evidence that Breitbart, Turning Point USA founder Charlie Kirk, Trump supporters Diamond and Silk, and Prager University – all radically right-wing sources – were given preferential treatment and used their popularity to avoid penalties for violating policies. Part of Zuckerberg’s unwillingness to “censor” right-wing groups may stem from the fact that they are ardent and vocal in their complaints of censorship whenever they suspect they are being silenced, regardless of the validity or truthfulness of whatever they’re sharing.

A Facebook spokesperson told Buzzfeed, “We defer to third-party fact-checkers on the rating that a piece of content receives. When a fact-checker applies a rating, we apply a label and demotion. But we are responsible for how we manage our internal systems for repeat offenders. We apply additional system-wide penalties for multiple false ratings, including demonetization and the inability to advertise, unless we determine that one or more of those ratings does not warrant additional consequences.”

But what exactly would exempt an entity from warranting additional consequences is murky, and leaves room for bias. On July 22, an employee posted on Workplace that misinformation strikes against Breitbart had been cleared by someone at Facebook seemingly acting on the publication’s behalf: “A Breitbart escalation marked ‘urgent: end of day’ was resolved on the same day, with all misinformation strikes against Breitbart’s page and against their domain cleared without explanation.”

That employee also reported that a partly-false rating applied to an Instagram post by Charlie Kirk was flagged for priority escalation by Joel Kaplan, Facebook’s vice president of global policy. Kaplan, a former Bush administration member, has received criticism after supporting the nomination of Brett Kavanaugh to the Supreme Court.

Aaron Sharockman, executive director at PolitiFact, a fact-checking platform, told Buzzfeed News that they were contacted by someone at Facebook to discuss Kirk’s post. He told Buzzfeed, “We had a call with them where they wanted to know how this post was aligned with the program. Was this just a minor inaccuracy or was it something we thought was something that had potential harmful effects?” Sharockman says they did not change their ratings; they “stuck to their guns.”

This is not the first time questions have been raised about Kaplan’s involvement in fact-checking and ratings control. Former employee Yaël Eisenstat, once global election ads integrity lead, told Buzzfeed News that she observed a member of Kaplan’s Washington policy team attempt to influence ad enforcement involving conservative organizations.

These sorts of involvements apparently violate Facebook policy, which requires publishers who have concerns about a fact-check rating to contact the fact-checking partners responsible. But one employee says of this policy, “it appears that policy people have been intervening in fact-checks on behalf of *exclusively* right-wing publishers, to avoid them getting repeat-offender status.”

Employees who draw attention to and question this sort of “turn a blind eye” policy enforcement have faced consequences for doing so. One senior Facebook engineer collected evidence of multiple instances of conservative figures receiving help from Facebook employees to remove fact-check labels from their content. His post on Workplace exposing these incidents was removed.

Shortly after his post was removed, Buzzfeed tells us, “the related internal ‘tasks’ he’d cited as examples of the alleged special treatment were made private and inaccessible to employees, according to a Workplace post from another employee. ‘Personally, this makes me so angry and ashamed of the company,’ wrote the employee in support of their colleague.”

That engineer no longer worked for Facebook as of Wednesday. Buzzfeed News reports that one employee said on a Workplace thread that they were given permission from the engineer to say that his dismissal was “not voluntary.” And while Facebook denies that the employee was terminated for that post, the company did say he was fired because “they broke the company’s rules.”

Buzzfeed shares the experience of a journalist who works for one of Facebook’s fact-checking partners, who says that, “conservative pages often complain directly to the company, [instead of following policy by contacting the fact-checking partners.]” The journalist, who spoke anonymously, says, “Of the publishers that don’t follow the procedure, it seems to be mostly ones on the right. Instead of appealing to the fact-checker, they immediately call their rep at Facebook. They jump straight up and say, ‘censorship, First Amendment, freedom.’ I think Facebook is a bit afraid of them because of the Trump administration.”

According to Buzzfeed, Facebook assigns dedicated partner managers to pages with large followings or big ad budgets. They are intended to help the publishers maximize their platform usage. However, in the post by the now-fired engineer, Buzzfeed says that, “partner reps appear to have sought preferential treatment for right-wing publishers. This resulted in phone calls to fact-checking partners from people at Facebook, and instances where misinformation strikes appear to have been removed from content without a fact-checker’s knowledge or involvement.”

The former engineer offered many examples of violations of established policy, and they do seem to be in response to threats by right-wing publishers to go public with their complaints of censorship. Diamond and Silk, vocal supporters of Trump, were fact-checked in March. After an appeal bounced between them and the fact-checker, their Facebook partner manager opened an internal ticket to resolve the issue. He warned that Diamond and Silk, “[are] extremely sensitive and [have] not hesitated going public about their concerns around alleged conservative bias on Facebook.”

While Facebook claims to be a platform for people to share opinions and information, it is still, at its core, a business. This means that it’s still subject to the same forces that motivate other businesses; namely, money. If right-wing publishers are more vocal about their displeasure than others, it isn’t shocking to find that Facebook kowtows to them to some degree. A public relations campaign by right-wing publishers against the platform, accusing them of bias and violation of the First Amendment, could be harmful to the company’s bottom line.

While it is unclear whether or not Facebook and other social media platforms are even subject to the First Amendment, the platform does benefit from maintaining an outward appearance of impartiality. This sort of rumbling among employees and low-key whistleblowing spells trouble for Zuckerberg, who has tried desperately to keep his distance from the controversy currently plaguing Twitter in its head-to-head with the President.

It does appear as though policies are not being applied equally to publishers across political ideologies, and users of the platform may not be receiving as honest a representation of the facts as they expect. Zuckerberg will eventually have to address these concerns more ardently. His fence-sitting and aloof response to concerns and accusations has not increased user confidence, which is vital for a social media platform to flourish.