Introducing Inquisitorial by Indianaut, a long-form newsletter where we explain and analyze important stories stemming from the Indian entrepreneurial ecosystem & economy. New articles every Saturday & Sunday.
Facebook recently purchased a 9.99% stake in Reliance Jio, giving both companies an opportunity to make key inroads into India’s Internet landscape, inroads that were earlier denied to Facebook’s infamous Free Basics platform by the Telecom Regulatory Authority of India (Trai) over net neutrality concerns. As true as the tagline “Facebook, it’s quick and easy” may be, it also captures one of the most toxic traits of the multi-billion dollar social media platform. A recent study found that most people in Myanmar use the words “Facebook” and “Internet” interchangeably. But the question is, should Facebook be considered the new Internet?
Facebook has become more than just a medium to catch up with your friends; it is now the primary source of information for many people. Much like the internet, people rely on Facebook for up-to-the-minute updates on the issues they are interested in: news, celebrities, sports and more. Its low bandwidth consumption, easy-to-customize feed and simple tools for sharing views and opinions have made it a behemoth with an active user base of over 2 billion people worldwide. And just like the internet, Facebook more often than not disseminates information that is unchecked, false or outright fake.
(Source: Twitter)
According to one study, Facebook is the worst perpetrator of fake news and spreads it the fastest, ranking above other platforms like Twitter, WhatsApp and Google.
‘The Epicenter for Misinformation’
Like all other user-generated content platforms, Facebook has community standards and reviews posts, pictures, and videos that have been flagged by AI or reported by users for violating the content policy. However, it has paid inadequate attention to how its platform is used to spread misinformation, fake news and, at worst, violence. While some of Facebook’s failures may be attributed to content moderation being a large and complex problem, made harder by the gargantuan amounts of content being generated, other incidents demonstrate Facebook’s willful neglect and even attempts to profit from problematic content. In 2016, Facebook’s internal researchers found that its algorithm recommended extremist groups to users and suggested steps to fix the issue, but Facebook executives dismissed the recommendations. More recently, The Markup’s investigation revealed that Facebook was allowing advertisers to profit from ads targeting people that the company believes are interested in “pseudoscience.” According to Facebook’s ad portal, the pseudoscience interest category contained more than 78 million people.
(Source: NLUJ)
Facebook has taken some steps to curb misinformation. In April, Mark Zuckerberg wrote a post pledging to combat coronavirus misinformation on the platform. However, the company has refused to fact-check political advertisements or statements made by politicians. Facebook maintains that this is not about revenue but about freedom of speech, but we must remember that we live in an age of information warfare, where misinformation has become a tool to subvert free and fair elections. More often than not, political representatives with extremist views consciously or unconsciously post misleading facts and figures to stoke hatred towards a particular community or activity. Why are these “future leaders” left out of the fact-checking exercise, when it is they who most need to come under the radar?
This move was criticized by Facebook's own staff in an open letter, where they said “[The policy] doesn’t protect voices, but instead allows politicians to weaponize our platform by targeting people who believe that content posted by political figures is trustworthy”.
The campaign group Avaaz recently found that more than 40% of the coronavirus-related misinformation it identified on Facebook, which had been debunked by fact-checking organisations, remained on the platform even after the company was told that these posts were fake.
“Facebook, given its scale, is the epicenter for misinformation," said Fadi Quran, Avaaz's campaign director.
The problem is not only the failure to fact-check posts, but also the delay in removing fake posts once identified. Recently, a movement called #StopHateForProfit took over the internet, leading a coalition of civil rights groups to meet Mark Zuckerberg to discuss Facebook’s “hate speech” policies.
“It was abundantly clear in our meeting today that Mark Zuckerberg and the Facebook team is not yet ready to address the vitriolic hate on their platform,” read a statement issued by the leaders after the meeting.
Many of Facebook’s largest advertisers are boycotting the platform, among them Coca-Cola, Ben & Jerry’s, Starbucks and Hershey, in reaction to Facebook’s unwillingness to police hate speech or monitor posts for misinformation. The campaign has a list of 10 demands, including a permanent civil rights infrastructure, independent audits of identity-based hate and misinformation, and an internal mechanism to automatically flag hateful content in private groups for human review.
In March, a United Nations investigator said Facebook had been used to incite violence and hatred against Myanmar’s Rohingya Muslim minority. The platform, she said, had “turned into a beast.”
The Battles They Lost
We have already seen in the past how Facebook failed to fight the war against hate speech in Myanmar. Even after Zuckerberg pledged to work on controlling the wildfire, hate-filled posts remained easy to find on the platform. “We must fight them the way Hitler did the Jews, damn kalars!” one person wrote, referring to the Rohingya. “These non-human kalar dogs, the Bengalis, are killing and destroying our land, our water and our ethnic people,” wrote another user. “We need to destroy their race.” That post went up just as the violence against the Rohingya peaked.
"Facebook has given [advertisers] no other option because of their failure, time and time again, to address the very real and the very visible problems on their platform," said Rashad Robinson, president of the civil rights group Color of Change.
The remarks are among more than 1,000 examples Reuters found of posts, comments, images and videos attacking the Rohingya or other Myanmar Muslims that were on Facebook.
(Source: Medium)
After the protests prompted by George Floyd’s death, Donald Trump, testing the boundaries of what can and cannot be shared online, posted a message on Facebook and Twitter that said “when the looting starts, the shooting starts”.
Twitter, interpreting it as a potential call for violence, restricted the tweet, preventing it from being replied to or liked, and hid it behind a warning declaring that it broke the platform’s rules. However, Twitter left the tweet up, citing the inherent newsworthiness of a statement by an elected official with millions of followers.
Facebook, on the other hand, left the post untouched. On his personal page, Zuckerberg said that he interpreted the statement not as incitement to violence but as “a warning about state action”. He said, “we do not have a policy of putting a warning in front of posts that may incite violence because we believe that if a post incites violence, it should be removed regardless of whether it is newsworthy, even if it comes from a politician.”
Facebook should hold political ads to the same standards as other ads; the ideals of free speech cannot apply to paid speech, because misinformation from political advertisers has a detrimental impact on communities. A laissez-faire approach should not come at the cost of morality. Facebook should refuse to amplify political messages without applying the standards that other ads have to follow. If Facebook does not want to change its stance on political ads, it should at least update the way they are displayed and inform users about its policy so they can adopt a more cautious approach.
Facebook needs to make a real change in its policies, not just carry on with the facade of work in progress. Its content moderation policies have often been reactive and inadequate. One can appreciate the steps it has taken to control the damage, but there is a long list of battles to be won, changes to be made and policies to be incorporated. We need Facebook to use its power as a responsible social media giant to give people access to high-quality information, instead of peddling fake news and doing its users a disservice.
Aanya Wig is a B.A. (Hons.) History student at Lady Shri Ram College for Women, University of Delhi. She is currently working as the Campus Coordinator with The Jurni, a London based Travel and Culture Magazine and has previously worked as a journalist with The Quint. She is also the founder of Aghaaz & Girl Up Rise, two student-led social entrepreneurship projects to empower women.
I think the article covers some really great points and shows anyone where Facebook is lacking right now. Since I didn't see it mentioned in the article, I will put a link here (https://about.fb.com/wp-content/uploads/2020/07/Civil-Rights-Audit-Final-Report.pdf) to their own civil rights audit report, which was finally finished this week after 2 years! This report, coupled with the Equality Labs report on Facebook India (https://www.equalitylabs.org/facebookindiareport), really demonstrates how bad the content moderation system is right now. I also don't see it improving any time soon: the content moderation teams have no clear guidelines from the bosses about the limits they are told to enforce, nor are they sufficiently knowledgeable about the cultural sensitivities and languages of different regions. It would be bad business for Facebook to impose stricter guidelines, simply because controversial posts have the highest engagement and hence bring in more money. Even the ad boycotts are mostly hiatuses of 6-12 months, as no company can resist advertising on the platform with the largest reach. Facebook right now isn't responsible for the content posted, and unless Trump manages to make social media companies responsible for it, I don't see any company really taking concrete steps here.
With respect to Facebook becoming the Internet, you can already see that here 🙂. Facebook, Instagram and WhatsApp are the most used apps in India. There is some competition from Snapchat for the younger demographic, and Google is obviously there with Search and YouTube. But the Internet is so much more than a bubble of 4-5 companies. Decentralisation is key, and using decentralised social media today is actually no more difficult than using Facebook. We also need to support the indie web: the independent creators and their blogs and websites. Until we are ready to leave our comfort bubbles, we will be stuck with these conglomerates and their policies.
Oh, and I agree with Zuckerberg's interpretation of the message. This was not a Kapil Mishra "Goli maaro saalo ko" ("shoot the scoundrels") moment; it was really him warning protestors of state action against them. It was a good read, keep writing more!