Facebook’s Fight Against Fake News Keeps Raising Questions

Facebook wants you to know it’s trying really hard to deal with the ways people use its platform to cause harm. It just doesn’t know exactly what to do. What separates hate speech from offensive ideas, or misinformation from coordinated disinformation intended to incite violence? What should the company allow to remain on the platform, and what should it ban? Two years after Russians weaponized Facebook as part of a large-scale campaign to interfere with US democracy, the social network is still struggling to answer those questions, as the past two weeks have made clear. But it’s trying to figure it out.

As Facebook has reaffirmed its commitment to fighting fake news in recent weeks, it has also been forced to defend its decision not to ban sites like Alex Jones' InfoWars. Instead, the company says, it reduces the distribution of content that is flagged and confirmed to be false by fact-checkers.

On Wednesday, Recode’s Kara Swisher aired a podcast interview with CEO Mark Zuckerberg, in which he outlined Facebook’s approach to misinformation. “The principles that we have on what we remove from the service are, if it’s going to result in real harm, real physical harm, or if you’re attacking individuals, then that content shouldn’t be on the platform,” Zuckerberg said. By way of example, he explained that he wouldn’t necessarily remove Holocaust denial posts from Facebook. “I find that deeply offensive. But at the end of the day, I don’t believe that our platform should take that down because I think there are things that different people get wrong. I don’t think that they’re intentionally getting it wrong,” he said.

People freaked out, and later that day Zuckerberg tried to backtrack, clarifying that if a post "crossed the line into advocating for violence or hate against a particular group, it would be removed."

But Facebook has come under fire for its role in amplifying misinformation that might not cross that line but has still led to violence in countries like India, Sri Lanka, and Myanmar.

Since Wednesday, the company has announced a series of changes to its products that appear to address this criticism. Its private-messaging service WhatsApp launched a test Thursday to limit the number of chats that users can forward messages to. “Indian researchers have found out that much of the misinformation on WhatsApp is coming from political operatives who have 10 or 20 interlaced groups,” says Joan Donovan, of the group Data & Society, who has studied online disinformation and misinformation for years. She described the structure of those disinformation campaigns in India as a honeycomb, at the edges of which are paid operatives forwarding fake messages widely. Limiting their ability to forward messages should help, and WhatsApp said it would continue to evaluate the changes.

Facebook also announced a new policy targeting misinformation specifically—but only when it risks imminent violence. “There are certain forms of misinformation that have contributed to physical harm, and we are making a policy change which will enable us to take that type of content down,” a spokesperson said. Deciphering that kind of context is a challenge that Facebook has already encountered when it comes to things like hate speech, and its willingness to take this on now might represent a shift toward taking responsibility for the role its platform plays in society. By deciding what may lead to violence and taking action, Facebook is taking on duties normally reserved for governments and law enforcement. But without more details about how it will decide what falls under the policy, and given past accusations of arbitrary content moderation, some researchers are skeptical of how effective it will be.
