On Google Podcasts, a Buffet of Hate

The platform’s tolerance of white supremacist, pro-Nazi and conspiracy theory content pushes the boundaries of the medium.

#freedom-of-speech-and-expression, #hate-speech, #jones-alex-1974, #podcasts, #right-wing-extremism-and-alt-right, #social-media

0

For Political Cartoonists, the Irony Was That Facebook Didn’t Recognize Irony

As Facebook has become more active at moderating political speech, it has had trouble dealing with satire.

#bors-matt, #cartoons-and-cartoonists, #comedy-and-humor, #computers-and-the-internet, #facebook-inc, #fringe-groups-and-movements, #hall-ed, #hate-speech, #instagram-inc, #rumors-and-misinformation, #social-media, #violence-media-and-entertainment, #zyglis-adam

0

Tech’s Legal Shield Appears Likely to Survive as Congress Focuses on Details

Section 230 isn’t expected to be revoked, but even the more modest proposals for weakening it could have effects that ripple across the internet.

#facebook-inc, #freedom-of-speech-and-expression, #google-inc, #hate-speech, #law-and-legislation, #online-advertising, #social-media, #united-states-politics-and-government

0

Donald Trump is one of 15,000 Gab users whose account just got hacked

The founder of the far-right social media platform Gab said that the private account of former President Donald Trump was among the data stolen and publicly released by hackers who recently breached the site.

In a statement on Sunday, founder Andrew Torba used a transphobic slur to refer to Emma Best, the co-founder of Distributed Denial of Secrets. The statement confirmed claims the WikiLeaks-style group made on Monday that it obtained 70GB of passwords, private posts, and more from Gab and was making them available to select researchers and journalists. The data, Best said, was provided by an unidentified hacker who breached Gab by exploiting a SQL-injection vulnerability in its code.
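
For readers unfamiliar with the vulnerability class named in the report, here is a minimal, purely hypothetical Python sketch (not Gab's actual code; the table and values are invented) of how a SQL-injection bug arises when user input is concatenated into a query string, and how a parameterized query closes the hole:

```python
# Hypothetical illustration of the SQL-injection class described above.
# This is NOT Gab's code; table names and values are invented for the sketch.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, password_hash TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'h1'), ('bob', 'h2')")

def find_user_vulnerable(username: str):
    # BAD: user input is spliced into the SQL string, so crafted input
    # such as "' OR '1'='1" rewrites the query and leaks every row.
    query = f"SELECT username FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(username: str):
    # GOOD: a parameterized query treats the input strictly as data.
    return conn.execute(
        "SELECT username FROM users WHERE username = ?", (username,)
    ).fetchall()

malicious_input = "' OR '1'='1"
print(find_user_vulnerable(malicious_input))  # [('alice',), ('bob',)] -- leaked
print(find_user_safe(malicious_input))        # [] -- injection neutralized
```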

“My account and Trump’s account were compromised, of course as Trump is about to go on stage and speak,” Torba wrote on Sunday as Trump was about to speak at the CPAC conference in Florida. “The entire company is all hands investigating what happened and working to trace and patch the problem.”

#biz-it, #ddosecrets, #gab, #hacking, #hate-speech, #leaks, #policy, #tech

0

The Economic Case for Regulating Social Media

The core business model of platforms like Facebook and Twitter poses a threat to society and requires retooling, an economist says.

#antitrust-laws-and-competition-issues, #conspiracy-theories, #facebook-inc, #freedom-of-speech-and-expression, #fringe-groups-and-movements, #hate-speech, #online-advertising, #political-advertising, #regulation-and-deregulation-of-industry, #rumors-and-misinformation, #social-media, #twitter, #united-states-economy, #united-states-politics-and-government, #youtube-com

0

Facebook’s ‘oversight’ body overturns four takedowns and issues a slew of policy suggestions

Facebook’s self-regulatory ‘Oversight Board’ (FOB) has delivered its first batch of rulings on contested content moderation decisions, almost two months after picking its first cases.

A long time in the making, the FOB is part of Facebook’s crisis PR push to distance its business from the impact of controversial content moderation decisions — by creating a review body to handle a tiny fraction of the complaints its content takedowns attract. It started accepting submissions for review in October 2020 — and has faced criticism for being slow to get off the ground.

Announcing the first decisions today, the FOB reveals it has chosen to uphold just one of the content moderation decisions made earlier by Facebook, overturning four of the tech giant’s decisions.

Decisions on the cases were made by five-member panels that contained at least one member from the region in question and a mix of genders, per the FOB. A majority of the full Board then had to review each panel’s findings to approve the decision before it could be issued.

The sole case where the Board has upheld Facebook’s decision to remove content is case 2020-003-FB-UA — where Facebook had removed a post under its Community Standard on Hate Speech which had used the Russian word “тазики” (“taziks”) to describe Azerbaijanis, who the user claimed have no history compared to Armenians.

In the four other cases the Board has overturned Facebook takedowns, rejecting earlier assessments made by the tech giant in relation to policies on hate speech, adult nudity, dangerous individuals/organizations, and violence and incitement. (You can read the outline of these cases on its website.)

Each decision relates to a specific piece of content but the board has also issued nine policy recommendations.

These include suggestions that Facebook [emphasis ours]:

  • Create a new Community Standard on health misinformation, consolidating and clarifying the existing rules in one place. This should define key terms such as “misinformation.”
  • Adopt less intrusive means of enforcing its health misinformation policies where the content does not reach Facebook’s threshold of imminent physical harm.
  • Increase transparency around how it moderates health misinformation, including publishing a transparency report on how the Community Standards have been enforced during the COVID-19 pandemic. This recommendation draws upon the public comments the Board received.
  • Ensure that users are always notified of the reasons for any enforcement of the Community Standards against them, including the specific rule Facebook is enforcing. (The Board made two identical policy recommendations on this front related to the cases it considered, also noting in relation to the second hate speech case that “Facebook’s lack of transparency left its decision open to the mistaken belief that the company removed the content because the user expressed a view it disagreed with”.)
  • Explain and provide examples of the application of key terms from the Dangerous Individuals and Organizations policy, including the meanings of “praise,” “support” and “representation.” The Community Standard should also better advise users on how to make their intent clear when discussing dangerous individuals or organizations.
  • Provide a public list of the organizations and individuals designated as ‘dangerous’ under the Dangerous Individuals and Organizations Community Standard or, at the very least, a list of examples.
  • Inform users when automated enforcement is used to moderate their content, ensure that users can appeal automated decisions to a human being in certain cases, and improve automated detection of images with text-overlay so that posts raising awareness of breast cancer symptoms are not wrongly flagged for review. Facebook should also improve its transparency reporting on its use of automated enforcement.
  • Revise Instagram’s Community Guidelines to specify that female nipples can be shown to raise breast cancer awareness and clarify that where there are inconsistencies between Instagram’s Community Guidelines and Facebook’s Community Standards, the latter take precedence.

Where it has overturned Facebook takedowns the board says it expects Facebook to restore the specific pieces of removed content within seven days.

In addition, the Board writes that Facebook will “examine whether identical content with parallel context associated with the Board’s decisions should remain on its platform”, and says Facebook has 30 days to publicly respond to its policy recommendations.

So it will certainly be interesting to see how the tech giant responds to the laundry list of proposed policy tweaks — perhaps especially the recommendations for increased transparency (including the suggestion it inform users when content has been removed solely by its AIs) — and whether Facebook is happy to align entirely with the policy guidance issued by the self-regulatory vehicle (or not).

Facebook created the board’s structure and charter and appointed its members — but has encouraged the notion it’s ‘independent’ of Facebook, even though it also funds the FOB (indirectly, via a foundation it set up to administer the body).

And while the Board claims its review decisions are binding on Facebook there is no such requirement for Facebook to follow its policy recommendations.

It’s also notable that the FOB’s review efforts are entirely focused on takedowns — rather than on things Facebook chooses to host on its platform.

Given all that it’s impossible to quantify how much influence Facebook exerts on the Facebook Oversight Board’s decisions. And even if Facebook swallows all the aforementioned policy recommendations — or more likely puts out a PR line welcoming the FOB’s ‘thoughtful’ contributions to a ‘complex area’ and says it will ‘take them into account as it moves forward’ — it’s doing so from a place where it has retained maximum control of content review by defining, shaping and funding the ‘oversight’ involved.

tl;dr: An actual supreme court this is not.

In the coming weeks, the FOB will likely be most closely watched over a case it accepted recently — related to Facebook’s indefinite suspension of former US president Donald Trump, after he incited a violent assault on the US Capitol earlier this month.

The board notes that it will be opening public comment on that case “shortly”.

“Recent events in the United States and around the world have highlighted the enormous impact that content decisions taken by internet services have on human rights and free expression,” it writes, going on to add that: “The challenges and limitations of the existing approaches to moderating content draw attention to the value of independent oversight of the most consequential decisions by companies such as Facebook.”

But of course this ‘Oversight Board’ is unable to be entirely independent of its founder, Facebook.

#content-moderation, #facebook, #facebook-oversight-board, #hate-speech, #policy, #social

0

Threat of inauguration violence casts a long shadow over social media

As the U.S. heads into one of the most perilous phases of American democracy since the Civil War, social media companies are scrambling to shore up their patchwork defenses for a moment they appear to have believed would never come.

Most major platforms pulled the emergency brake last week, deplatforming the president of the United States and enforcing suddenly robust rules against conspiracies, violent threats and undercurrents of armed insurrection, all of which had proliferated on those services for years. But within a week’s time, Amazon, Facebook, Twitter, Apple and Google had all made historic decisions in the name of national stability — and appearances. Snapchat, TikTok, Reddit and even Pinterest took their own actions to prevent a terror plot from being hatched on their platforms.

Now, we’re in the waiting phase. More than a week after a deadly pro-Trump riot invaded the iconic seat of the U.S. legislature, the internet still feels like it’s holding its breath, a now heavily-fortified inauguration ceremony looming ahead.

What’s still out there

On the largest social network of all, images hyping follow-up events were still circulating midweek. One digital Facebook flyer promoted an “armed march on Capitol Hill and all state Capitols,” pushing the dangerous and false conspiracy that the 2020 presidential election was stolen.

Facebook says that it’s working to identify flyers calling for “Stop the Steal” adjacent events using digital fingerprinting, the same process it uses to remove terrorist content from ISIS and Al Qaeda. The company noted that it has seen flyers calling for events on January 17 across the country, January 18 in Virginia and inauguration day in D.C.
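
For a rough sense of what that kind of hash-based fingerprint matching involves — a toy sketch only, not Facebook's system, which relies on perceptual hashes that tolerate re-encoding and cropping — each image already judged to violate policy is reduced to a compact digest, and new uploads are checked against that list:

```python
# Toy illustration of fingerprint matching against known violating images.
# Not Facebook's implementation: real systems use perceptual hashes, whereas
# an exact SHA-256 digest, as here, only catches byte-identical copies.
import hashlib

def fingerprint(image_bytes: bytes) -> str:
    """Reduce an image file's raw bytes to a fixed-length digest."""
    return hashlib.sha256(image_bytes).hexdigest()

# Hypothetical blocklist built from flyers that reviewers already removed.
known_violating = {fingerprint(b"raw bytes of a previously removed flyer")}

def should_flag_for_review(upload: bytes) -> bool:
    # A new upload whose fingerprint matches the blocklist is flagged.
    return fingerprint(upload) in known_violating

print(should_flag_for_review(b"raw bytes of a previously removed flyer"))  # True
print(should_flag_for_review(b"an unrelated image"))                       # False
```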

At least some of Facebook’s new efforts are working: one popular flyer TechCrunch observed on the platform was removed from some users’ feeds this week. A number of “Stop the Steal” groups we’d observed over the last month also unceremoniously blinked offline early this week following more forceful action from the company. Still, given the writing on the wall, many groups had plenty of time to tweak their names by a few words or point followers elsewhere to organize.

With only days until the presidential transition, acronym-heavy screeds promoting QAnon, an increasingly mainstream collection of outrageous pro-Trump government conspiracy theories, also remain easy to find. On one page with 2,500 followers, a QAnon believer pushed the debunked claim that anti-fascists executed the attack on the Capitol, claiming “January 6 was a trap.”

On a different QAnon group, an ominous post from an admin issued Congress a warning: “We have found a way to end this travesty! YOUR DAYS ARE NUMBERED!” The elaborate conspiracy’s followers were well represented at the deadly riot at the Capitol, as the many giant “Q” signs and esoteric t-shirt slogans made clear.

In a statement to TechCrunch about the state of extremism on the platform, Facebook says it is coordinating with terrorism experts as well as law enforcement “to prevent direct threats to public safety.” The company also noted that it works with partners to stay aware of violent content taking root on other platforms.

Facebook’s efforts are late and uneven, but they’re also more than the company has done to date. Measures from big social networks coupled with the absence of far-right social networks like Parler and Gab have left Trump’s most ardent supporters once again swearing off Silicon Valley and fanning out for an alternative.

Social media migration

Private messaging apps Telegram and Signal are both seeing an influx of users this week, but they offer something quite different from a Facebook or Twitter-like experience. Some expert social network observers see the recent migration as seasonal rather than permanent.

“The spike in usage of messaging platforms like Telegram and Signal will be temporary,” Yonder CEO Jonathon Morgan told TechCrunch. “Most users will either settle on platforms with a social experience, like Gab, MeWe, or Parler, if it returns, or will migrate back to Twitter and Facebook.”

That company uses AI to track how social groups connect online and what they talk about — violent conspiracies included. Morgan believes that propaganda-spreading “performative internet warriors” make a lot of noise online, but a performance doesn’t work without an audience. Others may quietly pose a more serious threat.

“The different types of engagement we saw during the assault on the Capitol mirror how these groups have fragmented online,” Morgan said. “We saw a large mob who was there to cheer on the extremists but didn’t enter the Capitol, performative internet warriors taking selfies, and paramilitaries carrying flex cuffs (mislabeled as “zip ties” in a lot of social conversation), presumably ready to take hostages.

“Most users (the mob) will be back on Parler if it returns, and in the meantime, they are moving to other apps that mimic the social experience of Twitter and Facebook, like MeWe.”

Still, Morgan says that research shows “deplatforming” extremists and conspiracy-spreaders is an effective strategy and efforts by “tech companies from Airbnb to AWS” will reduce the chances of violence in the coming days.

Cleaning up platforms can help turn the masses away from dangerous views, he explained, but the same efforts might further galvanize people with an existing intense commitment to those beliefs. With the winds shifting, already heterogeneous groups will be scattered too, making their efforts desperate and less predictable.

Deplatforming works, with risks

Jonathan Greenblatt, CEO of the Anti-Defamation League, told TechCrunch that social media companies still need to do much more to prepare for inauguration week. “We saw platforms fall short in their response to the Capitol insurrection,” Greenblatt said.

He cautioned that while many changes are necessary, we should be ready for online extremism to evolve into a more fractured ecosystem. Echo chambers may become smaller and louder, even as the threat of “large scale” coordinated action diminishes.

“The fracturing has also likely pushed people to start communicating with each other via encrypted apps and other private means, strengthening the connections between those in the chat and providing a space where people feel safe openly expressing violent thoughts, organizing future events, and potentially plotting future violence,” Greenblatt said.

By their own standards, social media companies have taken extraordinary measures in the U.S. in the last two weeks. But social networks have a long history of facilitating violence abroad, even as attention turns to political violence in America.

Greenblatt repeated calls for companies to hire more human moderators, a suggestion often made by experts focused on extremism. He believes social media could still take other precautions for inauguration week, like introducing a delay into livestreams or disabling them altogether, bolstering rapid response teams and suspending more accounts temporarily rather than focusing on content takedowns and handing out “strikes.”

“Platforms have provided little-to-nothing in the way of transparency about learnings from last week’s violent attack in the Capitol,” Greenblatt said.

“We know the bare minimum of what they ought to be doing and what they are capable of doing. If these platforms actually provided transparency and insights, we could offer additional—and potentially significantly stronger—suggestions.”

#capitol-riot, #facebook-misinformation, #hate-speech, #misinformation, #social, #tc

0

Parler CEO admits site may never recover from Amazon ban

Parler may never recover from being banned by Amazon and a number of other technology companies, CEO John Matze told Reuters in a Wednesday interview.

“I am an optimist,” he said at one point in the conversation. “It may take days, it may take weeks but Parler will return and when we do we will be stronger.”

But at another point in the conversation, he acknowledged, “It could be never. We don’t know yet.”

#hate-speech, #january-6, #parler, #policy

0

What is Dlive? The Streaming Site Growing in Far-Right Users

A site called Dlive, where rioters broadcast from the Capitol, is benefiting from the growing exodus of right-wing users from Twitter, Facebook and YouTube.

#conspiracy-theories, #dlive-inc, #fringe-groups-and-movements, #gionet-tim, #hate-speech, #right-wing-extremism-and-alt-right, #rumors-and-misinformation, #social-media, #storming-of-the-us-capitol-jan-2021, #video-recordings-downloads-and-streaming, #whites

0

Reddit clone Voat, home to hate speech and QAnon, has shut down

Reddit alternative Voat shut down on Christmas Day, citing a lack of operational funding, and casting doubt on the abilities of other similar almost-anything-goes, “free speech” platforms to stay online in the long run.

“I just can’t keep it up,” Voat cofounder Justin Chastain said in the shutdown announcement. Investment dried up in March 2020, he explained. “I personally decided to keep Voat up until after the U.S. election of 2020. I’ve been paying the costs out of pocket but now I’m out of money.”

Voat first launched in 2014 as a smaller Reddit alternative dedicated to “free speech,” including explicit hate speech, extreme right-wing content, racism, and other content limited or prohibited on other sites. It gained traction in 2015, when Reddit finally banned several explicitly racist subreddits from its platform in a bid to limit harassment, and some discontented Reddit users decided to migrate over.

#extremism, #free-speech, #gab, #harassment, #hate-speech, #online-hate-speech, #parler, #policy, #reddit, #terrorism, #voat

0

Big Fines and Strict Rules Unveiled Against ‘Big Tech’ in Europe

European Union and British authorities released draft laws to halt the spread of harmful content and improve competition.

#apple-inc, #computers-and-the-internet, #data-mining-and-database-marketing, #european-union, #facebook-inc, #fines-penalties, #hate-speech, #mobile-applications, #politics-and-government, #privacy, #regulation-and-deregulation-of-industry, #social-media, #twitter

0

Twitch Cracks Down on Hate Speech and Harassment

The livestreaming site announced new guidelines after contending with claims that its streamers were too easily abused.

#metoo-movement, #clemens-sara, #computer-and-video-games, #computers-and-the-internet, #corporate-social-responsibility, #cyberharassment, #hate-speech, #rumors-and-misinformation, #sexual-harassment, #social-media, #twitch-interactive-inc, #video-recordings-downloads-and-streaming, #workplace-hazards-and-violations

0

Facebook’s self-styled ‘oversight’ board selects first cases, most dealing with hate speech

A Facebook-funded body that the tech giant set up to distance itself from tricky and potentially reputation-damaging content moderation decisions has announced the first bundle of cases it will consider.

In a press release on its website the Facebook Oversight Board (FOB) says it sifted through more than 20,000 submissions before settling on six cases — one of which was referred to it directly by Facebook.

The six cases it’s chosen to start with are:

Facebook submission: 2020-006-FB-FBR

A case from France where a user posted a video and accompanying text to a COVID-19 Facebook group — which relates to claims about the French agency that regulates health products “purportedly refusing authorisation for use of hydroxychloroquine and azithromycin against COVID-19, but authorising promotional mail for remdesivir”; with the user criticizing the lack of a health strategy in France and stating that “[Didier] Raoult’s cure” is being used elsewhere to save lives. Facebook says it removed the content for violating its policy on violence and incitement. The video in question garnered at least 50,000 views and 1,000 shares.

The FOB says Facebook indicated in its referral that this case “presents an example of the challenges faced when addressing the risk of offline harm that can be caused by misinformation about the COVID-19 pandemic”.

User submissions:

Out of the five user submissions that the FOB selected, the majority (three cases) are related to hate speech takedowns.

One case apiece is related to Facebook’s nudity and adult content policy; and to its policy around dangerous individuals and organizations.

See below for the Board’s descriptions of the five user submitted cases:

  • 2020-001-FB-UA: A user posted a screenshot of two tweets by former Malaysian Prime Minister, Dr Mahathir Mohamad, in which the former Prime Minister stated that “Muslims have a right to be angry and kill millions of French people for the massacres of the past” and “[b]ut by and large the Muslims have not applied the ‘eye for an eye’ law. Muslims don’t. The French shouldn’t. Instead the French should teach their people to respect other people’s feelings.” The user did not add a caption alongside the screenshots. Facebook removed the post for violating its policy on hate speech. The user indicated in their appeal to the Oversight Board that they wanted to raise awareness of the former Prime Minister’s “horrible words”.
  • 2020-002-FB-UA: A user posted two well-known photos of a deceased child lying fully clothed on a beach at the water’s edge. The accompanying text (in Burmese) asks why there is no retaliation against China for its treatment of Uyghur Muslims, in contrast to the recent killings in France relating to cartoons. The post also refers to the Syrian refugee crisis. Facebook removed the content for violating its hate speech policy. The user indicated in their appeal to the Oversight Board that the post was meant to disagree with people who think that the killer is right and to emphasise that human lives matter more than religious ideologies.

  • 2020-003-FB-UA: A user posted alleged historical photos showing churches in Baku, Azerbaijan, with accompanying text stating that Baku was built by Armenians and asking where the churches have gone. The user stated that Armenians are restoring mosques on their land because it is part of their history. The user said that the “т.а.з.и.к.и” are destroying churches and have no history. The user stated that they are against “Azerbaijani aggression” and “vandalism”. The content was removed for violating Facebook’s hate speech policy. The user indicated in their appeal to the Oversight Board that their intention was to demonstrate the destruction of cultural and religious monuments.

  • 2020-004-IG-UA: A user in Brazil posted a picture on Instagram with a title in Portuguese indicating that it was to raise awareness of signs of breast cancer. Eight photographs within the picture showed breast cancer symptoms with corresponding explanations of the symptoms underneath. Five of the photographs included visible and uncovered female nipples. The remaining three photographs included female breasts, with the nipples either out of shot or covered by a hand. Facebook removed the post for violating its policy on adult nudity and sexual activity. The post has a pink background, and the user indicated in a statement to the Oversight Board that it was shared as part of the national “Pink October” campaign for the prevention of breast cancer.

  • 2020-005-FB-UA: A user in the US was prompted by Facebook’s “On This Day” function to reshare a “memory” in the form of a post that the user made two years ago. The user reshared the content. The post (in English) is an alleged quote from Joseph Goebbels, the Reich Minister of Propaganda in Nazi Germany, on the need to appeal to emotions and instincts, instead of intellect, and on the unimportance of truth. Facebook removed the content for violating its policy on dangerous individuals and organisations. The user indicated in their appeal to the Oversight Board that the quote is important as the user considers the current US presidency to be following a fascist model.

Public comments on the cases can be submitted via the FOB’s website — but only for seven days (closing at 8:00 Eastern Standard Time on Tuesday, December 8, 2020).

The FOB says it “expects” to decide on each case — and “for Facebook to have acted on this decision” — within 90 days. So the first ‘results’ from the FOB, which only began reviewing cases in October, are almost certainly not going to land before 2021.

Panels composed of five FOB members — including at least one from the region “implicated in the content” — will be responsible for deciding whether the specific pieces of content in question should stay down or be put back up.

Facebook’s outsourcing of a fantastically tiny subset of content moderation considerations to a subset of its so-called ‘Oversight Board’ has attracted plenty of criticism (including inspiring a mirrored unofficial entity that dubs itself the Real Oversight Board) — and no little cynicism.

Not least because it’s entirely funded by Facebook; structured as Facebook intended it to be structured; and with members chosen via a system devised by Facebook.

If it’s radical change you’re looking for, the FOB is not it.

Nor does the entity have any power to change Facebook policy — it can only issue recommendations (which Facebook can choose to entirely ignore).

Its remit does not extend to being able to investigate how Facebook’s attention-seeking business model influences the types of content being amplified or depressed by its algorithms, either.

And the narrow focus on content takedowns — rather than content that’s already allowed on the social network — skews its purview, as we’ve pointed out before.

So you won’t find the board asking tough questions about why hate groups continue to flourish and recruit on Facebook, for example, or robustly interrogating how much succour its algorithmic amplification has gifted to the antivaxx movement. By design, the FOB is focused on symptoms, not the nation-sized platform ill of Facebook itself. Outsourcing a fantastically tiny subset of content moderation decisions can’t signify anything else.

With this Facebook-commissioned pantomime of accountability the tech giant will be hoping to generate a helpful pipeline of distracting publicity — focused around specific and ‘nuanced’ content decisions — deflecting plainer but harder-hitting questions about the exploitative and abusive nature of Facebook’s business itself, and the lawfulness of its mass surveillance of Internet users, as lawmakers around the world grapple with how to rein in tech giants.  

The company wants the FOB to reframe discussion about the culture wars (and worse) that Facebook’s business model fuels as a societal problem — pushing a self-serving ‘fix’ for algorithmically fuelled societal division in the form of a few hand-picked professionals opining on individual pieces of content, leaving it free to continue defining the shape of the attention economy on a global scale. 

#content-moderation, #facebook, #facebook-oversight-board, #hate-speech, #platform-regulation, #social

0

Facebook loses final appeal in defamation takedown case, must remove same and similar hate posts globally

Austria’s Supreme Court has dismissed Facebook’s appeal in a long-running speech takedown case — ruling the company must remove references to defamatory comments made about a local politician worldwide for as long as the injunction lasts.

We’ve reached out to Facebook for comment on the ruling.

Green Party politician Eva Glawischnig successfully sued the social media giant, seeking removal of defamatory comments made about her by a user of its platform after Facebook had refused to take down the abusive postings — which referred to her as a “lousy traitor”, a “corrupt tramp” and a member of a “fascist party”.

After a preliminary injunction in 2016, Glawischnig won local removal of the defamatory postings the next year but continued her legal fight — pushing for similar postings to be removed and for takedowns to apply globally.

Questions were referred up to the EU’s Court of Justice. And in a key judgement last year the CJEU decided platforms can be instructed to hunt for and remove illegal speech worldwide without falling foul of European rules that preclude platforms from being saddled with a “general content monitoring obligation”. Today’s Austrian Supreme Court ruling flows naturally from that.

Austrian newspaper Der Standard reports that the court confirmed the injunction applies worldwide, both to identical postings or those that carry the same essential meaning as the original defamatory posting.

According to the report, the Austrian court argues that EU Member States and civil courts can require platforms like Facebook to monitor content in “specific cases” — such as when a court has identified user content as unlawful and provided “specific information” about it — in order to prevent content that’s been judged to be illegal from being reproduced and shared by another user of the network at a later point in time, with the overarching aim of preventing future violations.

The case has important implications for the limitations of online speech.

Regional lawmakers are also working on updating digital liability regulations. Commission lawmakers have said they want to force platforms to take more responsibility for the content they fence and monetize — fuelled by concerns about the impact of online hate speech, terrorist content and divisive disinformation.

A long-standing EU rule, prohibiting Member States from putting a general content monitoring obligation on platforms, limits how they can be forced to censor speech. But the CJEU ruling has opened the door to bounded monitoring of speech — in instances where it’s been judged to be illegal — and that in turn may influence the policy substance of the Digital Services Act which the Commission is due to publish in draft early next month.

In a reaction to last year’s CJEU ruling, Facebook argued it “opens the door to obligations being imposed on internet companies to proactively monitor content and then interpret if it is ‘equivalent’ to content that has been found to be illegal”.

“In order to get this right national courts will have to set out very clear definitions on what ‘identical’ and ‘equivalent’ means in practice. We hope the courts take a proportionate and measured approach, to avoid having a chilling effect on freedom of expression,” it added.

#censorship, #content-takedowns, #defamation, #europe, #eva-glawischnig, #facebook, #free-speech, #freedom-of-expression, #hate-speech, #lawsuit, #platform-regulation

0

Bill Offering L.G.B.T. Protections in Italy Spurs Rallies on Both Sides

Supporters frame the measure as a long-overdue means to provide basic human rights. Opponents depict it as an overreaching step that would suppress opinion.

#assaults, #discrimination, #gender, #hate-crimes, #hate-speech, #homosexuality-and-bisexuality, #italy, #law-and-legislation, #transgender-and-transsexuals, #women-and-girls

0

Meghan, Duchess of Sussex, Speaks Out Against Harmful Online Behavior

She said the birth last year of Archie, her son with Prince Harry, had compelled her to take a stand against online bullying and misinformation.

#archie-earl-of-dumbarton, #computers-and-the-internet, #great-britain, #harry-duke-of-sussex, #hate-speech, #markle-meghan, #royal-families

0

Facebook gives more details about its efforts against hate speech before Myanmar’s general election

About three weeks ago, Facebook announced it would increase its efforts against hate speech and misinformation in Myanmar before the country’s general election on November 8, 2020. Today, it gave some more details about what the company is doing to prevent the spread of hate speech and misinformation. This includes adding Burmese language warning screens to flag information rated false by third-party fact-checkers.

In November 2018, Facebook admitted it didn’t do enough to prevent its platform from being used to “foment division and incite offline violence” in Myanmar.

This is an understatement, considering that Facebook has been accused by human rights groups, including the United Nations Human Rights Council, of enabling the spread of hate speech in Myanmar against Rohingya Muslims, the target of a brutally violent ethnic cleansing campaign. A 2018 investigation by the New York Times found that members of the military in Myanmar, a predominantly Buddhist country, instigated genocide against Rohingya, and used Facebook, one of the country’s most widely-used online services, as a tool to conduct a “systematic campaign” of hate speech against the minority group.

In its announcement several weeks ago, Facebook said it will expand its misinformation policy and remove information intended to “lead to voter suppression or damage the integrity of the electoral process” by working with three fact-checking partners in Myanmar—BOOM, AFP Fact Check and Fact Crescendo. It also said it would flag potentially misleading images and apply a message forwarding limit it introduced in Sri Lanka in June 2019.

Facebook also shared that in the second quarter of 2020 it took action against 280,000 pieces of content in Myanmar that violated its Community Standards against hate speech, with 97.8% detected by its systems before being reported — up from the 51,000 pieces of content it took action against in the first quarter.

But, as TechCrunch’s Natasha Lomas noted, “without greater visibility into the content Facebook’s platform is amplifying, including country specific factors such as whether hate speech posting is increasing in Myanmar as the election gets closer, it’s not possible to understand what volume of hate speech is passing under the radar of Facebook’s detection systems and reaching local eyeballs.”

Facebook’s latest announcement, posted today on its News Room, doesn’t answer those questions. Instead, the company gave some more information about its preparations for the Myanmar general election.

The company said it will use technology to identify “new words and phrases associated with hate speech” in the country, and either remove posts with those words or “reduce their distribution.”

It will also introduce Burmese language warning screens for misinformation identified as false by its third-party fact-checkers, make reliable information about the election and voting more visible, and promote “digital literacy training” in Myanmar through programs like an ongoing monthly television talk show called “Tea Talks” and introducing its social media analytics tool, CrowdTangle, to newsrooms.

#apps, #asia, #facebook, #hate-speech, #misinformation, #myanmar, #southeast-asia, #tc

0

TikTok joins Europe’s code on tackling hate speech

TikTok, the popular short video sharing app, has joined the European Union’s Code of Conduct on Countering Illegal Hate Speech.

In a statement on joining the code, TikTok’s head of trust and safety for EMEA, Cormac Keenan, said: “We have never allowed hate on TikTok, and we believe it’s important that internet platforms are held to account on an issue as crucial as this.”

The non-legally binding code kicked off four years ago with a handful of tech giants agreeing to measures aimed at accelerating takedowns of illegal content, supporting users in reporting hate speech and committing to closer joint working to share best practice on tackling the problem.

Since 2016 the code has grown from single to double figures in signatories — and now covers Dailymotion, Facebook, Google+, Instagram, Jeuxvideo.com, Microsoft, Snapchat, TikTok, Twitter and YouTube.

TikTok’s statement goes on to highlight the platform’s “zero-tolerance” stance on hate speech and hate groups — in what reads like a tacit dig at Facebook, given the latter’s record of refusing to take down hate speech on ‘freedom of expression’ grounds (including founder Mark Zuckerberg’s personal defence of letting Holocaust denial thrive on his platform).

“We have a zero-tolerance stance on organised hate groups and those associated with them, like accounts that spread or are linked to white supremacy or nationalism, male supremacy, anti-Semitism, and other hate-based ideologies. We also remove race-based harassment and the denial of violent tragedies, such as the Holocaust and slavery,” Keenan writes.

“Our ultimate goal is to eliminate hate on TikTok. We recognise that this may seem an insurmountable challenge as the world is increasingly polarised, but we believe that this shouldn’t stop us from trying. Every bit of progress we make gets us that much closer to a more welcoming community experience for people on TikTok and out in the world.”

It’s interesting that EU hate speech rules are being viewed as a PR opportunity for TikTok to differentiate itself from rival social platforms — even as most of them (Facebook included) are signed up to the very same code.

TikTok signing up comes a few months after it added its name to a similar EU initiative aimed at tackling the spread of online disinformation via a series of non-legally binding commitments.

The voluntary codes have proved popular with tech giants, given they lack legal compulsion and provide the opportunity for platforms to project the idea they’re doing something about tricky content issues — without the calibre and efficacy of their action being quantifiable.

The codes have also bought time by staving off actual regulation. But that is now looming. EU lawmakers are, for example, eyeing binding transparency rules for platforms to back up voluntary reports of illegal hate speech removals and make sure users are being properly informed of platform actions.

Commissioners are also consulting on and drafting a broader package of measures with the aim of updating long-standing rules wrapping digital services — including looking specifically at the rules around online liability and defining platform responsibilities vis-a-vis content.

A proposal for the Digital Services Act is slated before the end of the year.

The exact shape of the next-gen EU platform regulation remains to be seen but tighter rules for platform giants is one very real possibility, as lawmakers consult on ex ante regulation of so-called ‘gatekeeper’ platforms.

“Europe’s online marketplaces should be vibrant ecosystems, where start-ups have a real chance to blossom – they shouldn’t be closed shops controlled by a handful of gatekeeper platforms,” said EVP and competition chief Margrethe Vestager, giving a speech in Berlin yesterday. “A list of ‘dos and don’ts’ could prevent conduct that is proven to be harmful to happen in the first place.

“The goal is that all companies, big and small, can compete on their merits on and offline.”

In just one example of the ongoing content moderation challenges faced by platforms, clips of a suicide were reported to be circulating on TikTok this week. Yesterday the company said it was trying to remove the content which it said had been livestreamed on Facebook.

#code-of-conduct-on-countering-illegal-hate-speech, #eu, #europe, #hate-speech, #platform-regulation, #policy, #social, #tiktok

0

Facebook touts beefed up hate speech detection ahead of Myanmar election

Facebook has offered a little detail on extra steps it’s taking to improve its ability to detect and remove hate speech and election disinformation ahead of Myanmar’s election. A general election is scheduled to take place in the country on November 8, 2020.

The announcement comes close to two years after the company admitted a catastrophic failure to prevent its platform from being weaponized to foment division and incite violence against the country’s Rohingya minority.

Facebook says now that it has expanded its misinformation policy with the aim of combating voter suppression and will now remove information “that could lead to voter suppression or damage the integrity of the electoral process” — giving the example of a post that falsely claims a candidate is a Bengali, not a Myanmar citizen, and thus ineligible to stand.

“Working with local partners, between now and November 22, we will remove verifiable misinformation and unverifiable rumors that are assessed as having the potential to suppress the vote or damage the integrity of the electoral process,” it writes.

Facebook says it’s working with three fact-checking organizations in the country — namely: BOOM, AFP Fact Check and Fact Crescendo — after introducing a fact-checking program there in March.

In March 2018 the United Nations warned that Facebook’s platform was being abused to spread hate speech and whip up ethnic violence in Myanmar. By November of that year the tech giant was forced to admit it had not stopped its platform from being repurposed as a tool to drive genocide, after a damning independent investigation slammed its impact on human rights.

On hate speech, which Facebook admits could suppress the vote in addition to leading to what it describes as “imminent, offline harm” (aka violence), the tech giant claims to have invested “significantly” in “proactive detection technologies” that it says help it “catch violating content more quickly”, albeit without quantifying the size of its investment or providing further details. It only notes that it “also” uses AI to “proactively identify hate speech in 45 languages, including Burmese”.

Facebook’s blog post offers a metric to imply progress — with the company stating that in Q2 2020 it took action against 280,000 pieces of content in Myanmar for violations of its Community Standards prohibiting hate speech, of which 97.8% were detected proactively by its systems before the content was reported to it.

“This is up significantly from Q1 2020, when we took action against 51,000 pieces of content for hate speech violations, detecting 83% proactively,” it adds.

However without greater visibility into the content Facebook’s platform is amplifying, including country-specific factors such as whether hate speech posting is increasing in Myanmar as the election gets closer, it’s not possible to understand what volume of hate speech is passing under the radar of Facebook’s detection systems and reaching local eyeballs.

In a more clearly detailed development, Facebook notes that since August, electoral, issue and political ads in Myanmar have had to display a ‘paid for by’ disclosure label. Such ads are also stored in a searchable Ad Library for seven years — in an expansion of the self-styled ‘political ads transparency measures’ Facebook launched more than two years ago in the US and other western markets.

Facebook also says it’s working with two local partners to verify the official national Facebook Pages of political parties in Myanmar. “So far, more than 40 political parties have been given a verified badge,” it writes. “This provides a blue tick on the Facebook Page of a party and makes it easier for users to differentiate a real, official political party page from unofficial pages, which is important during an election campaign period.”

Another recent change it flags is an ‘image context reshare’ product, launched in June, which Facebook says alerts a user when they attempt to share an image that’s more than a year old and could be “potentially harmful or misleading” (such as an image that “may come close to violating Facebook’s guidelines on violent content”).

“Out-of-context images are often used to deceive, confuse and cause harm. With this product, users will be shown a message when they attempt to share specific types of images, including photos that are over a year old and that may come close to violating Facebook’s guidelines on violent content. [The warning] that the image they are about to share could be harmful or misleading will be triggered using a combination of artificial intelligence (AI) and human review,” it writes, without offering any specific examples.

Another change it notes is the application of a limit on message forwarding to five recipients which Facebook introduced in Sri Lanka back in June 2019.

“These limits are a proven method of slowing the spread of viral misinformation that has the potential to cause real world harm. This safety feature is available in Myanmar and, over the course of the next few weeks, we will be making it available to Messenger users worldwide,” it writes.

On coordinated election interference, the tech giant has nothing of substance to share — beyond its customary claim that it’s “constantly working to find and stop coordinated campaigns that seek to manipulate public debate across our apps”, including groups seeking to do so ahead of a major election.

“Since 2018, we’ve identified and disrupted six networks engaging in Coordinated Inauthentic Behavior in Myanmar. These networks of accounts, Pages and Groups were masking their identities to mislead people about who they were and what they were doing by manipulating public discourse and misleading people about the origins of content,” it adds.

In summing up the changes, Facebook says it’s “built a team that is dedicated to Myanmar”, which it notes includes people “who spend significant time on the ground working with civil society partners who are advocating on a range of human and digital rights issues across Myanmar’s diverse, multi-ethnic society” — though clearly this team is not operating out of Myanmar.

It further claims engagement with key regional stakeholders will ensure Facebook’s business is “responsive to local needs” — something the company demonstrably failed on back in 2018.

“We remain committed to advancing the social and economic benefits of Facebook in Myanmar. Although we know that this work will continue beyond November, we acknowledge that Myanmar’s 2020 general election will be an important marker along the journey,” Facebook adds.

There’s no mention in its blog post of accusations that Facebook is actively obstructing an investigation into genocide in Myanmar.

Earlier this month, Time reported that Facebook is using US law to try to block a request from the West African nation of The Gambia for information related to Myanmar military officials’ use of its platforms.

“Facebook said the request is ‘extraordinarily broad’, as well as ‘unduly intrusive or burdensome’. Calling on the U.S. District Court for the District of Columbia to reject the application, the social media giant says The Gambia fails to ‘identify accounts with sufficient specificity’,” Time reported.

“The Gambia was actually quite specific, going so far as to name 17 officials, two military units and dozens of pages and accounts,” it added.

“Facebook also takes issue with the fact that The Gambia is seeking information dating back to 2012, evidently failing to recognize two similar waves of atrocities against Rohingya that year, and that genocidal intent isn’t spontaneous, but builds over time.”

In another recent development, Facebook has been accused of bending its hate speech policies to ignore inflammatory posts made against Rohingya Muslim immigrants by Hindu nationalist individuals and groups.

The Wall Street Journal reported last month that Facebook’s top public-policy executive in India, Ankhi Das, opposed applying its hate speech rules to T. Raja Singh, a member of Indian Prime Minister Narendra Modi’s Hindu nationalist party, along with at least three other Hindu nationalist individuals and groups flagged internally for promoting or participating in violence — citing sourcing from current and former Facebook employees.

#artificial-intelligence, #asia, #election-integrity, #facebook, #hate-speech, #india, #messenger, #myanmar, #narendra-modi, #social, #social-media, #sri-lanka, #united-nations, #voter-suppression

0

Facebook Must Better Police Online Hate, State Attorneys General Say

The call from 20 state officials adds to the rising pressure facing Mark Zuckerberg and his company.

#attorneys-general, #computers-and-the-internet, #cyberharassment, #facebook-inc, #fringe-groups-and-movements, #grewal-gurbir-s, #hate-speech, #new-jersey, #rumors-and-misinformation, #sandberg-sheryl-k, #social-media, #states-us, #zuckerberg-mark-e

0

More Than 1,000 Companies Boycotted Facebook. Did It Work?

Major advertisers on Facebook reduced their spending by millions of dollars in July, but not enough to significantly damage the platform’s revenue.

#advertising-and-marketing, #allengerritsen, #boycotts, #civil-rights-and-liberties, #facebook-inc, #hate-speech, #media, #online-advertising, #small-business, #social-media, #zuckerberg-mark-e

0

Twitter finally bans former KKK leader, David Duke

Twitter has confirmed it has permanently banned the account of David Duke, former leader of white supremacist hate group the Ku Klux Klan.

Duke had operated freely on its platform for years — amassing a following of around 53k and recently tweeting his support for president Trump to be re-elected. Now his @DrDavidDuke account page leads to an ‘account suspension’ notification.

A Twitter spokesperson confirmed to TechCrunch that the ban on Duke is permanent, emailing us this brief statement:

The account you referenced has been permanently suspended for repeated violations of the Twitter Rules on hateful conduct. This enforcement action is in line with our recently-updated guidance on harmful links.

While the move has been welcomed by anti-nazis everywhere, no one is rejoicing at how long it took Twitter to kick the KKK figurehead. The company has long claimed a policy prohibiting hateful conduct on its platform, while simultaneously carrying on a multi-year journey toward actually enforcing its own rules.

Over the years, Twitter’s notorious passivity in acting on policy-defined ‘acceptable behavior’ limits allowed abuse and toxic hate speech to build and bloom essentially unchecked — eventually forcing the company to commit to cleaning up its act to try to stop users from fleeing in horror. (Not a great definition of leadership by anyone’s standards as we pointed out back in 2017.)

Roll on a few more years and Twitter has been slowly shifting up its enforcement gears, with a push in 2018 toward what CEO Jack Dorsey dubbed “conversational health”, and further expansions to its hateful conduct policy. Enforcement has still been patchy and/or chequered, but it appears to have stepped up markedly this year — which kicked off with a ban on a notorious UK right-wing hate preacher.

Twitter’s 2020 enforcement mojo may have a fair bit to do with the pandemic. In March, with concern spiking over COVID-19 misinformation spreading online, Twitter tweaked its rules to zero in on harmful link spreading (aka “malicious URLs” as it calls them), as a step to combat coronavirus scammers.

So it looks like public health risks have finally helped concentrate minds at Twitter HQ around enforcement — and everyone (still) on its platform is better for it.

In recent weeks Twitter has cracked down on the right-wing conspiracy theory group QAnon, banning 7,000 accounts earlier this month. It also finally found a way to respond to US president Trump’s abuse of its platform as a conduit for broadcasting violent threats and trying to stir up a race war (and spread political disinformation) by applying screens and fact-check labels to offending Trump tweets.

The president’s son, Donald Trump Jr, has also had temporary restrictions applied to his account this month after he shared a video which makes false and potentially life-threatening claims about the coronavirus pandemic.

That looks like a deliberate warning shot across Trump’s bows — to say that while Twitter might not be willing to ban the president himself (given his public office), it sure as hell will kick his son into touch if he steps over the line.

Twitter’s policy on link-blocking states the company may take action to limit the spread of links which relate to a number of content categories, including terrorism, violence and hateful conduct, in addition to those pointing to other bad stuff such as malware and spam. The policy further notes: “Accounts dedicated to sharing content which we block, or which attempt to circumvent a block on the sharing of a link, may be subject to additional enforcement action, including suspension.”

Twitter had previously said Duke hadn’t been banned because he’d left the KKK, per the Washington Times. So it looks as if he got the banhammer for essentially being a malicious URL node in slithering human form, by using his account to spread links to content that preached his gospel of hate.

Which makes for a nice silver lining on the pandemic storm cloud.

Much like similar right-wing hate spreaders, Duke also used his Twitter account to bully and harass critics — directing a nazi troll army of Twitter supporters to target individuals with abuse and to try to get their accounts suspended by tricking Twitter’s systems through mass reporting of their tweets.

Safe to say, Duke, like all nazis, won’t be missed.

Also doubtless concentrating minds at Twitter on standing up for its own community standards is the #StopHateForProfit ad boycott that’s been taking place this month, with multiple high-profile advertisers withdrawing spend across major social media platforms in objection to their failure to boot out hate speech.

#coronavirus, #covid-19, #hate-speech, #social, #twitter

0

“Zuck off”: Doctors, nurses, scientists rail against Zuckerberg

Facebook CEO Mark Zuckerberg testifying before Congress in April 2018. (credit: Bloomberg | Getty Images)

San Francisco city officials are considering condemning the decision to name a local public hospital after Mark Zuckerberg—a move backed by nurses and doctors at the hospital, who have been railing against the Facebook co-founder and CEO since the hospital changed its name in 2015.

San Francisco Supervisor Gordon Mar on Tuesday introduced a resolution to the board of supervisors that would condemn the Zuckerberg name. The resolution also urges the city to establish clear rules on naming rights that reflect the city’s “values and a commitment to affirming and upholding human rights, dignity, and social and racial justice.”

Doctors and nurses at the hospital have been campaigning for the hospital to drop the name since it was first introduced in 2015, following a $75 million donation from Zuckerberg and his wife, Priscilla Chan, a pediatrician who used to work at the hospital. Over the years, hospital staff have expressed concern that the hospital is associated with Facebook and all of its problems and controversies—including, but not limited to, those related to privacy, unethical research, the dissemination of misinformation, hate speech, and disinformation.

Read 13 remaining paragraphs | Comments

#chan, #facebook, #hate-speech, #hospital, #mark-zuckerberg, #name, #policy, #san-francisco, #science, #stop-hate-for-profit, #zuckerberg

0

Facebook Said to Consider Banning Political Ads

The social network has been under intense pressure for allowing misinformation and hate speech to spread on its site.

#civil-rights-and-liberties, #corporate-social-responsibility, #facebook-inc, #hate-speech, #political-advertising, #rumors-and-misinformation, #social-media, #zuckerberg-mark-e

0

Reddit bans pro-Trump /r/The_Donald for “rule-breaking content”

Cartoon flying saucers have decapitated an orange robot.

Enlarge (credit: Aurich Lawson / Ars Technica)

Reddit has banned hundreds of subreddits after a major rewrite of its content rules, the site announced on Monday. The newly banned subreddits include /r/The_Donald, a leading forum for fans of the president. Reddit also banned /r/ChapoTrapHouse—a subreddit dedicated to the popular left-wing podcast.

The bans are the latest signs of how much Reddit’s content-moderation policy has evolved. Until 2015, the site hosted openly racist subreddits. But like Twitter and other social media sites, Reddit has adopted increasingly strict policies against hosting hate speech.

The new version of Reddit’s content policies makes Reddit’s opposition to hate speech more overt. “Reddit is a place for creating community and belonging, not for attacking marginalized or vulnerable groups of people,” the company says in the first of its eight new rules.

Read 4 remaining paragraphs | Comments

#hate-speech, #policy, #reddit

0

Facebook will label rule violations as Coke, Pepsi, Starbucks join ad “pause”

A man in a T-shirt looks worried.

Enlarge / Facebook CEO Mark Zuckerberg speaking about Facebook News in New York, Oct. 25, 2019. (credit: Drew Angerer | Getty Images)

Facebook CEO Mark Zuckerberg said the company will change the way it handles rule-breaking speech from high-profile politicians in the future amid an advertising boycott that has drawn participation from large firms across several sectors.

Several nonprofits, including the Anti-Defamation League, the NAACP, and Color of Change, launched the Stop Hate for Profit campaign about two weeks ago. The boycott accuses Facebook of a “long history of allowing racist, violent, and verifiably false content to run rampant on its platform” and asks advertisers to “show they will not support a company that puts profit over safety.”

The boycott drew early support from outdoor apparel retailers Patagonia, The North Face, and REI. By Friday, the movement seemed to hit critical mass as food and personal care behemoth Unilever said it would suspend US ad campaigns on both Facebook and Twitter for the rest of the year. Telecom giant Verizon also said Friday it would suspend Facebook advertising for the time being.

Read 8 remaining paragraphs | Comments

#advertisers, #advertising, #boycotts, #facebook, #hate-speech, #policy, #trump

0

Lawsuit by Black YouTubers against YouTube faces “uphill battle”

Enlarge / Kimberly Newman, the lead plaintiff, in a 2019 YouTube video. (credit: Kimberly Newman / YouTube)

Four Black YouTubers have sued YouTube, arguing that the online platform discriminates against Black content creators like themselves.

YouTube “knowingly, intentionally, and systematically employs artificial intelligence algorithms, computer and machine based filtering and review tools to ‘target’ Plaintiffs and all other persons similarly situated, by using information about their racial identity and viewpoint to restrict access and drive them off YouTube,” the lawsuit states.

In their 103-page lawsuit, the four women detail a variety of challenges they’ve faced over the years as YouTube content creators. On several occasions, YouTube demonetized some of their videos, blocking them from generating revenue via ads. They say these decisions were made based on vague criteria with no meaningful opportunity to appeal. They also fault YouTube for failing to take down—and in some cases even promoting—”hate speech videos targeting the African American community.”

Read 17 remaining paragraphs | Comments

#free-speech, #hate-speech, #policy, #youtube

0

How will EC plans to reboot rules for digital services impact startups?

A framework for ensuring fairness in digital marketplaces and tackling abusive behavior online is brewing in Europe, fed by a smorgasbord of issues and ideas, from online safety and the spread of disinformation, to platform accountability, data portability and the fair functioning of digital markets.

European Commission lawmakers are even turning their eye to labor rights, spurred by regional concern over unfair conditions for platform workers.

On the content side, the core question is how to balance individual freedom of expression online against threats to public discourse, safety and democracy from illegal or junk content that can be deployed cheaply, anonymously and at massive scale to pollute genuine public debate.

The age-old conviction that the cure for bad speech is more speech stumbles in the face of such scale. And while illegal or harmful content can be a money-spinner, the economic incentive of outrage-driven engagement often gets overlooked or edited out of this policy debate.

Certainly the platform giants — whose business models depend on background data-mining of internet users in order to program their content-sorting and behavioral ad-targeting (activity that, notably, remains under regulatory scrutiny in relation to EU data protection law) — prefer to frame what’s at stake as a matter of free speech, rather than bad business models.

But with EU lawmakers opening a wide-ranging consultation about the future of digital regulation, there’s a chance for broader perspectives on platform power to shape the next decades online, and much more besides.

In search of cutting-edge standards

For the past two decades, the EU’s legal framework for regulating digital services has been the e-commerce Directive — a cornerstone law that harmonizes basic principles and bakes in liabilities exemptions, greasing the groove of cross-border e-commerce.

In recent years, the Commission has supplemented this by applying pressure on big platforms to self-regulate certain types of content, via a voluntary Code of Conduct on illegal hate speech takedowns — and another on disinformation. However, the codes lack legal bite and lawmakers continue to chastise platforms for not doing enough nor being transparent enough about what they are doing.

#artificial-intelligence, #competition-law, #competition-reform, #digital-services-act, #disinformation, #ecommerce, #ecommerce-directive, #election-interference, #europe, #extra-crunch, #freedom-of-expression, #government, #hate-speech, #market-analysis, #online-harms, #platform-regulation, #platforms, #policy, #startups, #tc

0

On illegal hate speech, EU lawmakers eye binding transparency for platforms

It’s more than four years since major tech platforms signed up to a voluntary pan-EU Code of Conduct on illegal hate speech removals. Yesterday the European Commission’s latest assessment of the non-legally binding agreement lauds “overall positive” results — with 90% of flagged content assessed within 24 hours and 71% of the content deemed to be illegal hate speech removed. The latter is up from just 28% in 2016.

However, the report card finds platforms are still lacking in transparency. Nor are they providing users with adequate feedback on the issue of hate speech removals, in the Commission’s view.

Platforms responded and gave feedback to 67.1% of the notifications received, per the report card — up from 65.4% in the previous monitoring exercise. Only Facebook informs users systematically — with the Commission noting: “All the other platforms have to make improvements.”

In another criticism, its assessment of platforms’ performance in dealing with hate speech reports found inconsistencies in their evaluation processes — with “separate and comparable” assessments of flagged content that were carried out over different time periods showing “divergences” in how they were handled.

Signatories to the EU online hate speech code are: Dailymotion, Facebook, Google+, Instagram, Jeuxvideo.com, Microsoft, Snapchat, Twitter and YouTube.

This is now the fifth biannual evaluation of the code. It may not yet be the final assessment, but EU lawmakers’ eyes are firmly turned toward a wider legislative process — with commissioners now busy consulting on and drafting a package of measures to update the laws wrapping digital services.

A draft of this Digital Services Act is slated to land by the end of the year, with commissioners signalling they will update the rules around online liability and seek to define platform responsibilities vis-a-vis content.

Unsurprisingly, then, the hate speech code is now being talked about as feeding that wider legislative process — while the self-regulatory effort looks to be reaching the end of the road. 

The code’s signatories are also clearly no longer a comprehensive representation of the swathe of platforms in play these days. There’s no WhatsApp, for example, nor TikTok (which did just sign up to a separate EU Code of Practice targeted at disinformation). But that hardly matters if legal limits on illegal content online are being drafted — and likely to apply across the board. 

Commenting in a statement, Věra Jourová, Commission VP for values and transparency, said: “The Code of conduct remains a success story when it comes to countering illegal hate speech online. It offered urgent improvements while fully respecting fundamental rights. It created valuable partnerships between civil society organisations, national authorities and the IT platforms. Now the time is ripe to ensure that all platforms have the same obligations across the entire Single Market and clarify in legislation the platforms’ responsibilities to make users safer online. What is illegal offline remains illegal online.”

In another supporting statement, Didier Reynders, commissioner for Justice, added: “The forthcoming Digital Services Act will make a difference. It will create a European framework for digital services, and complement existing EU actions to curb illegal hate speech online. The Commission will also look into taking binding transparency measures for platforms to clarify how they deal with illegal hate speech on their platforms.”

Earlier this month, at a briefing discussing Commission efforts to tackle online disinformation, Jourová suggested lawmakers are ready to set down some hard legal limits online where illegal content is concerned, telling journalists: “In the Digital Services Act you will see the regulatory action very probably against illegal content — because what’s illegal offline must be clearly illegal online and the platforms have to proactively work in this direction.” Disinformation would not likely get the same treatment, she suggested.

The Commission has now further signalled it will consider ways to prompt all platforms that deal with illegal hate speech to set up “effective notice-and-action systems”.

In addition, it says it will continue — this year and next — to work on facilitating the dialogue between platforms and civil society organisations that are focused on tackling illegal hate speech, saying that it especially wants to foster “engagement with content moderation teams, and mutual understanding on local legal specificities of hate speech”.

In its own report last year assessing the code of conduct, the Commission concluded that it had contributed to achieving “quick progress”, particularly on the “swift review and removal of hate speech content”.

It also suggested the effort had “increased trust and cooperation between IT Companies, civil society organisations and Member States authorities in the form of a structured process of mutual learning and exchange of knowledge” — noting that platforms reported “a considerable extension of their network of ‘trusted flaggers’ in Europe since 2016.”

“Transparency and feedback are also important to ensure that users can appeal a decision taken regarding content they posted as well as being a safeguard to protect their right to free speech,” the Commission report also notes, specifying that Facebook reported having received 1.1 million appeals related to content actioned for hate speech between January 2019 and March 2019, and that 130,000 pieces of content were restored “after a reassessment”.

On volumes of hate speech, the Commission suggested that notices on hate speech content are roughly in the range of 17-30% of total content, noting for example that Facebook reported having removed 3.3M pieces of content for violating hate speech policies in the last quarter of 2018 and 4M in the first quarter of 2019.

“The ecosystems of hate speech online and magnitude of the phenomenon in Europe remains an area where more research and data are needed,” the report added.

#censorship, #digital-services-act, #europe, #european-commission, #freedom-of-expression, #hate-speech, #platforms, #social

0

North Face, Patagonia, REI join Facebook advertiser boycott

Enlarge / The participating businesses have decided the best way to show “dislike” is to withhold money. (credit: GreyParrot | Getty Images)

Several brands have agreed to suspend advertising on Facebook for the month of July and are calling for other companies to join them in boycotting the platform to protest its handling of racism and hate speech.

Patagonia, producer of high-end outerwear, on Sunday became the most recent company to say it was pulling all advertising from Facebook and Instagram for the time being. “For too long, Facebook has failed to take sufficient steps to stop the spread of hateful lies and dangerous propaganda on its platform,” company head of marketing Cory Bayers said in a written statement. “The stakes are too high to sit back and let the company continue to be complicit in spreading disinformation and fomenting fear and hatred.”

Patagonia followed employee placement firm Upwork and outdoor wear competitors REI and The North Face, which all confirmed on Friday they would join the boycott. North Face parent company VF Corp also told CNN its other apparel brands, including Dickies, Vans, and Timberland, were considering joining the protest.

Read 7 remaining paragraphs | Comments

#advertising, #anti-defamation-league, #biz-it, #black-lives-matter, #defamation, #facebook, #hate-speech, #naacp, #north-face, #patagonia, #policy, #politics, #stop-hate-for-profit

0

Germany tightens online hate speech rules to make platforms send reports straight to the feds

While a French online hate speech law has just been derailed by the country’s top constitutional authority on freedom of expression grounds, Germany is beefing up hate speech rules — passing a provision that will require platforms to send suspected criminal content directly to the Federal police at the point it’s reported by a user.

The move is part of a wider push by the German government to tackle a rise in right wing extremism and hate crime — which it links to the spread of hate speech online.

Germany’s existing Network Enforcement Act (aka the NetzDG law) came into force in the country in 2017, putting an obligation on social network platforms to remove hate speech within set deadlines as tight as 24 hours for easy cases — with fines of up to €50M should they fail to comply.

Yesterday the parliament passed a reform which extends NetzDG by placing a reporting obligation on platforms which requires them to report certain types of “criminal content” to the Federal Criminal Police Office.

A wider reform of the NetzDG law is ongoing in parallel. It is intended to bolster user rights and transparency, including by simplifying user notifications and making it easier for people to challenge content removals and have successfully appealed content restored, among other tweaks. Broader transparency reporting requirements are also looming for platforms.

The NetzDG law has always been controversial, with critics warning from the get go that it would lead to restrictions on freedom of expression by incentivizing platforms to remove content rather than risk a fine. (Aka, the risk of ‘overblocking’.) In 2018 Human Rights Watch dubbed it a flawed law — critiquing it for being “vague, overbroad, and turn[ing] private companies into overzealous censors to avoid steep fines, leaving users with no judicial oversight or right to appeal”.

The latest change to hate speech rules is no less controversial: Now the concern is that social media giants are being co-opted to help the state build massive databases on citizens without robust legal justification.

A number of amendments to the latest legal reform were rejected, including one tabled by the Greens which would have prevented the personal data of the authors of reported social media posts from being automatically sent to the police.

The political party is concerned about the risk of the new reporting obligation being abused — resulting in data on citizens who have not in fact posted any criminal content ending up with the police.

It also argues there are only weak notification requirements to inform authors of flagged posts that their data has been passed to the police, among sundry other criticisms.

The party had proposed that only the post’s content would be transmitted directly to police who would have been able to request associated personal data from the platform should there be a genuine need to investigate a particular piece of content.

The German government’s reform of hate speech law follows the 2019 murder of a pro-refugee politician, Walter Lübcke, by neo-Nazis — which it said was preceded by targeted threats and hate speech online.

Earlier this month police staged raids on 40 hate speech suspects across a number of states who are accused of posting “criminally relevant comments” about Lübcke, per national media.

The government also argues that hate speech online has a chilling effect on free speech and a deleterious impact on democracy by intimidating those it targets — meaning they’re unable to freely express themselves or participate without fear in society.

At the pan-EU level, the European Commission has been pressing platforms to improve their reporting around hate speech takedowns for a number of years, after tech firms signed up to a voluntary EU Code of Conduct on hate speech.

It is also now consulting on wider changes to platform rules and governance — under a forthcoming Digital Services Act which will consider how much liability tech giants should face for content they’re fencing.

#censorship, #europe, #freedom-of-speech, #germany, #hate-crime, #hate-speech, #law, #netzdg, #social, #tc

0

French constitutional authority rejects law forcing online platforms to delete hate-speech content

France’s new regulation of hate speech on online platforms has been largely deemed unconstitutional by the country’s Constitutional Council, the top authority in charge of ruling on whether a new law complies with the constitution. It won’t come into effect in the coming weeks as expected.

As a reminder, the original law said that online platforms should remove illicit content that has been flagged within 24 hours. Otherwise, companies would have had to pay hefty fines every time they infringed the law — potentially costing social media companies many millions of dollars per year.

Illicit content means anything that would be considered an offense or a crime in the offline world, such as death threats, discrimination, Holocaust denial… The list of illicit content consists of what is more broadly called hate speech.

But the Constitutional Council says that such a technical list makes it difficult to rule what is illicit content and what is not. Given the short window of time, online platforms can’t check with a court whether a tweet, a post, a photo or a blog post is illicit or not.

When you combine that with potential fines, the Constitutional Council fears that online platforms will censor content a bit too quickly.

“Given the difficulties to rule whether flagged content is evidently illicit, the incurred penalty with the first violation […], the impugned provisions can only encourage online platform providers to remove content that is flagged, whether it is evidently illicit or not,” the Constitutional Council writes.

As you may have guessed, over-censoring content infringes freedom of speech.

For the most extreme categories, terrorist content and child pornography, online platforms would have had to react within an hour under the law voted through a couple of months ago. The Constitutional Council uses the same wording to rule that online platforms would have to over-censor, as they can’t base their moderation efforts on court rulings. Free speech again.

Given that the rest of the law is based on those two processes, it is no longer relevant. It included transparency requirements for moderation processes and an appeal mechanism.

“The goal was to reach a difficult compromise between preserving free speech and enforcing the rule of law online. The government acknowledges the decision of the Constitutional Council — it believes the legal text in question can’t reach this objective in its current form,” the government said in a statement.

The government and deputy Laetitia Avia say they’ll now base further work on today’s ruling. In other words, they’re going back to the drawing board.

Germany has already passed similar regulation and there are ongoing discussions at the European Union level.

#europe, #hate-speech, #laetitia-avia, #policy

0

Facebook pulls Trump campaign ads for featuring Nazi-associated image

A Facebook logo and

Enlarge / Thumbs down. (credit: Getty Images | Ted Soqui )

Facebook this afternoon removed from its platform a series of campaign ads for President Donald Trump, citing policy against hate speech—a takedown that landed right in the middle of a hearing where a Facebook official was being grilled by Congress about the site’s failures to act on hate speech originating from the White House.

The ad campaign, paid for by Trump’s re-election committee, ran on the official Facebook pages for Trump, Vice President Mike Pence, and the “Team Trump” campaign. The text of the advertisements reads, “Dangerous MOBS of far-left groups are running through our streets and causing absolute mayhem,” and encourages viewers to sign up for communications: “Please add your name IMMEDIATELY to stand with your President and his decision to declare ANTIFA a Terrorist Organization.”

The ad campaign has been running since June 3, according to left-leaning watchdog group Media Matters. As of yesterday, however, a new version of the ad started running on Facebook, featuring an inverted red triangle with a black outline. As immediately pointed out by several anti-defamation groups and media outlets, that symbol was used by the Nazi party to identify political prisoners in concentration camps. That category first included communists, then also social democrats, socialists, anarchists, trade unionists, Freemasons, and other perceived threats.

Read 12 remaining paragraphs | Comments

#antisemitism, #congress, #facebook, #hate-speech, #hearings, #nazis, #policy, #trump, #uncategorized

0

France passes law forcing online platforms to delete hate speech content within 24 hours

France’s lower chamber of the parliament has voted in favor of a controversial law against hate speech on social networks and online platforms. As I described last year, online platforms will have to remove illicit content that has been flagged within 24 hours. Otherwise, companies will have to pay hefty fines every time they infringe the law.

What do they mean by illicit content? Essentially, anything that would be considered an offense or a crime in the offline world is now considered illicit content when it appears on an online platform. Among other things, you could think of death threats, discrimination, Holocaust denial…

For the most extreme categories, terrorist content and child pornography, online platforms have to react within an hour.

While online hate speech has been getting out of control, many fear that online platforms will censor content a bit too quickly. Companies don’t want to risk a fine so they might delete content that doesn’t infringe the law just because they’re not sure.

Essentially, online platforms have to regulate themselves. The government then checks whether they’re doing a good job or not. “It’s just like banking regulators. They check that banks have implemented systems that are efficient, and they audit those systems. I think that’s how we should think about it,” France’s digital minister Cédric O told me in an interview last year.

There are multiple levels of fines. It starts at hundreds of thousands of euros but can reach up to 4% of the company’s global annual revenue in severe cases. The Superior Council of the Audiovisual (CSA) is the regulator in charge of those cases.

Germany has already passed similar regulation and there are ongoing discussions at the European Union level.

#europe, #france, #france-newsletter, #hate-speech, #policy

0

Facebook settles moderator suit for $52M as hate speech on site increases

People work at computers in an open office.

Enlarge / Content moderators work at a Facebook office in Austin, Texas. (credit: Ilana Panich-Linsman | The Washington Post | Getty Images)

Many jobs can cause employee burnout, but the effect of having to deal with the absolute worst cruelty humanity has to offer for 40 hours a week can go well beyond burnout and leave employees with serious mental health traumas. Facebook has now settled with a group of content moderators who sued the tech behemoth, alleging their jobs left them with severe post-traumatic stress disorder the company did nothing to mitigate or prevent.

The company will pay $52 million to settle the suit, first filed in 2018 by a content moderator named Selena Scola. Scola’s suit alleged that she developed “debilitating” PTSD after having to watch “thousands of acts of extreme and graphic violence.”

The conditions under which Facebook moderators often work have been extensively reported on by The Guardian, The Verge (more than once), The Washington Post, and BuzzFeed News, among others. Moderators, who mostly work for third-party contract firms, described to reporters hours spent looking at graphic murders, animal cruelty, sexual abuse, child abuse, and other horrifying footage, while being provided with little to no managerial or mental health support and hard-to-meet quotas under shifting guidelines.

Read 8 remaining paragraphs | Comments

#content-moderation, #facebook, #hate-speech, #policy, #ptsd, #settlements

0

Facebook upgrades its AI to better tackle COVID-19 misinformation and hate speech

Facebook’s AI tools are the only thing standing between its users and the growing onslaught of hate and misinformation the platform is experiencing. The company’s researchers have cooked up a few new capabilities for the systems that keep the adversary at bay, identifying COVID-19-related misinformation and hateful speech disguised as memes.

Detecting and removing misinformation relating to the virus is obviously a priority right now, as Facebook and other social media become breeding grounds not just for ordinary speculation and discussion, but malicious interference by organized campaigns aiming to sow discord and spread pseudoscience.

“We have seen a huge change in behavior across the site because of COVID-19, a huge increase in misinformation that we consider dangerous,” said Facebook CTO Mike Schroepfer in a call with press earlier today.

The company contracts with dozens of fact-checking organizations around the world, but — leaving aside the question of how effective the collaborations really are — misinformation has a way of quickly mutating, making taking down even a single image or link a complex affair.

Take the three example images Facebook shared, for instance. In some ways they’re nearly identical, with the same background image, colors, typeface, and so on. But the second one is slightly different — it’s the kind of thing you might see when someone takes a screenshot and shares that instead of the original. The third is visually the same but the words have the opposite meaning.

An unsophisticated computer vision algorithm would either rate these as completely different images due to those small changes (they result in different hashes) or all the same due to overwhelming visual similarity. Of course we see the differences right away, but training an algorithm to do that reliably is very difficult. And the way things spread on Facebook, you might end up with thousands of variations rather than a handful.

“What we want to be able to do is detect those things as being identical because they are, to a person, the same thing,” said Schroepfer. “Our previous systems were very accurate, but they were very fragile and brittle to even very small changes. If you change a small number of pixels, we were too nervous that it was different, and so we would mark it as different and not take it down. What we did here over the last two and a half years is build a neural net based similarity detector that allowed us to better catch a wider variety of these variants again at very high accuracy.”

Fortunately, analyzing images at those scales is a specialty of Facebook’s. The infrastructure is there for comparing photos and searching for features like faces and less desirable things; it just needed to be taught what to look for. The result — from years of work, it should be said — is SimSearchNet, a system dedicated to finding and analyzing near-duplicates of a given image by close inspection of their most salient features (which may not be at all what you or I would notice).
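To make the contrast concrete, here is a minimal sketch in Python — not Facebook’s actual SimSearchNet, and with random vectors standing in for the output of a learned image encoder — of why exact hashes are brittle for near-duplicate detection while similarity over embeddings tolerates small edits:

import hashlib
import numpy as np

def exact_fingerprint(image_bytes):
    # Cryptographic hash: change a single byte (or pixel) and the digest is
    # completely different, so screenshots and recolored copies stop matching.
    return hashlib.sha256(image_bytes).hexdigest()

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_near_duplicate(emb_a, emb_b, threshold=0.95):
    # In a real system the embeddings would come from a trained image encoder;
    # near-duplicates land close together in that embedding space.
    return cosine_similarity(emb_a, emb_b) >= threshold

print(exact_fingerprint(b"original image") == exact_fingerprint(b"original image."))  # False: one byte changed

rng = np.random.default_rng(0)
original = rng.normal(size=512)                            # stand-in for encode(original_image)
screenshot = original + rng.normal(scale=0.01, size=512)   # tiny pixel-level perturbation
print(is_near_duplicate(original, screenshot))             # True: the embedding barely moves

A production system would run that kind of comparison as an indexed nearest-neighbour lookup rather than pairwise, but the thresholded-similarity idea is the same.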

SimSearchNet is currently inspecting every image uploaded to Instagram and Facebook — billions a day.

The system is also monitoring Facebook Marketplace, where people trying to skirt the rules will upload the same image of an item for sale (say, an N95 face mask) but slightly edited to avoid being flagged by the system as not allowed. With the new system, the similarities between recolored or otherwise edited photos are noted and the sale stopped.

Hateful memes and ambiguous skunks

Another issue Facebook has been dealing with is hate speech — and its more loosely defined sibling hateful speech. One area that has proven especially difficult for automated systems, however, is memes.

The problem is that the meaning of these posts often results from an interplay between the image and the text. Words that would be perfectly appropriate or ambiguous on their own have their meaning clarified by the image on which they appear. Not only that, but there’s an endless number of variations in images or phrasings that can subtly change (or not change) the resulting meaning.

To be clear, the examples Facebook shared are toned-down “mean memes,” not the kind of truly hateful ones often found on Facebook.

Each individual piece of the puzzle is fine in some contexts, insulting in others. How can a machine learning system learn to tell what’s good and what’s bad? This “multimodal hate speech” is a non-trivial problem because of the way AI works. We’ve built systems to understand language, and to classify images, but how those two things relate is not so simple a problem.

The Facebook researchers note that there is “surprisingly little” research on the topic, so theirs is more an exploratory mission than a solution. The technique they arrived at had several steps. First, they had humans annotate a large collection of meme-type images as hateful or not, creating the Hateful Memes dataset. Next, a machine learning system was trained on this data, but with a crucial difference from existing ones.

Almost all such image analysis algorithms, when presented with text and an image at the same time, will classify the one, then the other, then attempt to relate the two together. But that has the aforementioned weakness that, independent of context, the text and images of hateful memes may be totally benign.

Facebook’s system combines the information from text and image earlier in the pipeline, in what it calls “early fusion” to differentiate it from the traditional “late fusion” approach. This is more akin to how people do it — looking at all the components of a piece of media before evaluating its meaning or tone.
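As an illustration of the difference — a hedged sketch in PyTorch rather than Facebook’s actual architecture, with the image and text feature extractors assumed to exist upstream — an early-fusion classifier concatenates the two modalities before any decision is made, while a late-fusion one scores them separately and only then combines the scores:

import torch
import torch.nn as nn

class LateFusion(nn.Module):
    # Score each modality on its own, then add the scores: interactions between
    # what the image shows and what the text says are lost.
    def __init__(self, img_dim=512, txt_dim=300):
        super().__init__()
        self.img_head = nn.Linear(img_dim, 1)
        self.txt_head = nn.Linear(txt_dim, 1)
    def forward(self, img_feat, txt_feat):
        return torch.sigmoid(self.img_head(img_feat) + self.txt_head(txt_feat))

class EarlyFusion(nn.Module):
    # Concatenate image and text features before classification, so the model can
    # learn that an otherwise benign phrase becomes hateful on a particular image.
    def __init__(self, img_dim=512, txt_dim=300, hidden=256):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Linear(img_dim + txt_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )
    def forward(self, img_feat, txt_feat):
        fused = torch.cat([img_feat, txt_feat], dim=-1)
        return torch.sigmoid(self.classifier(fused))

img_feat = torch.randn(8, 512)   # placeholder image embeddings
txt_feat = torch.randn(8, 300)   # placeholder text embeddings
print(EarlyFusion()(img_feat, txt_feat).shape)  # torch.Size([8, 1]) hateful-meme probabilities

The late-fusion model can never learn that an individually harmless caption is only hateful in combination with a particular image, which is precisely the interplay these memes exploit.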

Right now the resultant algorithms aren’t ready for deployment at large — at around 65-70 percent overall accuracy, though Schroepfer cautioned that the team uses “the hardest of the hard problems” to evaluate efficacy. Some multimodal hate speech will be trivial to flag as such, while some is difficult even for humans to gauge.

To help advance the art, Facebook is running a “Hateful Memes Challenge” as part of the NeurIPS AI conference later this year; this is commonly done with difficult machine learning tasks, as new problems like this one are like catnip for researchers.

AI’s changing role in Facebook policy

Facebook announced its plans to rely on AI more heavily for moderation in the early days of the COVID-19 crisis. In a press call in March, Mark Zuckerberg said that the company expected more “false positives”—instances of content flagged when it shouldn’t be—with the company’s fleet of 15,000 moderation contractors at home with paid leave.

YouTube and Twitter also shifted more of their content moderation to AI around the same time, issuing similar warnings about how an increased reliance on automated moderation might lead to content that doesn’t actually break any platform rules being flagged mistakenly.

In spite of its AI efforts, Facebook has been eager to get its human content reviewers back in the office. In mid-April, Zuckerberg gave a timeline for when employees could be expected to get back to the office, noting that content reviewers were high on Facebook’s list of “critical employees” marked for the earliest return.

While Facebook warned that its AI systems might remove content too aggressively, hate speech, violent threats and misinformation continue to proliferate on the platform as the coronavirus crisis stretches on. Facebook most recently came under fire for disseminating a viral video discouraging people from wearing face masks or seeking vaccines once they are available— a clear violation of the platform’s rules against health misinformation.

The video, an excerpt from a forthcoming pseudo-documentary called “Plandemic,” initially took off on YouTube, but researchers found that Facebook’s thriving ecosystem of conspiracist groups shared it far and wide on the platform, injecting it into mainstream online discourse. The 26-minute-long video, peppered with conspiracies, is also a perfect example of the kind of content an algorithm would have a difficult time making sense of.

On Tuesday, Facebook also released a community standards enforcement report detailing its moderation efforts across categories like terrorism, harassment and hate speech. While the results only cover about a one-month span during the pandemic, we can expect to see more of the impact of Facebook’s shift to AI moderation next time around.

In a call about the company’s moderation efforts, Zuckerberg noted that the pandemic has made “the human review part” of its moderation much harder, as concerns around protecting user privacy and worker mental health make remote work a challenge for reviewers, but one the company is navigating now. Facebook confirmed to TechCrunch that the company is now allowing a small portion of full-time content reviewers back into the office on a volunteer basis and according to Facebook Vice President of Integrity Guy Rosen, “the majority” of its contract content reviewers can now work from home. “The humans are going to continue to be a really important part of the equation,” Rosen said.

#artificial-intelligence, #coronavirus, #covid-19, #facebook, #hate-speech, #science, #social, #tc

0

Tumblr now removes reblogs in violation of its hate speech policy, not just the original posts

Tumblr is making a change to how it deals with hate speech on its blogging platform. The company announced today it will also remove reblogs (repostings) of content from any blogs that were suspended for violating its policies around hate speech. Already, the company says it’s identified nearly 1,000 blogs that were banned for blatant violations of its hate speech rules. Most of these blogs contained Nazi-related content, it said. This week, Tumblr began to remove all the reblogs of posts from these previously banned blogs as well — a number totaling 4.47M individual posts.

In an announcement, Tumblr explains its reasoning behind the decision to also remove the reblogged material:

We’ve listened to your feedback and have reassessed how we can more effectively remove hateful content from Tumblr. In our own research, and from your helpful reports, we found that much of the existing hate speech stemmed from blogs that have actually already been terminated. While their original posts were deleted upon blog termination, the content of those posts still lived on in reblogs. Those reblogs rarely contained the kind of counter-speech that serves to keep hateful rhetoric in check, so we’re changing how we deal with them.

In other words, it saw no value in allowing the hate speech to live on in this reposted state, as the majority of the reblogs weren’t engaged in providing what Tumblr referred to as “educational” or “necessary counter-arguments” to the hate speech.

When asked if it did, in fact, remove reblogs of an educational nature, Tumblr said it used human moderators to determine which content was in violation and which was not. Any blogs containing “productive counter-conversations” or “educational blogs” were not removed as part of this process, we’re told.

In addition, Tumblr says that moving forward, it will evaluate all blogs suspended for hate speech and consider mass reblog deletion when appropriate.
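For a rough sense of the mechanics, here is a hedged sketch — a hypothetical data model, not Tumblr’s actual schema — of how reblogs of content from terminated blogs could be swept up while sparing anything human reviewers have flagged as counter-speech or educational:

from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    blog: str           # blog that published this post or reblog
    root_blog: str      # blog behind the original post being reblogged
    is_reblog: bool

def reblogs_to_remove(posts, suspended_blogs, counter_speech_allowlist):
    removals = []
    for post in posts:
        if not post.is_reblog:
            continue  # originals were already deleted when the root blog was terminated
        if post.root_blog not in suspended_blogs:
            continue
        if post.post_id in counter_speech_allowlist:
            continue  # human reviewers keep educational / counter-speech reblogs
        removals.append(post.post_id)
    return removals

# Example: one reblog of a banned blog's post is removed, the allowlisted one stays.
posts = [
    Post("p1", "blog_a", "banned_blog", True),
    Post("p2", "blog_b", "banned_blog", True),
    Post("p3", "blog_c", "normal_blog", True),
]
print(reblogs_to_remove(posts, {"banned_blog"}, {"p2"}))  # ['p1']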

The company consulted with outside experts — including the Center for Democracy and Technology, the Brennan Center for Justice, the EFF, and Stanford tech policy academic Daphne Keller — to determine the right course of action. Ultimately, Tumblr believes the new approach is aligned with the recommended best practices around hate speech it has been advised to adopt.

“We are, and will always remain, steadfast believers in free speech. Tumblr is a place where you can be yourself and express your opinions. Hate speech is not conducive to that,” the company’s announcement read. “When hate speech goes unchecked, it eventually silences the voices that add kindness and value to our society. That’s not the kind of Tumblr any of us want.”

Tumblr also noted that the decisions it’s making aren’t being left up to AI and algorithms. Instead, Tumblr asks users to flag hate speech they come across for review by Tumblr’s Trust & Safety team.

As expected, there’s a debate about the policy taking place in the comments of the Tumblr post about the changes. On one side are those who support the idea of companies enforcing policies around the sort of content they do not want to host. On the other are free speech advocates who see any such policy as a form of censorship.

The effort to take more action on hate speech follows Tumblr’s 2018 decision to ban porn from its platform after getting kicked out of Apple’s App Store for hosting the content. Similarly, hosting hate speech reblogs could cause problems with Apple’s own rules.

Tumblr has made few changes since its acquisition by WordPress owner Automattic from (TechCrunch parent) Verizon in 2019. But its earlier decisions to clean up its site have had a negative impact on its traffic.

Its significantly devalued price point at the time of the Automattic deal was attributed to its decision to remove NSFW content. Almost every meaningful metric was down year-over-year since the ban, including total visitors, uniques, average site visit, traffic, daily active users, and more. Meanwhile, the younger demographic who used to populate Tumblr in the millions have largely moved on to expressive, video-centric social platforms, like TikTok and Twitch.

Tumblr’s Community Guidelines haven’t been updated to include its decision to remove reblogs of hate speech, but its full hate speech rules can be viewed here. 

#ban, #blogs, #content-policy, #hate-speech, #social, #tumblr

0