My Twitter Pullback Is About More Than Elon Musk

Social media is full of hate speech, bots, vitriol, attack armies, screamers and people who live for the opportunity to be angry.

#black-people, #computers-and-the-internet, #cyberharassment, #hate-speech, #musk-elon, #social-media, #twitter

Antisemitism Increased Under Trump. Then It Got Even Worse.

The worldwide growth of a social crisis.

#anti-defamation-league, #anti-semitism, #fringe-groups-and-movements, #hate-crimes, #hate-speech, #jews-and-judaism, #neo-nazi-groups, #right-wing-extremism-and-alt-right, #zionism

How War in Ukraine Roiled Facebook and Instagram

The rules over what war content is permitted on Facebook and Instagram keep changing, causing internal confusion.

#clegg-nick, #computers-and-the-internet, #facebook-inc, #hate-speech, #instagram-inc, #meta-platforms-inc, #propaganda, #rumors-and-misinformation, #russia, #russian-invasion-of-ukraine-2022, #social-media, #ukraine, #war-and-armed-conflicts, #zuckerberg-mark-e

Posting “death to the Russian invaders” on Facebook now OK in some countries

Smoke rises from a Russian tank destroyed by Ukrainian forces in the Luhansk region on February 26, 2022. (Photo by Anatolii Stepanov / AFP via Getty Images)


As Russia’s invasion of Ukraine grinds on, Meta is temporarily changing its policies to allow users on Facebook and Instagram to post calls for violence against—and even the deaths of—Russian soldiers and political figures, including Russian President Vladimir Putin and Belarusian President Alexander Lukashenko.

“As a result of the Russian invasion of Ukraine, we have temporarily made allowances for forms of political expression that would normally violate our rules like violent speech such as ‘death to Russian invaders,’” Meta spokesperson Andy Stone said on Twitter. “We still won’t allow credible calls for violence against Russian civilians.”

The temporary policy exception was recently sent to Facebook and Instagram moderators, and emails detailing the change were revealed by Reuters. The exceptions mark the social media company’s latest attempt to adapt to the shifting geopolitical situation.


#facebook, #hate-speech, #instagram, #meta, #policy, #russian-invasion-of-ukraine, #social-media, #violence

Yogi Adityanath’s Election Win Raises His Profile Across India

Yogi Adityanath’s return as chief minister of Uttar Pradesh is fueling talk that he might succeed Narendra Modi as prime minister one day, and continue to advance their Hindu political movement.

#adityanath-yogi, #bharatiya-janata-party, #elections, #hate-speech, #hinduism, #modi-narendra, #muslims-and-islam, #politics-and-government, #rashtriya-swayamsevak-sangh-india, #uttar-pradesh-state-india

As Officials Look Away, Hate Speech in India Nears Dangerous Levels

Activists and analysts say calls for anti-Muslim violence — even genocide — are moving from the fringes to the mainstream, while political leaders keep silent.

#adityanath-yogi, #discrimination, #fringe-groups-and-movements, #hate-speech, #hinduism, #muslims-and-islam, #politics-and-government, #yati-narsinghanand

Employees pleaded with Facebook to stop letting politicians bend rules

One hundred cardboard cutouts of Facebook founder and CEO Mark Zuckerberg stand outside the US Capitol in Washington, DC, April 10, 2018. (credit: Saul Loeb | Getty Images)

Facebook’s senior executives interfered to allow US politicians and celebrities to post whatever they wanted on its social network despite pleas from employees to stop, leaked internal documents suggest.

Employees claim in the documents that while Facebook has long insisted that it is politically neutral, it allowed rightwing figures to break rules designed to curb misinformation and harmful content, after being stung by accusations of bias from conservatives.

In September 2020, just ahead of the US presidential election, the author of an internal memo wrote that “director-level employees” had “written internally that they would prefer to formally exclude political considerations from the decision-making process.”


#facebook, #hate-speech, #privacy, #tech, #zuckerberg

Facebook Debates What to Do With Its Like and Share Buttons

Likes and shares made the social media site what it is. Now, company documents show, it’s struggling to deal with their effects.

#anxiety-and-stress, #computers-and-the-internet, #conspiracy-theories, #corporate-social-responsibility, #facebook-inc, #friendship, #hate-speech, #haugen-frances, #instagram-inc, #mobile-applications, #online-advertising, #research, #rumors-and-misinformation, #social-media, #whistle-blowers, #zuckerberg-mark-e

In India, Facebook Struggles to Combat Misinformation and Hate Speech

Internal documents show a struggle with misinformation, hate speech and celebrations of violence in the country, the company’s biggest market.

#artificial-intelligence, #computers-and-the-internet, #engineering-and-engineers, #facebook-inc, #fringe-groups-and-movements, #hate-speech, #haugen-frances, #india, #muslims-and-islam, #pakistan, #politics-and-government, #rumors-and-misinformation, #social-media, #whistle-blowers

Gadsby and Netflix Employees Pressure Executive Over Dave Chappelle Special

In a town-hall-style meeting on Friday, Ted Sarandos, a co-chief executive of Netflix, faced criticism from staff over Mr. Chappelle’s “The Closer.”

#chappelle-dave, #discrimination, #gadsby-hannah, #gay-and-lesbian-alliance-against-defamation, #hastings-reed, #hate-speech, #instagram-inc, #netflix-inc, #news-and-news-media, #sarandos-ted, #television, #transgender-and-transsexuals

Facebook Comments Can Get Media Firms Sued in Australia

Australia’s top court has said media companies can be held liable for replies to their posts, prompting some to step back from the platform.

#australia, #censorship, #computers-and-the-internet, #dylan-voller, #facebook-inc, #freedom-of-speech-and-expression, #freedom-of-the-press, #hate-speech, #human-rights-and-human-rights-violations, #libel-and-slander, #media, #news-and-news-media, #regulation-and-deregulation-of-industry, #social-media, #suits-and-litigation-civil

An Experiment to Stop Online Abuse Falls Short in Germany

Despite having one of the world’s toughest laws against online hate speech and harassment, Germany has struggled to contain toxic content ahead of its Sept. 26 election.

#computers-and-the-internet, #cyberharassment, #elections, #facebook-inc, #freedom-of-speech-and-expression, #fringe-groups-and-movements, #germany, #hate-speech, #legislatures-and-parliaments, #politics-and-government, #regulation-and-deregulation-of-industry, #right-wing-extremism-and-alt-right, #rumors-and-misinformation, #social-media, #twitter, #youtube-com

Twitch sues users over alleged “hate raids” against streamers


Since early August, Twitch has been wrestling with an epidemic of harassment, known as “hate raids,” against marginalized streamers. These attacks spam streamers’ chats with hateful and bigoted language, amplified dozens of times a minute by bots. On Thursday, after a month of trying and failing to combat the tactic, Twitch resorted to the legal system, suing two alleged hate raiders [PDF] for “targeting black and LGBTQIA+ streamers with racist, homophobic, sexist and other harassing content” in violation of its terms of service.

“We hope this Complaint will shed light on the identity of the individuals behind these attacks and the tools that they exploit, dissuade them from taking similar behaviors to other services, and help put an end to these vile attacks against members of our community,” a Twitch spokesperson said in a comment to WIRED.


#abuse, #gaming-culture, #harassment, #hate-speech, #policy, #twitch

YouTube’s recommender AI still a horrorshow, finds major crowdsourced study

For years, YouTube’s video-recommending algorithm has stood accused of fuelling a grab-bag of societal ills by feeding users an AI-amplified diet of hate speech, political extremism and conspiracy-laced disinformation, all in the service of keeping billions of eyeballs stuck to its ad inventory.

And while YouTube’s tech giant parent Google has sporadically responded to negative publicity flaring up around the algorithm’s antisocial recommendations — announcing a few policy tweaks here, limiting or purging the odd hateful account there — it’s not clear how far the platform’s penchant for promoting horribly unhealthy clickbait has actually been curbed.

The suspicion remains that it’s nowhere near far enough.

New research published today by Mozilla backs that notion up, suggesting YouTube’s AI continues to puff up piles of ‘bottom-feeding’, low-grade, divisive and disinforming content — stuff that tries to grab eyeballs by triggering people’s sense of outrage, sowing division and polarization, or spreading baseless and harmful disinformation — which in turn implies that YouTube’s problem with recommending terrible stuff is indeed systemic: a side-effect of the platform’s rapacious appetite to harvest views to serve ads.

That YouTube’s AI is still — per Mozilla’s study — behaving so badly also suggests Google has been pretty successful at fuzzing criticism with superficial claims of reform.

The mainstay of that deflective success is likely its primary protection mechanism: keeping the recommender engine’s algorithmic workings (and associated data) hidden from public view and external oversight, behind the convenient shield of ‘commercial secrecy’.

But regulation that could help crack open proprietary AI blackboxes is now on the cards — at least in Europe.

To fix YouTube’s algorithm, Mozilla is calling for “common sense transparency laws, better oversight, and consumer pressure” — suggesting a combination of laws that mandate transparency into AI systems; protect independent researchers so they can interrogate algorithmic impacts; and empower platform users with robust controls (such as the ability to opt out of “personalized” recommendations) are what’s needed to rein in the worst excesses of the YouTube AI.

Regrets, YouTube users have had a few…

To gather data on the specific recommendations being made to YouTube users — information that Google does not routinely make available to external researchers — Mozilla took a crowdsourced approach, via a browser extension (called RegretsReporter) that lets users self-report YouTube videos they “regret” watching.

The tool can generate a report which includes details of the videos the user had been recommended, as well as earlier video views, to help build up a picture of how YouTube’s recommender system was functioning. (Or, well, ‘dysfunctioning’ as the case may be.)
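For readers curious what such a report might look like in practice, here’s a minimal sketch of a plausible submission record. The field names are hypothetical — Mozilla’s actual RegretsReporter schema isn’t published in this article — but they capture the kinds of details described above:

```python
# Hypothetical sketch of a RegretsReporter-style submission record.
# Field names are illustrative only; they are not Mozilla's real schema.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class RegretReport:
    regretted_video_id: str            # the video the user flagged as a "regret"
    regretted_video_title: str
    was_recommended: bool              # True if the user reached it via YouTube's recommendations
    category: Optional[str] = None     # e.g. "misinformation", "hate speech", "spam/scams"
    prior_video_ids: List[str] = field(default_factory=list)  # earlier views, for context
    country: Optional[str] = None      # used to compare English vs non-English markets

# Example aggregation: what share of reported regrets came from recommendations?
reports = [
    RegretReport("abc123", "Example video", True, "misinformation", ["xyz789"], "DE"),
    RegretReport("def456", "Another video", False, "spam/scams", [], "BR"),
]
recommended_share = sum(r.was_recommended for r in reports) / len(reports)
print(f"{recommended_share:.0%} of reported regrets came from recommendations")
```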

The crowdsourced volunteers whose data fed Mozilla’s research reported a wide variety of ‘regrets’, including videos spreading COVID-19 fear-mongering, political misinformation and “wildly inappropriate” children’s cartoons, per the report — with the most frequently reported content categories being misinformation, violent/graphic content, hate speech and spam/scams.

A substantial majority (71%) of the regret reports came from videos that had been recommended by YouTube’s algorithm itself, underscoring the AI’s starring role in pushing junk into people’s eyeballs.

The research also found that recommended videos were 40% more likely to be reported by the volunteers than videos they’d searched for themselves.

Mozilla even found “several” instances when the recommender algorithm put content in front of users that violated YouTube’s own community guidelines and/or was unrelated to the previous video watched. So a clear fail.

A very notable finding was that regrettable content appears to be a greater problem for YouTube users in non-English speaking countries: Mozilla found YouTube regrets were 60% higher in countries without English as a primary language — with Brazil, Germany and France generating what the report said were “particularly high” levels of regretful YouTubing. (And none of the three can be classed as minor international markets.)

Pandemic-related regrets were also especially prevalent in non-English speaking countries, per the report — a worrying detail to read in the middle of an ongoing global health crisis.

The crowdsourced study — which Mozilla bills as the largest-ever into YouTube’s recommender algorithm — drew on data from more than 37,000 YouTube users who installed the extension, although it was a subset of 1,162 volunteers — from 91 countries — who submitted the reports flagging 3,362 regrettable videos, which the report draws on directly.

These reports were generated between July 2020 and May 2021.

What exactly does Mozilla mean by a YouTube “regret”? It says this is a crowdsourced concept based on users self-reporting bad experiences on YouTube, so it’s a subjective measure. But Mozilla argues that taking this “people-powered” approach centres the lived experiences of Internet users and is therefore helpful in foregrounding the experiences of marginalised and/or vulnerable people and communities (vs, for example, applying only a narrower, legal definition of ‘harm’).

“We wanted to interrogate and explore further [people’s experiences of falling down the YouTube ‘rabbit hole’] and frankly confirm some of these stories — but then also just understand further what are some of the trends that emerged in that,” explained Brandi Geurkink, Mozilla’s senior manager of advocacy and the lead researcher for the project, discussing the aims of the research.

“My main feeling in doing this work was being — I guess — shocked that some of what we had expected to be the case was confirmed… It’s still a limited study in terms of the number of people involved and the methodology that we used but — even with that — it was quite simple; the data just showed that some of what we thought was confirmed.

“Things like the algorithm recommending content essentially accidentally, that it later is like ‘oops, this actually violates our policies; we shouldn’t have actively suggested that to people’… And things like the non-English-speaking user base having worse experiences — these are things you hear discussed a lot anecdotally and activists have raised these issues. But I was just like — oh wow, it’s actually coming out really clearly in our data.”

Mozilla says the crowdsourced research uncovered “numerous examples” of reported content that would likely or actually breach YouTube’s community guidelines — such as hate speech or debunked political and scientific misinformation.

But it also says the reports flagged a lot of what YouTube “may” consider ‘borderline content’. Aka, stuff that’s harder to categorize — junk or low-quality videos that perhaps toe the acceptability line and may therefore be trickier for the platform’s algorithmic moderation systems to respond to (and thus content that may also survive the risk of a takedown for longer).

However, a related issue the report flags is that YouTube doesn’t provide a definition of borderline content, despite discussing the category in its own guidelines. That omission, says Mozilla, makes it impossible to verify the researchers’ assumption that much of what the volunteers were reporting as ‘regretful’ would likely fall into YouTube’s own ‘borderline content’ category.

The challenge of independently studying the societal effects of Google’s tech and processes is a running theme underlying the research. But Mozilla’s report also accuses the tech giant of meeting YouTube criticism with “inertia and opacity”.

It’s not alone there either. Critics have long accused YouTube’s ad giant parent of profiting off engagement generated by hateful outrage and harmful disinformation — allowing ‘AI-generated bubbles of hate’ to surface ever more baleful (and thus stickily engaging) stuff, exposing unsuspecting YouTube users to increasingly unpleasant and extremist views, even as Google gets to shield its low-grade content business under a user-generated content umbrella.

Indeed, ‘falling down the YouTube rabbit hole’ has become a well-trodden metaphor for the process of unsuspecting Internet users being dragged into the darkest and nastiest corners of the web — user reprogramming that takes place in broad daylight, via AI-generated suggestions that urge people to follow the conspiracy breadcrumb trail right from inside a mainstream web platform.

Back in 2017 — when concern was riding high about online terrorism and the proliferation of ISIS content on social media — politicians in Europe were accusing YouTube’s algorithm of exactly this: automating radicalization.

However it’s remained difficult to get hard data to back up anecdotal reports of individual YouTube users being ‘radicalized’ after viewing hours of extremist content or conspiracy theory junk on Google’s platform.

Ex-YouTube insider — Guillaume Chaslot — is one notable critic who’s sought to pull back the curtain shielding the proprietary tech from deeper scrutiny, via his algotransparency project.

Mozilla’s crowdsourced research adds to those efforts by sketching a broad — and broadly problematic — picture of the YouTube AI by collating reports of bad experiences from users themselves.

Of course externally sampling platform-level data that only Google holds in full (at its true depth and dimension) can’t be the whole picture — and self-reporting, in particular, may introduce its own set of biases into Mozilla’s data-set. But the problem of effectively studying big tech’s blackboxes is a key point accompanying the research, as Mozilla advocates for proper oversight of platform power.

In a series of recommendations the report calls for “robust transparency, scrutiny, and giving people control of recommendation algorithms” — arguing that without proper oversight of the platform, YouTube will continue to be harmful by mindlessly exposing people to damaging and braindead content.

The problematic lack of transparency around so much of how YouTube functions can be picked up from other details in the report. For example, Mozilla found that around 9% of recommended regrets (or almost 200 videos) had since been taken down — for a variety of not always clear reasons (sometimes, presumably, after the content was reported and judged by YouTube to have violated its guidelines).

Collectively, just this subset of videos had had a total of 160M views prior to being removed for whatever reason.

In other findings, the research showed that regret-flagged videos tend to perform well on the platform.

A particularly stark metric is that reported regrets acquired a full 70% more views per day than other videos watched by the volunteers on the platform — lending weight to the argument that YouTube’s engagement-optimising algorithms disproportionately select for triggering or misinforming content over quality (thoughtful, informative) stuff, simply because it brings in the clicks.

While that might be great for Google’s ad business, it’s clearly a net negative for democratic societies which value truthful information over nonsense; genuine public debate over artificial/amplified binaries; and constructive civic cohesion over divisive tribalism.

But without legally-enforced transparency requirements on ad platforms — and, most likely, regulatory oversight and enforcement that features audit powers — these tech giants are going to continue to be incentivized to turn a blind eye and cash in at society’s expense.

Mozilla’s report also underlines instances where YouTube’s algorithms are clearly driven by a logic that’s unrelated to the content itself — with a finding that in 43.6% of the cases where the researchers had data about the videos a participant had watched before a reported regret, the recommendation was completely unrelated to the previous video.

The report gives examples of some of these logic-defying AI content pivots/leaps/pitfalls — such as a person watching videos about the U.S. military and then being recommended a misogynistic video entitled ‘Man humiliates feminist in viral video.’

In another instance, a person watched a video about software rights and was then recommended a video about gun rights. So two rights make yet another wrong YouTube recommendation right there.

In a third example, a person watched an Art Garfunkel music video and was then recommended a political video entitled ‘Trump Debate Moderator EXPOSED as having Deep Democrat Ties, Media Bias Reaches BREAKING Point.’

To which the only sane response is, umm what???

YouTube’s output in such instances seems — at best — some sort of ‘AI brain fart’.

A generous interpretation might be that the algorithm got stupidly confused. Albeit, in a number of the examples cited in the report, the confusion is leading YouTube users toward content with a right-leaning political bias. Which seems, well, curious.

Asked what she views as the most concerning findings, Mozilla’s Geurkink told TechCrunch: “One is how clearly misinformation emerged as a dominant problem on the platform. I think that’s something, based on our work talking to Mozilla supporters and people from all around the world, that is a really obvious thing that people are concerned about online. So to see that that is what is emerging as the biggest problem with the YouTube algorithm is really concerning to me.”

She also highlighted the problem of the recommendations being worse for non-English-speaking users as another major concern, suggesting that global inequalities in users’ experiences of platform impacts “doesn’t get enough attention” — even when such issues do get discussed.

Responding to Mozilla’s report, a Google spokesperson sent us this statement:

“The goal of our recommendation system is to connect viewers with content they love and on any given day, more than 200 million videos are recommended on the homepage alone. Over 80 billion pieces of information is used to help inform our systems, including survey responses from viewers on what they want to watch. We constantly work to improve the experience on YouTube and over the past year alone, we’ve launched over 30 different changes to reduce recommendations of harmful content. Thanks to this change, consumption of borderline content that comes from our recommendations is now significantly below 1%.”

Google also claimed it welcomes research into YouTube — and suggested it’s exploring options to bring in external researchers to study the platform, without offering anything concrete on that front.

At the same time, its response queried how Mozilla’s study defines ‘regrettable’ content — and went on to claim that its own user surveys generally show users are satisfied with the content that YouTube recommends.

In further non-quotable remarks, Google noted that earlier this year it started disclosing a ‘violative view rate‘ (VVR) metric for YouTube — revealing for the first time the percentage of views on YouTube that come from content that violates its policies.

The most recent VVR stands at 0.16-0.18% — which Google says means that out of every 10,000 views on YouTube, 16-18 come from violative content. It said that figure is down by more than 70% when compared to the same quarter of 2017 — crediting its investments in machine learning as largely being responsible for the drop.
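As a quick back-of-the-envelope illustration of the arithmetic behind that headline figure (using made-up view counts, not Google’s actual data), the VVR is simply a ratio:

```python
# Minimal sketch of the VVR arithmetic, with hypothetical numbers for illustration.
def violative_view_rate(violative_views: int, total_views: int) -> float:
    """Share of views that came from policy-violating content."""
    return violative_views / total_views

total_views = 10_000_000      # hypothetical total views in a quarter
violative_views = 17_000      # hypothetical views on content later found violative

vvr = violative_view_rate(violative_views, total_views)
print(f"VVR = {vvr:.2%}")                        # 0.17%, inside Google's 0.16-0.18% range
print(f"~{vvr * 10_000:.0f} per 10,000 views")   # matches the '16-18 in 10,000' framing
```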

However, as Geurkink noted, the VVR is of limited use without Google releasing more data to contextualize and quantify how far its AI was involved in accelerating views of content its own rules state shouldn’t be viewed on its platform. Without that key data the suspicion must be that the VVR is a nice bit of misdirection.

“What would be going further than [VVR] — and what would be really, really helpful — is understanding what’s the role that the recommendation algorithm plays in this?” Geurkink told us on that, adding: “That’s what is a complete blackbox still. In the absence of greater transparency [Google’s] claims of progress have to be taken with a grain of salt.”

Google also flagged a 2019 change it made to how YouTube’s recommender algorithm handles ‘borderline content’ — aka, content that doesn’t violate policies but falls into a problematic grey area — saying that that tweak had also resulted in a 70% drop in watchtime for this type of content.

Although the company confirmed this borderline category is a moveable feast — saying it factors in changing trends as well as context, and works with experts to determine what gets classed as borderline — which makes the aforementioned percentage drop pretty meaningless, since there’s no fixed baseline to measure against.

It’s notable that Google’s response to Mozilla’s report makes no mention of the poor experience reported by survey participants in non-English-speaking markets. And Geurkink suggested that, in general, many of the claimed mitigating measures YouTube applies are geographically limited — i.e. to English-speaking markets like the US and UK. (Or at least arrive in those markets first, before a slower rollout to other places.) 

A January 2019 tweak to reduce amplification of conspiracy theory content in the US was only expanded to the UK market months later — in August — for example.

“YouTube, for the past few years, have only been reporting on their progress of recommendations of harmful or borderline content in the US and in English-speaking markets,” she also said. “And there are very few people questioning that — what about the rest of the world? To me that is something that really deserves more attention and more scrutiny.”

We asked Google to confirm whether it had since applied the 2019 conspiracy-theory-related changes globally — and a spokeswoman told us that it had. But the much higher rate of reports made to Mozilla of (admittedly more broadly defined) ‘regrettable’ content in non-English-speaking markets remains notable.

And while there could be other factors at play, which might explain some of the disproportionately higher reporting, the finding may also suggest that, where YouTube’s negative impacts are concerned, Google directs the greatest resources at markets and languages where its reputational risk and the capacity of its machine learning tech to automate content categorization are strongest.

Yet any such unequal response to AI risk obviously means leaving some users at greater risk of harm than others — adding another harmful dimension and layer of unfairness to what is already a multi-faceted, many-headed-hydra of a problem.

It’s yet another reason why leaving it up to powerful platforms to rate their own AIs, mark their own homework and counter genuine concerns with self-serving PR is for the birds.

(In additional filler background remarks it sent us, Google described itself as the first company in the industry to incorporate “authoritativeness” into its search and discovery algorithms — without explaining when exactly it claims to have done that or how it imagined it would be able to deliver on its stated mission of ‘organizing the world’s information and making it universally accessible and useful’ without considering the relative value of information sources… So color us baffled at that claim. Most likely it’s a clumsy attempt to throw disinformation shade at rivals.)

Returning to the regulation point, an EU proposal — the Digital Services Act — is set to introduce some transparency requirements on large digital platforms, as part of a wider package of accountability measures. And asked about this Geurkink described the DSA as “a promising avenue for greater transparency”.

But she suggested the legislation needs to go further to tackle recommender systems like the YouTube AI.

“I think that transparency around recommender systems specifically and also people having control over the input of their own data and then the output of recommendations is really important — and is a place where the DSA is currently a bit sparse, so I think that’s where we really need to dig in,” she told us.

One idea she voiced support for is having a “data access framework” baked into the law — to enable vetted researchers to get more of the information they need to study powerful AI technologies — i.e. rather than the law trying to come up with “a laundry list of all of the different pieces of transparency and information that should be applicable”, as she put it.

The EU also now has a draft AI regulation on the table. The legislative plan takes a risk-based approach to regulating certain applications of artificial intelligence. However it’s not clear whether YouTube’s recommender system would fall under one of the more closely regulated categories — or, as seems more likely (at least with the initial Commission proposal), fall entirely outside the scope of the planned law.

“An earlier draft of the proposal talked about systems that manipulate human behavior which is essentially what recommender systems are. And one could also argue that’s the goal of advertising at large, in some sense. So it was sort of difficult to understand exactly where recommender systems would fall into that,” noted Geurkink.

“There might be a nice harmony between some of the robust data access provisions in the DSA and the new AI regulation,” she added. “I think transparency is what it comes down to, so anything that can provide that kind of greater transparency is a good thing.

“YouTube could also just provide a lot of this… We’ve been working on this for years now and we haven’t seen them take any meaningful action on this front but it’s also, I think, something that we want to keep in mind — legislation can obviously take years. So even if a few of our recommendations were taken up [by Google] that would be a really big step in the right direction.”

#advertising-tech, #artificial-intelligence, #content-moderation, #disinformation, #european-union, #google, #hate-speech, #machine-learning, #mozilla, #policy, #recommender-systems, #social, #social-media, #tc, #youtube

LinkedIn formally joins EU Code on hate speech takedowns

Microsoft-owned LinkedIn has committed to doing more to quickly purge illegal hate speech from its platform in the European Union by formally signing up to a self-regulatory initiative that seeks to tackle the issue through a voluntary Code of Conduct.

In a statement today, the European Commission announced that the professional social network has joined the EU’s Code of Conduct on Countering Illegal Hate Speech Online, with justice commissioner Didier Reynders welcoming LinkedIn’s (albeit tardy) participation and adding that the code “is and will remain an important tool in the fight against hate speech, including within the framework established by digital services legislation”.

“I invite more businesses to join, so that the online world is free from hate,” Reynders added.

While LinkedIn’s name wasn’t formally associated with the voluntary Code before now, it said it has “supported” the effort via parent company Microsoft, which was already signed up.

In a statement on its decision to formally join now, it also said:

“LinkedIn is a place for professional conversations where people come to connect, learn and find new opportunities. Given the current economic climate and the increased reliance jobseekers and professionals everywhere are placing on LinkedIn, our responsibility is to help create safe experiences for our members. We couldn’t be clearer that hate speech is not tolerated on our platform. LinkedIn is a strong part of our members’ professional identities for the entirety of their career — it can be seen by their employer, colleagues and potential business partners.”

In the EU ‘illegal hate speech’ can mean content that espouses racist or xenophobic views, or which seeks to incite violence or hatred against groups of people because of their race, skin color, religion or ethnic origin etc.

A number of Member States have national laws on the issue — and some have passed their own legislation specifically targeted at the digital sphere. So the EU Code is supplementary to any actual hate speech legislation. It is also non-legally binding.

The initiative kicked off back in 2016 — when a handful of tech giants (Facebook, Twitter, YouTube and Microsoft) agreed to accelerate takedowns of illegal speech (or well, attach their brand names to the PR opportunity associated with saying they would).

Since the Code became operational, a handful of other tech platforms have joined — with video sharing platform TikTok signing up last October, for example.

But plenty of digital services (notably messaging platforms) still aren’t participating. Hence the Commission’s call for more digital services companies to get on board.

At the same time, the EU is in the process of firming up hard rules in the area of illegal content.

Last year the Commission proposed broad updates (aka the Digital Services Act) to existing ecommerce rules to set operational ground rules that it said are intended to bring online laws in line with offline legal requirements — in areas such as illegal content, and indeed illegal goods. So, in the coming years, the bloc will get a legal framework that tackles — at least at a high level — the hate speech issue, not merely a voluntary Code.

The EU also recently adopted legislation on terrorist content takedowns (this April) — which is set to start applying to online platforms from next year.

But it’s interesting to note that, on the perhaps more controversial issue of hate speech (which can deeply intersect with freedom of expression), the Commission wants to maintain a self-regulatory channel alongside incoming legislation — as Reynders’ remarks underline.

Brussels evidently sees value in having a mixture of ‘carrots and sticks’ where hot button digital regulation issues are concerned. Especially in the controversial ‘danger zone’ of speech regulation.

So, while the DSA is set to bake in standardized ‘notice and response’ procedures to help digital players swiftly respond to illegal content, by keeping the hate speech Code around it means there’s a parallel conduit where key platforms could be encouraged by the Commission to commit to going further than the letter of the law (and thereby enable lawmakers to sidestep any controversy if they were to try to push more expansive speech moderation measures into legislation).

The EU has — for several years — had a voluntary Code of Practice on Online Disinformation too. (And a spokeswoman for LinkedIn confirmed it has been signed up to that since its inception, also through its parent company Microsoft.)

And while lawmakers recently announced a plan to beef that Code up — to make it “more binding”, as they oxymoronically put it — they certainly aren’t planning to legislate on that (even fuzzier) speech issue.

In further public remarks today on the hate speech Code, the Commission said that a fifth monitoring exercise in June 2020 showed that on average companies reviewed 90% of reported content within 24 hours and removed 71% of content that was considered to be illegal hate speech.

It added that it welcomed the results — but also called for signatories to redouble their efforts, especially around providing feedback to users and in how they approach transparency around reporting and removals.

The Commission has also repeatedly called for platforms signed up to the disinformation Code to do more to tackle the tsunami of ‘fake news’ being fenced on their platforms, including — on the public health front — what it last year dubbed a coronavirus infodemic.

The COVID-19 crisis has undoubtedly contributed to concentrating lawmakers’ minds on the complex issue of how to effectively regulate the digital sphere and likely accelerated a number of EU efforts.

 

#brussels, #code-of-conduct-on-countering-illegal-hate-speech, #covid-19, #digital-services-act, #europe, #european-commission, #european-union, #facebook, #freedom-of-speech, #hate-speech, #homophobia, #linkedin, #microsoft, #online-disinformation, #online-platforms, #policy, #racism, #social, #social-network

Twitter is eyeing new anti-abuse tools to give users more control over mentions

Twitter is looking at adding new features that could help users who are facing abusive situations on its platform as a result of unwanted attention pile-ons — such as when a tweet goes viral for a reason they didn’t expect and a full firehose of counter-tweets gets blasted their way.

Racist abuse also remains a major problem on Twitter’s platform.

The social media giant says it’s toying with providing users with more controls over the @mention feature to help people “control unwanted attention”, as privacy engineer Dominic Camozzi puts it.

The issue is that Twitter’s notification system will alert a user when they’ve been directly tagged in a tweet — drawing their attention to the contents. That’s great if the tweet is nice or interesting. But if the content is abusive, it’s a shortcut to scale hateful cyberbullying.

Twitter has badged these latest anti-abuse ideas as “early concepts” — and is encouraging users to submit feedback as it considers what changes it might make.

Potential features it’s considering include letting users ‘unmention’ themselves — i.e. remove their name from another’s tweet so they’re no longer tagged in it (and any ongoing chatter around it won’t keep appearing in their mentions feed).

It’s also considering making an unmention action more powerful in instances where an account that a user doesn’t follow mentions them — by providing a special notification to “highlight potential unwanted situations”.

If the user then goes ahead and unmentions themselves Twitter envisages removing the ability of the tweet-composer to tag them again in future — which looks like it could be a strong tool against strangers who abuse @mentions. 

Twitter is also considering adding settings that would let users restrict certain accounts from mentioning them entirely. Which sounds like it would have come in pretty handy when president Trump was on the platform (assuming the setting could be deployed against public figures).

Twitter also says it’s looking at adding a switch that can be flipped to prevent anyone on the platform from @-ing you — for a period of one day; three days; or seven days. So basically a ‘total peace and quiet’ mode.

It says it wants to make changes in this area that can work together to help users by stopping “the situation from escalating further” — such as by providing users with notifications when they’re getting lots of mentions, combined with the ability to easily review the tweets in question and change their settings to shield themselves (e.g. by blocking all mentions for a day or longer).

The known problem of online troll armies coordinating targeted attacks against Twitter users means it can take disproportionate effort for the object of a hate pile-on to shield themselves from the abuse of so many strangers.

Individually blocking abusive accounts or muting specific tweets does not scale in instances when there may be hundreds — or even thousands — of accounts and tweets involved in the targeted abuse.

For now, it remains to be seen whether or not Twitter will move forward and implement the exact features it’s showing off via Camozzi’s thread.

A Twitter spokeswoman confirmed the concepts are “a design mock” and “still in the early stages of design and research”. But she added: “We’re excited about community feedback even at this early stage.”

The company will need to consider whether the proposed features might introduce wider complications on the service. (Such as, for example, what would happen to automatically scheduled tweets that include the Twitter handle of someone who subsequently flips the ‘block all mentions’ setting; does that prevent the tweet from going out entirely or just have it tweet out but without the person’s handle, potentially lacking core context?)

Nonetheless, those are small details and it’s very welcome that Twitter is looking at ways to expand the utility of the tools users can use to protect themselves from abuse — i.e. beyond the existing, still fairly blunt, anti-abuse features (like block, mute and report tweet).

Co-ordinated trolling attacks have, for years, been an unwanted ‘feature’ of Twitter’s platform and the company has frequently been criticized for not doing enough to prevent harassment and abuse.

The simple fact that Twitter is still looking for ways to provide users with better tools to prevent hate pile-ons — here in mid 2021 — is a tacit acknowledgment of its wider failure to clear abusers off its platform. Despite repeated calls for it to act.

A Google search for “* leaves Twitter after abuse” returns numerous examples of high profile Twitter users quitting the platform after feeling unable to deal with waves of abuse — several from this year alone (including a number of footballers targeted with racist tweets).

Other examples date back as long ago as 2013, underlining how Twitter has repeatedly failed to get a handle on its abuse problem, leaving users to suffer at the hands of trolls for well over a decade (or, well, just quit the service entirely).

One recent high-profile exit was the model Chrissy Teigen — a longtime Twitter user who spent ten years on the platform — who pulled the plug on her account in March, writing in her final tweets that she was “deeply bruised” and that the platform “no longer serves me positively as it serves me negatively”.

A number of soccer players in the UK have also been campaigning against racism on social media this year — organizing a boycott of services to amp up pressure on companies like Twitter to deal with racist abusers.

While public figures who use social media may be more likely to face higher levels of abusive online trolling than other types of users, it’s a problem that isn’t limited to users with a public profile. Racist abuse, for example, remains a general problem on Twitter. And the examples of celebrity users quitting over abuse that are visible via Google are certainly just the tip of the iceberg.

It goes without saying that it’s terrible for Twitter’s business if highly engaged users feel forced to abandon the service in despair.

The company knows it has a problem. As far back as 2018 it said it was looking for ways to improve “conversational health” on its platform — as well as, more recently, expanding its policies and enforcement around hateful and abusive tweets.

It has also added some strategic friction to try to nudge users to be more thoughtful and take some of the heat out of outrage cycles — such as encouraging users to read an article before directly retweeting it.

Perhaps most notably it has banned some high profile abusers of its service — including, at long last, president troll Trump himself earlier this year.

A number of other notorious trolls have also been booted over the years, although typically only after Twitter had allowed them to carry on coordinating abuse of others via its service, failing to promptly and vigorously enforce its policies against hateful conduct — letting the trolls get away with seeing how far they could push their luck — until the last.

By failing to get a proper handle on abusive use of its platform for so long, Twitter has created a toxic legacy out of its own mismanagement — one that continues to land it unwanted attention from high profile users who might otherwise be key ambassadors for its service.

#abuse, #chrissy-teigen, #hate-speech, #internet-troll, #social, #social-media, #trolling, #trump, #twitter

Macron says G7 countries should work together to tackle toxic online content

In a press conference at the Élysée Palace, French President Emmanuel Macron reiterated his focus on online regulation, and more particularly toxic content. He called for more international cooperation as the Group of Seven (G7) summit is taking place later this week in the U.K.

“The third big topic that could benefit from efficient multilateralism and that we’re going to bring up during this G7 summit is online regulation,” Macron said. “This topic, and I’m sure we’ll talk about it again, is essential for our democracies.”

Macron also used that opportunity to sum up France’s efforts on this front. “During the summer of 2017, we launched an initiative to tackle online terrorist content with then Prime Minister Theresa May. At first, and as crazy as it sounds today, we mostly failed. Because of free speech, people told us to mind our own business, more or less.”

In 2019, there was a horrendous mass mosque shooting in Christchurch, New Zealand. And you could find multiple copies of the shooting videos on Facebook, YouTube and Twitter. Macron invited New Zealand Prime Minister Jacinda Ardern, several digital ministers of the G7 and tech companies to Paris.

They all signed a nonbinding pledge called the Christchurch Call. Essentially, tech companies that operate social platforms agreed to increase their efforts when it comes to blocking toxic content — and terrorist content in particular.

Facebook, Twitter, Google (and YouTube), Microsoft, Amazon and other tech companies signed the pledge. Seventeen countries and the European Commission also backed the Christchurch Call. There was one notable exception — the U.S. didn’t sign it.

“This strategy led to some concrete results because all online platforms that signed it have followed through,” Macron said. “Evidence of this lies in what happened in France last fall when we faced terrorist attacks.” In October 2020, French middle-school teacher Samuel Paty was killed and beheaded by a terrorist.

“Platforms flagged content and removed content within an hour,” he added.

Over time, more countries and online platforms announced their support for the Christchurch Call. In May, President Joe Biden joined the international bid against toxic content. “Given the number of companies incorporated in the U.S., it’s a major step and I welcome it,” Macron said today.

But what comes next after the Christchurch Call? First, Macron wants to convince more countries to back the call — China and Russia aren’t part of the supporters for instance.

“The second thing is that we have to push forward to create a framework for all sorts of online hate speech, racist speech, anti-semitic speech and everything related to online harassment,” Macron said.

He then briefly referred to French regulation on this front. Last year, France’s law targeting hate speech on online platforms was largely struck down as unconstitutional by the Constitutional Council, the top authority in charge of ruling whether a new law complies with the constitution.

The list of hate-speech content was long and broad while potential fines were very high. The Constitutional Council feared that online platforms would censor content a bit too quickly.

But that doesn’t seem to be stopping Macron from backing new regulation on online content at the European level and at the G7 level.

“It’s the only way to build an efficient framework that we can bring at the G20 summit and that can help us fight against wild behavior in online interactions — and therefore wild behavior in our new world order,” Macron said, using the controversial ‘wild behavior’ metaphor (ensauvagement). That term was first popularized by far-right political figures.

According to him, if world leaders fail to find some common grounds when it comes to online regulation, it’ll lead to internet fragmentation. Some countries may choose to block several online services for instance.

And yet, recent events have shown us that this ship has already sailed. The Nigerian government suspended Twitter operations in the country just a few days ago. It’s easy to agree to block terrorist content, but it becomes tedious quite quickly when you want to moderate other kinds of content.

#emmanuel-macron, #europe, #hate-speech, #macron, #policy, #regulation, #social, #toxic-content

Facebook’s hand-picked ‘oversight’ panel upholds Trump ban — for now

Facebook’s content decision review body, a quasi-external panel that’s been likened to a ‘Supreme Court of Facebook’ but isn’t staffed by sitting judges, can’t be truly independent of the tech giant which funds it, has no legal legitimacy or democratic accountability, and goes by the much duller official title ‘Oversight Board’ (aka the FOB) — has just made the biggest call of its short life…

Facebook’s hand-picked ‘oversight’ panel has voted against reinstating former U.S. president Donald Trump’s Facebook account.

However, it has sought to row the company back from the ‘indefinite’ ban — finding fault with Facebook’s decision to impose an open-ended restriction rather than issue a more standard penalty (such as a penalty strike or permanent account closure).

In a press release announcing its decision the board writes:

Given the seriousness of the violations and the ongoing risk of violence, Facebook was justified in suspending Mr. Trump’s accounts on January 6 and extending that suspension on January 7.

However, it was not appropriate for Facebook to impose an ‘indefinite’ suspension.

It is not permissible for Facebook to keep a user off the platform for an undefined period, with no criteria for when or whether the account will be restored.

The board wants Facebook to revisit its decision on Trump’s account within six months — and “decide the appropriate penalty”. So it appears to have succeeded in… kicking the can down the road.

The FOB is due to hold a press conference to discuss its decision shortly.

It’s certainly been a very quiet four months on mainstream social media since Trump had his social media ALL CAPS megaphone unceremoniously shut down in the wake of his supporters’ violent storming of the Capitol.

For more on the background to Trump’s deplatforming do make time for this excellent explainer by TechCrunch’s Taylor Hatmaker. But the short version is that Trump finally appeared to have torched the last of his social media rule-breaking chances after he succeeded in fomenting an actual insurrection on U.S. soil on January 6. Doing so with the help of the massive, mainstream social media platforms whose community standards don’t, as a rule, give a thumbs up to violent insurrection…

#alan-rusbridger, #alex-stamos, #content-moderation, #donald-j-trump, #donald-trump, #facebook, #facebook-oversight-board, #fob, #freedom-of-speech, #hate-speech, #joe-biden, #mark-zuckerberg, #nick-clegg, #oversight-board, #policy, #social, #social-media, #united-states

On Google Podcasts, a Buffet of Hate

The platform’s tolerance of white supremacist, pro-Nazi and conspiracy theory content pushes the boundaries of the medium.

#freedom-of-speech-and-expression, #hate-speech, #jones-alex-1974, #podcasts, #right-wing-extremism-and-alt-right, #social-media

For Political Cartoonists, the Irony Was That Facebook Didn’t Recognize Irony

As Facebook has become more active at moderating political speech, it has had trouble dealing with satire.

#bors-matt, #cartoons-and-cartoonists, #comedy-and-humor, #computers-and-the-internet, #facebook-inc, #fringe-groups-and-movements, #hall-ed, #hate-speech, #instagram-inc, #rumors-and-misinformation, #social-media, #violence-media-and-entertainment, #zyglis-adam

Tech’s Legal Shield Appears Likely to Survive as Congress Focuses on Details

Section 230 isn’t expected to be revoked, but even the more modest proposals for weakening it could have effects that ripple across the internet.

#facebook-inc, #freedom-of-speech-and-expression, #google-inc, #hate-speech, #law-and-legislation, #online-advertising, #social-media, #united-states-politics-and-government

Donald Trump is one of 15,000 Gab users whose account just got hacked

Promotional image for social media site Gab. (credit: Gab.com)

The founder of the far-right social media platform Gab said that the private account of former President Donald Trump was among the data stolen and publicly released by hackers who recently breached the site.

In a statement on Sunday, founder Andrew Torba used a transphobic slur to refer to Emma Best, the co-founder of Distributed Denial of Secrets. The statement confirmed claims the WikiLeaks-style group made on Monday that it obtained 70GB of passwords, private posts, and more from Gab and was making them available to select researchers and journalists. The data, Best said, was provided by an unidentified hacker who breached Gab by exploiting a SQL-injection vulnerability in its code.
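For context on what a SQL-injection flaw means in practice, here is a generic, self-contained Python sketch of the vulnerability class. It is purely illustrative and bears no relation to Gab’s actual codebase, which has not been published: the point is that splicing attacker-controlled input into query text lets that input rewrite the query, whereas a parameterized query does not.

```python
# Generic illustration of SQL injection (not Gab's code): string interpolation
# vs. a parameterized query against an in-memory SQLite database.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'secret')")

attacker_input = "' OR '1'='1"

# Vulnerable: the input becomes part of the SQL text, turning the WHERE clause
# into a condition that is always true, so every row is returned.
vulnerable_query = f"SELECT * FROM users WHERE name = '{attacker_input}'"
print(conn.execute(vulnerable_query).fetchall())   # dumps the whole table

# Safer: the driver passes the value separately from the query text, so the
# input is treated as a literal username and matches nothing.
safe_query = "SELECT * FROM users WHERE name = ?"
print(conn.execute(safe_query, (attacker_input,)).fetchall())  # returns []
```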

“My account and Trump’s account were compromised, of course as Trump is about to go on stage and speak,” Torba wrote on Sunday as Trump was about to speak at the CPAC conference in Florida. “The entire company is all hands investigating what happened and working to trace and patch the problem.”


#biz-it, #ddosecrets, #gab, #hacking, #hate-speech, #leaks, #policy, #tech

The Economic Case for Regulating Social Media

The core business model of platforms like Facebook and Twitter poses a threat to society and requires retooling, an economist says.

#antitrust-laws-and-competition-issues, #conspiracy-theories, #facebook-inc, #freedom-of-speech-and-expression, #fringe-groups-and-movements, #hate-speech, #online-advertising, #political-advertising, #regulation-and-deregulation-of-industry, #rumors-and-misinformation, #social-media, #twitter, #united-states-economy, #united-states-politics-and-government, #youtube-com

Facebook’s ‘oversight’ body overturns four takedowns and issues a slew of policy suggestions

Facebook’s self-regulatory ‘Oversight Board’ (FOB) has delivered its first batch of decisions on contested content moderation decisions almost two months after picking its first cases.

A long time in the making, the FOB is part of Facebook’s crisis PR push to distance its business from the impact of controversial content moderation decisions — by creating a review body to handle a tiny fraction of the complaints its content takedowns attract. It started accepting submissions for review in October 2020 — and has faced criticism for being slow to get off the ground.

Announcing the first decisions today, the FOB reveals it has chosen to uphold just one of the content moderation decisions made earlier by Facebook, overturning four of the tech giant’s decisions.

Decisions on the cases were made by five-member panels that contained at least one member from the region in question and a mix of genders, per the FOB. A majority of the full Board then had to review each panel’s findings to approve the decision before it could be issued.

The sole case where the Board has upheld Facebook’s decision to remove content is case 2020-003-FB-UA — where Facebook had removed a post under its Community Standard on Hate Speech which had used the Russian word “тазики” (“taziks”) to describe Azerbaijanis, who the user claimed have no history compared to Armenians.

In the four other cases the Board has overturned Facebook takedowns, rejecting earlier assessments made by the tech giant in relation to policies on hate speech, adult nudity, dangerous individuals/organizations, and violence and incitement. (You can read the outline of these cases on its website.)

Each decision relates to a specific piece of content but the board has also issued nine policy recommendations.

These include suggestions that Facebook [emphasis ours]:

  • Create a new Community Standard on health misinformation, consolidating and clarifying the existing rules in one place. This should define key terms such as “misinformation.”
  • Adopt less intrusive means of enforcing its health misinformation policies where the content does not reach Facebook’s threshold of imminent physical harm.
  • Increase transparency around how it moderates health misinformation, including publishing a transparency report on how the Community Standards have been enforced during the COVID-19 pandemic. This recommendation draws upon the public comments the Board received.
  • Ensure that users are always notified of the reasons for any enforcement of the Community Standards against them, including the specific rule Facebook is enforcing. (The Board made two identical policy recommendations on this front related to the cases it considered, also noting in relation to the second hate speech case that “Facebook’s lack of transparency left its decision open to the mistaken belief that the company removed the content because the user expressed a view it disagreed with”.)
  • Explain and provide examples of the application of key terms from the Dangerous Individuals and Organizations policy, including the meanings of “praise,” “support” and “representation.” The Community Standard should also better advise users on how to make their intent clear when discussing dangerous individuals or organizations.
  • Provide a public list of the organizations and individuals designated as ‘dangerous’ under the Dangerous Individuals and Organizations Community Standard or, at the very least, a list of examples.
  • Inform users when automated enforcement is used to moderate their content, ensure that users can appeal automated decisions to a human being in certain cases, and improve automated detection of images with text-overlay so that posts raising awareness of breast cancer symptoms are not wrongly flagged for review. Facebook should also improve its transparency reporting on its use of automated enforcement.
  • Revise Instagram’s Community Guidelines to specify that female nipples can be shown to raise breast cancer awareness and clarify that where there are inconsistencies between Instagram’s Community Guidelines and Facebook’s Community Standards, the latter take precedence.

Where it has overturned Facebook takedowns the board says it expects Facebook to restore the specific pieces of removed content within seven days.

In addition, the Board writes that Facebook will also “examine whether identical content with parallel context associated with the Board’s decisions should remain on its platform”. And says Facebook has 30 days to publicly respond to its policy recommendations.

So it will certainly be interesting to see how the tech giant responds to the laundry list of proposed policy tweaks — perhaps especially the recommendations for increased transparency (including the suggestion it inform users when content has been removed solely by its AIs) — and whether Facebook is happy to align entirely with the policy guidance issued by the self-regulatory vehicle (or not).

Facebook created the board’s structure and charter and appointed its members — but has encouraged the notion it’s ‘independent’ from Facebook, even though it also funds FOB (indirectly, via a foundation it set up to administer the body).

And while the Board claims its review decisions are binding on Facebook there is no such requirement for Facebook to follow its policy recommendations.

It’s also notable that the FOB’s review efforts are entirely focused on takedowns — rather than on things Facebook chooses to host on its platform.

Given all that it’s impossible to quantify how much influence Facebook exerts on the Facebook Oversight Board’s decisions. And even if Facebook swallows all the aforementioned policy recommendations — or more likely puts out a PR line welcoming the FOB’s ‘thoughtful’ contributions to a ‘complex area’ and says it will ‘take them into account as it moves forward’ — it’s doing so from a place where it has retained maximum control of content review by defining, shaping and funding the ‘oversight’ involved.

tl;dr: An actual supreme court this is not.

In the coming weeks, the FOB will likely be most closely watched over a case it accepted recently — related to Facebook’s indefinite suspension of former US president Donald Trump, after he incited a violent assault on the US Capitol earlier this month.

The board notes that it will be opening public comment on that case “shortly”.

“Recent events in the United States and around the world have highlighted the enormous impact that content decisions taken by internet services have on human rights and free expression,” it writes, going on to add that: “The challenges and limitations of the existing approaches to moderating content draw attention to the value of independent oversight of the most consequential decisions by companies such as Facebook.”

But of course this ‘Oversight Board’ is unable to be entirely independent of its founder, Facebook.

#content-moderation, #facebook, #facebook-oversight-board, #hate-speech, #policy, #social

Threat of inauguration violence casts a long shadow over social media

As the U.S. heads into one of the most perilous phases of American democracy since the Civil War, social media companies are scrambling to shore up their patchwork defenses for a moment they appear to have believed would never come.

Most major platforms pulled the emergency brake last week, deplatforming the president of the United States and enforcing suddenly robust rules against conspiracies, violent threats and undercurrents of armed insurrection, all of which proliferated on those services for years. But within a week’s time, Amazon, Facebook, Twitter, Apple and Google had all made historic decisions in the name of national stability — and appearances. Snapchat, TikTok, Reddit and even Pinterest took their own actions to prevent a terror plot from being hatched on their platforms.

Now, we’re in the waiting phase. More than a week after a deadly pro-Trump mob invaded the iconic seat of the U.S. legislature, the internet still feels like it’s holding its breath, a now heavily fortified inauguration ceremony looming ahead.

(Photo by SAUL LOEB/AFP via Getty Images)

What’s still out there

On the largest social network of all, images hyping follow-up events were still circulating midway through this week. One digital Facebook flyer promoted an “armed march on Capitol Hill and all state Capitols,” pushing the dangerous and false conspiracy that the 2020 presidential election was stolen.

Facebook says that it’s working to identify flyers calling for “Stop the Steal” adjacent events using digital fingerprinting, the same process it uses to remove terrorist content from ISIS and Al Qaeda. The company noted that it has seen flyers calling for events on January 17 across the country, January 18 in Virginia and inauguration day in D.C.
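For readers unfamiliar with the technique, “digital fingerprinting” generally means computing a compact hash of a known image and checking new uploads against it. The sketch below is a deliberately simple illustration of that idea (a toy “average hash” plus a Hamming-distance comparison, assuming Pillow is installed); the hash values, threshold and function names are hypothetical, and Facebook’s production systems are far more robust.

```python
# Toy "average hash" fingerprinting sketch (requires Pillow). Hash values,
# threshold and the known-flyer list are hypothetical placeholders.
from PIL import Image

KNOWN_FLYER_HASHES = [0x8F3A1C0077E4B219]  # hypothetical fingerprints of banned flyers


def average_hash(path: str, size: int = 8) -> int:
    """Downscale to a tiny grayscale grid and set a bit for each pixel
    brighter than the mean, yielding a 64-bit fingerprint."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for i, value in enumerate(pixels):
        if value > mean:
            bits |= 1 << i
    return bits


def hamming_distance(a: int, b: int) -> int:
    return bin(a ^ b).count("1")


def matches_known_flyer(path: str, threshold: int = 10) -> bool:
    """Treat an upload as a near-duplicate if its fingerprint sits within a
    small Hamming distance of any known flyer fingerprint."""
    h = average_hash(path)
    return any(hamming_distance(h, known) <= threshold for known in KNOWN_FLYER_HASHES)
```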

At least some of Facebook’s new efforts are working: one popular flyer TechCrunch observed on the platform was removed from some users’ feeds this week. A number of “Stop the Steal” groups we’d observed over the last month also unceremoniously blinked offline early this week following more forceful action from the company. Still, given the writing on the wall, many groups had plenty of time to tweak their names by a few words or point followers elsewhere to organize.

With only days until the presidential transition, acronym-heavy screeds promoting QAnon, an increasingly mainstream collection of outrageous pro-Trump government conspiracy theories, also remain easy to find. On one page with 2,500 followers, a QAnon believer pushed the debunked claim that anti-fascists executed the attack on the Capitol, claiming “January 6 was a trap.”

QAnon sign (Photo by Win McNamee/Getty Images)

On a different QAnon group, an ominous post from an admin issued Congress a warning: “We have found a way to end this travesty! YOUR DAYS ARE NUMBERED!” The elaborate conspiracy’s followers were well represented at the deadly riot at the Capitol, as the many giant “Q” signs and esoteric t-shirt slogans made clear.

In a statement to TechCrunch about the state of extremism on the platform, Facebook says it is coordinating with terrorism experts as well as law enforcement “to prevent direct threats to public safety.” The company also noted that it works with partners to stay aware of violent content taking root on other platforms.

Facebook’s efforts are late and uneven, but they’re also more than the company has done to date. Measures from big social networks coupled with the absence of far-right social networks like Parler and Gab have left Trump’s most ardent supporters once again swearing off Silicon Valley and fanning out for an alternative.

Social media migration

Private messaging apps Telegram and Signal are both seeing an influx of users this week, but they offer something quite different from a Facebook or Twitter-like experience. Some expert social network observers see the recent migration as seasonal rather than permanent.

“The spike in usage of messaging platforms like Telegram and Signal will be temporary,” Yonder CEO Jonathon Morgan told TechCrunch. “Most users will either settle on platforms with a social experience, like Gab, MeWe, or Parler, if it returns, or will migrate back to Twitter and Facebook.”

That company uses AI to track how social groups connect online and what they talk about — violent conspiracies included. Morgan believes that propaganda-spreading “performative internet warriors” make a lot of noise online, but a performance doesn’t work without an audience. Others may quietly pose a more serious threat.

“The different types of engagement we saw during the assault on the Capitol mirror how these groups have fragmented online,” Morgan said. “We saw a large mob who was there to cheer on the extremists but didn’t enter the Capitol, performative internet warriors taking selfies, and paramilitaries carrying flex cuffs (mislabeled as “zip ties” in a lot of social conversation), presumably ready to take hostages.

“Most users (the mob) will be back on Parler if it returns, and in the meantime, they are moving to other apps that mimic the social experience of Twitter and Facebook, like MeWe.”

Still, Morgan says that research shows “deplatforming” extremists and conspiracy-spreaders is an effective strategy and efforts by “tech companies from Airbnb to AWS” will reduce the chances of violence in the coming days.

Cleaning up platforms can help turn the masses away from dangerous views, he explained, but the same efforts might further galvanize people with an existing intense commitment to those beliefs. With the winds shifting, already heterogeneous groups will be scattered too, making their efforts desperate and less predictable.

Deplatforming works, with risks

Jonathan Greenblatt, CEO of the Anti-Defamation League, told TechCrunch that social media companies still need to do much more to prepare for inauguration week. “We saw platforms fall short in their response to the Capitol insurrection,” Greenblatt said.

He cautioned that while many changes are necessary, we should be ready for online extremism to evolve into a more fractured ecosystem. Echo chambers may become smaller and louder, even as the threat of “large scale” coordinated action diminishes.

“The fracturing has also likely pushed people to start communicating with each other via encrypted apps and other private means, strengthening the connections between those in the chat and providing a space where people feel safe openly expressing violent thoughts, organizing future events, and potentially plotting future violence,” Greenblatt said.

By their own standards, social media companies have taken extraordinary measures in the U.S. in the last two weeks. But social networks have a long history of facilitating violence abroad, even as attention turns to political violence in America.

Greenblatt repeated calls for companies to hire more human moderators, a suggestion often made by experts focused on extremism. He believes social media could still take other precautions for inauguration week, like introducing a delay into livestreams or disabling them altogether, bolstering rapid response teams and suspending more accounts temporarily rather than focusing on content takedowns and handing out “strikes.”

“Platforms have provided little-to-nothing in the way of transparency about learnings from last week’s violent attack in the Capitol,” Greenblatt said.

“We know the bare minimum of what they ought to be doing and what they are capable of doing. If these platforms actually provided transparency and insights, we could offer additional—and potentially significantly stronger—suggestions.”

#capitol-riot, #facebook-misinformation, #hate-speech, #misinformation, #social, #tc

Parler CEO admits site may never recover from Amazon ban

Enlarge (credit: Jaap Arriens/NurPhoto via Getty Images)

Parler may never recover from being banned by Amazon and a number of other technology companies, CEO John Matze told Reuters in a Wednesday interview.

“I am an optimist,” he said at one point in the conversation. “It may take days, it may take weeks but Parler will return and when we do we will be stronger.”

But at another point in the conversation, he acknowledged, “It could be never. We don’t know yet.”

Read 4 remaining paragraphs | Comments

#hate-speech, #january-6, #parler, #policy

What is Dlive? The Streaming Site Gaining Far-Right Users

A site called Dlive, where rioters broadcast from the Capitol, is benefiting from the growing exodus of right-wing users from Twitter, Facebook and YouTube.

#conspiracy-theories, #dlive-inc, #fringe-groups-and-movements, #gionet-tim, #hate-speech, #right-wing-extremism-and-alt-right, #rumors-and-misinformation, #social-media, #storming-of-the-us-capitol-jan-2021, #video-recordings-downloads-and-streaming, #whites

Reddit clone Voat, home to hate speech and QAnon, has shut down

That’s the book shut on one unsavory corner of the Internet… (credit: LdF | Getty Images)

Reddit alternative Voat shut down on Christmas Day, citing a lack of operational funding, and casting doubt on the abilities of other similar almost-anything-goes, “free speech” platforms to stay online in the long run.

“I just can’t keep it up,” Voat cofounder Justin Chastain said in the shutdown announcement. Investment dried up in March 2020, he explained. “I personally decided to keep Voat up until after the U.S. election of 2020. I’ve been paying the costs out of pocket but now I’m out of money.”

Voat first launched in 2014 as a smaller Reddit alternative dedicated to “free speech,” including explicit hate speech, extreme right-wing content, racism, and other content limited or prohibited on other sites. It gained traction in 2015, when Reddit finally banned several explicitly racist subreddits from its platform in a bid to limit harassment, and some discontented Reddit users decided to migrate over.

Read 5 remaining paragraphs | Comments

#extremism, #free-speech, #gab, #harassment, #hate-speech, #online-hate-speech, #parler, #policy, #reddit, #terrorism, #voat

Big Fines and Strict Rules Unveiled Against ‘Big Tech’ in Europe

European Union and British authorities released draft laws to halt the spread of harmful content and improve competition.

#apple-inc, #computers-and-the-internet, #data-mining-and-database-marketing, #european-union, #facebook-inc, #fines-penalties, #hate-speech, #mobile-applications, #politics-and-government, #privacy, #regulation-and-deregulation-of-industry, #social-media, #twitter

Twitch Cracks Down on Hate Speech and Harassment

The livestreaming site announced new guidelines after contending with claims that its streamers were too easily abused.

#metoo-movement, #clemens-sara, #computer-and-video-games, #computers-and-the-internet, #corporate-social-responsibility, #cyberharassment, #hate-speech, #rumors-and-misinformation, #sexual-harassment, #social-media, #twitch-interactive-inc, #video-recordings-downloads-and-streaming, #workplace-hazards-and-violations

Facebook’s self-styled ‘oversight’ board selects first cases, most dealing with hate speech

A Facebook-funded body that the tech giant set up to distance itself from tricky and potentially reputation-damaging content moderation decisions has announced the first bundle of cases it will consider.

In a press release on its website the Facebook Oversight Board (FOB) says it sifted through more than 20,000 submissions before settling on six cases — one of which was referred to it directly by Facebook.

The six cases it’s chosen to start with are:

Facebook submission: 2020-006-FB-FBR

A case from France where a user posted a video and accompanying text to a COVID-19 Facebook group — which relates to claims about the French agency that regulates health products “purportedly refusing authorisation for use of hydroxychloroquine and azithromycin against COVID-19, but authorising promotional mail for remdesivir”; with the user criticizing the lack of a health strategy in France and stating that “[Didier] Raoult’s cure” is being used elsewhere to save lives. Facebook says it removed the content for violating its policy on violence and incitement. The video in question garnered at least 50,000 views and 1,000 shares.

The FOB says Facebook indicated in its referral that this case “presents an example of the challenges faced when addressing the risk of offline harm that can be caused by misinformation about the COVID-19 pandemic”.

User submissions:

Out of the five user submissions that the FOB selected, the majority (three cases) are related to hate speech takedowns.

One case apiece is related to Facebook’s nudity and adult content policy; and to its policy around dangerous individuals and organizations.

See below for the Board’s descriptions of the five user submitted cases:

  • 2020-001-FB-UA: A user posted a screenshot of two tweets by former Malaysian Prime Minister, Dr Mahathir Mohamad, in which the former Prime Minister stated that “Muslims have a right to be angry and kill millions of French people for the massacres of the past” and “[b]ut by and large the Muslims have not applied the ‘eye for an eye’ law. Muslims don’t. The French shouldn’t. Instead the French should teach their people to respect other people’s feelings.” The user did not add a caption alongside the screenshots. Facebook removed the post for violating its policy on hate speech. The user indicated in their appeal to the Oversight Board that they wanted to raise awareness of the former Prime Minister’s “horrible words”.
  • 2020-002-FB-UA: A user posted two well-known photos of a deceased child lying fully clothed on a beach at the water’s edge. The accompanying text (in Burmese) asks why there is no retaliation against China for its treatment of Uyghur Muslims, in contrast to the recent killings in France relating to cartoons. The post also refers to the Syrian refugee crisis. Facebook removed the content for violating its hate speech policy. The user indicated in their appeal to the Oversight Board that the post was meant to disagree with people who think that the killer is right and to emphasise that human lives matter more than religious ideologies.

  • 2020-003-FB-UA: A user posted alleged historical photos showing churches in Baku, Azerbaijan, with accompanying text stating that Baku was built by Armenians and asking where the churches have gone. The user stated that Armenians are restoring mosques on their land because it is part of their history. The user said that the “т.а.з.и.к.и” are destroying churches and have no history. The user stated that they are against “Azerbaijani aggression” and “vandalism”. The content was removed for violating Facebook’s hate speech policy. The user indicated in their appeal to the Oversight Board that their intention was to demonstrate the destruction of cultural and religious monuments.

  • 2020-004-IG-UA: A user in Brazil posted a picture on Instagram with a title in Portuguese indicating that it was to raise awareness of signs of breast cancer. Eight photographs within the picture showed breast cancer symptoms with corresponding explanations of the symptoms underneath. Five of the photographs included visible and uncovered female nipples. The remaining three photographs included female breasts, with the nipples either out of shot or covered by a hand. Facebook removed the post for violating its policy on adult nudity and sexual activity. The post has a pink background, and the user indicated in a statement to the Oversight Board that it was shared as part of the national “Pink October” campaign for the prevention of breast cancer.

  • 2020-005-FB-UA: A user in the US was prompted by Facebook’s “On This Day” function to reshare a “memory” in the form of a post that the user made two years ago. The user reshared the content. The post (in English) is an alleged quote from Joseph Goebbels, the Reich Minister of Propaganda in Nazi Germany, on the need to appeal to emotions and instincts, instead of intellect, and on the unimportance of truth. Facebook removed the content for violating its policy on dangerous individuals and organisations. The user indicated in their appeal to the Oversight Board that the quote is important as the user considers the current US presidency to be following a fascist model.

Public comments on the cases can be submitted via the FOB’s website — but only for seven days (closing at 8:00 Eastern Standard Time on Tuesday, December 8, 2020).

The FOB says it “expects” to decide on each case — and “for Facebook to have acted on this decision” — within 90 days. So the first ‘results’ from the FOB, which only began reviewing cases in October, are almost certainly not going to land before 2021.

Panels comprised of five FOB members — including at least one from the region “implicated in the content” — will be responsible for deciding whether the specific pieces of content in question should stay down or be put back up.

Facebook’s outsourcing of a fantastically tiny subset of content moderation considerations to a subset of its so-called ‘Oversight Board’ has attracted plenty of criticism (including inspiring a mirrored unofficial entity that dubs itself the Real Oversight Board) — and no little cynicism.

Not least because it’s entirely funded by Facebook; structured as Facebook intended it to be structured; and with members chosen via a system devised by Facebook.

If it’s radical change you’re looking for, the FOB is not it.

Nor does the entity have any power to change Facebook policy — it can only issue recommendations (which Facebook can choose to entirely ignore).

Its remit does not extend to being able to investigate how Facebook’s attention-seeking business model influences the types of content being amplified or depressed by its algorithms, either.

And the narrow focus on content takedowns — rather than content that’s already allowed on the social network — skews its purview, as we’ve pointed out before.

So you won’t find the board asking tough questions about why hate groups continue to flourish and recruit on Facebook, for example, or robustly interrogating how much succour its algorithmic amplification has gifted to the antivaxx movement. By design, the FOB is focused on symptoms, not the nation-sized platform ill of Facebook itself. Outsourcing a fantastically tiny subset of content moderation decisions can’t signify anything else.

With this Facebook-commissioned pantomime of accountability the tech giant will be hoping to generate a helpful pipeline of distracting publicity — focused around specific and ‘nuanced’ content decisions — deflecting plainer but harder-hitting questions about the exploitative and abusive nature of Facebook’s business itself, and the lawfulness of its mass surveillance of Internet users, as lawmakers around the world grapple with how to rein in tech giants.  

The company wants the FOB to reframe discussion about the culture wars (and worse) that Facebook’s business model fuels as a societal problem — pushing a self-serving ‘fix’ for algorithmically fuelled societal division in the form of a few hand-picked professionals opining on individual pieces of content, leaving it free to continue defining the shape of the attention economy on a global scale. 

#content-moderation, #facebook, #facebook-oversight-board, #hate-speech, #platform-regulation, #social

Facebook loses final appeal in defamation takedown case, must remove same and similar hate posts globally

Austria’s Supreme Court has dismissed Facebook’s appeal in a long running speech takedown case — ruling it must remove references to defamatory comments made about a local politician worldwide for as long as the injunction lasts.

We’ve reached out to Facebook for comment on the ruling.

Green Party politician, Eva Glawischnig, successfully sued the social media giant seeking removal of defamatory comments made about her by a user of its platform after Facebook had refused to take down the abusive postings — which referred to her as a “lousy traitor”, a “corrupt tramp” and a member of a “fascist party”. 

After a preliminary injunction in 2016, Glawischnig won local removal of the defamatory postings the next year but continued her legal fight — pushing for similar postings to be removed and for takedowns to also be global.

Questions were referred up to the EU’s Court of Justice. And in a key judgement last year the CJEU decided platforms can be instructed to hunt for and remove illegal speech worldwide without falling foul of European rules that preclude platforms from being saddled with a “general content monitoring obligation”. Today’s Austrian Supreme Court ruling flows naturally from that.

Austrian newspaper Der Standard reports that the court confirmed the injunction applies worldwide, both to identical postings or those that carry the same essential meaning as the original defamatory posting.

It said the Austrian court argues that EU Member States and civil courts can require platforms like Facebook to monitor content in “specific cases” — such as when a court has identified user content as unlawful and has “specific information” about it — in order to prevent content that has been judged illegal from being reproduced and shared by another user of the network at a later point in time, with the overarching aim of preventing future violations.
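In code terms, the distinction the court draws between “identical” postings and those carrying the “same essential meaning” might be sketched roughly as follows. This is purely illustrative: the example post, the normalisation step and the similarity threshold are hypothetical, and surface text similarity is a weak stand-in for the legal notion of equivalent meaning.

```python
# Illustrative only: exact match after normalisation vs. a crude similarity
# check standing in for "same essential meaning". Text and threshold are
# hypothetical.
import re
from difflib import SequenceMatcher

ORIGINAL_POST = "example wording of the post a court has ruled unlawful"


def normalise(text: str) -> str:
    return re.sub(r"\s+", " ", text.lower()).strip()


def is_identical(candidate: str) -> bool:
    return normalise(candidate) == normalise(ORIGINAL_POST)


def is_equivalent(candidate: str, threshold: float = 0.9) -> bool:
    # Surface similarity is a weak proxy for meaning; real enforcement would
    # need semantic models and, ultimately, human judgement.
    ratio = SequenceMatcher(None, normalise(candidate), normalise(ORIGINAL_POST)).ratio()
    return ratio >= threshold
```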

The case has important implications for the limitations of online speech.

Regional lawmakers are also working on updating digital liability regulations. Commission lawmakers have said they want to force platforms to take more responsibility for the content they fence and monetize — fuelled by concerns about the impact of online hate speech, terrorist content and divisive disinformation.

A long-standing EU rule, prohibiting Member States from putting a general content monitoring obligation on platforms, limits how they can be forced to censor speech. But the CJEU ruling has opened the door to bounded monitoring of speech — in instances where it’s been judged to be illegal — and that in turn may influence the policy substance of the Digital Services Act which the Commission is due to publish in draft early next month.

In a reaction to last year’s CJEU ruling, Facebook argued it “opens the door to obligations being imposed on internet companies to proactively monitor content and then interpret if it is ‘equivalent’ to content that has been found to be illegal”.

“In order to get this right national courts will have to set out very clear definitions on what ‘identical’ and ‘equivalent’ means in practice. We hope the courts take a proportionate and measured approach, to avoid having a chilling effect on freedom of expression,” it added.

#censorship, #content-takedowns, #defamation, #europe, #eva-glawischnig, #facebook, #free-speech, #freedom-of-expression, #hate-speech, #lawsuit, #platform-regulation

Bill Offering L.G.B.T. Protections in Italy Spurs Rallies on Both Sides

Supporters frame the measure as a long-overdue means to provide basic human rights. Opponents depict it as an overreaching step that would suppress opinion.

#assaults, #discrimination, #gender, #hate-crimes, #hate-speech, #homosexuality-and-bisexuality, #italy, #law-and-legislation, #transgender-and-transsexuals, #women-and-girls

Meghan, Duchess of Sussex, Speaks Out Against Harmful Online Behavior

She said the birth last year of Archie, her son with Prince Harry, had compelled her to take a stand against online bullying and misinformation.

#archie-earl-of-dumbarton, #computers-and-the-internet, #great-britain, #harry-duke-of-sussex, #hate-speech, #markle-meghan, #royal-families

Facebook gives more details about its efforts against hate speech before Myanmar’s general election

About three weeks ago, Facebook announced it would increase its efforts against hate speech and misinformation in Myanmar before the country’s general election on November 8, 2020. Today, it gave some more details about what the company is doing to prevent the spread of hate speech and misinformation. This includes adding Burmese language warning screens to flag information rated false by third-party fact-checkers.

In November 2018, Facebook admitted it didn’t do enough to prevent its platform from being used to “foment division and incite offline violence” in Myanmar.

This is an understatement, considering that Facebook has been accused by human rights groups, including the United Nations Human Rights Council, of enabling the spread of hate speech in Myanmar against Rohingya Muslims, the target of a brutally violent ethnic cleansing campaign. A 2018 investigation by the New York Times found that members of the military in Myanmar, a predominantly Buddhist country, instigated genocide against Rohingya, and used Facebook, one of the country’s most widely-used online services, as a tool to conduct a “systematic campaign” of hate speech against the minority group.

In its announcement several weeks ago, Facebook said it will expand its misinformation policy and remove information intended to “lead to voter suppression or damage the integrity of the electoral process” by working with three fact-checking partners in Myanmar—BOOM, AFP Fact Check and Fact Crescendo. It also said it would flag potentially misleading images and apply a message forwarding limit it introduced in Sri Lanka in June 2019.

Facebook also shared that in the second quarter of 2020, it had taken action against 280,000 pieces of content in Myanmar that violated its Community Standards against hate speech, with 97.8% detected by its systems before being reported, up from the 51,000 pieces of content it took action against in the first quarter.

But, as TechCrunch’s Natasha Lomas noted, “without greater visibility into the content Facebook’s platform is amplifying, including country specific factors such as whether hate speech posting is increasing in Myanmar as the election gets closer, it’s not possible to understand what volume of hate speech is passing under the radar of Facebook’s detection systems and reaching local eyeballs.”

Facebook’s latest announcement, posted today on its News Room, doesn’t answer those questions. Instead, the company gave some more information about its preparations for the Myanmar general election.

The company said it will use technology to identify “new words and phrases associated with hate speech” in the country, and either remove posts with those words or “reduce their distribution.”
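As a rough illustration of the “remove or reduce distribution” choice described above, a keyword-level filter might look like the sketch below. The phrase list, action labels and function names are hypothetical; Facebook’s actual systems combine machine-learned classifiers with human review rather than simple phrase matching.

```python
# Hypothetical keyword-level sketch of a "remove or reduce distribution"
# decision. The phrase list and labels are placeholders, not Facebook's rules.
from dataclasses import dataclass
from typing import Optional

FLAGGED_PHRASES = {
    "example slur a": "remove",        # placeholder phrases only
    "example coded phrase b": "demote",
}


@dataclass
class ModerationDecision:
    action: str                        # "allow", "demote" or "remove"
    matched_phrase: Optional[str] = None


def moderate_post(text: str) -> ModerationDecision:
    lowered = text.lower()
    for phrase, action in FLAGGED_PHRASES.items():
        if phrase in lowered:
            return ModerationDecision(action=action, matched_phrase=phrase)
    return ModerationDecision(action="allow")
```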

It will also introduce Burmese language warning screens for misinformation identified as false by its third-party fact-checkers, make reliable information about the election and voting more visible, and promote “digital literacy training” in Myanmar through programs like an ongoing monthly television talk show called “Tea Talks” and introducing its social media analytics tool, CrowdTangle, to newsrooms.

#apps, #asia, #facebook, #hate-speech, #misinformation, #myanmar, #southeast-asia, #tc

TikTok joins Europe’s code on tackling hate speech

TikTok, the popular short video sharing app, has joined the European Union’s Code of Conduct on Countering Illegal Hate Speech.

In a statement on joining the code, TikTok’s head of trust and safety for EMEA, Cormac Keenan, said: “We have never allowed hate on TikTok, and we believe it’s important that internet platforms are held to account on an issue as crucial as this.”

The non-legally binding code kicked off four years ago with a handful of tech giants agreeing to measures aimed at accelerating takedowns of illegal content while supporting their users to report hate speech and committing to increase joint working to share best practice to tackle the problem.

Since 2016 the code has grown from single to double figure signatories — and now covers Dailymotion, Facebook, Google+, Instagram, Jeuxvideo.com, Microsoft, Snapchat, TikTok, Twitter and YouTube.

TikTok’s statement goes on to highlight the platform’s “zero-tolerance” stance on hate speech and hate groups — in what reads like a tacit dig at Facebook, given the latter’s record of refusing to take down hate speech on ‘freedom of expression‘ grounds (including founder Mark Zuckerberg’s personal defence of letting holocaust denial thrive on his platform).

“We have a zero-tolerance stance on organised hate groups and those associated with them, like accounts that spread or are linked to white supremacy or nationalism, male supremacy, anti-Semitism, and other hate-based ideologies. We also remove race-based harassment and the denial of violent tragedies, such as the Holocaust and slavery,” Keenan writes.

“Our ultimate goal is to eliminate hate on TikTok. We recognise that this may seem an insurmountable challenge as the world is increasingly polarised, but we believe that this shouldn’t stop us from trying. Every bit of progress we make gets us that much closer to a more welcoming community experience for people on TikTok and out in the world.”

It’s interesting that EU hate speech rules are being viewed as a PR opportunity for TikTok to differentiate itself vs rival social platforms — even as most of them (Facebook included) are signed up to the very same code.

TikTok signing up comes a few months after it added its name to a similar EU initiative aimed at tackling the spread of online disinformation via a series of non-legally binding commitments.

The voluntary codes have proved popular with tech giants, given they lack legal compulsion and provide the opportunity for platforms to project the idea they’re doing something about tricky content issues — without the calibre and efficacy of their action being quantifiable.

The codes have also bought time by staving off actual regulation. But that is now looming. EU lawmakers are, for example, eyeing binding transparency rules for platforms to back up voluntary reports of illegal hate speech removals and make sure users are being properly informed of platform actions.

Commissioners are also consulting on and drafting a broader package of measures with the aim of updating long-standing rules wrapping digital services — including looking specifically at the rules around online liability and defining platform responsibilities vis-a-vis content.

A proposal for the Digital Services Act is slated before the end of the year.

The exact shape of the next-gen EU platform regulation remains to be seen but tighter rules for platform giants is one very real possibility, as lawmakers consult on ex ante regulation of so-called ‘gatekeeper’ platforms.

“Europe’s online marketplaces should be vibrant ecosystems, where start-ups have a real chance to blossom – they shouldn’t be closed shops controlled by a handful of gatekeeper platforms,” said EVP and competition chief, Margrethe Vestager, giving a speech in Berlin yesterday. “A list of ‘dos and don’ts’ could prevent conduct that is proven to be harmful to happen in the first place.

“The goal is that all companies, big and small, can compete on their merits on and offline.”

In just one example of the ongoing content moderation challenges faced by platforms, clips of a suicide were reported to be circulating on TikTok this week. Yesterday the company said it was trying to remove the content which it said had been livestreamed on Facebook.

#code-of-conduct-on-countering-illegal-hate-speech, #eu, #europe, #hate-speech, #platform-regulation, #policy, #social, #tiktok

Facebook touts beefed up hate speech detection ahead of Myanmar election

Facebook has offered a little detail on extra steps it’s taking to improve its ability to detect and remove hate speech and election disinformation ahead of Myanmar’s election. A general election is scheduled to take place in the country on November 8, 2020.

The announcement comes close to two years after the company admitted a catastrophic failure to prevent its platform from being weaponized to foment division and incite violence against the country’s Rohingya minority.

Facebook says now that it has expanded its misinformation policy with the aim of combating voter suppression and will now remove information “that could lead to voter suppression or damage the integrity of the electoral process” — giving the example of a post that falsely claims a candidate is a Bengali, not a Myanmar citizen, and thus ineligible to stand.

“Working with local partners, between now and November 22, we will remove verifiable misinformation and unverifiable rumors that are assessed as having the potential to suppress the vote or damage the integrity of the electoral process,” it writes.

Facebook says it’s working with three fact-checking organizations in the country — namely: BOOM, AFP Fact Check and Fact Crescendo — after introducing a fact-checking program there in March.

In March 2018 the United Nations warned that Facebook’s platform was being abused to spread hate speech and whip up ethnic violence in Myanmar. By November of that year the tech giant was forced to admit it had not stopped its platform from being repurposed as a tool to drive genocide, after a damning independent investigation slammed its impact on human rights.

On hate speech, which Facebook admits could suppress the vote in addition to leading to what it describes as “imminent, offline harm” (aka violence), the tech giant claims to have invested “significantly” in “proactive detection technologies” that it says help it “catch violating content more quickly”, albeit without quantifying the size of its investment or providing further details. It only notes that it “also” uses AI to “proactively identify hate speech in 45 languages, including Burmese”.

Facebook’s blog post offers a metric to imply progress — with the company stating that in Q2 2020 it took action against 280,000 pieces of content in Myanmar for violations of its Community Standards prohibiting hate speech, of which 97.8% were detected proactively by its systems before the content was reported to it.

“This is up significantly from Q1 2020, when we took action against 51,000 pieces of content for hate speech violations, detecting 83% proactively,” it adds.
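For clarity, the “proactive” figure is just the share of actioned content that Facebook’s systems flagged before any user report. A quick sketch reproducing the arithmetic from the quoted numbers (the totals are as reported; the proactive counts are implied by the percentages):

```python
# "Proactive rate" = content flagged by automated systems before any user
# report, divided by all content actioned in the period.
def proactive_rate(flagged_before_report: int, total_actioned: int) -> float:
    return flagged_before_report / total_actioned


q2_total = 280_000
q2_proactive = round(0.978 * q2_total)                  # roughly 273,840 pieces
print(f"{proactive_rate(q2_proactive, q2_total):.1%}")  # ~97.8%

q1_total = 51_000
q1_proactive = round(0.83 * q1_total)                   # roughly 42,330 pieces
print(f"{proactive_rate(q1_proactive, q1_total):.1%}")  # ~83.0%
```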

However without greater visibility into the content Facebook’s platform is amplifying, including country-specific factors such as whether hate speech posting is increasing in Myanmar as the election gets closer, it’s not possible to understand what volume of hate speech is passing under the radar of Facebook’s detection systems and reaching local eyeballs.

In a more clearly detailed development, Facebook notes that since August, electoral, issue and political ads in Myanmar have had to display a ‘paid for by’ disclosure label. Such ads are also stored in a searchable Ad Library for seven years — in an expansion of the self-styled ‘political ads transparency measures’ Facebook launched more than two years ago in the US and other western markets.

Facebook also says it’s working with two local partners to verify the official national Facebook Pages of political parties in Myanmar. “So far, more than 40 political parties have been given a verified badge,” it writes. “This provides a blue tick on the Facebook Page of a party and makes it easier for users to differentiate a real, official political party page from unofficial pages, which is important during an election campaign period.”

Another recent change it flags is an ‘image context reshare’ product, which launched in June — which Facebook says alerts a user when they attempt to share an image that’s more than a year old and could be “potentially harmful or misleading” (such as an image that “may come close to violating Facebook’s guidelines on violent content”).

“Out-of-context images are often used to deceive, confuse and cause harm. With this product, users will be shown a message when they attempt to share specific types of images, including photos that are over a year old and that may come close to violating Facebook’s guidelines on violent content. We warn people that the image they are about to share could be harmful or misleading[; the warning] will be triggered using a combination of artificial intelligence (AI) and human review,” it writes without offering any specific examples.
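A minimal sketch of the age-based part of that warning logic, assuming a hypothetical needs_context_warning helper; the one-year cutoff comes from Facebook’s description, while the “possibly harmful” flag stands in for whatever classifier or reviewer signal the company actually uses.

```python
# Hypothetical sketch of the age-based trigger for the reshare warning.
from datetime import datetime, timedelta
from typing import Optional


def needs_context_warning(first_seen: datetime, possibly_harmful: bool,
                          now: Optional[datetime] = None) -> bool:
    """Flag an image for a context warning if it is over a year old and has
    been marked (by a classifier or human reviewer) as potentially harmful."""
    now = now or datetime.utcnow()
    return (now - first_seen) > timedelta(days=365) and possibly_harmful


# Example: an image first seen 18 months ago and marked potentially harmful.
eighteen_months_ago = datetime.utcnow() - timedelta(days=540)
print(needs_context_warning(eighteen_months_ago, possibly_harmful=True))  # True
```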

Another change it notes is the application of a limit on message forwarding to five recipients which Facebook introduced in Sri Lanka back in June 2019.

“These limits are a proven method of slowing the spread of viral misinformation that has the potential to cause real world harm. This safety feature is available in Myanmar and, over the course of the next few weeks, we will be making it available to Messenger users worldwide,” it writes.
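The forwarding cap is conceptually simple: refuse to fan a single forward out to more than five chats. Here is a hypothetical sketch under that assumption, not Messenger’s actual implementation.

```python
# Hypothetical sketch of a per-forward recipient cap; not Messenger's code.
FORWARD_RECIPIENT_LIMIT = 5


class ForwardLimitError(Exception):
    pass


def deliver(chat_id: str, message_id: str) -> None:
    # Placeholder for the actual delivery call.
    print(f"Forwarded {message_id} to {chat_id}")


def forward_message(message_id: str, recipient_chats: list) -> None:
    """Refuse to fan a single forward out to more than the allowed number of
    chats, which slows the viral spread of any one message."""
    if len(recipient_chats) > FORWARD_RECIPIENT_LIMIT:
        raise ForwardLimitError(
            f"Cannot forward to {len(recipient_chats)} chats; "
            f"limit is {FORWARD_RECIPIENT_LIMIT}."
        )
    for chat in recipient_chats:
        deliver(chat, message_id)
```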

On coordinated election interference, the tech giant has nothing of substance to share — beyond its customary claim that it’s “constantly working to find and stop coordinated campaigns that seek to manipulate public debate across our apps”, including groups seeking to do so ahead of a major election.

“Since 2018, we’ve identified and disrupted six networks engaging in Coordinated Inauthentic Behavior in Myanmar. These networks of accounts, Pages and Groups were masking their identities to mislead people about who they were and what they were doing by manipulating public discourse and misleading people about the origins of content,” it adds.

In summing up the changes, Facebook says it’s “built a team that is dedicated to Myanmar”, which it notes includes people “who spend significant time on the ground working with civil society partners who are advocating on a range of human and digital rights issues across Myanmar’s diverse, multi-ethnic society” — though clearly this team is not operating out of Myanmar.

It further claims engagement with key regional stakeholders will ensure Facebook’s business is “responsive to local needs” — something the company demonstrably failed on back in 2018.

“We remain committed to advancing the social and economic benefits of Facebook in Myanmar. Although we know that this work will continue beyond November, we acknowledge that Myanmar’s 2020 general election will be an important marker along the journey,” Facebook adds.

There’s no mention in its blog post of accusations that Facebook is actively obstructing an investigation into genocide in Myanmar.

Earlier this month, Time reported that Facebook is using US law to try to block a request for information related to Myanmar military officials’ use of its platforms by the West African nation, The Gambia.

“Facebook said the request is ‘extraordinarily broad’, as well as ‘unduly intrusive or burdensome’. Calling on the U.S. District Court for the District of Columbia to reject the application, the social media giant says The Gambia fails to ‘identify accounts with sufficient specificity’,” Time reported.

“The Gambia was actually quite specific, going so far as to name 17 officials, two military units and dozens of pages and accounts,” it added.

“Facebook also takes issue with the fact that The Gambia is seeking information dating back to 2012, evidently failing to recognize two similar waves of atrocities against Rohingya that year, and that genocidal intent isn’t spontaneous, but builds over time.”

In another recent development, Facebook has been accused of bending its hate speech policies to ignore inflammatory posts made against Rohingya Muslim immigrants by Hindu nationalist individuals and groups.

The Wall Street Journal reported last month that Facebook’s top public-policy executive in India, Ankhi Das, opposed applying its hate speech rules to T. Raja Singh, a member of Indian Prime Minister Narendra Modi’s Hindu nationalist party, along with at least three other Hindu nationalist individuals and groups flagged internally for promoting or participating in violence — citing sourcing from current and former Facebook employees.

#artificial-intelligence, #asia, #election-integrity, #facebook, #hate-speech, #india, #messenger, #myanmar, #narendra-modi, #social, #social-media, #sri-lanka, #united-nations, #voter-suppression

Facebook Must Better Police Online Hate, State Attorneys General Say

The call from 20 state officials adds to the rising pressure facing Mark Zuckerberg and his company.

#attorneys-general, #computers-and-the-internet, #cyberharassment, #facebook-inc, #fringe-groups-and-movements, #grewal-gurbir-s, #hate-speech, #new-jersey, #rumors-and-misinformation, #sandberg-sheryl-k, #social-media, #states-us, #zuckerberg-mark-e