Some independent shops flout the new limits on free expression. Others try to come to terms with them. For readers, they offer a sense of connection in a changed city.
Doug Guthrie, once one of America’s leading China bulls, rang the alarm on doing business there. He spoke about his time at Apple.
The city’s government said it would block the distribution of films that are deemed to undermine national security, bringing the territory more in line with mainland Chinese rules.
Yes, the civil liberties group is divided. What else is new?
Apple’s privacy push has put the company at odds with rivals. Despite protests from some corners of Silicon Valley, Monday’s announcements show that Apple has doubled down on privacy features.
The popular social media site had removed a post by President Muhammadu Buhari threatening secessionists in the southeast of the country.
Users outside China said the site had failed to call up videos and images of the iconic figure from the 1989 crackdown.
Authorities succeeded in shuttering an activist site for three days. The takedown, and its reversal, presage a battle over internet freedoms.
The social network wrongly bowed to government demands to take down content in the countries, employees said, in more signs of internal dissent.
Its campaign is part of a global wave of actions by governments that are testing how far they can go to control online speech.
The new law is a direct response to Facebook’s and Twitter’s suspensions of former President Donald J. Trump.
The Russian-language online news channel was best known for its priestly presenters and conspiratorial musings about the global financial system plotting against Moscow—suspicions it viewed as confirmed last July when the Google-owned streaming service took the channel down over what it claimed was a US sanctions breach.
Now Tsargrad is poised to strike back after a landmark court ruling that could put Google’s entire Russian business in jeopardy as Moscow steps up attempts to force western technology companies to comply with its laws.
A Moscow court last month ordered Google to reinstate Tsargrad’s YouTube channel globally on the grounds the ban had unfairly discriminated against its owner, Konstantin Malofeev.
The literature of the fantastic has always embodied our profound truths, our finest attributes and our deepest prejudices.
To stay on the good side of the Chinese authorities, the company has made decisions that contradict its carefully curated image.
Apple built the world’s most valuable business on top of China. Now it has to answer to the Chinese government.
My memoir could teach teenagers how to exit an abusive relationship. So why don’t some parents want their children to read it?
Police are now stopping random people on the streets. A group of secret informers has reappeared. The killings continue, but so does the resistance.
A company-appointed panel ruled that the ban was justified at the time but added that the company should reassess its action and make a final decision in six months.
QAnon adherents and other far-right influencers are making thousands of dollars broadcasting election and vaccine conspiracy theories on the streaming site.
On social media, the director’s fans blurred out her name and turned images on their sides to evade the censors. “People should be celebrating,” one writer said.
The New York Post has complained that Facebook is blocking and downplaying its stories. But the platform doesn’t pay any special deference to journalists.
As online attacks against Chinese feminists intensify, popular social media companies are responding by removing the women — not the abusers — from their platforms.
If you live in a community with a homeowners association, chances are good that you may be limited to just the Stars and Stripes.
Never before have so many countries, including China, moved with such vigor at the same time to limit the power of a single industry.
Democrats are breathing easier. Republicans are crying censorship. For all of the country’s news consumers, a strange quiet has descended after a four-year bombardment of presidential verbiage.
After a year dominated by protests against police killings of Black Americans, the books on the list of the most frequently challenged titles of 2020 reflected the movement — and the backlash to it.
Online platforms that stream dance, singing and comedy shows are pixelating performers’ T-shirts and sneakers amid a nationalistic fervor.
Russia has implemented a novel censorship method in an ongoing effort to silence Twitter. Instead of outright blocking the social media site, the country is using previously unseen techniques to slow traffic to a crawl and make the site all but unusable for people inside the country.
Research published Tuesday says that the throttling slows traffic traveling between Twitter and Russia-based end users to a paltry 128kbps. Whereas past Internet censorship techniques used by Russia and other nation-states have relied on outright blocking, slowing traffic passing to and from a widely used Internet service is a relatively new technique that provides benefits for the censoring party.
Easy to implement, hard to circumvent
“Contrary to blocking, where access to the content is blocked, throttling aims to degrade the quality of service, making it nearly impossible for users to distinguish imposed/intentional throttling from nuanced reasons such as high server load or a network congestion,” researchers with Censored Planet, a censorship measurement platform that collects data in more than 200 countries, wrote in a report. “With the prevalence of ‘dual-use’ technologies such as Deep Packet Inspection devices (DPIs), throttling is straightforward for authorities to implement yet hard for users to attribute or circumvent.”
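The distinction the researchers draw between blocking and throttling can be made concrete. Below is a minimal sketch of how a measurement client might estimate effective throughput to a suspect host; the URL, byte count, and function names are illustrative assumptions, not part of Censored Planet's actual tooling:

```python
import time
import urllib.request

def kbps(nbytes: int, seconds: float) -> float:
    # Effective throughput in kilobits per second.
    return nbytes * 8 / 1000 / seconds

def measure(url: str, nbytes: int = 64_000) -> float:
    # Time a fixed-size download. A rate pinned near ~128 kbps across
    # repeated runs, while control hosts stay fast, is consistent with
    # deliberate throttling rather than transient congestion.
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=30) as resp:
        data = resp.read(nbytes)
    return kbps(len(data), time.monotonic() - start)
```

Repeating the measurement against both the suspect host and unaffected control hosts is what lets a user, or a platform like Censored Planet, attribute a slowdown to throttling rather than to server load or congestion.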
The security forces have arrested at least 56 reporters, outlawed online news outlets and crippled communications. Young people have stepped in with their phones to help document the brutality.
Online portals have practiced aggressive journalism in a mostly compliant media landscape. But trolls and the government could now be empowered to stop them.
The director, Charles Ferguson, said in a lawsuit that an executive was concerned about the “negative reaction it would provoke among Trump supporters and the Trump administration.”
A New Jersey teacher was suspended in 2017 after, she says, the school administration told her to remove a reference to Mr. Trump from a student’s shirt in a photo.
Beginning in April, new iPhones and other iOS devices sold in Russia will include an extra setup step. Alongside questions about language preference and whether to enable Siri, users will see a screen that prompts them to install a list of apps from Russian developers. It’s not just a regional peculiarity. It’s a concession Apple has made to legal pressure from Moscow—one that could have implications far beyond Russia’s borders.
The law in question dates back to 2019, when Russia dictated that all computers, smartphones, smart TVs, and so on sold there must come preloaded with a selection of state-approved apps that includes browsers, messenger platforms, and even antivirus services. Apple has stopped short of that; the suggested apps aren’t pre-installed, and users can opt not to download them. But the company’s decision to bend its rules on pre-installs could inspire other repressive regimes to make similar demands—or even more invasive ones.
Officials said the social network had failed to block objectionable political content, a sign of the sacrifices it must make to remain in the vast but difficult market.
Keep the ban of the former president in place.
This is https://speed.gulag.link/, a speed-test application that demonstrates Roskomnadzor throttling to the Russian users it affects. [credit: Jim Salter]
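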
Last night, a confidential source at a Russian ISP contacted Ars with confirmation of the titanic mistake Roskomnadzor—Russia’s Federal Service for Supervision of Communications, Information Technology, and Mass Media—made when attempting to punitively throttle Twitter’s link-shortening service.
Our source tells us that Roskomnadzor distributes to all Russian ISPs a hardware package that must be connected just behind that ISP’s BGP core router. At their small ISP, Roskomnadzor’s package includes an EcoFilter 4080 deep packet inspection system, a pair of Russian-made 10Gbps aggregation switches, and two Huawei servers. According to our source, this hardware is “massive overkill” for its necessary function and the traffic level the ISP experiences—possibly because “at some point in time, government planned to capture all the traffic there is.”
Currently, the Roskomnadzor package does basic filtering against the list of banned resources—and, as of this week, has begun on-the-fly modifications of DNS requests as well. The DNS mangling also caused problems when first enabled—according to our source, YouTube DNS requests were broken for most of a day. Roskomnadzor eventually plans to require all Russian ISPs to replace the real root DNS servers with its own, but that project has met with resistance and difficulties.
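As a rough sketch of how a client might notice this kind of on-path DNS rewriting: resolve a name through the system (ISP-supplied) resolver and compare the answers against a set obtained out of band, e.g. from an independent encrypted resolver. The function names and comparison logic below are illustrative; disjoint answer sets are a signal of tampering, not proof, since CDNs rotate addresses too:

```python
import socket

def local_a_records(hostname: str) -> set[str]:
    # Resolve via the system resolver, which in Russia sits behind the
    # Roskomnadzor-supplied hardware described above.
    return {info[4][0]
            for info in socket.getaddrinfo(hostname, 443, socket.AF_INET)}

def looks_mangled(local: set[str], trusted: set[str]) -> bool:
    # Completely disjoint answer sets suggest the local response was
    # rewritten in transit; any overlap suggests it was not.
    return bool(local) and bool(trusted) and local.isdisjoint(trusted)
```

In practice the "trusted" set would come from a resolver the censor cannot rewrite, such as a DNS-over-HTTPS endpoint reached over TLS.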
Kentik Director of Internet Analysis Doug Madory observed this morning that traffic to Russian state ISP Rostelecom dropped significantly in the wake of its attempt to throttle Twitter. The outages seem to have been caused by a poorly crafted substring in a blocklist/network shaping tool maintained by Russia’s Roskomnadzor bureau.
What Roskomnadzor intended was to slow down access to Twitter’s link shortening service, t.co. All links embedded in tweets are automatically wrapped through this service, which enables Twitter to monitor the types and quality of links its users share.
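The over-blocking described above is easy to reproduce. A naive substring rule for "t.co" also fires on any hostname that merely contains those four characters, which is how unrelated domains can get swept up; anchoring the match to domain boundaries avoids that. The function names here are ours, for illustration:

```python
BANNED = "t.co"

def naive_match(hostname: str) -> bool:
    # Substring test: fires on any hostname containing "t.co".
    return BANNED in hostname

def anchored_match(hostname: str) -> bool:
    # Anchored test: the exact domain, or a subdomain of it.
    return hostname == BANNED or hostname.endswith("." + BANNED)

# "microsoft.com" and "reddit.com" both contain the substring "t.co",
# so the naive rule throttles them along with the real target.
```

The anchored version still matches t.co and www.t.co but leaves every domain that merely contains the substring alone.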
Russian authorities have railed against Twitter for some time due to the service’s failure or refusal to remove content illegal in Russia. This includes content that is illegal in most of the world and violates Twitter’s own terms of service, such as self-harm and child sexualization—but Roskomnadzor claims only 2,000 or so such posts over the course of a year. It seems likely that the real sticking point for the agency is posts encouraging children to join Russian opposition protests.
In its latest strike against online content it doesn’t control, Russia is throttling Twitter. State agency Roskomnadzor said today it was taking the action in response to the social media firm’s failure to remove banned content, claiming it had identified more than 3,000 unlawful posts that have not been taken down — and warning it could implement a total block on the service.
However, the comms regulator’s action to slow all of Twitter’s mobile traffic, and that of 50% of desktop users in Russia, appeared to have briefly taken down Roskomnadzor’s own website earlier today.
Reports also circulated on social media that Russian government websites, including kremlin.ru, had been affected. At the time of writing these sites were accessible but earlier we were unable to access Roskomnadzor’s site.
The stand-off between the state agency and Twitter comes at a time when Russia is trying to clamp down on anti-corruption protestors who are supporters of the jailed opposition leader, Alexei Navalny — who has, in recent weeks, called for demonstrators to take to the streets to ramp up pressure on the regime.
Roskomnadzor’s statement makes no mention of the state’s push to censor political opposition — claiming only that the content it’s throttling Twitter for failing to delete is material relating to minors committing suicide; child pornography; and drug use. Hence it also claims to be taking the action to “protect Russian citizens”. However, a draconian application of speech-chilling laws to try to silence political opposition is nothing new in Putin’s Russia.
The Russian regime has sought to get content it doesn’t like removed from foreign-based social media services a number of times in recent years, including — as now — resorting to technical means to limit access.
Most notoriously, back in 2018, an attempt by Russia to block access to the messaging service Telegram resulted in massive collateral damage to the local Internet as the block took down millions of (non-Telegram-related) IP addresses — disrupting those other services.
Also in 2018 Facebook-owned Instagram complied with a Russian request to remove content posted by Navalny — which earned it a shaming tweet from the now jailed politician.
Although now behind bars in Russia — Navalny was jailed in February, after Russia claimed he had violated the conditions of a suspended sentence — the prominent Putin critic has continued to use his official Twitter account as a megaphone to denounce corruption and draw attention to the injustice of his detention, following his attempted poisoning last year (which has been linked to Russia’s FSB).
Recent tweets from Navalny’s account include amplification of an investigation by the German newspaper Bild into RT DE, the Russian state-controlled media outlet Russia Today’s German channel — which the newspaper accuses of espionage in Germany targeting Navalny and his associates (he was staying in a German hospital in Berlin at the time, recovering from the attempted poisoning).
Slowing down access to Twitter is one way for Russia to try to put a lid on Navalny’s critical output on the platform — which also includes a recent retweet of a video claiming that Russian citizens’ taxes were used this winter by Putin and his cronies to fund yachts, whiskey and a Maldivian vacation.
Navalny’s account has also tweeted in recent hours to denounce his jailing by the Russian state following its attempt to poison him — saying: “This situation is called attempted murder”.
At the time of writing Twitter had not responded to requests for comment on Roskomnadzor’s action.
However, last month, in a worrying development in India that’s also related to anti-government protests (in that case by farmers seeking to reverse moves to deregulate the market), Twitter caved in to pressure from the government — shuttering 500 accounts, including some linked to the anti-government protests.
It also agreed to reduce the visibility of certain protest hashtags.
A British reader relates how “our love was eroded” by Meghan’s public actions. Also: Censoring Dr. Seuss; gay adoption; when birth control fails.
The Dr. Seuss cancellation illustrates all the problems that they used to have with censorship.
The video platform became the latest American internet giant to take down the military’s content since the coup last month.
The Facebook Oversight Board (FOB) is already feeling frustrated by the binary choices it’s expected to make as it reviews Facebook’s content moderation decisions, according to one of its members, who was giving evidence today to a UK House of Lords committee that is running an inquiry into freedom of expression online.
The FOB is currently considering whether to overturn Facebook’s ban on former US president Donald Trump. The tech giant banned Trump “indefinitely” earlier this year after his supporters stormed the US Capitol.
The chaotic insurrection on January 6 led to a number of deaths and widespread condemnation of how mainstream tech platforms had stood back and allowed Trump to use their tools as megaphones to whip up division and hate rather than enforcing their rules in his case.
Yet, after finally banning Trump, Facebook almost immediately referred the case to its self-appointed and self-styled Oversight Board for review — opening up the prospect that its Trump ban could be reversed in short order via an exceptional review process that Facebook has fashioned, funded and staffed.
Alan Rusbridger, a former editor of the British newspaper The Guardian — and one of 20 FOB members selected as an initial cohort (the Board’s full headcount will be double that) — avoided making a direct reference to the Trump case today, given the review is ongoing, but he implied that the binary choices it has at its disposal at this early stage aren’t as nuanced as he’d like.
“What happens if — without commenting on any high profile current cases — you didn’t want to ban somebody for life but you wanted to have a ‘sin bin’ so that if they misbehaved you could chuck them back off again?” he said, suggesting he’d like to be able to issue a soccer-style “yellow card” instead.
“I think the Board will want to expand in its scope. I think we’re already a bit frustrated by just saying take it down or leave it up,” he went on. “What happens if you want to… make something less viral? What happens if you want to put an interstitial?
“So I think all these things are things that the Board may ask Facebook for in time. But we have to get our feet under the table first — we can do what we want.”
“At some point we’re going to ask to see the algorithm, I feel sure — whatever that means,” Rusbridger also told the committee. “Whether we can understand it when we see it is a different matter.”
To many people, Facebook’s Trump ban is uncontroversial — given the risk of further violence posed by letting Trump continue to use its megaphone to foment insurrection. There are also clear and repeat breaches of Facebook’s community standards if you want to be a stickler for its rules.
Among supporters of the ban is Facebook’s former chief security officer, Alex Stamos, who has since been working on wider trust and safety issues for online platforms via the Stanford Internet Observatory.
Stamos was urging both Twitter and Facebook to cut Trump off before everything kicked off, writing in early January: “There are no legitimate equities left and labeling won’t do it.”
But in the wake of big tech moving almost as a unit to finally put Trump on mute, a number of world leaders and lawmakers were quick to express misgivings at the big tech power flex.
Germany’s chancellor called Twitter’s ban on him “problematic”, saying it raised troubling questions about the power of the platforms to interfere with speech. While other lawmakers in Europe seized on the unilateral action — saying it underlined the need for proper democratic regulation of tech giants.
The sight of the world’s most powerful social media platforms being able to mute a democratically elected president (even one as divisive and unpopular as Trump) made politicians of all stripes feel queasy.
Facebook’s entirely predictable response was, of course, to outsource this two-sided conundrum to the FOB. After all, that was its whole plan for the Board. The Board would be there to deal with the most headachey and controversial content moderation stuff.
And on that level Facebook’s Oversight Board is doing exactly the job Facebook intended for it.
But it’s interesting that this unofficial ‘supreme court’ is already feeling frustrated by the limited binary choices Facebook asks of it. (In the Trump case, either reversing the ban entirely or continuing it indefinitely.)
The FOB’s unofficial message seems to be that the tools are simply far too blunt. Although Facebook has never said it will be bound by any wider policy suggestions the Board might make — only that it will abide by the specific individual review decisions. (Which is why a common critique of the Board is that it’s toothless where it matters.)
How aggressive the Board will be in pushing Facebook to be less frustrating very much remains to be seen.
“None of this is going to be solved quickly,” Rusbridger went on to tell the committee in more general remarks on the challenges of moderating speech in the digital era. Getting to grips with the Internet’s publishing revolution could in fact, he implied, take the work of generations — making the customary reference to the long tail of societal disruption that flowed from Gutenberg’s invention of the printing press.
If Facebook was hoping the FOB would kick hard (and thorny-in-its-side) questions around content moderation into long and intellectual grasses it’s surely delighted with the level of beard stroking which Rusbridger’s evidence implies is now going on inside the Board. (If, possibly, slightly less enchanted by the prospect of its appointees asking it if they can poke around its algorithmic black boxes.)
Kate Klonick, an assistant professor at St John’s University Law School, was also giving evidence to the committee — having written an article on the inner workings of the FOB, published recently in the New Yorker, after she was given wide-ranging access by Facebook to observe the process of the body being set up.
The Lords committee was keen to learn more on the workings of the FOB and pressed the witnesses several times on the question of the Board’s independence from Facebook.
Rusbridger batted away concerns on that front — saying “we don’t feel we work for Facebook at all”. Though Board members are paid by Facebook via a trust it set up to put the FOB at arm’s length from the corporate mothership. And the committee didn’t shy away from raising the payment point to query how genuinely independent Board members can really be.
“I feel highly independent,” Rusbridger said. “I don’t think there’s any obligation at all to be nice to Facebook or to be horrible to Facebook.”
“One of the nice things about this Board is occasionally people will say but if we did that that will scupper Facebook’s economic model in such and such a country. To which we answer well that’s not our problem. Which is a very liberating thing,” he added.
Of course it’s hard to imagine a sitting member of the FOB being able to answer the independence question any other way — unless they were simultaneously resigning their commission (which, to be clear, Rusbridger wasn’t).
He confirmed that Board members can serve three terms of three years apiece — so he could have almost a decade of beard-stroking on Facebook’s behalf ahead of him.
Klonick, meanwhile, emphasized the scale of the challenge it had been for Facebook to try to build from scratch a quasi-independent oversight body and create distance between itself and its claimed watchdog.
“Building an institution to be a watchdog institution — it is incredibly hard to transition to institution-building and to break those bonds [between the Board and Facebook] and set up these new people with frankly this huge set of problems and a new technology and a new back end and a content management system and everything,” she said.
Rusbridger had said the Board went through an extensive training process that involved participation from Facebook representatives during the ‘onboarding’. But he went on to describe a moment when the training had finished and the FOB realized some Facebook reps were still joining their calls — saying that at that point the Board felt empowered to tell Facebook to leave.
“This was exactly the type of moment — having watched this — that I knew had to happen,” added Klonick. “There had to be some type of formal break — and it was told to me that this was a natural moment that they had done their training and this was going to be moment of push back and breaking away from the nest. And this was it.”
However, if your measure of independence is not having Facebook literally listening in on the Board’s calls, you do have to query how much Kool-Aid Facebook may have successfully doled out to its chosen and willing participants over the long and intricate process of programming its own watchdog — including to the extra outsiders it allowed in to observe the setup.
The committee was also interested in the fact the FOB has so far mostly ordered Facebook to reinstate content its moderators had previously taken down.
In January, when the Board issued its first decisions, it overturned four out of five Facebook takedowns — including in relation to a number of hate speech cases. The move quickly attracted criticism over the direction of travel. After all, the wider critique of Facebook’s business is that it’s far too reluctant to remove toxic content (it only banned Holocaust denial last year, for example). And lo! Here’s its self-styled ‘Oversight Board’ taking decisions to reverse hate speech takedowns…
The unofficial and oppositional ‘Real Facebook Board’ — which is truly independent and heavily critical of Facebook — pounced and decried the decisions as “shocking”, saying the FOB had “bent over backwards to excuse hate”.
Klonick said the reality is that the FOB is not Facebook’s supreme court — but rather it’s essentially just “a dispute resolution mechanism for users”.
If that assessment is true — and it sounds spot on, so long as you recall the fantastically tiny number of users who get to use it — the amount of PR Facebook has been able to generate off of something that should really just be a standard feature of its platform is truly incredible.
Klonick argued that the Board’s early reversals were the result of it hearing from users objecting to content takedowns — which had made it “sympathetic” to their complaints.
“Absolute frustration at not knowing specifically what rule was broken or how to avoid breaking the rule again or what they did to be able to get there or to be able to tell their side of the story,” she said, listing the kinds of things Board members had told her they were hearing from users who had petitioned for a review of a takedown decision against them.
“I think that what you’re seeing in the Board’s decision is, first and foremost, to try to build some of that back in,” she suggested. “Is that the signal that they’re sending back to Facebook — that’s it’s pretty low hanging fruit to be honest. Which is let people know the exact rule, given them a fact to fact type of analysis or application of the rule to the facts and give them that kind of read in to what they’re seeing and people will be happier with what’s going on.
“Or at least just feel a little bit more like there is a process and it’s not just this black box that’s censoring them.”
In his response to the committee’s query, Rusbridger discussed how he approaches review decision-making.
“In most judgements I begin by thinking well why would we restrict freedom of speech in this particular case — and that does get you into interesting questions,” he said, having earlier summed up his school of thought on speech as akin to the ‘fight bad speech with more speech’ Justice Brandeis type view.
“The right not to be offended has been engaged by one of the cases — as opposed to the borderline between being offended and being harmed,” he went on. “That issue has been argued about by political philosophers for a long time and it certainly will never be settled absolutely.
“But if you went along with establishing a right not to be offended that would have huge implications for the ability to discuss almost anything in the end. And yet there have been one or two cases where essentially Facebook, in taking something down, has invoked something like that.”
“Harm as opposed to offence is clearly something you would treat differently,” he added. “And we’re in the fortunate position of being able to hire in experts and seek advisors on the harm here.”
While Rusbridger didn’t sound troubled about the challenges and pitfalls facing the Board when it may have to set the “borderline” between offensive speech and harmful speech itself — being able to (further) outsource expertise presumably helps — he did raise a number of other operational concerns during the session. Including over the lack of technical expertise among current board members (who were purely Facebook’s picks).
Without technical expertise, how can the Board ‘examine the algorithm’, as he suggested it would want to? It won’t be able to understand Facebook’s content distribution machine in any meaningful way.
The Board’s current lack of technical expertise also raises wider questions about its function — and whether its first learned cohort might be played as useful idiots, from Facebook’s self-interested perspective, by helping the company gloss over and deflect deeper scrutiny of its algorithmic, money-minting choices.
If you don’t really understand how the Facebook machine functions, technically and economically, how can you conduct any kind of meaningful oversight at all? (Rusbridger evidently gets that — but is also content to wait and see how the process plays out. No doubt the intellectual exercise and insider view is fascinating. “So far I’m finding it highly absorbing,” as he admitted in his evidence opener.)
“People say to me you’re on that Board but it’s well known that the algorithms reward emotional content that polarises communities because that makes it more addictive. Well I don’t know if that’s true or not — and I think as a board we’re going to have to get to grips with that,” he went on to say. “Even if that takes many sessions with coders speaking very slowly so that we can understand what they’re saying.”
“I do think our responsibility will be to understand what these machines are — the machines that are going in rather than the machines that are moderating,” he added. “What their metrics are.”
Both witnesses raised another concern: that the kind of complex, nuanced moderation decisions the Board is making won’t be able to scale — suggesting they’re too specific to generally inform AI-based moderation. Nor will they necessarily be actionable by the staffed moderation system Facebook currently operates (which gives its thousands of human moderators a fantastically tiny amount of thinking time per content decision).
Despite that, the issue of Facebook’s vast scale versus the Board’s limited and Facebook-defined function — to fiddle at the margins of its content empire — was one overarching point that hung uneasily over the session without being properly grappled with.
“I think your question about ‘is this easily communicated’ is a really good one that we’re wrestling with a bit,” Rusbridger said, conceding that he’d had to brain up on a whole bunch of unfamiliar “human rights protocols and norms from around the world” to feel qualified to rise to the demands of the review job.
Scaling that level of training to the tens of thousands of moderators Facebook currently employs to carry out content moderation would of course be eye-wateringly expensive. Nor is it on offer from Facebook. Instead it’s hand-picked a crack team of 40 very expensive and learned experts to tackle an infinitesimally smaller number of content decisions.
“I think it’s important that the decisions we come to are understandable by human moderators,” Rusbridger added. “Ideally they’re understandable by machines as well — and there is a tension there because sometimes you look at the facts of a case and you decide it in a particular way with reference to those three standards [Facebook’s community standard, Facebook’s values and “a human rights filter”]. But in the knowledge that that’s going to be quite a tall order for a machine to understand the nuance between that case and another case.
“But, you know, these are early days.”
The death of Mushtaq Ahmed has renewed alarm about the country’s use of a draconian digital security law to crack down on dissent.
How the right is trying to censor critical race theory.
The move plunges the social network into the post-coup politics of the roiled nation, after years of criticism over how Myanmar’s military had used the site.
The Kremlin has constructed an entire infrastructure of repression but has not displaced Western apps. Instead, it is turning to outright intimidation.
A panel of judges found the online outlet, Malaysiakini, guilty of contempt of court for the comments about Malaysia’s judiciary.
The tech giant’s intentionally broad-brush — call it antisocial — implementation of content restrictions took down a swathe of non-news publishers’ Facebook pages, as well as silencing news outlets’, illustrating its planned dodge of the (future) law.
Facebook took the step to censor a bunch of pages as parliamentarians in Australia are debating a legislative proposal to force Facebook (and Google) to pay publishers for linking to their news content. In recent years the media industry in the country has successfully lobbied for a law to extract payment from the tech giants for monetizing news content when it’s reshared on their platforms — though the legislation is still being drafted.
Last month Google also threatened to close its search engine in Australia if the law isn’t amended. But it’s Facebook that screwed its courage to the sticking place and flipped the chaos switch first.
Last night Internet users in Australia took to Twitter to report scores of local Facebook pages being wiped clean of content — including those of hospitals, universities, unions, government departments and the bureau of meteorology, to name a few.
In the wake of Facebook’s unilateral censorship of all sorts of Facebook pages, parliamentarians in the country accused the tech giant of “an assault on a sovereign nation”.
The prime minister of Australia also said today that his government “would not be intimidated”.
Reached for comment, Facebook confirmed it has applied an intentionally broad definition of news to restrict — saying it has done so to reflect the lack of clear guidance in the law “as drafted”.
So it looks like the collateral damage of Facebook silencing scores of public information pages is at least partly a PR tactic to illustrate potential ‘consequences’ of lawmakers forcing it to pay to display certain types of content — i.e. to ‘encourage’ a rethink while there’s still time.
The tech giant did also say it would restore any pages that are “inadvertently impacted”.
But it did not indicate whether it would do the legwork of checking its own homework there, or whether silenced pages must (somehow) petition it to be reinstated.
“The actions we’re taking are focused on restricting publishers and people in Australia from sharing or viewing Australian and international news content. As the law does not provide clear guidance on the definition of news content, we have taken a broad definition in order to respect the law as drafted. However, we will reverse any Pages that are inadvertently impacted,” a Facebook company spokesperson said in the statement.
It’s also not clear how many non-news pages have been affected by Facebook’s self-imposed content restrictions.
If the tech giant were hoping to kick off a wider debate about the merits of Australia’s (controversial) plan to make tech pay for news (including, in its current guise, for links to news, not just snippets of content as under the EU’s recent copyright reform expansion of neighbouring rights for news), it has certainly succeeded in grabbing global eyeballs by blocking regional access to vast swathes of useful, factual information.
However, Facebook’s blunt action has also attracted criticism that it’s putting business interests before human rights — given it’s cutting off users’ access to what might be vital information, such as from hospitals and government departments, in the middle of a pandemic. (Albeit, being accused of ignoring human rights is hardly a new look for Facebook.)
The Harvard professor Shoshana Zuboff’s academic critique of surveillance capitalism — including that it engages in propagating “epistemic chaos” for profit — has perhaps never felt quite so on the nose. (“We turned to Facebook in search of information. Instead we found lethal strategies of epistemic chaos for profit,” she wrote only last month.)
Facebook’s intentional over-flex has also underscored the vast power of its social monopoly — which will likely only strengthen calls for policymakers and antitrust regulators everywhere to grasp the nettle and rein in big tech. So its local lobbying effort may backfire on the global stage if it further sours public opinion against the scandal-hit company.
Facebook’s rush to censor may even encourage a proportion of its users to remember/discover that there’s a whole open Internet outside its walled garden — where they can freely access public information without having to log into Facebook’s ad-targeting platform (and be stripped of their privacy) first.
As others have noted, it’s also interesting to note how quickly Facebook can pull the content moderation trigger when it believes its bottom line is threatened. And a law to extract payment for sharing news content presents a clear threat.
Compare and contrast Facebook’s rush to silence information pages in Australia with its laid-back approach to tackling outrage-inducing hate speech or violent conspiracy nonsense, and it’s hard not to conclude that content moderation on (and by) Facebook is always viewed through the prism of its global revenue growth goals. (Much like how the tech giant can be seen in a court filing linking revenue to its self-reported ad metric tools.)
The social network’s decision to block journalism rather than pay for it erased more than expected, leaving many outraged and debating what should happen next.
A scholar’s address about racism and music theory was met with a vituperative, personal response by a small journal. It faced calls to cease publishing.