The tech giant’s intentionally broad-brush — call it antisocial — implementation of content restrictions took down a swathe of non-news publishers’ Facebook pages alongside those of news outlets, illustrating how it plans to dodge the (future) law.
Facebook took the step of censoring scores of pages as parliamentarians in Australia debate a legislative proposal to force Facebook (and Google) to pay publishers for linking to their news content. In recent years the media industry in the country has successfully lobbied for a law to extract payment from the tech giants for monetizing news content reshared on their platforms — though the legislation is still being drafted.
Last month Google also threatened to close its search engine in Australia if the law isn’t amended. But it’s Facebook that screwed its courage to the sticking place and flipped the chaos switch first.
Last night Internet users in Australia took to Twitter to report scores of local Facebook pages being wiped clean of content — including hospitals, universities, unions, government departments and the Bureau of Meteorology, to name a few.
In the wake of Facebook’s unilateral censorship of all sorts of Facebook pages, parliamentarians in the country accused the tech giant of “an assault on a sovereign nation”.
The prime minister of Australia also said today that his government “would not be intimidated”.
Reached for comment, Facebook confirmed it has applied an intentionally broad definition of news to restrict — saying it has done so to reflect the lack of clear guidance in the law “as drafted”.
So it looks like the collateral damage of Facebook silencing scores of public information pages is at least partly a PR tactic to illustrate potential ‘consequences’ of lawmakers forcing it to pay to display certain types of content — i.e. to ‘encourage’ a rethink while there’s still time.
The tech giant did also say it would reverse pages that are “inadvertently impacted”.
But it did not indicate whether it would be doing the leg work of checking its own homework there, or whether silenced pages must (somehow) petition it to be reinstated.
“The actions we’re taking are focused on restricting publishers and people in Australia from sharing or viewing Australian and international news content. As the law does not provide clear guidance on the definition of news content, we have taken a broad definition in order to respect the law as drafted. However, we will reverse any Pages that are inadvertently impacted,” a Facebook company spokesperson said in a statement.
It’s also not clear how many non-news pages have been affected by Facebook’s self-imposed content restrictions.
If the tech giant was hoping to kick off a wider debate about the merits of Australia’s (controversial) plan to make tech pay for news — a plan that, in its current guise, covers links to news, not just snippets of content as under the EU’s recent copyright reform and its expansion of neighbouring rights for news — it has certainly succeeded in grabbing global eyeballs by blocking regional access to vast swathes of useful, factual information.
However Facebook’s blunt action has also attracted criticism that it’s putting business interests before human rights — given it’s shuttering users’ ability to find what might be vital information, such as from hospitals and government departments, in the middle of a pandemic. (Albeit, being accused of ignoring human rights is hardly a new look for Facebook.)
The Harvard professor Shoshana Zuboff’s academic critique of surveillance capitalism — including that it engages in propagating “epistemic chaos” for profit — has perhaps never felt quite so on the nose. (“We turned to Facebook in search of information. Instead we found lethal strategies of epistemic chaos for profit,” she wrote only last month.)
Facebook’s intentional over-flex has also underscored the vast power of its social monopoly — which will likely only strengthen calls for policymakers and antitrust regulators everywhere to grasp the nettle and rein in big tech. So its local lobbying effort may backfire on the global stage if it further sours public opinion against the scandal-hit company.
Facebook’s rush to censor may even encourage a proportion of its users to remember/discover that there’s a whole open Internet outside its walled garden — where they can freely access public information without having to log into Facebook’s ad-targeting platform (and be stripped of their privacy) first.
As others have noted, it’s also interesting to note how quickly Facebook can pull the content moderation trigger when it believes its bottom line is threatened. And a law to extract payment for sharing news content presents a clear threat.
Compare and contrast Facebook’s rush to silence information pages in Australia with its laid-back approach to tackling outrage-inducing hate speech or violent conspiracy nonsense and it’s hard not to conclude that content moderation on (and by) Facebook is always viewed through the prism of Facebook’s global revenue growth goals. (Much like how the tech giant can be seen, in a court filing, chaining revenue to its self-reported ad metric tools.)
The 11-month-old app has exploded in popularity, even as it grapples with harassment, misinformation and privacy issues.
The platform’s problems in the country offer a stark example of the difficulty of adhering to its free-speech principles amid government worries over its influence.
China’s censors finally blocked Clubhouse, but not before users were able to bypass the caricatures painted by government-controlled media and freely discuss their hopes and fears.
For a little while, the social media platform Clubhouse provided the rare opportunity for cross-border dialogue on contentious topics free from the country’s usual tight controls.
On Friday just past midnight, I stumbled across a Clubhouse room hosted by a well-known figure in the Chinese startup community, Feng Dahui. At half-past midnight, the room still had nearly 500 listeners, many of whom were engineers, product managers, and entrepreneurs from China.
The discussion centered on whether Clubhouse, an app that lets people join pop-up voice chats in virtual rooms, will succeed in China. That’s a question I have been asking myself in recent weeks. Given the current hype swirling in Silicon Valley about the audio social network, it’s unsurprising to see well-informed, tech-savvy Chinese users start flocking to the platform. Demand for invitations in China runs high, with people paying as much as $100 to buy one from scalpers.
Many users I talked to believe the app won’t reach its full potential or even just find product-market fit in China before it gets banned. Indeed, a handful of well-attended Chinese-language rooms touch on topics that are normally censored in China, from crypto trading to protests in Hong Kong.
If it’s of any consolation, Clubhouse clones and derivatives are already in the making in China. A Chinese entrepreneur and blogger who goes by the nickname Herock told me he is aware of at least “dozens of local teams” that are working on something similar. Moreover, voice-based networking has been around in China for years, albeit in different forms. If Clubhouse is blocked, will any of its alternatives go on to succeed?
A direct Clubhouse clone probably won’t work in China.
A few factors dim its prospects in the country, which has nearly one billion internet users. The major appeal of Clubhouse is the organic flow of conversations in real time. But “how could the Chinese government allow free-flowing discussions to happen and spread without control,” a founder of a Chinese audio app rhetorically asked, declining to be named for this story. Video live streaming in China, for example, is under close regulatory oversight limiting who can speak and what they can say.
The founder then cited a famous online protest back in 2011. Thousands of small vendors launched a cyber attack on Alibaba’s online mall over a proposed fee hike. The tool they used to coordinate with one another was YY, which started out as a voice-based chatting software for gamers and later became known for video live streaming.
“The authorities dread the power of real-time audio communication,” the founder added.
There are signs that Clubhouse may already be the target of censorship. While Clubhouse works perfectly in China without the need for a virtual private network (VPN) or other censorship-circumvention tools (at least for the moment), the iOS-exclusive app is unavailable on China’s App Store. Clubhouse was removed there shortly after its global release in late September, app analytics firm Sensor Tower said.
Currently, Chinese users can only install Clubhouse by switching to an App Store located in another country, which limits the product’s reach to those with the means to use a non-local store.
It’s unclear whether Apple preemptively delisted Clubhouse in anticipation of government action, given that any later removal of a major foreign app in China could stir up accusations of censorship. Alternatively, Clubhouse might have voluntarily pulled the app itself knowing that any form of real-time broadcasting won’t go unchecked by Chinese regulators, which would inevitably compromise user experience.
Entering China could be way down on Clubhouse’s to-do list given the traction it is gaining elsewhere. The app has seen about 3.6 million worldwide installs so far, according to Sensor Tower estimates. The majority of its lifetime installs originate in the United States, where the app has seen nearly 2 million first-time downloads, followed by Japan and Germany both with over 400,000 downloads.
The improbability of uncensored and open discussions on the Chinese internet may explain why the market hasn’t seen its own Clubhouse. But even if an app like Clubhouse is allowed to exist in China, it may not reach the same massive scale across the country as Douyin (TikTok’s Chinese version) and WeChat did.
The app is “elitist,” sort of like a voice version of Twitter, said Marco Lai, CEO and founder of Lizhi, a NASDAQ-listed Chinese audio platform. So far, Clubhouse’s invite-only model has confined its American user base largely to the tech, arts and celebrity circles. Herock observed that its Chinese demographics mirror the trend, with users concentrated in fields like finance, startups and product management, as well as crypto traders.
Even among these users though, there is the question of free time. The other night, I was up at midnight eavesdropping on a group of ByteDance employees. In fact, I’ve mostly been on Clubhouse in the late evenings after work, because that’s when user activity in China appears to peak. “Who in China has that much time?” said Zhou Lingyu, founder of Rainmaker, a Chinese networking community for professionals, when I asked whether she thinks Clubhouse will attract the masses in China.
While her remark may not apply to everyone, the tech-centric, educated crowds in China — the demographic that Clubhouse appears to be targeting or at least attracting — are also those most likely to work the notorious “996” schedule, the long hours practice common in Chinese tech companies. The type of “meaningful conversations” that Clubhouse encourages is desirable, but the app’s real-time, spontaneous nature is also a lot to ask of 996 workers, who likely prefer more efficient and manageable use of time.
Moderators may also need material incentives to remain active, beyond pure passion for connecting with other human beings. One potential solution is to turn quality conversations into podcast episodes. “Clubhouse is for one-off, casual conversations. Those who produce high-quality content would want to record the conversation so it could be for repeatable consumption later on,” said Zhou.
In China, audio networking has played out in slightly different shapes. Some companies place a great deal of focus on gamification, filling their apps with playful, interactive features.
Lizhi’s social podcast app, for example, is not just about listening. It also lets listeners message hosts, tip them through virtual gifts, record themselves shadowing a host who is reading a poem, compete in online karaoke contests, and more.
Interaction between hosts and listeners happens in a relatively orchestrated way, as Lizhi’s operational staff design campaigns and work with content creators behind the scenes to ensure content quality and user engagement. Clubhouse growth, in comparison, is more organic.
“The Chinese products focus more on spectatorship and performance, not so much translating natural social behavior in real life into a product. Clubhouse features are simple. It’s more like a coffee shop,” Lai said.
Lizhi’s other voice product Tiya is considered a close answer to Clubhouse, but Tiya’s users are young — the majority of whom are 15-22 years old — and it focuses on entertainment, letting users chat via audio while they play games and watch sports. That also feeds the need for companionship.
Dizhua, which launched in 2019, is another Chinese app that’s been compared to Clubhouse. Unlike Clubhouse, which relies on people’s existing networks for room discovery, Dizhua matches anonymous users based on their declared interests. Clubhouse conversations can start and die off casually. Dizhua encourages users to pick a theme and stay engaged.
“Clubhouse is a pure audio app, with no timeline, no comment, et cetera,” said Armin Li, an expert in residence with a venture capital firm in China. “It’s a kind of casual and drop-in style for the scenarios where user needs are not clear like hangout or multitasking … Its high community participation, content quality, and user quality are unseen in Chinese voice products.”
The bottom line is: The conversations that happen on Chinese platforms are monitored by content auditors. User registration requires real-name verification on internet platforms in China, so there’s no real anonymity online. The topics that users can discuss are limited, often leaning towards the fun and innocuous.
Why do people in China join Clubhouse anyway? Some, like me, joined out of FOMO. Entrepreneurs are always scouring for the next market opportunity, and product managers from internet giants hope to learn a thing or two from Clubhouse that they could apply to their own products. Bitcoin traders and activists, on the other hand, see Clubhouse as a haven outside the purview of Chinese regulators.
One thing I find impressive about Clubhouse is how smoothly it works in China. Even when a foreign app isn’t banned in China, it often loads slowly due to its servers’ distance from China.
Clubhouse doesn’t actually build the technology supporting its enormous chat groups, which sometimes reach thousands of participants. Instead, it uses a real-time audio SDK from Agora, two sources told me, and the South China Morning Post has also reported the partnership. When asked to verify it, Agora CEO Tony Zhao said via email that he could neither confirm nor deny any engagement between his company and Clubhouse.
Rather, he emphasized Agora’s “virtual network,” which overlays on top of the public internet running on more than 200 co-located data centers worldwide. The company then uses algorithms to plan traffic and optimize routing.
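Agora hasn’t published how its traffic planning works, but the general technique behind such an overlay — probe several co-located data centers, then route each stream through the best-scoring relay — can be sketched. This is a purely illustrative Python toy: the function, weights and figures are all invented, not Agora’s.

```python
# Illustrative sketch of overlay-network relay selection, in the spirit
# of the "virtual network" Agora describes: probe candidate data
# centers, score each by latency plus a packet-loss penalty, and route
# through the winner. All names and numbers here are hypothetical.

def pick_relay(latencies_ms: dict[str, float], loss_pct: dict[str, float]) -> str:
    """Return the candidate data center with the best (lowest) score."""
    def score(dc: str) -> float:
        # Weight packet loss heavily: real-time audio tolerates some
        # delay but degrades sharply when packets go missing.
        return latencies_ms[dc] + 50.0 * loss_pct.get(dc, 0.0)
    return min(latencies_ms, key=score)

# Example: a client probing three hypothetical data centers. The
# nearest one (12 ms) is lossy, so a farther, cleaner relay wins.
probes = {"sha-1": 12.0, "tyo-2": 45.0, "sjc-1": 160.0}
loss = {"sha-1": 2.5, "tyo-2": 0.1, "sjc-1": 0.0}
best = pick_relay(probes, loss)
```

The design point is simply that an overlay operator with its own measurement data can make routing choices the public internet won’t make for it.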
Notably, Agora’s operations teams are mainly in China and the U.S., a setup that inevitably raises questions about whether Clubhouse data are within the scope of Chinese regulations.
With real-time voice technology providers like Agora, opportunists are able to build Clubhouse clones quickly at low costs, Herock said. Chinese entrepreneurs are unlikely to copy Clubhouse directly due to local regulatory challenges and different user behavior, but they will race to crank out their own interpretations of voice networking before the hype around Clubhouse fades away.
Despite China’s history of stringent media control, an industry of uninstitutionalized, individual publishers has managed to flourish on social media platforms like Tencent’s WeChat and ByteDance’s Toutiao. These self-publishers are called “We Media” in the Chinese internet lexicon, denoting the independent power of citizen journalists and content creators.
Meanwhile, self-publishers have always had to tread carefully in what they post, or risk being targeted by censors who deem their content illegal or inappropriate.
The topics they cover are myriad, ranging from fashion and food to politics and current affairs. WeChat, a major destination for self-publishers, hinted last July it had 20 million “public accounts”, platforms for individuals to broadcast content and, in businesses’ case, reach customers. In 2020, 360 million users read articles published on WeChat public accounts, WeChat founder Allen Zhang disclosed recently.
Sina Weibo, China’s answer to Twitter, has long attracted citizen journalists. In the early days of COVID-19, millions of Chinese users rushed to Weibo seeking facts from accounts like that of Fang Fang, an author who chronicled what she had witnessed in Wuhan.
Now, a new development in China’s internet regulation is about to further restrict China’s tens of millions of self-publishers.
Public accounts that “provide online news service to the public shall obtain the Internet News Information Permit and other relevant media accreditation,” according to a new regulation (translation here) published January 22 by the Cyberspace Administration of China, the country’s internet watchdog.
In the following days, WeChat, Baidu, Sohu and other online information services began notifying publishers of the new rule. “If your account lacks relevant accreditation, you are advised not to edit, report, publish or comment on news about politics, the economy, military, foreign affairs or other major current events,” according to the notice sent by WeChat.
“The WeChat Public Account Platform always commits to providing a green, healthy online environment to users,” the message adds.
The requirement of news accreditation will likely be a death knell for independent social media publishers that have taken on journalistic roles, particularly those covering politics. “It’s not something you can obtain easily unless you’re an official news outlet or an organization with unmatched resources and background,” a WeChat account publisher told TechCrunch.
China’s control on news reaches into every corner of the internet, and regulations are always playing catchup with the pace at which new media, such as microblogs and live streaming, flourishes.
From 2017 to 2018, the cyberspace authority granted news permits to a total of 761 “internet news services,” which together operated 743 websites, 563 apps, 119 forums, 23 blogs, 3 microblogs, 2285 public accounts, one instant messenger, and 13 live streaming services. In other words, hard news is off limits for internet services of these categories that operate without a news license. It remains to be seen how platform operators like WeChat and Sina Weibo work to enforce the rules.
Heightening oversight on online information could have merit when it comes to battling misinformation. The new regulation also calls on operators to set up mechanisms like a creator blacklist to root out fake news. But the regulation overall could have an adverse impact on freedom of expression in China, the International Federation of Journalists warned.
“The vaguely defined new rule comes at a time when ‘self-media’ has gained huge popularity in China and journalists have begun using such platforms to publish work which was axed by their organisations,” the IFJ said in a statement published on January 28.
“The digital divide is now a matter of life and death for people who are unable to access essential healthcare information,” said UN Secretary General António Guterres in June 2020. Almost half the global population currently has no internet access, and many who do cannot freely access all information sources.
Freedom House, which tracks internet restrictions worldwide, says the coronavirus pandemic is accelerating a dramatic decline in global internet freedom. It found that governments in at least 28 countries censored websites and social media posts in 2020 to suppress unfavorable health statistics, corruption allegations and other COVID-19-related content.
Now, U.S. company Toki is building “school-in-a-box” devices to connect up to 1 billion people across Africa and Asia, using technologies that it claims could filter content to avoid some information sources and bypass local censorship. The devices will be Wi-Fi-ready servers that run on electric power or batteries and can handle dozens of concurrent users. If no networks are available, the servers will also come pre-installed with digital libraries curated to provide “locally relevant content.”
One of Toki’s country managers describes on LinkedIn that the devices would also run a decentralized search engine, designed to be anonymous, private and censorship-resistant. They will be donated to communities in the developing world by a U.S. nonprofit* called eRise, which was founded in 2019 to, according to its website, “focus on digital empowerment initiatives that are capital-efficient, and which improve access to content, community and commerce.”
Both Toki and eRise were founded by entrepreneur and free speech advocate Rob Monster. Monster owns domain registration company Epik, which allowed controversial social network Parler to come briefly back online last week after the site was booted from Amazon’s cloud service. Parler is just one of several platforms enabled by Epik, and Monster’s other domain and web hosting companies, that have been home to far-right content. Parler is accused of hosting users that helped to coordinate the attack on the U.S. Capitol on January 6.
The “school-in-a-box” would contain a memory card with educational content, games, books, maps and modules related to prayers, the story of religions and “the art of being grateful.” Toki says the device is intended for “parents who want their kids to be smarter and curious; schools who can’t afford a computer; [and] religious places who wish to spread awareness about education and empower the society.”
But one researcher says this effort recalls Facebook’s heavily criticized project offering free connectivity in India, which spawned accusations of bias and self-censorship.
“We’ve seen a similar tactic by Facebook, to provide digital access points that can also serve the purpose of delivering favorable content and ensuring that these groups become dependent on your benevolence,” said Dr. Joan Donovan, director of the Technology and Social Change Research Project at the Shorenstein Center. “It becomes that much harder later on to change the power dynamics when the ideology is in the infrastructure.”
Monster has used free speech arguments to defend Epik’s working with platforms that either welcome or tolerate extreme content. The Southern Poverty Law Center, which tracks hate groups, has been reported as saying that Monster “offers services to the most disreputable horrific people on the Internet.”
Epik spokesperson Rob Davis told TechCrunch that Epik actively works with its clients to help them moderate content, and claimed that the company has deplatformed Nazi groups and deleted those promoting genocide.
“Lawful, responsible freedom of speech is an amazing right,” said Davis. “Every [domain registrar] has groups like this but Epik is often held to a higher standard.”
In a series of posts in 2019 on a forum dedicated to domain-name trading, Monster provided more details about the Toki technology. The servers would be powered by cheap Raspberry Pi processors and run a proprietary version of Linux that would enable file sharing, peer-to-peer commerce, a digital wallet and a personalized search engine, with the option of “ignoring certain data sources.”
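Monster didn’t elaborate on how a search engine with the option of “ignoring certain data sources” would work, but mechanically it amounts to a post-filter on results. A hypothetical Python sketch (the data, function and domain names are all invented for illustration):

```python
# Hypothetical sketch of a result filter that "ignores certain data
# sources", as Monster described for Toki's personalized search.
# Results and domain names below are invented.
from urllib.parse import urlparse

def filter_results(results, ignored_domains):
    """Drop any result whose URL is hosted on an ignored domain
    (or any subdomain of one)."""
    kept = []
    for r in results:
        host = urlparse(r["url"]).hostname or ""
        if not any(host == d or host.endswith("." + d) for d in ignored_domains):
            kept.append(r)
    return kept

results = [
    {"title": "A", "url": "https://example.org/a"},
    {"title": "B", "url": "https://news.blocked.example/b"},
]
visible = filter_results(results, {"blocked.example"})
```

The same handful of lines that lets a user skip sources they distrust also lets whoever curates the device decide what an entire community never sees — which is exactly the concern researchers raise about such pre-configured gateways.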
“Decentralization not only means decentralization of the narrative and talking points of big tech groups like Google, Twitter and Facebook,” said Epik’s Davis. “It also means anti-censorship by empowering people with things that they didn’t know.” The spokesperson gave the example of naturopathic remedies for minor health complaints. Naturopathic remedies have not been proven to be effective against COVID-19.
Eventually, each device might come pre-loaded with a “snapshot” of the internet, said Davis, although he did not describe how the internet might be reduced to fit on a single, small physical device. The eRise website notes that content would be curated by local digital librarians that it would recruit. Davis told TechCrunch that Toki has working models of its server, is already conducting field trials and hopes to start deploying the devices to 6,000 villages in Africa in 2022 or 2023, perhaps in collaboration with an unnamed Asian telecoms company.
The Toki devices’ selectivity, if practical, could raise its own content and censorship concerns; for example, if eRise allowed extreme content similar to that seen on Epik’s clients like Gab and Parler, or ignored scientific advice on COVID-19 or other health issues.
Donovan said she is wary of any one-box solution. “We have to focus on decoupling information companies from service providers,” she said. “That much control can be used for political gain. Technology is politics by other means.”
*Although eRise also claims on its website to be a 501(c)(3) nonprofit, which would exempt it from some taxes and allow tax-free donations, TechCrunch could not locate it in the IRS’s database of nonprofits. Monster later admitted eRise was not a registered 501(c)(3).
The idea for Capsule started with a tweet about reinventing social media.
A day later, cryptography researcher Nadim Kobeissi — best known for authoring the open source e2e encrypted desktop chat app Cryptocat (now discontinued) — had pulled in a pre-seed investment of $100,000 for his lightweight mesh-networked microservices concept, with support coming from angel investor and former Coinbase CTO Balaji Srinivasan, William J. Pulte and Wamda Capital.
The nascent startup has a post-money valuation on paper of $10M, according to Kobeissi, who is working on the prototype — hoping to launch an MVP of Capsule in March (as a web app), after which he intends to raise a seed round (targeting $1M-$1.5M) to build out a team and start developing mobile apps.
For now there’s nothing to see beyond Capsule’s landing page and a pitch deck (which he shared with TechCrunch for review). But Kobeissi says he was startled by the level of interest in the concept.
“I posted that tweet and the expectation that I had was that basically 60 people max would retweet it and then maybe I’ll set up a Kickstarter,” he tells us. Instead the tweet “just completely exploded” and he found himself raising $100k “in a single day” — with $50k paid in there and then.
“I’m not a startup guy. I’ve been running a business based on consulting and based on academic R&D services,” he continues. “But by the end of the day — last Sunday, eight days ago — I was running a Delaware corporation valued at $10M with $100k in pre-seed funding, which is insane. Completely insane.”
Capsule is just the latest contender for retooling Internet power structures by building infrastructure that radically decentralizes social platforms to make speech more resilient to corporate censorship and control.
The list of decentralized/p2p/federated protocols and standards already out there is very long — even while usage remains low. Extant examples include ActivityPub, Diaspora, Mastodon, p2p Matrix, Scuttlebutt, Solid and Urbit, to name a few.
Interest in the space has been rekindled in recent weeks after mainstream platforms like Facebook and Twitter took decisions to shut down US president Donald Trump’s access to their megaphones — a demonstration of private power that other political leaders have described as problematic.
Kobeissi also takes that view, while adding the caveat that he’s not “personally” concerned about Trump’s deplatforming. But he says he is concerned about giant private corporations having unilateral power to shape Internet speech — whether takedown decisions are being made by Twitter’s trust & safety lead or Amazon Web Services (which recently yanked the plug on right-wing social network Parler for failing to moderate violent views).
He also points to a lawsuit that’s been filed in US court seeking damages and injunctive relief from Apple for allowing Telegram, a messaging platform with 500M+ users, to be made available through its iOS App Store — “despite Apple’s knowledge that Telegram is being used to intimidate, threaten, and coerce members of the public” — raising concerns about “the odds of these efforts catching on”.
“That is kind of terrifying,” he suggests.
Capsule would seek to route around the risk of mass deplatforming via “easy to deploy” p2p microservices — starting with a forthcoming web app.
“When you deploy Capsule right now — I have a prototype that does almost nothing running — it’s basically one binary. And you get that binary and you deploy it and you run it, and that’s it. It sets up a server, it contacts Let’s Encrypt, it gets you a certificate, it uses SQLite for the database, which is a server-less database, all of the assets for the web server are within the binary,” he says, walking through the “really nice technical idea” which snagged $100k in pre-seed backing insanely fast.
“There are no other files — and then once you have it running, in that folder when you set up your capsule server, it’s just the Capsule program and a Capsule database which is a file. And that’s it. And that is so self-contained that it’s embeddable everywhere, that’s migratable — and it’s really quite impossible to get this level of simplicity and elegance so quickly unless you go this route. Then, for the mesh federation thing, we’re just doing HTTPS calls and then having decentralized caching of the databases and so on.”
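Kobeissi’s prototype is reportedly a single compiled binary, so the following is only a shape-for-shape illustration in Python of the self-contained design he describes — one process, SQLite’s single-file serverless database, short- and long-form posts — and it omits the Let’s Encrypt certificate step entirely. It is not Capsule’s code.

```python
# Miniature sketch of a Capsule-style self-contained server: SQLite as
# the serverless database (one file on disk), everything in one
# process. Mirrors the architecture Kobeissi describes; not his code,
# and TLS provisioning is skipped.
import sqlite3

def open_capsule(db_path: str) -> sqlite3.Connection:
    """Open (or create) the single-file Capsule database."""
    conn = sqlite3.connect(db_path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS posts ("
        " id INTEGER PRIMARY KEY,"
        " kind TEXT CHECK(kind IN ('short', 'long')),"  # tweet-style vs Medium-style
        " body TEXT NOT NULL)"
    )
    return conn

def publish(conn: sqlite3.Connection, kind: str, body: str) -> int:
    """Insert a post and return its id."""
    cur = conn.execute("INSERT INTO posts (kind, body) VALUES (?, ?)", (kind, body))
    conn.commit()
    return cur.lastrowid

def feed(conn: sqlite3.Connection):
    """Newest-first list of (kind, body) posts."""
    return [row for row in conn.execute("SELECT kind, body FROM posts ORDER BY id DESC")]

conn = open_capsule(":memory:")  # use a real file path for the on-disk database
publish(conn, "short", "hello, mesh")
publish(conn, "long", "a longer essay...")
```

The migration story falls out of the design: because the state is one database file, moving a capsule between hosts is a file copy, which is the “self-contained … embeddable everywhere” property Kobeissi is pointing at.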
Among the Twitter back-and-forth about how (or whether) Kobeissi’s concept differs from various other decentralized protocols, someone posted a link to the XKCD cartoon lampooning the techie quest to resolve competing standards by proposing one tech to cover all use-cases (thereby, of course, only increasing the number of competing standards by one). So given how many protocols already offer self-hosted/p2p social media services it seems fair to ask what’s different here — and, indeed, why build another open decentralized standard?
Kobeissi argues that existing options for decentralizing social media are either: A) not fully p2p (Mastodon is “self-hosted but not decentralized”, per a competitive analysis on Capsule’s pitch deck, ergo its servers are “vulnerable to Parler-style AWS takedowns”); or B) not focused enough on the specific use-case of social media (some other decentralized protocols, like Matrix, aim to support many more features/apps than social media and therefore can’t be as lightweight, the argument goes); or C) simply not easy enough to use to be more than a niche geeky option.
He talks about Capsule having the same level of focus on social media as Signal does on private messaging, for example — albeit intending it to support both short-form ‘tweet’ style public posts and long-form Medium-style postings. But he’s vocal about not wanting any ‘bloat’.
He also invokes Apple’s ‘design for usability’ philosophy. Albeit, it’s a lot easier to say you want to design something that ‘just works’ vs actually pulling off effortless mainstream accessibility. But that’s the bar Kobeissi is setting himself here.
“I always imagine Glenn Greenwald when I think of my user,” he says on the usability point, referring to the outspoken journalist and Intercept co-founder who recently left to launch his own newsletter-based offering on Substack. “He’s the person I see setting this up. Basically the way that this would work is he’d be able to set this up or get someone to set it up really easily — I think Capsule is going to offer automated deployments as also a way to make revenue, by the way, i.e. for a bit extra we deploy the server for you and then you’re self-hosting but we also make a margin off of that — but it’s going to be open source, you can set it up yourself as well and that’s perfectly okay. It’s not going to be hindered at all in that sense.
“In the case of Capsule, each content creator has their own website — has their own address, like Capsule.Greenwald.com — and then people go there and their first discovery of the mesh is through people that they’re interested in hearing from.”
Individual Capsules would be insulated from the risk of platform-level censorship since they’d be beyond the reach of takedowns by a single centralizing entity. They would still be hosted on the web, however — and so could be subject to a takedown by their own web host. That means illegal speech on Capsule could still be removed. But there would be no universal host that could be leaned on to take the whole platform down in one sweep — as Parler just was by AWS.
“For every takedown it is entirely between that Capsule user and their hosting provider,” says Kobeissi. “Capsule users are going to have different hosting providers that they’re able to choose and then every time that there is a takedown it is going to be a decision that is made by a different entity. And with a different — perhaps — judgement, so there isn’t this centralized focus where only Amazon Web Services decides who gets to speak or only Twitter decides.”
And while the business of web hosting at platform-giant level involves just a handful of cloud hosting giants able to offer the required scalability, he argues that that censorship-prone market concentration goes away once you’re dealing with scores of decentralized social media instances.
“We have the big hosting providers — like AWS, Azure, Google Cloud — but aside from that we have a lot of tiny hosting providers or small businesses… Sure if you’re running a big business you do get to focus on these big providers because they allow you to have these insane servers that are very powerful and deployable very easily but if you’re running a Capsule instance, as a matter of fact, the server resource requirements of running a Capsule instance are generally speaking quite small. In most instances tiny.”
Content would also be harder to scrub from Capsule because the mesh infrastructure would mean posts get mirrored across the network by the poster’s own followers (assuming they have any). So, for example, reposts wouldn’t just vanish the moment the original poster’s account was taken down by their hosting provider.
Separate takedown requests would likely be needed to scrub each reposted instance, adding a lot more friction to the business of content moderation vs the unilateral takedowns that platform giants can rain down now. The aim is to “spare the rest of the community from the danger of being silenced”, as Kobeissi puts it.
Trump’s deplatforming does seem to have been a penny-drop moment for some: allowing a handful of corporate giants to own and operate centralized mass communication machines isn’t exactly healthy for democratic societies, as this unilateral control of infrastructure gives them the power to limit speech. (As, indeed, their content-sorting algorithms determine reach and set the agenda of much public debate.)
Current social media infrastructure also provides a few mainstream chokepoints for governments to lean on — amplifying the risk of state censorship.
With concerns growing over the implications of platform power on data flows — and judging by how quickly Kobeissi’s tweet turned heads — we could be on the cusp of an investor-funded scramble to retool Internet infrastructure to redefine where power (and data) lies.
It’s certainly interesting to note that Twitter recently reupped its own decentralized social media open standard push, Bluesky, for example. It obviously wouldn’t want to be left behind any such shift.
“It seems to really have blown up,” Kobeissi adds, returning to his week-old Capsule concept. “I thought when I tweeted that I was maybe the only person who cared. I guess I live in France so I’m not really in tune with what’s going on in the US a lot — but a lot of people care.”
“I am not like a cypherpunk-style person these days, I’m not for full anonymity or full unaccountability online by any stretch,” he adds. “And if this is abused then sincerely it might even be the case that we would encourage — have a guidelines page — for hosting providers like on how to deal with instances of someone hosting an abusive Capsule instance. We do want that accountability to exist. We are not like a full on, crazy town ‘free speech’ wild west thing. We just think that that accountability has to be organic and decentralized — just as originally intended with the Internet.”
Artists say officials tied to the country’s ruling party use pressure to stem criticism. Streaming services hoping to tap the Indian market’s potential have been caught in the middle.
YouTube has been the slowest of the big social media platforms to react to the risk of letting President Trump continue to use its platform as a megaphone to whip up insurrection in the wake of the attack on the U.S. Capitol last week. But it’s now applied a temporary upload ban.
In a short Twitter thread today, the Google-owned service said it had removed new content uploaded to Trump’s YouTube channel “in light of concerns about the ongoing potential violence”.
It also said it’s applied a first strike — triggering a temporary upload ban for at least seven days.
At the time of writing the verified Donald J Trump YouTube channel has some 2.78M subscribers.
“Given the ongoing concerns about violence, we will also be indefinitely disabling comments on President Trump’s channel, as we’ve done to other channels where there are safety concerns found in the comments section,” YouTube adds.
We reached out to YouTube with questions about the content that was removed and how it will determine whether to extend the ban on Trump’s ability to post to its platform beyond seven days.
A spokeswoman confirmed content uploaded to the channel on January 12 had been taken down for violating its policies on inciting violence, with the platform saying it perceived an increased risk of violence in light of recent events and earlier remarks by Trump.
She did not confirm the specific content of the video that triggered the takedown and strike.
According to YouTube, the platform is applying its standard ‘three strikes’ policy — whereby, if a channel receives three strikes within a 90-day period, it gets permanently suspended. Under this policy a first strike earns around a week’s suspension, a second strike around two weeks, and a third strike triggers termination of the channel.
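The three-strikes rule described is a simple sliding-window policy. As a hedged sketch (the durations are approximations from YouTube's public description, not an official API), the decision logic might look like:

```python
# Illustrative model of a three-strikes-in-90-days policy; the state
# names and exact durations are approximations, not YouTube's code.
from datetime import datetime, timedelta

WINDOW = timedelta(days=90)


def channel_status(strikes, now):
    """strikes: datetimes when strikes were issued.
    Returns the channel state under the sliding 90-day window."""
    active = [s for s in strikes if now - s <= WINDOW]
    if len(active) >= 3:
        return "terminated"          # third strike: permanent suspension
    if len(active) == 2:
        return "suspended ~2 weeks"  # second strike
    if len(active) == 1:
        return "suspended ~1 week"   # first strike
    return "in good standing"
```

Note that strikes older than 90 days drop out of the window, so a channel can accumulate strikes indefinitely without termination as long as no three fall within the same 90-day span.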
At the time of writing, Trump’s official YouTube channel has a series of recent uploads — including five clips from a speech he gave at the Mexican border wall, where he lauded the “successful” completion of his 2016 campaign pledge to ‘build the wall’.
In one of these videos, entitled “President Trump addresses the events of last week”, Trump characterizes supporters who attacked the U.S. Capitol as a “mob” — and claims his administration “believes in the rule of law, not in violence or rioting” — before segueing into a series of rambling comments about the pandemic and vaccine development.
The clip ends with an entreaty by Trump for “our nation to heal”, for “peace and for calm”, and for respect for law enforcement — with the president claiming people who work in law enforcement form the backbone of the “MAGA agenda”.
An earlier clip of Trump speaking to reporters before he left for the tour of the border wall is also still viewable on the channel.
In it the president attacks the process to impeach him a second time as “a continuation of the greatest witch-hunt in the history of politics”. Here Trump name-checks Nancy Pelosi and Chuck Schumer — in what sounds like a veiled but targeted threat.
“[For them] to continue on this path, I think it’s causing tremendous danger to our country and it’s causing tremendous anger,” he says, before tossing a final caveat at reporters that “I want no violence”. (But, well, if you have to add such a disclaimer what does that say about the sentiments you know you’re whipping up?)
While YouTube has opted for a temporary freeze on Trump’s megaphone, Twitter banned the president for good last week after one too many violations of its civic integrity policy.
Facebook has also imposed what it describes as an “indefinite” suspension — leaving open the possibility that it could in future restore Trump’s ability to use its tools to raise hell.
Up to now, YouTube has managed to avoid being the primary target of ire for those criticizing social media platforms for providing Trump with a carve out from their rules of conduct and a mainstream platform to abuse, bully, lie and (most recently) whip up insurrection.
However the temporary freeze on his account comes after civil rights groups had threatened to organize an advertiser boycott of its platform.
“If YouTube does not agree with us and join the other platforms in banning Trump, we’re going to go to the advertisers,” Jim Steyer, one of the organizers of the Stop Hate for Profit (SHP) campaign, told the news agency.
In its official comments about the enforcement action against president Trump, YouTube makes no mention of any concern about ramifications from its own advertisers. Though, in recent years, it has faced some earlier boycotts from advertisers over hateful and offensive content.
In background remarks to reporters, YouTube also claims it consistently enforces its policies, regardless of who owns the channel — and says it makes no exceptions for public figures. However the platform has been known to reverse a three-strike termination — recently reinstating the channel of UK broadcaster TalkRadio, for example, after it received a third strike related to coronavirus misinformation.
In that case the channel’s reinstatement was reported to have followed an intervention by TalkRadio’s owner News Corp’s chairman, Rupert Murdoch. UK ministers had also defended the channel’s right to debate the merits of government policy.
In Trump’s case there are a dwindling number of (GOP) politicians willing to ride to his defense in light of the shocking events in Washington last week and continued violent threats being made online by his supporters.
However concern about the massive market power of tech platforms — meaning they are in a position to be able to take unilateral action and shut down the US president’s ability to broadcast to millions of people — is far more widespread.
Earlier this week Germany’s chancellor, Angela Merkel, called Twitter’s ban on Trump “problematic”, while lawmakers elsewhere in Europe have said it must lead to regulatory consequences for big tech.
So whatever his wider legacy, Trump certainly looks set to have a lasting policy impact on the tech giants he is now busy railing at for putting him on mute.
The actions followed the barring of President Trump from the service last week, as Twitter has moved to distance itself from violent content.
Tech giants were right to ban the president. We still need to break them up.
As part of an exhibition at Miami Dade College’s art museum, Forensic Architecture planned to examine the treatment of migrant children at a nearby facility. The pandemic is only one reason that never happened.
Recent court rulings require officers to keep watch over artists’ rap lyrics, which prosecutors say celebrate gangs and violent crimes.
The Chinese Communist Party’s efforts to hide its missteps have taken on new urgency as the anniversary of the world’s first Covid-19 lockdown nears.
The companies removed the “free speech” social network from their app stores, limiting its reach just as many conservatives are seeking alternatives to Facebook and Twitter.
The ability of a handful of people to control our public discourse has never been more obvious.
Users of major mobile carriers can no longer access a service that detailed the personal information of police officers, a possible sign that the city is turning to tactics used in mainland China.
“We believe the risks of allowing the president to continue to use our service during this period are simply too great,” Mark Zuckerberg, Facebook’s chief executive, said.
After years of gentle wrist slaps, social media companies are finally revoking President Trump’s megaphone.
Surveillance and censorship bolster Beijing’s uncompromising grip on power. But in the country’s cities and streets, people have resumed normal lives.
Zhang Zhan, a former lawyer, is the first known person to be tried for challenging the Chinese government’s narrative about the coronavirus pandemic.
Zhang Zhan, who reported about the coronavirus from Wuhan during the lockdown, will face trial next week, in the first known case against a citizen journalist from the crisis.
Content moderation has been a thorny topic in 2020. And when I say “thorny,” I mean in the sense of having multiple congressional hearings on the subject. Twitter and Facebook in particular have been mired in concerns around the subject, fielding complaints that they both haven’t done enough to weed out problematic content and suggestions that they’re a censorship-happy, shadow-banning enemy of the First Amendment.
The latter appears to be the sole reason for the existence of the right wing-focused Twitter competitor, Parler.
As Substack grows in popularity, the newsletter platform is going to face some tremendously difficult questions around content moderation. Today it published a lengthy blog post hoping to nip some of those concerns in the bud. The write-up offers some caveats, but largely espouses the platform’s commitment to free speech, noting:
In most cases, we don’t think that censoring content is helpful, and in fact it often backfires. Heavy-handed censorship can draw more attention to content than it otherwise would have enjoyed, and at the same time it can give the content creators a martyr complex that they can trade off for future gain. We prefer a contest of ideas. We believe dissent and debate is important. We celebrate nonconformity.
The stance reflects Substack’s commitment to a subscription-based model, rather than the ads that currently keep the lights on for services like Twitter and Facebook. Instead, it takes a 10% cut of writers’ subscription revenue. Certainly that frees it up from sponsorship boycotts to some degree. The subscription model also means that users have to opt into specific content more so than on platforms like Twitter and Facebook, where content boundaries are far more fluid.
“We are happy to compete with ‘Substack but with more controls on speech’ just as we are happy to compete with ‘Substack but with advertising,’ ” the company writes.
Of course, there are financial considerations — there always are. Substack has a vested interest in supporting right-wing and conservative voices who have decried Facebook and Twitter’s practices. Notably, The Dispatch is at the top of the service’s politics leaderboard. In an interview with TechCrunch earlier this year, editor Stephen Hayes called the service, “unapologetically center-right,” while its current blurb refers to it as “conservative.”
“None of these views are neutral,” Substack writes. “Many Silicon Valley technology companies strive to make their platforms apolitical, but we think such a goal is impossible to achieve.” There’s no doubt some truth in that. Any position on content moderation can be viewed as a political one to some degree. And equally, none will make everyone — or even most people — completely happy.
But it’s also easy to see Substack facing some major tests of its current hands-off approach as it continues to grow in popularity. The company has put its name out in front of consumers, meaning it won’t be viewed as a kind of invisible publishing platform.
Substack is quick to add that there is, naturally, content that crosses the line in spite of this. “Of course, there are limits,” it writes. “We do not allow porn on Substack, for example, or spam. We do not allow doxxing or harassment.”
Michael Pack, the head of the U.S. Agency for Global Media, is moving to stop federal funding of the Open Technology Fund, which develops tools that allow people to get around controls on internet access.
Google, Mozilla, Apple, and Microsoft said they’re joining forces to stop Kazakhstan’s government from decrypting and reading HTTPS-encrypted traffic sent between its citizens and overseas social media sites.
All four of the companies’ browsers recently received updates that block a root certificate the government has been requiring some citizens to install. The self-signed certificate caused traffic sent to and from select websites to be encrypted with a key controlled by the government. Under industry standards HTTPS keys are supposed to be private and under the control only of the site operator.
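In practice, a browser blocks a specific certificate by refusing to build a trust chain through any certificate whose fingerprint appears on a shipped denylist — even if the user has manually installed it as a root. A simplified sketch of that check (the denylisted value below is a placeholder, not the real certificate's fingerprint):

```python
# Simplified model of a browser-side certificate denylist check.
# The entry below is a made-up placeholder; real browsers ship the
# SHA-256 fingerprint of the specific blocked certificate.
import hashlib

BLOCKED_FINGERPRINTS = {
    hashlib.sha256(b"example blocked root certificate DER bytes").hexdigest(),
}


def is_blocked(cert_der: bytes) -> bool:
    """Return True if the certificate's SHA-256 fingerprint is denylisted.

    A browser performing this check rejects any TLS chain that includes
    the certificate, regardless of whether the user installed it as a
    trusted root -- which is what makes the government's interception
    certificate unusable even on machines where it was installed.
    """
    return hashlib.sha256(cert_der).hexdigest() in BLOCKED_FINGERPRINTS
```

Matching on the fingerprint of the certificate itself (rather than its subject name) means the block can't be dodged by reissuing a look-alike certificate with the same name but different keys.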
A thread on Mozilla’s bug-reporting site first reported the certificate in use on December 6. The Censored Planet website later reported that the certificate worked against dozens of web services, mostly belonging to Google, Facebook, and Twitter, and identified the affected sites.
Thousands of internal directives and reports reveal how Chinese officials stage-managed what appeared online in the early days of the outbreak.
U.S. prosecutors have charged a company executive based in China with conspiring to terminate online meetings about the Tiananmen Square massacre.
Artists gathered by the hundreds in Cuba’s largest protest in decades after seeing videos of police detentions that were filmed on cellphones and circulated online.
The Cyberspace Administration of China (CAC) announced it has banned 105 mobile apps for violating Chinese internet regulations. While almost all of the apps are made by Chinese developers, American travel booking and review site TripAdvisor is also on the list.
While TripAdvisor is based in the United States, like other foreign tech companies, it struck a partnership with a local tech company for its Chinese operations. In TripAdvisor’s case, it entered into an agreement with Trip.com — the Nasdaq-listed Chinese travel titan formerly known as Ctrip — in November 2019 to operate a joint venture called TripAdvisor China. The deal made Trip.com subsidiary Ctrip Investment a majority shareholder in the JV, with TripAdvisor owning 40%.
As part of the deal, TripAdvisor agreed to share content with Trip.com brands, including Chinese travel platforms Ctrip and Qunar, which gained access to the American firm’s abundant overseas travel reviews. That put TripAdvisor in a race with regional players, including Alibaba-backed Qyer and Hong Kong-based Klook, to capture China’s increasingly affluent and savvy outbound tourists.
The CAC is the government agency in charge of overseeing internet regulations and censorship. In a brief statement, the bureau said it began taking action on November 5 to “clean up” China’s internet by removing apps that broke regulations. The 105 apps constituted the first group to be banned, and were targeted after users reported illegal activity or content, the agency said.
Though the CAC did not specify exactly what each app was banned for, the list of illegal activities included spreading pornography, incitements to violence or terrorism, fraud or gambling and prostitution.
In addition, eight app stores were taken down for not complying with review regulations or allowing the download of illegal content.
Such “app cleansing” takes place periodically in China where the government has a stranglehold on information flows. Internet services in China, especially those involving user-generated content, normally rely on armies of censors or filtering software to ensure their content is in line with government guidelines.
The Chinese internet is evolving so rapidly that regulations sometimes fall behind the development of industry players, so the authorities are constantly closing gaps. Apps and services could be pulled because regulators realize they are lacking essential government permits, or they might have published illegal or politically sensitive information.
Foreign tech firms operating in China often find themselves walking a fine line between the “internet freedom” celebrated in the West and adherence to Beijing’s requirements. The likes of Bing.com, LinkedIn, and Apple — the few remaining Western tech giants in China — have all drawn criticism for caving to China’s censorship pressure in the past.
Anand Patwardhan spent decades tracking the rise of Hindu nationalism. And now, under an increasingly repressive government, he holds his screenings in secret.
Nationalist leaders decry scenes in the show “A Suitable Boy” between a Hindu and a Muslim, at a time of rising interfaith conflict and government efforts to control online content.
After terrorist attacks, France’s leader accuses the English-language media of “legitimizing this violence.”
Austria’s Supreme Court has dismissed Facebook’s appeal in a long-running speech takedown case — ruling it must remove references to defamatory comments made about a local politician worldwide for as long as the injunction lasts.
We’ve reached out to Facebook for comment on the ruling.
Green Party politician, Eva Glawischnig, successfully sued the social media giant seeking removal of defamatory comments made about her by a user of its platform after Facebook had refused to take down the abusive postings — which referred to her as a “lousy traitor”, a “corrupt tramp” and a member of a “fascist party”.
After a preliminary injunction in 2016, Glawischnig won local removal of the defamatory postings the following year but continued her legal fight — pushing for similar postings to be removed and for takedowns to apply globally.
Questions were referred up to the EU’s Court of Justice. And in a key judgement last year the CJEU decided platforms can be instructed to hunt for and remove illegal speech worldwide without falling foul of European rules that preclude platforms from being saddled with a “general content monitoring obligation”. Today’s Austrian Supreme Court ruling flows naturally from that.
Austrian newspaper Der Standard reports that the court confirmed the injunction applies worldwide, both to identical postings or those that carry the same essential meaning as the original defamatory posting.
It said the Austrian court argues that EU Member States and civil courts can require platforms like Facebook to monitor content in “specific cases” — such as when a court has identified user content as unlawful and the platform has “specific information” about it — in order to prevent content that has been judged illegal from being reproduced and shared by another user of the network at a later point, with the overarching aim of preventing future violations.
The case has important implications for the limitations of online speech.
Regional lawmakers are also working on updating digital liability regulations. Commission lawmakers have said they want to force platforms to take more responsibility for the content they fence and monetize — fuelled by concerns about the impact of online hate speech, terrorist content and divisive disinformation.
A long-standing EU rule, prohibiting Member States from putting a general content monitoring obligation on platforms, limits how they can be forced to censor speech. But the CJEU ruling has opened the door to bounded monitoring of speech — in instances where it’s been judged to be illegal — and that in turn may influence the policy substance of the Digital Services Act which the Commission is due to publish in draft early next month.
In a reaction to last year’s CJEU ruling, Facebook argued it “opens the door to obligations being imposed on internet companies to proactively monitor content and then interpret if it is ‘equivalent’ to content that has been found to be illegal”.
“In order to get this right national courts will have to set out very clear definitions on what ‘identical’ and ‘equivalent’ means in practice. We hope the courts take a proportionate and measured approach, to avoid having a chilling effect on freedom of expression,” it added.
Since the election, millions have migrated to alternative social media and media sites like Parler, Rumble and Newsmax.
Chafing at new misinformation safeguards and a lost election, dejected Trump supporters flocked to the alternative social network Parler over the weekend. Parler’s homepage promises that users can “Speak freely and express yourself openly, without fear of being ‘deplatformed’ for your views.”
Parler shot up the charts across Apple’s App Store and the Google Play Store in the days following official election results. An Android app called “Parlor” was also trending Monday, likely due to misspelled searches for Parler.
Joe Biden prevailed on Saturday, picking up the critical state of Pennsylvania to become president-elect. Biden’s win followed a tense five days of vote tallying, as Trump repeatedly attacked the U.S. election process.
Parler sat at #7 in the App Store on Saturday, November 7, according to mobile app market analysis from Sensor Tower. By the next day, it shot up to #1 — a first for the app. It remains in the top slot now, in contrast to its position a week ago as the 1,023rd most downloaded app.
The story is similar in Google’s own app marketplace, where Parler climbed from #51 on Saturday to #5 on Sunday, topping out in the #1 slot today. The Fox News competitor Newsmax TV and the self-described “next-gen social network” MeWe also sat in Google’s top 5.
Parler’s ascent is notable but not totally new. Accounts anticipating a ban have been pointing their followers toward Parler and other far-right havens with every new platform policy change that Twitter and Facebook make. Gab, which describes itself as “the free speech social network” is also vying for Trump loyalists.
“It’s crazy to believe that only a handful of Silicon Valley companies will have complete control over the flow of information, communication, and news forever,” Gab CEO Andrew Torba wrote in a blog post Sunday. Torba was booted from Y Combinator’s alumni network for threatening comments and harassment shortly after the 2016 election.
But in spite of calls for a mass exodus, many prominent conservative figures accusing Twitter and Facebook of censorship have maintained their presences on the platforms, knowing that their reach would be dramatically limited on the alternative social networks.
Fox News contributor and Trump enthusiast Dan Bongino called for his own supporters to move to Parler last week, warning that “Fakebook” might act against his page. On Facebook, Bongino’s content regularly ranks in the top performing posts on the platform and his page has nearly four million followers. Notably, Bongino announced an ownership stake in Parler earlier this year.
Over on Parler, the Trump campaign is raising donations for an “election defense task force,” but according to the fine print half of every donation will go toward existing debt. The campaign doesn’t appear to have made much original content on the niche social network lately, instead reposting very similar messages over and over.
For the many Trump supporters pushing dangerous false claims about the election, the writing was on the wall. Facebook made a rapid-fire series of policy changes in the months preceding the election, banning QAnon, cracking down on violent militias and introducing new tools to slow the spread of misinformation, which metastasized on the social network over the last four years.
As it became clear that Trump’s effort to delegitimize the election was picking up steam, Facebook cracked down. The company began hiding search results for the #StopTheSteal hashtag and removed one of its popular groups over “calls for violence” made by some members.
In spite of his loss, President Trump has refused to concede the election. But by Monday, Biden’s transition team had already kicked into high gear, announcing members of a coronavirus task force that will seek to rein in the deadly virus where Trump has failed.
With election results settled, the vast machinery of the U.S. government moved steadily toward January’s transfer of power, as it has in every other election.
In its short life span, it was one of the fastest growing groups in Facebook’s history and a hub for those trying to delegitimize the election.