China now has a tool that lets users access YouTube, Facebook, Twitter, Instagram, Google, and other internet services that have otherwise long been banned in the country.
Called Tuber, the mobile browser recently debuted on China’s third-party Android stores, with an iOS launch in the pipeline. The landing page of the app features a scrolling feed of YouTube videos, with tabs at the bottom that allow users to visit other mainstream Western internet services.
While some celebrate the app as an unprecedented “opening up” of the Chinese internet, others quickly noticed the browser comes with a veil of censorship. YouTube queries for politically sensitive keywords such as “Tiananmen” and “Xi Jinping” returned no results on the app, according to tests done by TechCrunch.
Using the app also comes with liabilities. Registration requires a Chinese phone number, which is tied to a person’s real identity. The platform could suspend users’ accounts and share their data “with the relevant authorities” if they “actively watch or share” content that breaches the constitution, endangers national security and sovereignty, spreads rumors, disrupts social orders, or violates other local laws, according to the app’s terms of service.
Rather than blocking sites that are beyond the purview of Beijing and tracking individuals using VPNs to circumvent the Great Firewall, China now has an app that gives its people a glimpse into the Western internet — with the caveat that their digital footprint may be under close watch by the authorities.
Much about the app remains unclear, such as its origin and the motive behind it. The operator of the app’s official website (上海丰炫信息技术有限公司) is 70% owned by a subsidiary of Qihoo 360, a Chinese cybersecurity software giant. It remains to be seen whether the app will take off.
This is a developing story.
China’s enthusiasm for teaching children to code is facing a new roadblock as organizations and students lose an essential tool: the Scratch programming language developed by the Lifelong Kindergarten Group at the MIT Media Lab.
China-based internet users can no longer access Scratch’s website. Greatfire.org, an organization that monitors internet censorship in China, shows that the website was 100% blocked as early as August 20, though a Scratch user had flagged the ban on August 14.
Nearly 60 million children around the world have used Scratch’s visual programming language to make games, animations, stories and the like. That includes students in China, which is seeing a gold rush to early coding as the country tries to turn its 200 million kids into world-class tech talents.
At last count, 5.65% of Scratch’s registered users, some 3 million people, were based in China, though its reach is greater than the figure suggests: many Chinese developers have built derivatives of Scratch, which is open-source software.
Projects on Scratch contain “a great deal of humiliating, fake, and libelous content about China,” including placing Hong Kong, Macau and Taiwan in a dropdown list of “countries,” a state-run news outlet reported on August 21.
The article added that “any service distributing information in China” must comply with local regulations, and Scratch’s website and user forum had been shut down in the country.
The Scratch editor, which claims users in every country in the world and is available in more than 50 languages, can be downloaded and used offline. That means Chinese users who have installed the software can continue using it for now. It’s unclear whether the restriction will extend to, and hamper, future version updates.
The Scratch team could not be immediately reached for comment. If the ban proves permanent, it will likely drum up support for homegrown replacements.
“Scratch is very widely used in China by student users. Inside schools, it’s used in many official information technology textbooks for primary school students,” said Anqi Zhou, chief executive of Shenzhen-based Dream Codes True, a coding startup targeting primary and secondary school kids. “There are many coding competitions for kids using Scratch.”
Indeed, the infiltration of Scratch into the public school system is what initially alarmed the Chinese authorities. An article published August 11 by a youth-focused state outlet blasted:
“Platforms like Scratch have a large number of young Chinese users. That’s exactly why the platform must exercise self-discipline. Allowing the free flow of anti-China and separatist discourse will cause harm to Chinese people’s feelings, cross China’s red line, and poison China’s future generation.”
The article headline captured Beijing’s attitude towards imported technologies, including those that are open-source and meant to be educational and innocuous: An open China is not “xenophobic” but must “detoxify”.
Regardless of the “problematic” user-generated content on Scratch, China will likely encourage more indigenous tech players to grow, as it has done in a sweeping effort to localize semiconductors and even source code hosting.
Outside textbooks, Scratch had also found its way into pricey afterschool centers across China. Some companies credit Scratch’s open-source code as their foundation, while others build lookalikes that they claim were made in-house, several Chinese founders working in the industry told TechCrunch.
“Scratch is like the benchmark for kids’ programming software. Most parents learn about Scratch from extracurricular programs, which tend to keep all the web traffic to themselves rather than directing users to Scratch,” said Yi Zhang, founder of Tangiplay, a Shenzhen-based startup teaching children to code through hardware.
Despite Scratch’s popularity in China, competitors of all sizes have cropped up. That includes five-year-old Code Mao, a Shenzhen startup that’s an early and major player in the space — and well-financed by venture capital firms. With its own Kitten language “more robust than Scratch,” the startup boasts a footprint in 21 countries, over 30 million users, and about 11,000 institutional customers. Internet incumbents NetEase and Tencent have also come up with their own products for young coders.
“If it’s something permanent and if mainstream competitions and schools stop using it, we too will consider stopping using it,” said Zhou, whose startup is also based in Shenzhen, which has turned into a hub for early coding thanks to its emerging players like Code Mao and Makeblock.
As countries around the world ban or threaten to restrict TikTok, interest in virtual private networks has spiked.
The use of VPNs can let users access an online service from an encrypted tunnel and thus bypass app blocks. “We are seeing an increasing number of governments around the world attempting to control the information their citizens can access,” observes Harold Li, vice president of ExpressVPN, which claims to have over 3,000 servers across 94 countries. “For this reason, VPNs are used to access blocked sites and services by many worldwide.”
Indeed, ExpressVPN’s website saw a 10% week-over-week increase in traffic following the U.S. government’s announcement of a potential TikTok ban. The VPN service recorded similar trends in Japan and Australia, where it saw a 19% and 41% WoW increase in traffic respectively after the governments said they might block TikTok.
When India officially shut down TikTok, ExpressVPN saw a 22% WoW jump in web traffic. In Hong Kong, where TikTok voluntarily pulled out following the enactment of the national security law, the VPN service logged a 10% WoW traffic growth.
VPNs have long been a popular solution for people to elude restrictions on the internet, be it censored content or app bans. We wrote about Hong Kong residents flocking to VPN services in anticipation of heightened censorship, but the use of a VPN is not a ‘magic bullet,’ as a Hong Kong media scholar warned.
Governments can make it difficult for average users to access VPNs by removing them from local app stores. Users then have to register in another region’s app store, which often involves roadblocks like owning a local credit card. Countries can also outlaw the use of VPNs, imposing fines on users and even imprisoning VPN vendors, as China has done.
Depending on how an app block plays out in practice, there may be other challenges unsolvable by VPNs. “We don’t know how potential bans may be enforced yet, and it may require users to jump through other hoops on top of using a VPN, such as removing their local SIM card,” suggested Li.
Users can look for alternatives to banned apps, but switching services can entail high costs, especially when a product has strong network effects. TikTok, for instance, enjoys a ‘content network effect’ that makes it difficult for rivals to match its user experience, as my former colleague Josh Constine pointed out.
Similarly, those who worry about a potential WeChat ban in the U.S. may simply lack a viable alternative to the Chinese messenger with over 1 billion users. For members of the Chinese diaspora in the U.S., WeChat is the only way for them to reach their families and friends in China, where it’s the dominant chat app while major Western social networks are unavailable.
Smaller apps are flying under the radar of the authorities. Unlike rivals Telegram and WhatsApp, encrypted messenger Signal is still accessible in China, for now; the app climbed 51 spots in China’s iOS social app rankings between August 7 and 9, and currently sits in 36th place. Others in China use iMessage, which also remains unblocked, to stay in touch with their U.S. contacts, but that option is exclusive to iPhone users.
Individuals and businesses worldwide increasingly need to adapt to service shutdowns or risk losing access to the free and open internet. As Telegram founder Pavel Durov lamented: “[T]he U.S. move against TikTok is setting a dangerous precedent that may eventually kill the internet as a truly global network (or what is left of it).”
It’s more than four years since major tech platforms signed up to a voluntary pan-EU Code of Conduct on illegal hate speech removals. Yesterday the European Commission’s latest assessment of the non-legally binding agreement lauds “overall positive” results — with 90% of flagged content assessed within 24 hours and 71% of the content deemed to be illegal hate speech removed. The latter is up from just 28% in 2016.
However, the report card finds platforms are still lacking in transparency. Nor are they providing users with adequate feedback on hate speech removals, in the Commission’s view.
Platforms responded and gave feedback to 67.1% of the notifications received, per the report card — up from 65.4% in the previous monitoring exercise. Only Facebook informs users systematically — with the Commission noting: “All the other platforms have to make improvements.”
In another criticism, its assessment of platforms’ performance in dealing with hate speech reports found inconsistencies in their evaluation processes — with “separate and comparable” assessments of flagged content that were carried out over different time periods showing “divergences” in how they were handled.
Signatories to the EU online hate speech code are: Dailymotion, Facebook, Google+, Instagram, Jeuxvideo.com, Microsoft, Snapchat, Twitter and YouTube.
This is now the fifth biannual evaluation of the code. It may not yet be the final assessment, but EU lawmakers’ eyes are firmly fixed on a wider legislative process — with commissioners now busy consulting on and drafting a package of measures to update the laws wrapping digital services.
A draft of this Digital Services Act is slated to land by the end of the year, with commissioners signalling they will update the rules around online liability and seek to define platform responsibilities vis-a-vis content.
Unsurprisingly, then, the hate speech code is now being talked about as feeding that wider legislative process — while the self-regulatory effort looks to be reaching the end of the road.
The code’s signatories are also clearly no longer a comprehensive representation of the swathe of platforms in play these days. There’s no WhatsApp, for example, nor TikTok (which did just sign up to a separate EU Code of Practice targeted at disinformation). But that hardly matters if legal limits on illegal content online are being drafted — and likely to apply across the board.
Commenting in a statement, Věra Jourová, Commission VP for values and transparency, said: “The Code of conduct remains a success story when it comes to countering illegal hate speech online. It offered urgent improvements while fully respecting fundamental rights. It created valuable partnerships between civil society organisations, national authorities and the IT platforms. Now the time is ripe to ensure that all platforms have the same obligations across the entire Single Market and clarify in legislation the platforms’ responsibilities to make users safer online. What is illegal offline remains illegal online.”
In another supporting statement, Didier Reynders, commissioner for Justice, added: “The forthcoming Digital Services Act will make a difference. It will create a European framework for digital services, and complement existing EU actions to curb illegal hate speech online. The Commission will also look into taking binding transparency measures for platforms to clarify how they deal with illegal hate speech on their platforms.”
Earlier this month, at a briefing discussing Commission efforts to tackle online disinformation, Jourová suggested lawmakers are ready to set down some hard legal limits online where illegal content is concerned, telling journalists: “In the Digital Services Act you will see the regulatory action very probably against illegal content — because what’s illegal offline must be clearly illegal online and the platforms have to proactively work in this direction.” Disinformation would not likely get the same treatment, she suggested.
The Commission has now further signalled it will consider ways to prompt all platforms that deal with illegal hate speech to set up “effective notice-and-action systems”.
In addition, it says it will continue — this year and next — to work on facilitating the dialogue between platforms and civil society organisations that are focused on tackling illegal hate speech, saying that it especially wants to foster “engagement with content moderation teams, and mutual understanding on local legal specificities of hate speech”.
In its own report last year assessing the code of conduct, the Commission concluded that it had contributed to achieving “quick progress”, particularly on the “swift review and removal of hate speech content”.
It also suggested the effort had “increased trust and cooperation between IT Companies, civil society organisations and Member States authorities in the form of a structured process of mutual learning and exchange of knowledge” — noting that platforms reported “a considerable extension of their network of ‘trusted flaggers’ in Europe since 2016.”
“Transparency and feedback are also important to ensure that users can appeal a decision taken regarding content they posted as well as being a safeguard to protect their right to free speech,” the Commission report also notes, specifying that Facebook reported having received 1.1 million appeals related to content actioned for hate speech between January 2019 and March 2019, and that 130,000 pieces of content were restored “after a reassessment”.
On volumes of hate speech, the Commission suggested the number of notices concerning hate speech content is roughly in the range of 17-30% of total content, noting for example that Facebook reported removing 3.3M pieces of content for violating hate speech policies in the last quarter of 2018 and 4M in the first quarter of 2019.
“The ecosystems of hate speech online and magnitude of the phenomenon in Europe remains an area where more research and data are needed,” the report added.
The encrypted instant messenger Telegram said on Monday it’s ramping up efforts to develop anti-censorship technologies serving users in countries where it is banned or partially blocked, including China and Iran.
“Over the course of the last two years, we had to regularly upgrade our ‘unblocking’ technology to stay ahead of the censors… We don’t want this technology to get rusty and obsolete. That is why we have decided to direct our anti-censorship resources into other places where Telegram is still banned by governments — places like Iran and China,” co-founder and chief executive Pavel Durov, who lived in Russia for years before going into self-imposed exile, posted on his personal Telegram channel on Monday.
The pledge noticeably came on the heels of the Russian government’s decision to lift its ban on Telegram last week. The app has generated impressive growth in Russia even after it was officially banned in the country in 2018 over its refusal to hand over encryption keys to the authorities who would then have access to users’ content. The restriction prompted the company to launch the “Digital Resistance” initiative that would provide anti-blocking tools to users.
As a result, Telegram resumed accessibility within weeks in most of Russia, and enforcement of the ban has since remained patchy. The app’s monthly active users have doubled since 2018, reaching 400 million in May, with 30 million coming from Russia.
Despite its popularity, the app is trapped in limbo as it copes with disgruntled investors who put up big bucks for the company’s ambitious blockchain platform, Telegram Open Network, which terminated abruptly in May.
It’s unclear why Russia suddenly decided to change tack on Telegram. In a statement, Roskomnadzor, the telecommunications authority that initially ordered the ban, said the decision came after it had assessed the “readiness expressed by the founder of Telegram to counter terrorism and extremism.”
This inevitably raised questions about what concessions Telegram has made to the Russian state. Durov stressed that his company uses advanced mechanisms to detect and prevent terrorist acts without compromising user privacy, the very ethos of Telegram. Time will tell how the app can accommodate two challenging tasks that are widely seen as mutually exclusive.
The government may also have a motive to unblock Telegram, which is particularly popular among Russian youngsters, as a constitutional vote that could extend Putin’s rule is scheduled for next month.
Many users in countries where Telegram is inaccessible, like China, run the app with virtual private networks (VPN) or other forms of proxy. The app has turned into a refuge for Chinese users to share and discuss information censored by the authorities.
For instance, following Beijing’s crackdown on bitcoin in 2017, traders flocked to Telegram and other encrypted messengers that were beyond the reach of the Chinese government. Earlier this year, many Chinese citizens seeking clarity around the coronavirus situation got around the Great Firewall to join Telegram channels maintained by volunteers sharing hourly updates on the virus. One of the largest Chinese channels focused on COVID-19 has amassed more than 85,000 followers.
While a French online hate speech law has just been derailed by the country’s top constitutional authority on freedom of expression grounds, Germany is beefing up hate speech rules — passing a provision that will require platforms to send suspected criminal content directly to the Federal police at the point it’s reported by a user.
The move is part of a wider push by the German government to tackle a rise in right wing extremism and hate crime — which it links to the spread of hate speech online.
Germany’s existing Network Enforcement Act (aka the NetzDG law) came into force in the country in 2017, putting an obligation on social network platforms to remove hate speech within set deadlines as tight as 24 hours for easy cases — with fines of up to €50M should they fail to comply.
Yesterday the parliament passed a reform which extends NetzDG by placing a reporting obligation on platforms which requires them to report certain types of “criminal content” to the Federal Criminal Police Office.
A wider reform of the NetzDG law is ongoing in parallel. It’s intended to bolster user rights and transparency, including by simplifying user notifications and making it easier for people to object to content removals and have successfully appealed content restored, among other tweaks. Broader transparency reporting requirements are also looming for platforms.
The NetzDG law has always been controversial, with critics warning from the get go that it would lead to restrictions on freedom of expression by incentivizing platforms to remove content rather than risk a fine. (Aka, the risk of ‘overblocking’.) In 2018 Human Rights Watch dubbed it a flawed law — critiquing it for being “vague, overbroad, and turn[ing] private companies into overzealous censors to avoid steep fines, leaving users with no judicial oversight or right to appeal”.
The latest change to hate speech rules is no less controversial: Now the concern is that social media giants are being co-opted to help the state build massive databases on citizens without robust legal justification.
A number of amendments to the latest legal reform were rejected, including one tabled by the Greens which would have prevented the personal data of the authors of reported social media posts from being automatically sent to the police.
The political party is concerned about the risk of the new reporting obligation being abused — resulting in data on citizens who have not in fact posted any criminal content ending up with the police.
It also argues there are only weak notification requirements to inform authors of flagged posts that their data has been passed to the police, among sundry other criticisms.
The party had proposed that only the post’s content would be transmitted directly to police who would have been able to request associated personal data from the platform should there be a genuine need to investigate a particular piece of content.
The German government’s reform of hate speech law follows the 2019 murder of a pro-refugee politician, Walter Lübcke, by neo-Nazis, which it said was preceded by targeted threats and hate speech online.
The government also argues that hate speech online has a chilling effect on free speech and a deleterious impact on democracy by intimidating those it targets — meaning they’re unable to freely express themselves or participate without fear in society.
At the pan-EU level, the European Commission has been pressing platforms to improve their reporting around hate speech takedowns for a number of years, after tech firms signed up to voluntary EU Code of Conduct on hate speech.
It is also now consulting on wider changes to platform rules and governance — under a forthcoming Digital Services Act which will consider how much liability tech giants should face for content they’re fencing.
FCC Commissioner Geoffrey Starks has examined the President’s Executive Order that attempts to spur the FCC into action against social media companies and found it wanting. “There are good reasons for the FCC to stay out of this debate,” he said. “The decision is ours alone.”
The Order targets Section 230 of the Communications Decency Act, which ensures that platforms like Facebook and YouTube aren’t liable for illegal content posted to them, as long as they make efforts to take it down in accordance with the law.
Some in government feel these protections go too far and have led to social media companies suppressing free speech. Trump himself clearly felt suppressed when Twitter placed a fact-check warning on unsupported claims of fraud in mail-in voting, leading directly to the Order.
Starks gave his take on the topic in an interview with the Information Technology and Innovation Foundation, a left-leaning think tank that pursues tech-related issues. While he is just one of five commissioners and the FCC has yet to consider the order in any official sense, his words have weight, as they indicate serious legal and procedural objections to it.
“The Executive Order definitely gets one thing right, and that is that the President cannot instruct the FCC to do this or anything else,” he said. “We’re an independent agency.”
He was careful to make clear that he doesn’t think the law is perfect — just that this method of changing it is completely unjustified.
“The broader debate about section 230 long predates President Trump’s conflict with Twitter in particular, and there are so many smart people who believe the law here should be updated,” he explained. “But ultimately that debate belongs to Congress. That the president may find it more expedient to influence a 5-member commission than a 538-member Congress is not a sufficient reason, much less a good one, to circumvent the constitutional function of our democratically elected representatives.”
The Justice Department has entered the picture as well, offering its own recommendations for changing Section 230 today — though like the White House, Justice has no power to directly change or invent responsibilities for the FCC.
Fellow Commissioner Jessica Rosenworcel echoed his concerns, paraphrasing an earlier statement on the order: “Social media can be frustrating, but turning the FCC into the President’s speech police is not the answer.”
After detailing some of the legal limitations of the FCC, Section 230, and the difficulty and needlessness of narrowly defining “good faith” actions, Starks concluded that the order simply doesn’t make a lot of sense in their context.
“The First Amendment allows social media companies to censor content freely in ways the government never could, and it prohibits the government from retaliating against them for that speech,” he said. “So much — so much — of what the president proposes here seems inconsistent with those core principles, making an FCC rulemaking even less desirable.”
“The worst case scenario, the one that burdens the proper functioning of our democracy, would be to allow the laxity here to bestow some type of credibility on the Executive Order, one that threatens certainly a new regulatory regime upon internet service providers with no credible legal support,” he continued.
Having said that, he acknowledged that the order does mean that some action should take place at the FCC — it may just not be the kind of resolution Trump wishes.
“I’m calling to press [the National Telecommunications and Information Administration] to send the petition as quickly as possible. I see no reason why they should need more than 30 days from the Executive Order’s issuance itself so we can get on with it, have the FCC review it and vote,” he said. “And if, as I suspect it ultimately will, the petition fails at a legal question of authority, I think we should say it loud and clear, and close the book on this unfortunate detour. Let us not allow an upcoming election season to use a pending proceeding to, in my estimation, intimidate private parties.”
A lot of this is left to Chairman Ajit Pai, who has fairly consistently fallen in line with the administration’s wishes. And if the eagerness of Commissioner Carr is any indicator, the Republican members of the Commission are happy to respond to the President’s “call for guidance.”
So far there has been no official announcement of FCC business relating to the Executive Order, but if the NTIA moves quickly we could hear about it as early as next month’s open meeting.