Watch Zuckerberg, Pichai and Dorsey testify at the House hearing on disinformation and extremism

Big tech is back on the virtual hill.

Three of tech’s most prominent CEOs will appear before the House Energy and Commerce committee today at 9AM PT as lawmakers grill the companies on their failure to contain disinformation and extremism.

In opening statements made available before the hearing, Facebook’s Mark Zuckerberg, Twitter’s Jack Dorsey and Google’s Sundar Pichai each laid out the conversation they’d prefer to have.

Zuckerberg pushed for reforms to Section 230 of the Communications Decency Act that wouldn’t resolve the issues at hand, but would probably give Facebook another leg up on smaller competitors. Google defended Section 230 and pointed to its own often mild or delayed efforts to contain election misinformation that ultimately snowballed into the attack on the U.S. Capitol. Twitter mostly looked forward rather than back, pointing to initiatives to make its own algorithms transparent and to invite more community-level moderation efforts.

The topic at hand Thursday is a big one, and there are plenty of directions lawmakers might take the hearing. In recent months, the two subcommittees leading the joint hearing have questioned Facebook about its algorithmic group recommendations — a frequent concern among extremism experts — and reports that the company served combat gear ads next to posts promoting the Capitol riot. More broadly, the committee will delve into social media’s role in disseminating dangerous misinformation, but it’s possible we’ll take detours through some regulatory solutions like antitrust legislation and Section 230 reform along the way.

If you’d like to follow along, we’ve embedded the hearing above or you can check back for other coverage as we go.

#capitol-riot, #congress, #facebook, #government, #tc, #the-battle-over-big-tech

You can now give Facebook’s Oversight Board feedback on the decision to suspend Trump

Facebook’s “Supreme Court” is now accepting comments on one of its earliest and likely most consequential cases. The Facebook Oversight Board announced Friday that it would begin accepting public feedback on Facebook’s suspension of former President Trump.

Mark Zuckerberg announced Trump’s suspension on January 7, after the then-president of the United States incited his followers to riot at the nation’s Capitol, an event that resulted in a number of deaths and imperiled the peaceful transition of power.

In a post calling for feedback, the Oversight Board describes the two posts that led to Trump’s suspension. One is a version of the video the president shared the day of the Capitol riot in which he sympathizes with rioters and validates their claim that the “election was stolen from us.” In the second post, Trump reiterates those views, falsely bemoaning a “sacred landslide election victory” that was “unceremoniously & viciously stripped away.”

The board says the point of the public comment process is to incorporate “diverse perspectives” from third parties who wish to share research that might inform their decisions, though it seems a lot more likely the board will wind up with a tidal wave of subjective and probably not particularly useful political takes. Nonetheless, the comment process will be open for 10 days and comments will be collected in an appendix for each case. The board will issue a decision on Trump’s Facebook fate within 90 days of January 21, though the verdict could come sooner.

The Oversight Board specifically invites public comments that consider:

  • Whether Facebook’s decision to suspend President Trump’s accounts for an indefinite period complied with the company’s responsibilities to respect freedom of expression and human rights, if alternative measures should have been taken, and what measures should be taken for these accounts going forward.
  • How Facebook should assess off-Facebook context in enforcing its Community Standards, particularly where Facebook seeks to determine whether content may incite violence.
  • How Facebook should treat the expression of political candidates, office holders, and former office holders, considering their varying positions of power, the importance of political opposition, and the public’s right to information.
  • The accessibility of Facebook’s rules for account-level enforcement (e.g. disabling accounts or account functions) and appeals against that enforcement.
  • Considerations for the consistent global enforcement of Facebook’s content policies against political leaders, whether at the content-level (e.g. content removal) or account-level (e.g. disabling account functions), including the relevance of Facebook’s “newsworthiness” exemption and Facebook’s human rights responsibilities.

The Oversight Board’s post gets very granular on the Trump suspension, critiquing Facebook for a lack of specificity when the company didn’t state exactly which part of its community standards was violated. Between this and the five recent cases, the board appears to view its role as a technical one, in which it examines each case against Facebook’s existing ruleset and then makes recommendations for future policy rather than working backward from its own broader recommendations.

The Facebook Oversight Board announced its first cluster of decisions this week, overturning the company’s own choice to remove potentially objectionable content in four of five cases. None of those cases pertained to content relevant to Trump’s account suspension, but they prove that the Oversight Board isn’t afraid to go against the company’s own thinking — at least when it comes to what gets taken down.

#capitol-riot, #donald-trump, #facebook, #social, #tc

Threat of inauguration violence casts a long shadow over social media

As the U.S. heads into one of the most perilous phases of American democracy since the Civil War, social media companies are scrambling to shore up their patchwork defenses for a moment they appear to have believed would never come.

Most major platforms pulled the emergency brake last week, deplatforming the president of the United States and enforcing suddenly robust rules against conspiracies, violent threats and undercurrents of armed insurrection, all of which had proliferated on those services for years. But within a week’s time, Amazon, Facebook, Twitter, Apple and Google had all made historic decisions in the name of national stability — and appearances. Snapchat, TikTok, Reddit and even Pinterest took their own actions to prevent a terror plot from being hatched on their platforms.

Now, we’re in the waiting phase. More than a week after a deadly pro-Trump riot invaded the iconic seat of the U.S. legislature, the internet still feels like it’s holding its breath, a now heavily-fortified inauguration ceremony looming ahead.

(Photo by SAUL LOEB/AFP via Getty Images)

What’s still out there

On the largest social network of all, images hyping follow-up events were still circulating midweek. One digital Facebook flyer promoted an “armed march on Capitol Hill and all state Capitols,” pushing the dangerous and false conspiracy that the 2020 presidential election was stolen.

Facebook says that it’s working to identify flyers calling for “Stop the Steal” adjacent events using digital fingerprinting, the same process it uses to remove terrorist content from ISIS and Al Qaeda. The company noted that it has seen flyers calling for events on January 17 across the country, January 18 in Virginia and inauguration day in D.C.
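Digital fingerprinting in this context generally means hashing a known piece of banned content and checking new uploads against the resulting database. A minimal sketch of the idea, with a placeholder fingerprint database (production systems use perceptual hashes such as PhotoDNA that survive resizing and re-encoding, whereas a plain SHA-256 only catches byte-identical copies):

```python
import hashlib

# Hypothetical database of fingerprints for known flyer images.
# A real system would store perceptual hashes so that cropped or
# re-encoded copies still match; SHA-256 matches exact files only.
KNOWN_FLYER_HASHES = {
    "9f2c1a7e...",  # placeholder entries, not real fingerprints
}

def fingerprint(image_bytes: bytes) -> str:
    """Return a hex digest used as the content fingerprint."""
    return hashlib.sha256(image_bytes).hexdigest()

def is_known_flyer(image_bytes: bytes) -> bool:
    """Check an upload against the fingerprint database."""
    return fingerprint(image_bytes) in KNOWN_FLYER_HASHES
```

Once a moderator flags one copy of a flyer, every subsequent upload can be checked automatically at scale, which is why this approach was first built out for ISIS and Al Qaeda propaganda.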

At least some of Facebook’s new efforts are working: one popular flyer TechCrunch observed on the platform was removed from some users’ feeds this week. A number of “Stop the Steal” groups we’d observed over the last month also unceremoniously blinked offline early this week following more forceful action from the company. Still, given the writing on the wall, many groups had plenty of time to tweak their names by a few words or point followers elsewhere to organize.

With only days until the presidential transition, acronym-heavy screeds promoting QAnon, an increasingly mainstream collection of outrageous pro-Trump government conspiracy theories, also remain easy to find. On one page with 2,500 followers, a QAnon believer pushed the debunked claim that anti-fascists executed the attack on the Capitol, claiming “January 6 was a trap.”

(Photo by Win McNamee/Getty Images)

On a different QAnon group, an ominous post from an admin issued Congress a warning: “We have found a way to end this travesty! YOUR DAYS ARE NUMBERED!” The elaborate conspiracy’s followers were well represented at the deadly riot at the Capitol, as the many giant “Q” signs and esoteric t-shirt slogans made clear.

In a statement to TechCrunch about the state of extremism on the platform, Facebook says it is coordinating with terrorism experts as well as law enforcement “to prevent direct threats to public safety.” The company also noted that it works with partners to stay aware of violent content taking root on other platforms.

Facebook’s efforts are late and uneven, but they’re also more than the company has done to date. Measures from big social networks coupled with the absence of far-right social networks like Parler and Gab have left Trump’s most ardent supporters once again swearing off Silicon Valley and fanning out for an alternative.

Social media migration

Private messaging apps Telegram and Signal are both seeing an influx of users this week, but they offer something quite different from a Facebook or Twitter-like experience. Some expert social network observers see the recent migration as seasonal rather than permanent.

“The spike in usage of messaging platforms like Telegram and Signal will be temporary,” Yonder CEO Jonathon Morgan told TechCrunch. “Most users will either settle on platforms with a social experience, like Gab, MeWe, or Parler, if it returns, or will migrate back to Twitter and Facebook.”

That company uses AI to track how social groups connect online and what they talk about — violent conspiracies included. Morgan believes that propaganda-spreading “performative internet warriors” make a lot of noise online, but a performance doesn’t work without an audience. Others may quietly pose a more serious threat.

“The different types of engagement we saw during the assault on the Capitol mirror how these groups have fragmented online,” Morgan said. “We saw a large mob who was there to cheer on the extremists but didn’t enter the Capitol, performative internet warriors taking selfies, and paramilitaries carrying flex cuffs (mislabeled as “zip ties” in a lot of social conversation), presumably ready to take hostages.

“Most users (the mob) will be back on Parler if it returns, and in the meantime, they are moving to other apps that mimic the social experience of Twitter and Facebook, like MeWe.”

Still, Morgan says that research shows “deplatforming” extremists and conspiracy-spreaders is an effective strategy and efforts by “tech companies from Airbnb to AWS” will reduce the chances of violence in the coming days.

Cleaning up platforms can help turn the masses away from dangerous views, he explained, but the same efforts might further galvanize people with an existing intense commitment to those beliefs. With the winds shifting, already heterogeneous groups will be scattered too, making their efforts desperate and less predictable.

Deplatforming works, with risks

Jonathan Greenblatt, CEO of the Anti-Defamation League, told TechCrunch that social media companies still need to do much more to prepare for inauguration week. “We saw platforms fall short in their response to the Capitol insurrection,” Greenblatt said.

He cautioned that while many changes are necessary, we should be ready for online extremism to evolve into a more fractured ecosystem. Echo chambers may become smaller and louder, even as the threat of “large scale” coordinated action diminishes.

“The fracturing has also likely pushed people to start communicating with each other via encrypted apps and other private means, strengthening the connections between those in the chat and providing a space where people feel safe openly expressing violent thoughts, organizing future events, and potentially plotting future violence,” Greenblatt said.

By their own standards, social media companies have taken extraordinary measures in the U.S. in the last two weeks. But social networks have a long history of facilitating violence abroad, even as attention turns to political violence in America.

Greenblatt repeated calls for companies to hire more human moderators, a suggestion often made by experts focused on extremism. He believes social media could still take other precautions for inauguration week, like introducing a delay into livestreams or disabling them altogether, bolstering rapid response teams and suspending more accounts temporarily rather than focusing on content takedowns and handing out “strikes.”

“Platforms have provided little-to-nothing in the way of transparency about learnings from last week’s violent attack in the Capitol,” Greenblatt said.

“We know the bare minimum of what they ought to be doing and what they are capable of doing. If these platforms actually provided transparency and insights, we could offer additional—and potentially significantly stronger—suggestions.”

#capitol-riot, #facebook-misinformation, #hate-speech, #misinformation, #social, #tc

This Week in Apps: Parler deplatformed, alt apps rise, looking back at 2020 trends

Welcome back to This Week in Apps, the weekly TechCrunch series that recaps the latest in mobile OS news, mobile applications and the overall app economy.

The app industry is as hot as ever, with a record 218 billion downloads and $143 billion in global consumer spend in 2020.

Consumers last year also spent 3.5 trillion minutes using apps on Android devices alone. And in the U.S., app usage surged ahead of time spent watching live TV: the average American watches 3.7 hours of live TV per day but now spends four hours per day on their mobile devices.

Apps aren’t just a way to pass idle hours — they’re also a big business. In 2019, mobile-first companies had a combined $544 billion valuation, 6.5x higher than those without a mobile focus. In 2020, investors poured $73 billion in capital into mobile companies — a figure that’s up 27% year-over-year.

Top Stories

The right-wing gets deplatformed

Last weekend, Google and Apple removed Parler from their respective app stores, the latter after first giving the app 24 hours to come up with a new moderation strategy to address the threats of violence and illegal activity taking place on the app in the wake of the Capitol riot. When Parler failed to take adequate measures, the app was pulled down.

What happened afterwards was unprecedented. All of Parler’s backend technology providers pulled their support too, including Amazon AWS (which has led to a lawsuit), Stripe and even Okta, which Parler was only using as a free trial. Other vendors also refused to do business with the app, potentially ending its ability to operate for good.

But although Parler is down, its data lives on. Several efforts have been made to archive Parler data for posterity — and for tipping off the FBI. Gizmodo made a map using the GPS data of 70,000 Parler posts. Another effort, Y’all Qaeda, is also using location data to map videos from Parler to locations around the Capitol building.

These visualizations are possible because the data itself was quickly archived by internet archivist @donk_enby before Parler was taken down, and because Parler stored rich metadata with each user’s post. That means each user’s precise location was recorded when they uploaded their photos and videos to the app.
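Those mapping projects were possible because image and video files carry EXIF GPS tags encoded as degree/minute/second values plus a hemisphere reference. A minimal sketch of the coordinate conversion step (the EXIF extraction itself is assumed to be handled by a library such as Pillow or a tool like exiftool; the coordinates below are approximate values for the U.S. Capitol, used for illustration):

```python
def dms_to_decimal(degrees: float, minutes: float, seconds: float, ref: str) -> float:
    """Convert EXIF-style degrees/minutes/seconds plus a hemisphere
    reference ('N'/'S'/'E'/'W') into decimal degrees for mapping."""
    value = degrees + minutes / 60 + seconds / 3600
    # Southern and western hemispheres are negative in decimal notation.
    return -value if ref in ("S", "W") else value

# The U.S. Capitol sits near 38°53'23" N, 77°00'32" W.
lat = dms_to_decimal(38, 53, 23, "N")   # ≈ 38.8897
lon = dms_to_decimal(77, 0, 32, "W")    # ≈ -77.0089
```

With each post’s decimal coordinates in hand, plotting uploads within a radius of the Capitol building becomes a simple filtering exercise, which is exactly what the visualization efforts did.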

It’s a gold mine for investigators, and a further indication of how confident these rioters were that they would avoid prosecution, or of how much they were willing to throw their lives away for their cause: the false reality painted for them by Trump, his allies and other outlets that repeated the “big lie” until followers truly believed only a revolution could save our democracy.

The move to kick Parler offline followed the broader deplatforming of Trump, who’s accused of inciting the violence, in part by his refusal to concede and his continued lies about a “rigged election.” As a result, Trump has been deplatformed across social platforms like Twitter, Facebook, Instagram, TikTok, Twitch, YouTube, Reddit, Discord and Snapchat, while e-commerce platform Shopify kicked out Trump merch shops and PayPal refused to process transactions for some groups of Trump supporters.

Alternative social apps post gains following Capitol riot

Parler was the most high-profile app used by the Capitol rioters, but others found themselves compromised by the same crowd. Walkie-talkie app Zello, for instance, was used by some insurrectionists to communicate during the January 6 riot. Telegram, meanwhile, recently had to block dozens of hardcore hate channels that were threatening violence, including channels led by Nazis (which, some claim, had been reported for years with no action from the company).

Now, many in the radical right are moving to new platforms outside of the mainstream. Immediately following the Capitol riot, MeWe, CloutHub and other privacy-focused rivals to big tech began topping the app stores, alongside the privacy-focused messengers Signal and Telegram. YouTube alternative Rumble also gained ground due to recent events. Right-wingers even mistakenly downloaded the wrong “Parlor” app and a local newspaper app they thought was the uncensored social network Gab. (They’re not always the brightest bulbs.)

This could soon prove to be another difficult situation for platforms to address: we have already come across highly concerning posts on MeWe that used extreme hate speech or threatened violence. MeWe claims it moderates its content, but its recent growth to 15 million users may be making that difficult, especially since it’s inheriting former Parler users, including the radical far-right. If the company can’t properly moderate that influx, it may be the next app to face removal.

2020 annual review

App Annie this week released its annual review of the mobile app industry, finding (as noted above) that mobile app downloads grew by 7% year-over-year to a record 218 billion in 2020. Consumer spending also grew by 20%, hitting its own milestone of $143 billion, led by markets including China, the United States, Japan, South Korea and the United Kingdom. Consumers spent 3.5 trillion minutes on Android devices in 2020. Meanwhile, U.S. users now spend more time in apps (four hours a day) than watching live TV (3.7 hours).

The full report examines other key trends across social, gaming, finance, e-commerce, video and streaming, mobile food ordering, business apps, edtech and much more. We pulled out some highlights here, such as TikTok’s chart-topping year by downloads, the rise in livestreamed and social shopping, consumers spending 40% more time streaming on mobile YoY and other key trends.

Sensor Tower also released its own annual report, which specifically explored the impact of COVID-19; the growth in business apps, led by Zoom; mobile gaming; and the slow recovery of travel apps, among other things.

Samsung reveals its new flagships

Image Credits: Samsung

Though not “apps” news per se, it’s worth making note of what’s next in the Android ecosystem of high-end devices. This week was Samsung’s Unpacked press event, where the company revealed its latest flagship devices and other products. The big news was Samsung’s three new phones and their now lower prices: the glass-backed Galaxy S21 ($799) and S21 Plus ($999), and the S21 Ultra ($1,199), which is S Pen compatible.

The now more streamlined camera systems are the key feature of the new phones, and include:

  • S21 and S21 Plus: A 12-megapixel ultrawide, 12-megapixel wide and 64-megapixel telephoto with 30x space zoom.
  • S21 Ultra: A 12-megapixel ultra-wide, 108-megapixel wide and, for the first time, a dual-telephoto lens system with 3x and 10x optical zoom. The Ultra also improves low-light shooting with its Bright Night sensor.

The devices support UWB, and there’s a wild AI-powered photo feature that lets you tap to remove people from the background of your photos. (How well it works is TBD.) Other software imaging updates let you pull stills from 8K footage, offer better image stabilization and add a new “Vlogger view” for shooting from the front and back cameras at the same time.

Also launched were Samsung’s AirPods rival, the Galaxy Buds Pro, and its Tile rival, the Galaxy SmartTag.

 

Weekly News

Platforms: Apple

  • Apple releases second iOS 14.4 developer beta. The update brings improvements to the HomePod mini handoff experience and an update to the Find My app to ready it for supporting third-party accessories.
  • Apple will soon allow third-parties to join the Find My app ahead of its AirTags launch. Tile had argued before regulators last year that Apple was giving itself first-party advantage with AirTags in Find My. Apple subsequently launched the Find My Accessory Program to begin certifying third-party products. AirTags’ existence was also leaked again this week.
  • Apple is working to bring its Music and Podcasts apps to the Microsoft Store.
  • Apple may be working on a podcast subscription service, per The Information.

Platforms: Google

  • Google appears to be working on an app hibernation feature for Android 12. The feature would hibernate unused apps to free up space.
  • Google pulls several personal loan apps from the Play Store in India. The company said several of the apps had been targeting vulnerable borrowers, then abusing them and using other extreme tactics when they couldn’t pay. Critics say Google took too long to respond to the outcry, which has already prompted suicides. Police have also frozen bank accounts holding $58 million for alleged scams conducted through 30 apps, none of which had approval from India’s central bank.

Gaming

Image Credits: Sensor Tower

  • 48,000 mobile games were purged from the China App Store in December 2020, reports Sensor Tower. The games, removed in 2020 for lacking the proper Chinese gaming license, had generated nearly $3 billion in lifetime revenue.
  • The top grossing mobile game in December 2020 was Honor of Kings with $258 million in player spending, up 58% year-over-year, according to Sensor Tower. PUBG Mobile was No. 2, followed by Genshin Impact.
  • Among Us was the most downloaded mobile game in December 2020, per Apptopia, with an estimated 48 million new downloads in the month, most through Google Play.
  • Epic Games demands that Fortnite be reinstated on the App Store in a U.K. legal filing. The game maker is engaged in multiple lawsuits over the “Apple tax.”

Security

  • Amazon’s Ring app exposed users’ home addresses. Amazon says there’s no evidence the security flaw had been exploited by anyone.
  • New research details how law enforcement gets into iOS and Android smartphones and the cloud backups of their data.

Privacy

  • Signal’s Brian Acton says recent outrage over WhatsApp’s terms is driving installs of the private messaging app. Third-party data indicates Signal had around 20 million MAUs as of December 2020. The app also saw a surge due to the U.S. Capitol riots, with 7.5 million downloads from January 6-10.
  • Telegram user base in India was up 110% in 2020. The app now has 115 million MAUs in India, which could allow it to better compete with WhatsApp.
  • Privacy concerns are also driving sign-ups for encrypted email providers ProtonMail and Tutanota. The former reports a 3x rise in sign-ups in recent weeks, while the latter says usage has doubled since WhatsApp released its new T&Cs.
  • FTC settled with period-tracking app Flo for sharing user health data with third-party analytics and marketing services, when it had promised to keep data private. The app must now obtain user consent and will be subject to an independent review of its practices.
  • FTC settled with Ever, the maker of a photo storage app that had pivoted to selling facial recognition services. The company used the photos it collected to train facial recognition algorithms. It’s been ordered to delete that data, along with any face embeddings derived from photos collected without user consent.
  • Muslim prayer app Salaat First (Prayer Times) was found to be recording and selling user location info to a data broker. The firm collecting the data had been linked to a supply chain that involved a U.S. government contractor who worked with ICE, Customs and Border Protection, and the FBI.
  • TikTok changed the privacy settings and defaults for users under 18. Children 13-15 will have private accounts by default. Other restrictions apply on features like commenting, Dueting, Stitching and more for all under 18. TikTok also partnered with Common Sense Networks to help it curate age-appropriate content for users under 13.

Government & Policy

  • Italy’s data protection agency, the GPDP, said it contacted the European Data Protection Board (EDPB) to raise concerns over WhatsApp’s requirement for users to accept its updated T&Cs to continue to use the service. The law requires that users are informed of each specific use of their data and given a choice as to whether their data is processed. The new in-app notification doesn’t make the changes clear nor allow that option.
  • Turkey starts an antitrust investigation into Facebook and WhatsApp. The investigation was prompted by WhatsApp’s new Terms of Service, effective February 8, which allows data sharing with Facebook.
  • WhatsApp then delayed its T&C changes, as a result.

Health & Fitness

  • Google this week fixed an issue with its Android Exposure Notification System that’s used by COVID-19 tracking apps. The impacted apps took longer to load and carry out their exposure checks.

Edtech

  • Amazon makes an education push in India with JEE preparation app. The company launched Amazon Academy, a service that will help students in India prepare for the Joint Entrance Examinations (JEE), a government-backed entrance assessment for admission into various engineering colleges.

Funding and M&A (and IPOs)

  • PayPal acquired the 30% stake it didn’t already own in China’s GoPay, making it the first foreign firm in China with full ownership of its payments business.
  • Therapy app Talkspace will go public through a $1.4 billion merger with SPAC Hudson Executive Investment Corp.
  • Snap acquired location data startup StreetCred. The team will join the company and work on maps and location-related products for Snapchat.
  • BlaBla raised $1.5 million for its language-learning app that teaches English using TikTok-like videos. The startup, a participant in Y Combinator’s 2020 summer batch, had previously applied to YC seven times. Other investors include Amino Capital, Starling Ventures and Wayra X.
  • Poshmark, the online and mobile app for reselling clothing, IPO’d and closed up more than 140% on day one.
  • Dating app Bumble also filed to go public. The company claims 42 million MAUs, with 2.4 million paying users through the first nine months of 2020. It lost $117 million on $417 million in revenue during that time.
  • Blog platform Medium acquired Paris-based Glose, a mobile app that lets you buy and read books on mobile devices.
  • Indonesian investment app Ajaib raised $25 million Series A led by Horizons Venture and Alpha JWC. Inspired by Robinhood, the app offers low-fee stock trading and access to mutual funds.
  • Mailchimp acquired Chatitive, a B2B messaging startup that helps businesses reach customers over text messages.
  • Chinese fitness app Keep raised $360 million Series F led by SoftBank Vision Fund. The six-year-old startup that allows fitness influencers to host live classes over video is now valued at $2 billion.
  • Google finalized its Fitbit acquisition. Google confirmed it will allow Fitbit users to continue to connect with third-party services and said health data will be kept separate and not used for ads.
  • On-demand U.K. supermarket Weezy raised $20 million Series A for its Postmates-like app that delivers groceries in as fast as 15 minutes, on average.

Downloads

Bandsintown

COVID has cancelled concerts, which required Bandsintown to pivot from helping people find shows to offering a new subscription service for live music. The company this week launched Bandsintown Plus, a $9.99 per month pass that gives users access to more than 25 concerts per month. The shows offered are exclusive to the platform and not available on other sites like YouTube, Twitch, Apple Music or Spotify.

Piñata Farms

Image Credits: Piñata Farms

This new social video app lets you put anyone or anything into an existing video to make humorous video memes. The computer vision-powered app lets you do things like crop out a head from a photo, for example, or use thousands of in-app items to add to your existing video. The resulting creations can be shared in the app, privately through messaging or out to other social platforms. Available on iOS only.

Capture App

Image Credits: Numbers Protocol

This new blockchain camera app, reviewed here on TechCrunch, uses tech commercialized by Taiwan-based startup Numbers Protocol. The app secures the metadata associated with the photos you take on the blockchain, while letting users adjust privacy settings if they don’t want to share a precise location. Any subsequent changes to a photo are then traced and recorded. Use cases for the technology include journalism (including combating fake news), as well as helping photographers ensure their photos are attributed correctly. The app is available on the App Store and Google Play.

Marsbot for AirPods

Image Credits: Foursquare Labs, Inc.

A new experiment from Foursquare Labs, Marsbot, offers an audio guide to your city. As you walk or bike around, the app gives you running commentary about the places around you using data from Foursquare, other content providers and snippets from other app users. The app is also optimized for AirPods, making it iOS-only.

Loupe

Image Credits: Loupe

Loupe is a new app that modernizes sports card collecting. The app allows users to participate in daily box breaks, host their own livestreams with chats, collect alongside fellow collectors and purchase new sports card singles, packs and boxes when they hit the market, among other things. The app is available on iOS.

 

#android, #android-apps, #app-stores, #apple, #apps, #capitol-riot, #developers, #google, #ios, #ios-apps, #mobile, #mobile-apps, #parler, #tc, #this-week-in-apps

Facebook blocks new events around DC and state capitols

As a precaution against coordinated violence as the U.S. approaches President-elect Joe Biden’s inauguration, Facebook announced a few new measures it’s putting in place.

In a blog post and tweets from Facebook Policy Communications Director Andy Stone, the company explained that it would block any events slated to happen near the White House, the U.S. Capitol or any state capitol building through Wednesday.

The company says it will also do “secondary” sweeps through any inauguration-related events to look for violations of its policies. At this point, that includes any content connected to the “Stop the Steal” movement perpetuating the rampant lie that Biden’s victory is illegitimate. Those groups continued to thrive on Facebook until the company took more forceful measures at the beginning of this week.

Facebook will apparently also be putting new restrictions in place for U.S. users who repeatedly break the company’s rules, including barring those accounts from livestreaming videos, events and group pages.

Those precautions fall short of what some of Facebook’s critics have called for, but they’re still notable measures for a company that only began taking dangerous conspiracies and armed groups seriously in the last year.

#capitol-riot, #facebook, #social, #tc

0

Telegram blocks ‘dozens’ of channels threatening violence

With many social networks suddenly reevaluating their policies in light of political violence in the U.S., the popular messaging app Telegram is implementing a crackdown of its own.

Telegram confirmed to TechCrunch that it has removed “dozens” of public channels over the course of the last 24 hours after those accounts, some of which have thousands of followers, issued public calls to violence. “Our Terms of Service expressly forbid public calls to violence,” Telegram spokesperson Mike Ravdonikas told TechCrunch.

Asked if those takedowns relate to last week’s violent siege of the U.S. Capitol, Ravdonikas said that Telegram is “monitoring the current situation closely.”

The company confirmed that a number of accounts TechCrunch had previously observed promoting white supremacy, Nazi iconography and other forms of far-right extremism were part of the new enforcement action, which is still expanding. Some of the blocked channels were still viewable on Telegram’s web client Wednesday.

One of those groups bemoaned Telegram’s bans Tuesday in a post displaying a Nazi flag and the warning “you can’t kill an idea.” Prior to being taken down Wednesday, that channel boasted more than 10,000 followers.

Many extremist channels began publicizing backup accounts Tuesday, pointing subscribers to dozens of other groups where they could continue to gather. Other sympathetic channels chronicled the bans in real-time, posting screenshots documenting violations of Telegram’s terms of service.

Telegram’s new batch of takedowns appears to be connected to an effort by self-described anti-fascist and activist Gwen Snyder, who marshaled Twitter users in a “mass-reporting campaign” following last week’s violent invasion of the U.S. Capitol.

“For years, we’ve been tracking these Nazi Terrorgram channels and reporting horrendous, explicit calls to racist violence and insurrection, and Telegram did nothing,” Snyder told TechCrunch. “It worked, and Telegram is finally dismantling the network of Nazi channels that have spent months and years overtly attempting to incite just the sort of terror we saw in DC.”

With Telegram channels blaming Snyder for the takedowns, her home address has widely circulated on the app in an ongoing doxing campaign. On one channel calling for her death, an image depicts Snyder’s face with a bloody hole in its forehead. Another image includes an address, screenshots of her Twitter posts and the text “You know what to do.”

Snyder says she heard pounding on her door Tuesday night. “My address is all over those channels with people saying I should be shot and raped for this, and they only have to convince one person.”

With President Trump suspended from most major social media platforms and restrictions tightening on pro-Trump conspiracies like QAnon and the Stop the Steal movement, the president’s followers have fled in droves to platforms that remain willing to incubate extremism.

Prominent among those is Parler, a social network hailed by many pro-Trump figures as a politically friendly alternative to mainstream social media. But with Parler offline after Amazon suspended its web hosting services and Apple and Google booted it from their respective app stores, a number of users flocked to more private options where violent extremism continues to flourish, including Telegram.

This story is developing…

#capitol-riot, #tc, #telegram

0

The Capitol riot and its aftermath makes the case for tech regulation more urgent, but no simpler

Last week and throughout the weekend, technology companies took the historic step of deplatforming the president of the United States in the wake of a riot in which the U.S. Capitol was stormed by a collection of white nationalists, QAnon supporters and right-wing activists.

The decision to remove Donald Trump, his fundraising and moneymaking apparatus, and a large portion of his supporters from their digital homes because of their incitements to violence in the nation’s Capitol on January 6th and beyond, has led a chorus of voices to call for the regulation of the giant tech platforms.

They argue that private companies shouldn’t have the sole power to erase the digital footprint of a sitting president.

But there’s a reason why the legislative hearings in Congress, and the pressure from the president, have not created any new regulations. And there’s also a reason why — despite all of the protestations from the president and his supporters — no lawsuits have effectively been brought against the platforms for their decisions.

The law, for now, is on their side.

The First Amendment and freedom of speech (for platforms)

Let’s start with the First Amendment. The protections of speech afforded to American citizens under the First Amendment only apply to government efforts to limit speech. While the protection of all speech is assumed as something enshrined in the foundations of American democracy, the founders appear to have only wanted to shield speech from government intrusions.

That position makes sense if you’re a band of patriots trying to ensure that a monarch or dictator can’t abuse government power to silence its citizens or put its thumb on the scale in the marketplace of ideas.

The thing is, that marketplace of ideas is always open, but publishers and platforms have the freedom to decide what they want to sell into it. Ben Franklin would never have published pro-monarchist sentiments on his printing presses, but he would probably have let Thomas Paine have free rein.

So, the First Amendment doesn’t protect an individual’s right to access any platform and say whatever the hell they want. In fact, in many cases it protects businesses from having their own freedom of speech violated by a government that would force them to publish something they don’t want to on their platforms.

Section 230 and platform liability 

BuT WhAt AbOUt SeCTiOn 230, one might ask (and if you do, you’re not alone)?

Unfortunately for Texas Governor Greg Abbott and others who believe that repealing Section 230 would open the door for less suppression of speech by online platforms, they’re wrong.

First, the cancellation of speech by businesses isn’t actually hostile to the foundation America was built on. If a group doesn’t like the way it’s being treated in one outlet, it can try and find another. Essentially, no one can force a newspaper to print their letter to the editor.

Second, users’ speech isn’t what is protected under Section 230; it protects platforms from liability for that speech, which indirectly makes it safe for users to speak freely.

Where things get complicated is in the difference between the letter to an editor in a newspaper and a tweet on Twitter, post on Facebook, or blog on Medium (or WordPress). And this is where U.S. Code Section 230 comes into play.

Right now, Section 230 protects all of these social media companies from legal liability for the stuff that people publish on their platforms (unlike publishers). The gist of the law is that since these companies don’t actively edit what people post on the platforms, but merely provide a distribution channel for that content, then they can’t be held accountable for what’s in the posts.

The companies argue that they’re exercising their own rights to freedom of speech through the algorithms they’ve developed to highlight certain pieces of information or entertainment, or in removing certain pieces of content. And their broad terms of service agreements also provide legal shields that allow them to act with a large degree of impunity.

Repealing Section 230 would make platforms more restrictive rather than less restrictive about who gets to sell their ideas in the marketplace, because it would open up the tech companies to lawsuits over what they distribute across their platforms.

One of the authors of the legislation, Senator Ron Wyden, thinks repeal is an existential threat to social media companies. “Were Twitter to lose the protections I wrote into law, within 24 hours its potential liabilities would be many multiples of its assets and its stock would be worthless,” Senator Wyden wrote back in 2018. “The same for Facebook and any other social media site. Boards of directors should have taken action long before now against CEOs who refuse to recognize this threat to their business.”

Others believe that increased liability for content would actually be a powerful weapon to bring decorum to online discussions. As Joe Nocera argues in Bloomberg BusinessWeek today:

“… I have come around to an idea that the right has been clamoring for — and which Trump tried unsuccessfully to get Congress to approve just weeks ago. Eliminate Section 230 of the Communications Decency Act of 1996. That is the provision that shields social media companies from legal liability for the content they publish — or, for that matter, block.

The right seems to believe that repealing Section 230 is some kind of deserved punishment for Twitter and Facebook for censoring conservative views. (This accusation doesn’t hold up upon scrutiny, but let’s leave that aside.) In fact, once the social media companies have to assume legal liability — not just for libel, but for inciting violence and so on — they will quickly change their algorithms to block anything remotely problematic. People would still be able to discuss politics, but they wouldn’t be able to hurl anti-Semitic slurs. Presidents and other officials could announce policies, but they wouldn’t be able to spin wild conspiracies.”

Conservatives and liberals clamoring for the removal of Section 230 protections may find that it would reinstitute a level of comity online, but that the fringes would be even further marginalized. If you’re a free speech absolutist, that may or may not be the best course of action.

What mechanisms can legislators use beyond repealing Section 230? 

Beyond the blunt instrument that is repealing Section 230, legislators could take other steps to mandate that platforms carry speech and continue to do business with certain kinds of people and platforms, however odious their views or users might be.

Many of these steps are outlined in this piece from Daphne Keller on “Who do you sue?” from the Hoover Institution.

Most of them hinge on some reinterpretation of older laws relating to commerce and the provision of services by utilities, or on the “must-carry” requirements put in place in the early days of 20th century broadcasting, when radio and television were distributed over airwaves provided by the federal government.

These older laws involve either designating internet platforms as “essential, unavoidable, and monopolistic services to which customers should be guaranteed access”; or treating the companies like the railroad industry and mandating compulsory access, requiring tech companies to accept all users and not modify any of their online speech.

Other avenues could see lawmakers use variations on the laws designed to limit the power of channel owners to edit the content they carried — including things like the fairness doctrine from the broadcast days or net neutrality laws that are already set to be revisited under the Biden Administration.

Keller notes that the existing body of laws “does not currently support must-carry claims against user-facing platforms like Facebook or YouTube, because Congress emphatically declined to extend it to them in the 1996 Telecommunications Act.”

These protections are distinct from Section 230, but their removal would have similar, dramatic consequences on how social media companies, and tech platforms more broadly, operate.

“[The] massive body of past and current federal communications law would be highly relevant,” Keller wrote. “For one thing, these laws provide the dominant and familiar model for US regulation of speech and communication intermediaries. Any serious proposal to legislate must-carry obligations would draw on this history. For another, and importantly for plaintiffs in today’s cases, these laws have been heavily litigated and are still being litigated today. They provide important precedent for weighing the speech rights of individual users against those of platforms.”

The establishment of some of these “must-carry” mandates for platforms would go a long way toward circumventing or refuting platforms’ First Amendment claims, because courts have already decided some must-carry cases against cable carriers in ways that could correspond to claims against platforms.

This is already happening, so what could legislation look like?

At this point the hypothetical scenario that Keller sketched out in her essay, where private actors throughout the technical stack have excluded speech (although the legality of the speech is contested), has, in fact, happened.

The question is whether the deplatforming of the president and services that were spreading potential calls to violence and sedition, is a one-off; or a new normal where tech companies will act increasingly to silence voices that they — or a significant portion of their user base — disagree with.

Lawmakers in Europe, seeing the actions from U.S. companies over the last week, aren’t wasting any time in drafting their own responses and increasing their calls for more regulation.

In Europe, that regulation is coming in the form of the Digital Services Act, which we wrote about at the end of last year.

On the content side, the Commission has chosen to limit the DSA’s regulation to speech that’s illegal (e.g., hate speech, terrorism propaganda, child sexual exploitation, etc.) — rather than trying to directly tackle fuzzier “legal but harmful” content (e.g., disinformation), as it seeks to avoid inflaming concerns about impacts on freedom of expression.

A beefed-up self-regulatory code on disinformation is coming next year, though, as part of a wider European Democracy Action Plan. That (voluntary) code sounds like it will be heavily pushed by the Commission as a mitigation measure platforms can put toward fulfilling the DSA’s risk-related compliance requirements.

EU lawmakers do also plan on regulating online political ads in time for the next pan-EU elections, under a separate instrument (to be proposed next year) and are continuing to push the Council and European parliament to adopt a 2018 terrorism content takedown proposal (which will bring specific requirements in that specific area).

Europe has also put in place rules for very large online platforms that have more stringent requirements around how they approach and disseminate content, but regulators on the continent are having a hard time enforcing them.

Keller believes that some of those European regulations could align with thinking about competition and First Amendment rights in the context of access to the “scarce” communication channels — those platforms whose size and scope mean that there are few competitive alternatives.

The approach Keller thinks would require the least regulatory lift, and is perhaps the most tenable for platforms, would push them to make room for “disfavored” speech while telling them that they don’t have to promote it or give it any ranking.

Under this solution, the platforms would be forced to carry the content, but could limit it. For instance, Facebook would be required to host any posts that don’t break the law, but it doesn’t have to promote them in any way — letting them sink below the stream of constantly updating content that moves across the platform.

“On this model, a platform could maintain editorial control and enforce its Community Guidelines in its curated version, which most users would presumably prefer. But disfavored speakers would not be banished entirely and could be found by other users who prefer an uncurated experience,” Keller writes. “Platforms could rank legal content but not remove it.”

Perhaps the regulation that Keller is most bullish on is one that she calls the “magic APIs” scenario. Similar to the “unbundling” requirements imposed on telecommunications companies, this regulation would force big tech companies to license their hard-to-duplicate resources to new market entrants. In the Facebook or Google context, this would mean requiring the companies to open up access to their user-generated content, so that other companies could launch competing services with new user interfaces and content ranking and removal policies, Keller wrote.

“Letting users choose among competing ‘flavors’ of today’s mega-platforms would solve some First Amendment problems by leaving platforms’ own editorial decisions undisturbed,” Keller writes.

Imperfect solutions are better than none 

It’s clear to speech advocates on both the left and the right that having technology companies control what is and is not permissible on the world’s largest communications platforms is untenable and that better regulation is needed.

When the venture capitalists who have funded these services — and whose politics lean toward the mercenarily libertarian — are calling for some sort of regulatory constraints on the power of the technology platforms they’ve created, it’s clear things have gone too far, even if the actions of the platforms are entirely justified.

However, in these instances, much of the speech that’s been taken down is clearly illegal, to the point that even self-styled free speech services like Parler have deleted posts for inciting violence.

The deplatforming of the president brings up the same points that were raised back in 2017 when Cloudflare, the service that stands out for being more tolerant of despicable speech than nearly any other platform, basically erased the Daily Stormer.

“I know that Nazis are bad, the content [on The Daily Stormer] was so incredibly repulsive, it’s stomach turning how bad it is,” Cloudflare CEO Matthew Prince said at the time. “But I do believe that the best way to battle bad speech is with good speech, I’m skeptical that censorship is the right scheme.

“I’m worried the decision we made with respect to this one particular site is not particularly principled but neither was the decision that most tech companies made with respect to this site or other sites. It’s important that we know there is convention about how we create principles and how contraptions are regulated in the internet tech stack,” Prince continued.

“We didn’t just wake up and make some capricious decision, but we could have and that’s terrifying. The internet is a really important resource for everyone, but there’s a very limited set of companies that control it and there’s such little accountability to us that it really is quite a dangerous thing.”

#capitol-riot, #first-amendment, #tc, #trump

0

Ahead of inauguration, Airbnb pledges bans for anyone involved in Capitol riot

Building on a policy that the company said has been in place since the Charlottesville protests back in 2017, Airbnb said it will take additional steps to beef up community protections for the DC metro area ahead of the presidential inauguration.

Airbnb already removes people from the platform who are associated with violent hate groups ahead of specific events, the company said.

And ahead of the inauguration, the company said it would use a seven-step plan to ensure that the DC metro-area isn’t overwhelmed with white supremacists, neo-Nazis, or “western chauvinists.”

Airbnb said it would ban individuals identified as involved in criminal activity around the Capitol at last week’s riot. “When we learn through media or law enforcement sources the names of individuals confirmed to have been responsible for the violent criminal activity at the United States Capitol on January 6, we investigate whether the named individuals have an account on Airbnb,” the company said. “This includes cross-referencing the January 6 arrest logs of D.C. Metro Police. If the individuals have an Airbnb account, we take action, which includes banning them from using Airbnb.”

That’s in addition to another sweep of existing reservations at locations around the Capitol in the days leading up to the inauguration to ensure that no one associated with hate groups slips through its dragnet.

The company will also tighten up booking requirements, with additional identity verification measures and other security checks to ensure that background checks are up-to-date.

As final steps, the company said it is notifying booking guests that they could face legal action from Airbnb if they bring people associated with hate groups. The company is also telling hosts to contact its Urgent Safety Line if they suspect anything about individuals staying at their properties.

#airbnb, #capitol-riot, #hospitality-industry, #tc, #travel, #vacation-rental

0

Stripe reportedly joins the tech platforms booting President Trump from their services

It might be easier at this point to ask which tech platforms President Donald Trump can still use.

Payment-processing company Stripe is the latest tech company to kick Donald Trump off of its platform, according to a report in The Wall Street Journal.

That means the president’s campaign website and online fundraising arms will no longer have access to the payment processor’s services, cutting off the Trump campaign from receiving donations.

Sources told the Journal that the reason for the company’s decision was the violation of company policies against encouraging violence.

The move comes as the president has remained largely silent through the official channels at his disposal in the wake of last week’s riot at the Capitol building.

While Trump has been silent, technology companies have been busy repudiating the president’s support by cutting off access to a range of services.

The deplatforming of the president has effectively removed Trump from all social media outlets including Snap, Facebook, Twitter, Pinterest, Spotify and TikTok.

The technology companies that power most financial transactions online have also blocked the president. Shopify and PayPal were the first to take action against the extremists among President Trump’s supporters who participated in the riot.

As we wrote earlier this week, PayPal has been deactivating the accounts of some groups of Trump supporters who were using the money-transfer fintech to coordinate payments to underwrite the rioters’ actions on Capitol Hill.

The company has actually been actively taking steps against far-right activists for a while. After the Charlottesville protests and subsequent rioting in 2017, the company banned a spate of far-right organizations. From what TechCrunch can glean, these bans have so far not extended directly to the president himself.

On Thursday, Shopify announced that it was removing the storefronts for both the Trump campaign and Trump’s personal brand. That’s an evolution on policy for the company, which years ago said that it would not moderate its platform, but in recent years has removed some controversial stores, such as some right-wing shops in 2018.

Now, Stripe has joined the actions against the president, cutting off a lucrative source of income for his political operations.

As the Journal reported, the Trump campaign launched a fundraising blitz to raise money for the slew of lawsuits that the president brought against states around the country. The lawsuits were almost all defeated, but the effort did bring in hundreds of millions of dollars for the Republican party.

 

#capitol-riot, #donald-trump, #facebook, #paypal, #shopify, #stripe, #tc, #technology

0

Amazon Web Services gives Parler 24-hour notice that it will suspend services to the company

Parler is at risk of disappearing, just as the social media network popular among conservatives was reaching new heights of popularity in the wake of President Donald Trump’s ban from all major tech social platforms.

Amazon Web Services, which provides backend cloud services, has informed Parler that it intends to cut ties with the company in the next 24 hours, according to a report in BuzzFeed News. Parler’s application is built on top of AWS infrastructure, services that are critical for the operation of its platform. Earlier today, Apple announced that it was following Google in blocking the app from its App Store, citing a lack of content moderation.

Parler, whose fortunes have soared as users upset at the President’s silencing on mainstream social media outlets flocked to the service, is now another site of contention in the struggle over the limits of free speech and accountability online.

Parler CEO John Matze said that the platform would be offline for at least a week, as “they rebuild from scratch” in response to AWS’ communications.

In the wake of the riots at the Capitol on Wednesday and a purge of accounts accused of inciting violence on Twitter and Facebook, Parler had become the home for a raft of radical voices calling for armed “Patriots” to commit violence at the U.S. Capitol and statehouses around the country.

Most recently, conservative militants on the site had been calling for “Patriots” to amplify the events of January 6 with a march on Washington DC with weapons on January 19.

Even as pressure came from Apple and Amazon, whose employees had called for suspending services to the company, Parler was taking steps to moderate posts on its platform.

The company acknowledged that it had removed some posts from Trump supporter Lin Wood, who had called for the execution of Vice President Mike Pence in a series of proclamations on the company’s site.

Over the past few months, Republican lawmakers including Sen. Ted Cruz and Congressman Devin Nunes — along with conservative firebrands like Wood — have found a home on the platform, where they can share conspiracy theories with abandon.

In an email quoted by BuzzFeed News, Amazon Web Services’ Trust and Safety Team told Parler’s chief policy officer, Amy Peikoff, that calls for violence spreading across Parler’s platform violated its terms of service. The company’s team also said that Parler’s plan to use volunteers to moderate content on the platform would not prove effective, according to BuzzFeed.

“Recently, we’ve seen a steady increase in this violent content on your website, all of which violates our terms. It’s clear that Parler does not have an effective process to comply with the AWS terms of service,” BuzzFeed reported the email as saying.

Here’s Amazon’s letter to Parler in full.

Dear Amy,

Thank you for speaking with us earlier today.

As we discussed on the phone yesterday and this morning, we remain troubled by the repeated violations of our terms of service. Over the past several weeks, we’ve reported 98 examples to Parler of posts that clearly encourage and incite violence. Here are a few examples below from the ones we’ve sent previously: [See images above.]

Recently, we’ve seen a steady increase in this violent content on your website, all of which violates our terms. It’s clear that Parler does not have an effective process to comply with the AWS terms of service. It also seems that Parler is still trying to determine its position on content moderation. You remove some violent content when contacted by us or others, but not always with urgency. Your CEO recently stated publicly that he doesn’t “feel responsible for any of this, and neither should the platform.” This morning, you shared that you have a plan to more proactively moderate violent content, but plan to do so manually with volunteers. It’s our view that this nascent plan to use volunteers to promptly identify and remove dangerous content will not work in light of the rapidly growing number of violent posts. This is further demonstrated by the fact that you still have not taken down much of the content that we’ve sent you. Given the unfortunate events that transpired this past week in Washington, D.C., there is serious risk that this type of content will further incite violence.

AWS provides technology and services to customers across the political spectrum, and we continue to respect Parler’s right to determine for itself what content it will allow on its site. However, we cannot provide services to a customer that is unable to effectively identify and remove content that encourages or incites violence against others. Because Parler cannot comply with our terms of service and poses a very real risk to public safety, we plan to suspend Parler’s account effective Sunday, January 10th, at 11:59PM PST. We will ensure that all of your data is preserved for you to migrate to your own servers, and will work with you as best as we can to help your migration.

– AWS Trust & Safety Team

#amazon-web-service, #capitol-riot, #government, #parler, #policy, #tc

0

Why Twitter banned President Trump

Twitter permanently banned the U.S. president Friday, taking a dramatic step to limit Trump’s ability to communicate with his followers. That decision, made in light of his encouragement for Wednesday’s violent invasion of the U.S. Capitol, might seem sudden for anyone not particularly familiar with his Twitter presence.

In reality, Twitter gave Trump many, many second chances over his four years as president, keeping him on the platform due to the company’s belief that speech by world leaders is in the public interest, even if it breaks the rules.

Now that Trump’s gone for good, we have a pretty interesting glimpse into the policy decision making that led Twitter to bring the hammer down on Friday. The company first announced Trump’s ban in a series of tweets from its @TwitterSafety account but also linked to a blog post detailing its thinking.

In that deep dive, the company explains that it gave Trump one last chance after suspending and then reinstating his account for violations made on Wednesday. But the following day, a pair of tweets the president made pushed him over the line. Twitter said those tweets, pictured below, were not examined on a standalone basis, but rather in the context of his recent behavior and this week’s events.

“… We have determined that these Tweets are in violation of the Glorification of Violence Policy and the user @realDonaldTrump should be immediately permanently suspended from the service,” Twitter wrote.

Screenshot via Twitter

This is how the company explained its reasoning, point by point:

  • “President Trump’s statement that he will not be attending the Inauguration is being received by a number of his supporters as further confirmation that the election was not legitimate and is seen as him disavowing his previous claim made via two Tweets (1, 2) by his Deputy Chief of Staff, Dan Scavino, that there would be an ‘orderly transition’ on January 20th.
  • “The second Tweet may also serve as encouragement to those potentially considering violent acts that the Inauguration would be a ‘safe’ target, as he will not be attending.
  • “The use of the words ‘American Patriots’ to describe some of his supporters is also being interpreted as support for those committing violent acts at the US Capitol.
  • “The mention of his supporters having a ‘GIANT VOICE long into the future’ and that ‘They will not be disrespected or treated unfairly in any way, shape or form!!!’ is being interpreted as further indication that President Trump does not plan to facilitate an ‘orderly transition’ and instead that he plans to continue to support, empower, and shield those who believe he won the election.
  • “Plans for future armed protests have already begun proliferating on and off-Twitter, including a proposed secondary attack on the US Capitol and state capitol buildings on January 17, 2021.”

All of that is pretty intuitive, though his most fervent supporters aren’t likely to agree. Ultimately these decisions, as much as they do come down to stated policies, involve a lot of subjective analysis and interpretation. Try as social media companies might to let algorithms make the hard calls for them, the buck stops with a group of humans trying to figure out the best course of action.

Twitter’s explanation here offers a rare, fully transparent glimpse into how social networks decide what stays and what goes. It’s a big move for Twitter — one that many people reasonably believe should have been made months, if not years, ago — and it’s useful to have what is so often an inscrutable, high-level decision-making process laid out plainly and publicly for all to see.

#capitol-riot, #government, #president-donald-trump, #social, #tc, #twitter

Stolen computers are the least of the government’s security worries

Reports that a laptop from House Speaker Nancy Pelosi’s office was stolen during the pro-Trump rioters’ sack of the Capitol building have some worried that the mob may have access to important, even classified information. Fortunately that’s not the case — even if this computer and others had any truly sensitive information, which is unlikely, like any corporate asset it can almost certainly be disabled remotely.

The cybersecurity threat in general from the riot is not as high as one might think, as we explained yesterday. Specific to stolen or otherwise compromised hardware, there are several facts to keep in mind.

In the first place, the offices of elected officials are in many ways already public spaces. These are historic buildings through which tours often pass, in which meetings with foreign dignitaries and other politicians are held, and in which thousands of ordinary civil servants without any security clearance would normally be working shoulder-to-shoulder. The important work they do is largely legislative and administrative — public work, where the most sensitive information being exchanged is probably unannounced speeches and draft bills.

But recently, you may remember, most of these people were working from home. Of course, during the major event of the joint session confirming the electors, there would be more people than normal. But this wasn’t an ordinary day at the office by a long shot — even before hundreds of radicalized partisans forcibly occupied the building. Chances are there wasn’t a lot of critical business being conducted on the desktops in these offices. Classified data lives in an access-controlled SCIF (Sensitive Compartmented Information Facility), not on random devices sitting in unsecured areas.

In fact, Reuters reports that the laptop was part of a conference room’s dedicated hardware — this is the dusty old Inspiron that lives on the A/V table so you can put your PowerPoint on it, not Pelosi’s personal computer, let alone a hard line to top-secret info.

Even if there were a question of unintended access, it should be noted that the federal government, like any large company, has a normal IT department with a relatively modern provisioning structure. The Pelosi office laptop, like any other piece of hardware being used for official House and Senate business, is monitored by IT and can almost certainly be remotely disabled or wiped. The challenge for the department is figuring out which hardware actually needs to be handled that way — as was reported earlier, there was (understandably) no official plan for a violent takeover of the Capitol building.

In other words, it’s highly likely that the most that will result from the theft of government computers on the 6th will be inconvenience or at most some embarrassment should some informal communications become public. Staffers do gossip and grouse, of course, on both back and official channels.

That said, the people who invaded these offices and stole that equipment — some on camera — are already being arrested and charged. Just because the theft doesn’t present a serious security threat doesn’t mean it wasn’t highly illegal in several different ways.

Any cybersecurity official will tell you that the greater threat by far is the extensive infiltration of government contractors and accounts through the SolarWinds breach. Those systems are packed with information that was never meant to be public, and will likely provide fuel for credential-related attacks for years to come.

#capitol-riot, #cybersecurity, #government, #nancy-pelosi, #security

TikTok bans videos of Trump inciting mob, blocks #stormthecapitol and other hashtags

For obvious reasons, Trump doesn’t have a TikTok account. But the President’s speeches that helped incite the mob that stormed the U.S. Capitol yesterday will have no home on TikTok’s platform. The company confirmed to TechCrunch that its content policy around the Capitol riots will see it removing videos of Trump’s speeches to supporters. It will also redirect specific hashtags used by rioters, like #stormthecapitol and #patriotparty, to reduce their content’s visibility in the app.

TikTok says that Trump’s speeches, where the President again reiterated claims of a fraudulent election, are being removed on the grounds that they violate the company’s misinformation policy. That policy defines misinformation as content that is inaccurate or false. And it explains that while TikTok encourages people to have respectful conversations on subjects that matter to them, it doesn’t permit misinformation that can cause harm to individuals, their community or the larger public.

A rioting mob intent on stopping democratic processes in the United States seems to fit squarely under that policy.

However, TikTok says it will allow what it calls “counter speech” against the Trump videos. This is a form of speech that’s often used to fight misinformation, where the creator presents the factual information or disputes the claims being made in another video. TikTok in November had allowed counter speech in response to claims from Trump supporters that the election was “rigged,” even while it blocked top hashtags that were used to promote these ideas.

In the case of Trump’s speeches, TikTok will allow a user to, for example, use the green screen effect to comment on the speech — unless those comments support the riots.

In addition, TikTok is allowing some videos of the violence that took place at the Capitol to remain. For example, if the video condemns the violence or originates from a news organization, it may be allowed. TikTok is also applying its recently launched opt-in viewing screens on “newsworthy” content that may depict graphic violence.

These screens, announced in December, appear on top of videos some viewers may find graphic or distressing. Videos with the screens applied aren’t eligible for TikTok’s main “For You” feed, but they aren’t prohibited either. When a viewer encounters a screen, they can tap a button to skip the video or choose to “watch anyway.” (The company could not provide an example of the screens in use, however.)

Anecdotally, we saw videos that showed the woman who was shot and killed yesterday appear on TikTok and then quickly disappear. But those we came across were from individual users, not news organizations. They were also not really condemning the riot — they were just direct video footage. It’s unclear whether the specific videos we saw were removed by TikTok itself or taken down by the users who posted them.

Separately from graphic content, TikTok says it will remove videos that seek to incite, glorify, or promote violence, as those also violate its Community Guidelines. In these cases, the videos will be removed as TikTok identifies them — either via automation or user reporting.

And, as it did in November, TikTok is proactively blocking hashtags to reduce content’s visibility. It’s now blocking tags like #stormthecapitol and #patriotparty among others, and redirects those queries to its Community Guidelines. There are currently redirections across dozens of variations of those hashtags and others. The company doesn’t share its full list in order to protect its safeguards, it says.

TikTok had previously blocked tags like #stopthesteal and #QAnon, in a similar proactive manner.

We should point out that for all Twitter’s posturing about safety and moderation, it allowed Trump to return to its app after a few key tweets were deleted. And it has yet to block hashtags associated with false claims, like #stopthesteal, which continues to work today. Facebook, on the other hand, banned Trump from Facebook and Instagram for at least two weeks. Like TikTok, it had previously blocked the #stopthesteal and #sharpiegate hashtags with a message about its Community Standards. (Today those searches are erroring out with messages that say “This Page Isn’t Available Right Now,” we noticed.)

TikTok’s content moderation efforts have been fairly stringent in comparison with other social networks, as it regularly hides, downranks, and removes users’ posts. But it’s also been accused of engaging in “censorship” by those who believe it’s being too aggressive about newsworthy content.

That’s led to users finding more creative ways to keep their videos from being banned — like using misspellings, coded language or clever editing to route around TikTok policies. Other times, creators will simply give up and direct viewers to their Instagram where their content is backed up and less policed.

“Hateful behavior and violence have no place on TikTok,” a TikTok spokesperson told TechCrunch, when we asked for a statement on the Capitol events. “Content or accounts that seek to incite, glorify, or promote violence violate our Community Guidelines and will be removed,” they added.

#apps, #capitol-riot, #mobile, #riots, #social, #tiktok, #trump, #video

Michelle Obama calls on Silicon Valley to permanently ban Trump and prevent platform abuse by future leaders

In a new statement, former First Lady Michelle Obama calls on Silicon Valley specifically to address its role in the violent insurrection attempt by pro-Trump rioters at the U.S. Capitol building on Wednesday. Obama’s statement also calls out the starkly biased treatment the primarily white pro-Trump rioters received from law enforcement, compared with the treatment of mostly peaceful BLM supporters during their lawful demonstrations (as opposed to Wednesday’s criminal activity). But it also includes a specific call to action for the tech industry’s leaders and platform operators.

“Now is the time for companies to stop enabling this monstrous behavior – and go even further than they have already by permanently banning this man from their platforms and putting in place policies to prevent their technology from being used by the nation’s leaders to fuel insurrection,” Obama wrote in her statement, which she shared on Twitter and on Facebook.

The call for action goes beyond what most social platforms have done already: Facebook has banned Trump, but though it describes the term of the suspension as “indefinite,” it left open the possibility of restoring his accounts in as little as two weeks’ time, once Joe Biden has officially assumed the presidency. Twitter, meanwhile, initially removed three tweets it said violated its rules by inciting violence, and then locked Trump’s account pending his deletion of the same. Earlier on Thursday, Twitter confirmed that Trump had removed these tweets, and that his account would be restored twelve hours after their deletion. Twitch has also disabled Trump’s channel at least until the end of his term, while Shopify has removed Trump’s official merchandise stores from its platform.

No social platform thus far has permanently banned Trump, so far as TechCrunch is aware, which is what Obama is calling for in her statement. And while both Twitter and Facebook have discussed how Trump’s recent behavior has violated their policies regarding use of their platforms, neither has yet provided any detailed information about how it will address similar behavior from other world leaders going forward. In other words, we don’t yet know what would be different (if anything) should another Trump-styled megalomaniac take office and use available social channels in a similar manner.

Obama is hardly the only political figure to call for action from social media platforms around “sustained misuse of their platforms to sow discord and violence,” as Senator Mark Warner put it in a statement on Wednesday. Likely once the dust clears from this week’s events, Facebook, Twitter, YouTube, et al. will face renewed scrutiny from lawmakers and public interest groups around any corrective action they’re taking.

#articles, #capitol-riot, #deception, #donald-trump, #joe-biden, #law-enforcement, #mark-warner, #michelle-obama, #qanon, #shopify, #social-media, #social-media-platforms, #tc, #trump, #twitch, #twitter
