Twitter taps crypto developer to lead ‘bluesky’ decentralized social network effort

Twitter’s ambitious upstart decentralized social media working group “bluesky” took an important step Monday as the social media company appointed a formal project lead who will direct how the protocol develops moving forward.

Crypto developer Jay Graber was tapped by Twitter to helm the initiative, which the company hopes will eventually create a decentralized social media protocol that a number of social networks including Twitter will operate on. The separate bluesky organization will operate independently but to date has been funded and managed largely by employees at Twitter.

Graber had already been working in a less formal role inside the bluesky team, with Twitter paying her to create a technical review of the decentralized social ecosystem for a working group of developers in the space. Graber previously worked on the developer team behind the privacy-focused cryptocurrency Zcash and built out her own decentralized social network called Happening, designed to compete with Facebook Events. Graber eventually walked away from the effort after having issues bootstrapping a user base interested in the benefits of decentralization, something that has grown to be a near-insurmountable issue for most upstart networks in the space.

In an interview back in January, Graber told TechCrunch she saw a major opportunity in Twitter entering the decentralized social space due to the hefty user base on the Twitter platform, which will itself eventually migrate to the protocol, the company has said.

“The really powerful thing about Twitter doing a decentralized protocol move is that if you could design a protocol that works in an ideal way, you don’t have to go through the initial effort of finding the niche to bootstrap from because Twitter will bring so many users,” Graber told us.

In January, TechCrunch profiled the initiative as it gathered more attention following Twitter’s permanent ban of former President Donald Trump from its platform. Following Trump’s removal, Twitter CEO Jack Dorsey highlighted the bluesky effort as one of the company’s ongoing initiatives to ensure that social media moderation could be less centralized in the future. A decentralized social media protocol would allow for individual networks to govern themselves without one company or organization exercising monolithic control over the sphere of online conversations.

“I think a huge focus for everyone involved has been thinking how do we enable better moderation, and not just coming from one source,” Graber told TechCrunch.

The bluesky organization is still in its earliest stages. Graber’s next task is bulking up the team with its first hires, which include a protocol developer and web developer.

#blockchain, #cryptocurrency, #decentralization, #donald-trump, #facebook, #forward, #jack-dorsey, #operating-systems, #president, #real-time-web, #social-media, #social-network, #social-networks, #software, #tc, #technology, #text-messaging, #twitter

Automakers urge greater government investment to meet Biden’s EV sales target

President Joe Biden is expected to set an ambitious new target for half of all new auto sales in the U.S. to be low- or zero-emission by 2030, a plan that has received tentative support from the Big Three automakers, who say achieving it will require hefty government support.

General Motors, Ford and Stellantis (formerly Fiat Chrysler) issued a joint statement Thursday that they had “shared aspiration[s]” to achieve a 40% to 50% share of electric in new vehicle sales by the end of the decade, with the caveat that such a target “can be achieved only with the timely deployment of the full suite of electrification policies committed to by the Administration in the Build Back Better Plan.”

Some of the investments they list include consumer incentives, a national EV charging network “of sufficient density,” funding for R&D, and manufacturing and supply chain incentives.

Biden’s target, which will come in the form of an executive order on Thursday, will be nonbinding and entirely voluntary. The target includes vehicles powered by batteries, hydrogen fuel cells or plug-in hybrids.

Executives from the three OEMs, as well as representatives from the United Automobile Workers union, are expected to attend an event on the new target at the White House Thursday. Tesla, it seems, was not invited, according to a tweet from CEO Elon Musk.

Biden will also be calling for new fuel economy standards for passenger and medium- and heavy-duty vehicles through model year 2026, which were rolled back under President Trump’s tenure, according to a White House factsheet released Thursday. The new standards, which will be crafted under the jurisdiction of the Department of Transportation and the Environmental Protection Agency, should come as no surprise to automakers: They were included in Biden’s so-called “Day One Agenda” and mark a cornerstone of his strategy to combat climate change.

The new standards will likely borrow from those passed by California last year, which were finalized in concert with a coalition of five automakers: BMW AG, Ford, Honda Motor Co., Volkswagen AG, and Volvo AB. Those automakers, in a separate statement Thursday, said they supported the White House’s plan to reduce emissions. However, like the Big Three, they said that “bold action” from the federal government will be needed to achieve emission reductions targets.

The road to 2030

While Biden’s nonbinding order is more of a symbolic one, the targets are likely achievable, Jessica Caldwell, Edmunds’ executive director of insights, said in a statement. She added that automotive industry leaders “have seen the writing on the wall for some time now” regarding electrification, regardless of who has been in the White House.

Thanks to the relatively long product development lead time, many of the major automakers have already announced multibillion-dollar investments in EVs and AVs at least through the middle of the decade. That includes a $35 billion investment through 2025 from GM and $30 billion through the same year from Ford — not to mention similar announcements from Stellantis and many billions earmarked for battery R&D from Volkswagen, and even Volvo Cars’ shift to all-electric by 2030.

These massive numbers follow the automakers’ own sales targets, which are for the most part in line with Biden’s goal.

Fuel economy rules, however, have historically garnered slightly more mixed reactions from automakers. GM, Fiat Chrysler (now Stellantis) and Toyota had previously supported a Trump-era lawsuit that sought to strip California’s authority to set its own emissions standards — but each company eventually made an about-face, leaving the road open for Biden to introduce his own standards this year.

In a very real sense, Biden’s announcement is as much about geopolitics as it is about climate change. He, too, has seen the writing on the wall regarding EVs. His administration notes in the factsheet that “China is increasingly cornering the global supply chain” for EVs and EV battery materials. “By setting clear targets for electric vehicle sale trajectories, these countries are becoming magnets for private investment into their manufacturing sectors — from parts and materials to final assembly.”

While three times as many EVs were registered in the U.S. in 2020 versus 2016, America still lags behind both Europe and China in terms of EV market share, according to the International Energy Agency.

The news has garnered a slew of mixed reactions, with some environmental groups urging more decisive action on the part of the administration. Carol Lee Rawn, senior director of transportation at Ceres, said in a statement that future standards should target a 60% reduction in emissions and a “clear trajectory” to 100% zero-emission vehicle sales by 2035.

Although the UAW will be joining Biden at the White House on Thursday, President Ray Curry said in a statement that the group is “not focused on hard deadlines or percentages, but on preserving the wages and benefits that have been the heart and soul of the American middle class.”

#automotive, #bmw, #donald-trump, #electric-vehicles, #fiat-chrysler, #ford, #general-motors, #joe-biden, #policy, #stellantis, #transportation, #volkswagen, #volvo-cars

Cybereason raises $275M at Series F, adds Steven Mnuchin to board

Cybereason, a US-Israeli late-stage cybersecurity startup that provides extended detection and response (XDR) services, has secured $275 million in Series F funding. 

The investment was led by Liberty Strategic Capital, a venture capital fund recently founded by Steven Mnuchin, who served as U.S. Treasury Secretary under the Trump administration. As part of the deal, Mnuchin will join Cybereason’s board of directors, along with Liberty advisor Gen. Joseph Dunford, who was chairman of the Joint Chiefs of Staff under Trump until his retirement in 2019.

Lior Div, CEO and co-founder of Cybereason, tells TechCrunch that the startup’s decision to work with Liberty Strategic Capital came down to the firm’s “massive network” and the “understanding of the financial and government markets that Mnuchin and Gen. Joseph Dunford bring to our team.”

“For example, the executive order on cybersecurity put out by the Biden Administration recommends that endpoint detection and response solutions be deployed on all endpoints,” Div added. “This accelerates the importance of solutions like ours in the public market, and Liberty Strategic Capital has the relationships to help accelerate our go-to-market strategy in the federal sector.”

This round, which will be used to fuel “hypergrowth driven by strong market demand,” follows $389 million in prior funding from SoftBank, CRV, Spark Capital, and Lockheed Martin. The company didn’t state at what valuation it raised the funds, but it is estimated to be in the region of $3 billion.

Cybereason’s recent growth, which saw it end 2020 at over $120 million in annual recurring revenue, has been largely driven by its AI-powered platform. Unlike traditional alert-centric models, Cybereason’s Defense Platform is operation-centric, which means it exposes and remediates entire malicious operations. The service details the full attack story from root cause to impacted users and devices, which the company claims significantly reduces the time taken to investigate and recover from an enterprise-wide cyber attack. 

The company, whose competitors include the likes of BlackBerry-owned Cylance and CrowdStrike, also this week expanded its channel presence with the launch of its so-called Defenders League, a global program that enables channel partners to use its technology and services to help their customers prevent and recover from cyberattacks. Cybereason claims its technology has helped protect customers from the likes of the recent SolarWinds supply-chain attack and other high-profile ransomware attacks launched by DarkSide, REvil, and Conti groups. 

Today’s $275 million funding round is likely to be Cybereason’s last before it goes public. Div previously said in August 2019 that the company planned to IPO within two years, though he declined to say whether the company is now gearing up to go public when asked by TechCrunch. However, the company did compare its latest investment to SentinelOne‘s November 2020 Series F round, which was secured just months before it filed for a $100 million IPO.

#artificial-intelligence, #biden-administration, #companies, #computing, #crowdstrike, #crv, #cybereason, #cylance, #donald-trump, #executive, #funding, #lockheed-martin, #neuberger-berman, #president, #security, #softbank, #softbank-group, #solarwinds, #spark-capital, #steve-mnuchin, #techcrunch, #united-states

Trump’s new lawsuits against social media companies are going nowhere fast

Trump’s spicy trio of lawsuits against the social media platforms that he believes wrongfully banned him have succeeded in showering the former president with a flurry of media attention, but that’s likely where the story ends.

Like Trump’s quixotic and ultimately empty quest to gut Section 230 of the Communications Decency Act during his presidency, the new lawsuits are all sound and fury with little legal substance to back them up.

The suits allege that Twitter, Facebook and YouTube violated Trump’s First Amendment rights by booting him from their platforms, but the First Amendment is intended to protect citizens from censorship by the government — not private industry. The irony that Trump himself was the uppermost figure in the federal government at the time probably won’t be lost on whoever’s lap this case lands in.

In the lawsuits, which also name Twitter and Facebook chief executives Jack Dorsey and Mark Zuckerberg as well as Google CEO Sundar Pichai (Susan Wojcicki escapes notice once again!), Trump accuses the three companies of engaging in “impermissible censorship resulting from threatened legislative action, a misguided reliance upon Section 230 of the Communications Decency Act, and willful participation in joint activity with federal actors.”

The suit claims that the tech companies colluded with “Democrat lawmakers,” the CDC and Dr. Anthony Fauci, who served in Trump’s own government at the time.

The crux of the argument is that communication between the tech companies, members of Congress and the federal government somehow transforms Facebook, Twitter and YouTube into “state actors” — a leap of epic proportions:

“Defendant Twitter’s status thus rises beyond that of a private company to that of a state actor, and as such, Defendant is constrained by the First Amendment right to free speech in the censorship decisions it makes.”

Trump’s own Supreme Court appointee Brett Kavanaugh issued the court’s opinion on a relevant case two years ago. It examined whether a nonprofit running public access television channels in New York qualified as a “state actor” that would be subject to First Amendment constraints. The court ruled that running the public access channels didn’t transform the nonprofit into a government entity and that it retained a private entity’s rights to make editorial decisions.

“… A private entity… who opens its property for speech by others is not transformed by that fact alone into a state actor,” Justice Kavanaugh wrote in the decision.

It’s not likely that a court would decide that talking to the government or being threatened by the government somehow transforms Twitter, YouTube and Facebook into state actors either.

Trump vs. Section 230 (again)

First Amendment aside — and there’s really not much of an argument there — social media platforms are protected by Section 230 of the Communications Decency Act, a concise snippet of law that shields them from liability not just for the user-generated content they host but for the moderation decisions they make about what content to remove.

In line with Trump’s obsessive disdain for tech’s legal shield, the lawsuits repeatedly rail against Section 230. The suits argue that congressional threats to revoke tech’s Section 230 protections forced the companies to ban Trump, somehow making the social media companies part of the government and subject to First Amendment constraints.

Of course, Republican lawmakers and Trump’s own administration made frequent threats about repealing Section 230, not that it changes anything because this line of argument doesn’t make much sense anyway.

The suit also argues that Congress crafted Section 230 to intentionally censor speech that is otherwise protected by the First Amendment, ignoring that the law was born in 1996, well before ubiquitous social media, and for other purposes altogether.

For the four years of his presidency, Trump’s social media activity — his tweets in particular — informed the events of the day, both nationally and globally. While other world leaders and political figures used social media to communicate or promote their actions, Trump’s Twitter account was usually the action itself.

In the shadow of his social media bans, the former president has failed to re-establish lines of communication to the internet at large. In May, he launched a new blog, “From the Desk of Donald J. Trump,” but the site was taken down just a month later after it failed to attract much interest.

The handful of pro-Trump alternative social platforms are still struggling with app store content moderation requirements at odds with their extreme views on free speech, but that didn’t stop Gettr, the latest, from going ahead with its own rocky launch last week.

Viewed in one light, Trump’s lawsuits are a platform too, his latest method for broadcasting himself to the online world that his transgressions eventually cut him off from. In that sense, they seem to have succeeded, but in all other senses, they won’t.

#articles, #brett-kavanaugh, #ceo, #communications-decency-act, #congress, #donald-j-trump, #donald-trump, #federal-government, #google, #government, #jack-dorsey, #mark-zuckerberg, #new-york, #president, #qanon, #section-230, #social, #social-media, #social-media-platforms, #sundar-pichai, #supreme-court, #susan-wojcicki, #tc, #the-battle-over-big-tech, #twitter

Trump is suing Twitter, Facebook and Google over censorship claims

In his first press event since leaving office earlier this year, former President Donald Trump announced that he would be launching a volley of class action lawsuits against Twitter, Facebook and Google and their CEOs, claiming that the three companies violated his First Amendment rights.

“We’re demanding an end to the shadow-banning, a stop to the silencing and a stop to the blacklisting, banishing and canceling that you know so well,” Trump said at the press conference, held at his Bedminster, New Jersey golf club.

Following the January 6 attack on the Capitol, social media platforms swiftly revoked then-President Trump’s posting privileges. For years, Trump tested the boundaries of platforms’ policies around misinformation and even violent threats, but his role in the events of that day crossed a line. Trump soon found himself without a megaphone with which to reach his many millions of followers across Twitter, Facebook and YouTube.

Trump’s fate on Twitter is known: the former president faces a lifetime ban there. But on Facebook and YouTube, there’s a possibility that his accounts could be restored. Facebook is currently deliberating that decision in a back-and-forth exchange with its new external policy-making body, the Facebook Oversight Board.

Trump will be the lead plaintiff in the suits, which are being filed in the U.S. District Court for the Southern District of Florida. The lawsuits seek “compensatory and punitive damages” and the restoration of Trump’s social media accounts.

#donald-trump, #section-230, #social, #tc, #the-battle-over-big-tech

Gettr, the latest pro-Trump social network, is already a mess

Well, that was fast. Just days after a Twitter clone from former Trump spokesperson Jason Miller launched, the new social network is already beset by problems.

For one, hackers quickly leveraged Gettr’s API to scrape the email addresses of more than 85,000 of its users. Usernames, names and birthdays were also part of the scraped data set, which was surfaced by Alon Gal, co-founder of cybersecurity firm Hudson Rock.

“When threat actors are able to extract sensitive information due to neglectful API implementations, the consequence is equivalent to a data breach and should be handled accordingly by the firm [and] examined by regulators,” Gal told TechCrunch.
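Gal’s point about “neglectful API implementations” can be illustrated with a small sketch. This is not Gettr’s actual API — it’s a hypothetical Python handler showing the two defenses whose absence makes mass scraping trivial: returning only public profile fields, and rate-limiting lookups per client.

```python
# Hypothetical public user-lookup handler (illustrative, not Gettr's real API).
# Two safeguards: strip private fields (email) from responses, and throttle
# per-client request volume so bulk enumeration is slow and noisy.
import time
from collections import defaultdict, deque

USERS = {
    "alice": {"username": "alice", "name": "Alice", "email": "alice@example.com"},
}

PUBLIC_FIELDS = {"username", "name"}  # email is never exposed publicly

_requests = defaultdict(deque)  # client -> timestamps of recent requests


def lookup_user(client_ip, username, max_per_minute=30):
    """Return a public view of a user, throttling scraping attempts."""
    now = time.monotonic()
    window = _requests[client_ip]
    while window and now - window[0] > 60:
        window.popleft()  # drop requests older than the 60-second window
    if len(window) >= max_per_minute:
        return {"error": "rate limit exceeded"}, 429
    window.append(now)
    user = USERS.get(username)
    if user is None:
        return {"error": "not found"}, 404
    # Filter the record down to public fields only.
    return {k: v for k, v in user.items() if k in PUBLIC_FIELDS}, 200
```

An endpoint that skips both steps — serving full records to any unauthenticated caller at unlimited volume — is effectively a self-service data breach, which is the scenario Gal describes.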

Last week, TechCrunch’s own Zack Whittaker predicted that Gettr would soon see its data scraped through its API.

The scraped data is just one of Gettr’s headaches. The app actually went live in the App Store and Google Play last month but left beta on July 4 following a launch post in Politico. While the app is meant to appeal to the famously anti-China Trump sphere, Gettr apparently received early funding from Chinese billionaire Guo Wengui, an ally of former Trump advisor Steve Bannon. Earlier this year, The Washington Post reported that Guo is at the center of a massive online disinformation network that spreads anti-vaccine claims and QAnon conspiracies.

On July 2, the app’s team apologized for signup delays citing a spike in downloads, but a bit of launch downtime is probably the least of its problems. Over the weekend, a number of official Gettr accounts, including those of Marjorie Taylor Greene, Steve Bannon and Miller himself, were compromised, raising more questions about the app’s shoddy security practices.

That incident aside, fake accounts overwhelm any attempt to find verified users on Gettr. That goes for the app’s own recommendations too: a fake brand account for Steam was among the app’s own recommendations during TechCrunch’s testing.

Another red flag: The app’s design is conspicuously identical to Twitter and appears to have used the company’s API to copy some users’ follower counts and profiles. Gettr encourages new users to use their Twitter handle in the sign-up process, saying that it will allow tweets to be copied over in some cases (we signed up, but this didn’t work for us). TechCrunch reached out to Twitter about Gettr’s striking similarities and the use of its API but the company declined to comment.

On mobile, Gettr is basically an exact clone of Twitter — albeit one that’s very rough around the edges. Some of Gettr’s copy is stilted and strange, including the boast that it’s a “non-bias” social network that “tried the best to provide best software quality to the users, allow anyone to express their opinion freely.”

The company is positioning itself as an alternative for anyone who believes that mainstream social networks are hostile to far right ideas. Gettr’s website beckons new users with familiar Trumpian messaging: “Don’t be Cancelled. Flex Your 1st Amendment. Celebrate Freedom.”

“Hydroxycholoroquine works!” Miller shared (Gettr’d?) over the weekend, quoting the former president. “And nobody is going to take down this post or suspend this account! #GETTR.” So far on Gettr, content moderation is either lax or nonexistent. But as we’ve seen with Parler and other havens for sometimes violent conspiracies, that approach can only last so long.

In spite of being widely associated with Trump through Miller and former Trump campaign staffer Tim Murtaugh, the former president doesn’t yet have a presence on the app. Some figures from Trump’s orbit have established profiles on Gettr, including Steve Bannon (84.7K followers) and Mike Pompeo (1.3M followers), but a search for Trump only brings up unofficial accounts. Bloomberg reported that Trump has no plans to join the app. (Given Gettr’s preponderance of Sonic the Hedgehog porn, we can’t exactly blame him.)

The online pro-Trump ecosystem remains scattered in mid-2021. With Trump banned and the roiling conspiracy network around QAnon no longer welcome on Facebook and Twitter, Gettr positioned itself as a refuge for mainstream social media’s many outcasts. But given Gettr’s mounting early woes, the sketchy Twitter clone’s moment in the sun might already be coming to an end.

#api, #app-store, #computer-security, #data-security, #donald-trump, #google, #mike-pompeo, #security, #social, #social-network, #tc

Facebook to end “Trump exemption” for politicians’ posts

Facebook has given politicians extreme leeway with what they can post, essentially treating them as a special class of user. Now, that policy will reportedly change, perhaps as early as today.

The impetus for the change seems to be a looming deadline that Facebook’s Oversight Board gave the company regarding its suspension of former President Donald Trump’s accounts in the wake of the January 6 insurrection at the US Capitol. The board gave Facebook until June 5 to respond to recommendations that it clarify how influential users are treated relative to the rest of the site’s user base. The forthcoming updates were first reported by The Verge.

Under the new policy, politicians’ posts would be treated like everyone else’s, at least initially. If Facebook reviews a post and decides it’s both legal and newsworthy, even if it violates site policy, moderators will allow the post to appear on the site and flag it so users can see that the newsworthiness exemption was applied. It’s unclear exactly how that newsworthiness notice will appear or what standards Facebook will use to determine newsworthiness. Ars has reached out to Facebook for comment, and we’ll update this story if we hear back.

#donald-trump, #facebook, #moderation, #oversight-board, #policy

Facebook’s Oversight Board throws the company a Trump-shaped curveball

Facebook’s controversial policy-setting supergroup issued its verdict on Trump’s fate Wednesday, and it wasn’t quite what most of us were expecting.

We’ll dig into the decision to tease out what it really means, not just for Trump, but also for Facebook’s broader experiment in outsourcing difficult content moderation decisions and for just how independent the board really is.

What did the Facebook Oversight Board decide?

The Oversight Board backed Facebook’s determination that Trump violated its policies on “Dangerous Individuals and Organizations,” which prohibit anything that praises or otherwise supports violence. The full decision and accompanying policy recommendations are online for anyone to read.

Specifically, the Oversight Board ruled that two Trump posts, one telling Capitol rioters “We love you. You’re very special” and another calling them “great patriots” and telling them to “remember this day forever,” broke Facebook’s rules. In fact, the board went as far as saying the pair of posts “severely” violated the rules in question, making it clear that it saw a serious risk of real-world harm in Trump’s words:

The Board found that, in maintaining an unfounded narrative of electoral fraud and persistent calls to action, Mr. Trump created an environment where a serious risk of violence was possible. At the time of Mr. Trump’s posts, there was a clear, immediate risk of harm and his words of support for those involved in the riots legitimized their violent actions. As president, Mr. Trump had a high level of influence. The reach of his posts was large, with 35 million followers on Facebook and 24 million on Instagram.

While the Oversight Board praised Facebook’s decision to suspend Trump, it disagreed with the way the platform implemented the suspension. The group argued that Facebook’s decision to issue an “indefinite” suspension was an arbitrary punishment that wasn’t really supported by the company’s stated policies:

It is not permissible for Facebook to keep a user off the platform for an undefined period, with no criteria for when or whether the account will be restored.

In applying this penalty, Facebook did not follow a clear, published procedure. ‘Indefinite’ suspensions are not described in the company’s content policies. Facebook’s normal penalties include removing the violating content, imposing a time-bound period of suspension, or permanently disabling the page and account.

The Oversight Board didn’t mince words on this point, going on to say that by putting a “vague, standardless” punishment in place and then kicking the ultimate decision to the Oversight Board, “Facebook seeks to avoid its responsibilities.” Turning things around, the board asserted that it’s actually Facebook’s responsibility to come up with an appropriate penalty for Trump that fits its set of content moderation rules.

Is this a surprise outcome?

If you’d asked me yesterday, I would have said that the Oversight Board was more likely to overturn Facebook’s Trump decision. I also called Wednesday’s big decision a win-win for Facebook, because whatever the outcome, the company itself wouldn’t ultimately be the one criticized for either letting Trump back onto the platform or kicking him off for good. So much for that!

A lot of us didn’t see the “straight up toss the ball back into Facebook’s court” option as a possible outcome. It’s ironic and surprising that the Oversight Board’s decision to give Facebook the final say actually makes the board look more independent, not less.

Facebook likely saw a more clear-cut decision on the Trump situation in the cards. This is a challenging outcome for a company that’s probably ready to move on from its (many, many) missteps during the Trump era. But there’s definitely an argument that if the board declared that Facebook made the wrong call and reinstated Trump that would have been a much bigger headache.

What does it mean that the Oversight Board sent the decision back to Facebook?

Ultimately, the Oversight Board is asking Facebook to either a) give Trump’s suspension an end date or b) delete his account. In a less severe case, the normal course of action would be for Facebook to remove whatever broke the rules, but given the ramifications here and the fact that Trump is a repeat Facebook rule-breaker, this is obviously all well past that option.

What will Facebook do?

We’re in for a wait. The board called for Facebook to evaluate the Trump situation and reach a final decision within six months, calling for a “proportionate” response that is justified by its platform rules. Since Facebook and other social media companies are re-writing their rules all the time and making big calls on the fly, that gives the company a bit of time to build out policies that align with the actions it plans to take. See you again on November 5.

In the months following the violence at the U.S. Capitol, Facebook repeatedly defended its Trump call as “necessary and right.” It’s hard to imagine the company deciding that Trump will get reinstated six months from now, but in theory Facebook could decide that length of time was an appropriate punishment and write that into its rules. The fact that Twitter permanently banned Trump means that Facebook could comfortably follow suit at this point.

If Trump had won reelection, this whole thing probably would have gone down very differently. As much as Facebook likes to say its decisions are aligned with lofty ideals — absolute free speech, connecting people — the company is ultimately very attuned to its regulatory and political environment. Trump’s actions on January 6 were dangerous and flagrant, but Biden’s looming inauguration two weeks later probably influenced the company’s decision just as much.

In direct response to the decision, Facebook’s Nick Clegg wrote only: “We will now consider the board’s decision and determine an action that is clear and proportionate.” Clegg says Trump will stay suspended until then but didn’t offer further hints at what comes next.

Did the board actually change anything?

Potentially. In its decision, the Oversight Board said that Facebook asked for “observations or recommendations from the Board about suspensions when the user is a political leader.” The board’s policy recommendations aren’t binding like its decisions are, but since Facebook asked, it’s likely to listen.

If it does, the Oversight Board’s recommendations could reshape how Facebook handles high profile accounts in the future:

The Board stated that it is not always useful to draw a firm distinction between political leaders and other influential users, recognizing that other users with large audiences can also contribute to serious risks of harm.

While the same rules should apply to all users, context matters when assessing the probability and imminence of harm. When posts by influential users pose a high probability of imminent harm, Facebook should act quickly to enforce its rules. Although Facebook explained that it did not apply its ‘newsworthiness’ allowance in this case, the Board called on Facebook to address widespread confusion about how decisions relating to influential users are made. The Board stressed that considerations of newsworthiness should not take priority when urgent action is needed to prevent significant harm.

Facebook and other social networks have hidden behind newsworthiness exemptions for years instead of making difficult policy calls that would upset half their users. Here, the board not only says that political leaders don’t really deserve special consideration while enforcing the rules, but that it’s much more important to take down content that could cause harm than it is to keep it online because it’s newsworthy.

So… we’re back to square one?

Yes and no. Trump’s suspension may still be up in the air, but the Oversight Board is modeled after a legal body and its real power is in setting precedents. The board kicked this case back to Facebook because the company picked a punishment for Trump that wasn’t even on the menu, not because it thought anything about his behavior fell in a gray area.

The Oversight Board clearly believed that Trump’s words of praise for rioters at the Capitol created a high stakes, dangerous threat on the platform. It’s easy to imagine the board reaching the same conclusion on Trump’s infamous “when the looting starts, the shooting starts” statement during the George Floyd protests, even though Facebook did nothing at the time. Still, the board stops short of saying that behavior like Trump’s merits a perma-ban — that much is up to Facebook.

#donald-trump, #facebook, #nick-clegg, #oversight-board, #social, #social-media, #social-networks, #the-battle-over-big-tech, #trump, #twitter

Facebook’s hand-picked ‘oversight’ panel upholds Trump ban — for now

Facebook’s content decision review body — a quasi-external panel that’s been likened to a ‘Supreme Court of Facebook’ but isn’t staffed by sitting judges, can’t be truly independent of the tech giant that funds it, has no legal legitimacy or democratic accountability, and goes by the much duller official title ‘Oversight Board’ (aka the FOB) — has just made the biggest call of its short life…

Facebook’s hand-picked ‘oversight’ panel has voted against reinstating former U.S. president Donald Trump’s Facebook account.

However it has sought to row the company back from an ‘indefinite’ ban — finding fault with its decision to impose an indefinite restriction, rather than issue a more standard penalty (such as a penalty strike or permanent account closure).

In a press release announcing its decision the board writes:

Given the seriousness of the violations and the ongoing risk of violence, Facebook was justified in suspending Mr. Trump’s accounts on January 6 and extending that suspension on January 7.

However, it was not appropriate for Facebook to impose an ‘indefinite’ suspension.

It is not permissible for Facebook to keep a user off the platform for an undefined period, with no criteria for when or whether the account will be restored.

The board wants Facebook to revisit its decision on Trump’s account within six months — and “decide the appropriate penalty”. So it appears to have succeeded in… kicking the can down the road.

The FOB is due to hold a press conference to discuss its decision shortly so stay tuned for updates.


It’s certainly been a very quiet five months on mainstream social media since Trump had his social media ALL CAPS megaphone unceremoniously shut down in the wake of his supporters’ violent storming of the Capitol.

For more on the background to Trump’s deplatforming do make time for this excellent explainer by TechCrunch’s Taylor Hatmaker. But the short version is that Trump finally appeared to have torched the last of his social media rule-breaking chances after he succeeded in fomenting an actual insurrection on U.S. soil on January 6. Doing so with the help of the massive, mainstream social media platforms whose community standards don’t, as a rule, give a thumbs up to violent insurrection…

#alan-rusbridger, #alex-stamos, #content-moderation, #donald-j-trump, #donald-trump, #facebook, #facebook-oversight-board, #fob, #freedom-of-speech, #hate-speech, #joe-biden, #mark-zuckerberg, #nick-clegg, #oversight-board, #policy, #social, #social-media, #united-states

Facebook Oversight Board upholds ban on former President Trump

President Donald Trump speaks to members of the media on the South Lawn of the White House on Thursday, May 14, 2020. (credit: Getty Images | Bloomberg)

Facebook’s quasi-independent Oversight Board announced its decision today to uphold the ban on former President Donald Trump’s account.

In a ruling issued Wednesday, the board said that while the ban was justified, its open-ended nature was not. “However, it was not appropriate for Facebook to impose the indeterminate and standardless penalty of indefinite suspension,” the board wrote. “Facebook’s normal penalties include removing the violating content, imposing a time-bound period of suspension, or permanently disabling the page and account.”

Instead, the board is calling on Facebook to review the issue and “justify a proportionate response that is consistent with the rules that are applied to other users of its platform.” The review should be completed within six months, according to the ruling.


#donald-trump, #facebook, #facebook-oversight-board, #insurrection, #policy, #social-media

For Trump and Facebook, judgment day is around the corner

Facebook unceremoniously confiscated Trump’s biggest social media megaphone months ago, but the former president might be poised to snatch it back.

Facebook’s Oversight Board, an external Supreme Court-like policy decision-making group, will either restore Trump’s Facebook privileges or banish him forever on Wednesday. Whatever happens, it’s a huge moment for Facebook’s nascent experiment in outsourcing hard content moderation calls to an elite group of global thinkers, academics and political figures and allowing them to set precedents that could shape the world’s biggest social networks for years to come.

Facebook CEO Mark Zuckerberg announced Trump’s suspension from Facebook in the immediate aftermath of the Capitol attack. It was initially a temporary suspension, but two weeks later Facebook said that the decision would be sent to the Oversight Board. “We believe the risks of allowing the President to continue to use our service during this period are simply too great,” Zuckerberg wrote in January.

Facebook’s VP of Global Affairs Nick Clegg, a former British politician, expressed hope that the board would back the company’s own conclusions, calling Trump’s suspension an “unprecedented set of events which called for unprecedented action.”

Trump inflamed tensions and incited violence on January 6, but that incident wasn’t without precedent. In the aftermath of the murder of George Floyd, an unarmed Black man killed by Minneapolis police, President Trump ominously declared on social media “when the looting starts, the shooting starts,” a threat of imminent violence with racist roots that Facebook declined to take action against, prompting internal protests at the company.

The former president skirted or crossed the line with Facebook any number of times over his four years in office, but the platform stood steadfastly behind a maxim that all speech was good speech, even as other social networks grew more squeamish.

In a dramatic address in late 2019, Zuckerberg invoked Martin Luther King Jr. as he defended Facebook’s anything-goes approach. “In times of social turmoil, our impulse is often to pull back on free expression,” Zuckerberg said. “We want the progress that comes from free expression, but not the tension.” King’s daughter strenuously objected.

A little over a year later, with all of Facebook’s peers doing the same and Trump leaving office, Zuckerberg would shrink back from his grand free speech declarations.

In 2019 and well into 2020, Facebook was still a roiling hotbed of misinformation, conspiracies and extremism. The social network hosted thousands of armed militias organizing for violence and a sea of content amplifying QAnon, which moved from a fringe belief on the margins to a mainstream political phenomenon through Facebook.

Those same forces would converge at the U.S. Capitol on January 6 for a day of violence that Facebook executives characterized as spontaneous, even though it had been festering openly on the platform for months.

 

How the Oversight Board works

Facebook’s Oversight Board began reviewing its first cases last October. Facebook can refer cases to the board, like it did with Trump, but users can also appeal to the board to overturn policy decisions that affect them after they exhaust the normal Facebook or Instagram appeals process. A five-member subset of its 20 total members evaluates whether content should be allowed to remain on the platform and then reaches a decision, which the full board must approve by a majority vote. Initially, the Oversight Board was only empowered to reinstate content removed on Facebook and Instagram, but in mid-April it began accepting requests to review controversial content that stayed up.

Last month, the Oversight Board replaced departing member Pamela Karlan, a Stanford professor and voting rights scholar critical of Trump, who left to join the Biden administration. Karlan’s replacement, PEN America CEO Suzanne Nossel, wrote an op-ed in the LA Times in late January arguing that extending a permanent ban on Trump “may feel good” but that the decision would ultimately set a dangerous precedent. Nossel joined the board too late to participate in the Trump decision.

The Oversight Board’s earliest batch of decisions leaned in the direction of restoring content that’s been taken down — not upholding its removal. While the board’s other decisions are likely to touch on the full spectrum of frustration people have with Facebook’s content moderation preferences, they come with far less baggage than the Trump decision. In one instance, the Oversight Board voted to restore an image of a woman’s nipples used in the context of a breast cancer post. In another, the board decided that a quote from a famous Nazi didn’t merit removal because it wasn’t an endorsement of Nazi ideology. In all cases, the Oversight Board can issue policy recommendations, but Facebook isn’t obligated to implement them — just the decisions.

Befitting its DNA of global activists, political figures and academics, the Oversight Board might have ambitions well beyond one social network. Earlier this year, Oversight Board co-chair and former Prime Minister of Denmark Helle Thorning-Schmidt declared that other social media companies would be “welcome to join” the project, which is branded in a conspicuously Facebook-less way. (The group calls itself the “Oversight Board” though everyone calls it the “Facebook Oversight Board.”)

“For the first time in history, we actually have content moderation being done outside one of the big social media platforms,” Thorning-Schmidt declared, grandly. “That in itself… I don’t hesitate to call it historic.”

Facebook’s decision to outsource some major policy decisions is indeed an experimental one, but that experiment is just getting started. The Trump case will give Facebook’s miniaturized Supreme Court an opportunity to send a message, though whether the takeaway is that it’s powerful enough to keep a world leader muzzled or independent enough to strike out from its parent and reverse the biggest social media policy decision ever made remains to be seen.

If Trump comes back, the company can shrug its shoulders and sidestep another PR firestorm, content that its experiment in external content moderation has been legitimized. If the board doubles down on banishing Trump, Facebook will rest easy knowing that someone else can take the blowback for its most controversial content call to date. For Facebook, for once, it’s a win-win situation.

 

#biden-administration, #ceo, #computing, #donald-trump, #elite, #facebook, #george-floyd, #king, #mark-zuckerberg, #nick-clegg, #oversight-board, #president, #schmidt, #social, #social-media, #social-media-platforms, #social-network, #social-networks, #software, #stanford, #supreme-court, #tc, #trump, #world-wide-web

In new deal, Wisconsin slashes controversial Foxconn subsidies 30-fold

Foxconn chairman Young Liu speaks in Taipei on March 16, 2021. (credit: -Hwa Cheng/Bloomberg via Getty Images)

The state of Wisconsin has negotiated a dramatically scaled-back deal with Taiwanese contract manufacturer Foxconn. The move, announced Tuesday by Democratic Gov. Tony Evers, is a repudiation of a deal negotiated four years earlier by Evers’ Republican predecessor Scott Walker.

The original deal envisioned Foxconn spending as much as $10 billion to build a state-of-the-art factory for manufacturing large liquid-crystal display panels. The deal was announced in 2017, and then-President Donald Trump traveled to Wisconsin for the 2018 groundbreaking, describing the new factory as “the eighth wonder of the world.” Foxconn was supposed to get $2.85 billion in state and local incentives under that original deal.

The deal may have been savvy politics for Foxconn in 2017. The company uses factories in other countries to assemble consumer electronics products for Apple and other American companies—products that are often then sent back to the United States for sale. So Trump’s protectionist inclinations seemed like a serious threat. Announcing plans to create thousands of jobs in a key battleground state gave Trump something to boast about, and that may have helped Foxconn curry favor with the new administration.


#donald-trump, #foxconn, #policy, #tony-evers, #wisconsin

Facebook’s decision-review body to take “weeks” longer over Trump ban call

Facebook’s self-styled and handpicked ‘Oversight Board’ will make a decision on whether or not to overturn an indefinite suspension of the account of former president Donald Trump within “weeks”, it said in a brief update statement on the matter today.

The high profile case appears to have attracted major public interest, with the FOB tweeting that it’s received more than 9,000 responses so far to its earlier request for public feedback.

It added that its commitment to “carefully reviewing all comments”, following an earlier extension of the deadline for feedback, is responsible for the longer case timeline.

The Board’s statement adds that it will provide more information “soon”.

Trump’s indefinite suspension from Facebook and Instagram was announced by Facebook founder Mark Zuckerberg on January 7, after the then-president of the U.S. incited his followers to riot at the nation’s Capitol — an insurrection that led to chaotic and violent scenes and a number of deaths as his supporters clashed with police.

However Facebook quickly referred the decision to the FOB for review — opening up the possibility that the ban could be overturned in short order as Facebook has said it will be bound by the case review decisions issued by the Board.

After the FOB accepted the case for review it initially said it would issue a decision within 90 days of January 21 — a deadline that would have fallen next Wednesday.

However it now looks like the high profile, high stakes call on Trump’s social media fate could be pushed into next month.

It’s a familiar development in Facebook-land. Delay has long been a feature of the tech giant’s crisis PR response in the face of a long history of scandals and bad publicity attached to how it operates its platform. So the tech giant is unlikely to be uncomfortable that the FOB is taking its time to make a call on Trump’s suspension.

After all, devising and configuring the bespoke case review body — as its proprietary parody of genuine civic oversight — is a process that has taken Facebook years already.

In related FOB news this week, Facebook announced that users can now request the board review its decisions not to remove content — expanding the Board’s potential cases to include reviews of ‘keep ups’ (not just content takedowns).

This report was updated with a correction: The FOB previously extended the deadline for case submissions; it has not done so again as we originally stated

#donald-trump, #facebook, #mark-zuckerberg, #oversight-board, #platform-regulation, #policy, #social, #social-media, #united-states

Facebook, Instagram users can now ask ‘oversight’ panel to review decisions not to remove content

Facebook’s self-styled ‘Oversight Board’ (FOB) has announced an operational change that looks intended to respond to criticism of the limits of the self-regulatory content-moderation decision review body: It says it’s started accepting requests from users to review decisions to leave content up on Facebook and Instagram.

The move expands the FOB’s remit beyond reviewing (and mostly reversing) content takedowns — an arbitrary limit that critics said aligns it with the economic incentives of its parent entity, given that Facebook’s business benefits from increased engagement with content (and outrageous content drives clicks and makes eyeballs stick).

“So far, users have been able to appeal content to the Board which they think should be restored to Facebook or Instagram. Now, users can also appeal content to the Board which they think should be removed from Facebook or Instagram,” the FOB writes, adding that it will “use its independent judgment to decide what to leave up and what to take down”.

“Our decisions will be binding on Facebook,” it adds.

The ability to request an appeal on content Facebook wouldn’t take down has been added across all markets, per Facebook. But the tech giant said it will take some “weeks” for all users to get access as it said it’s rolling out the feature “in waves to ensure stability of the product experience”.

While the FOB can now get individual pieces of content taken down from Facebook/Instagram — i.e. if the Board believes it’s justified in reversing an earlier decision by the company not to remove content — it cannot make Facebook adopt any associated suggestions vis-a-vis its content moderation policies generally.

That’s because Facebook has never said it will be bound by the FOB’s policy recommendations; only by the final decision made per review.

That in turn limits the FOB’s ability to influence the shape of the tech giant’s approach to speech policing. And indeed the whole effort remains inextricably bound to Facebook which devised and structured the FOB — writing the Board’s charter and bylaws, and hand picking the first cohort of members. The company thus continues to exert inescapable pull on the strings linking its self-regulatory vehicle to its lucrative people-profiling and ad-targeting empire.

The FOB getting the ability to review content ‘keep ups’ (if we can call them that) is also essentially irrelevant when you consider the ocean of content Facebook has ensured the Board won’t have any say in moderating — because its limited resources/man-power mean it can only ever consider a fantastically tiny subset of cases referred to it for review.

For an oversight body to provide a meaningful limit on Facebook’s power it would need to have considerably more meaty (i.e. legal) powers; be able to freely range across all aspects of Facebook’s business (not just review user generated content); and be truly independent of the adtech mothership — as well as having meaningful powers of enforcement and sanction.

So, in other words, it needs to be a public body, functioning in the public interest.

Instead, while Facebook applies its army of in house lawyers to fight actual democratic regulatory oversight and compliance, it has splashed out to fashion this bespoke bureaucracy that can align with its speech interests — handpicking a handful of external experts to pay to perform a content review cameo in its crisis PR drama.

Unsurprisingly, then, the FOB has mostly moved the needle in a speech-maximizing direction so far — while expressing some frustration at the limited deck of cards Facebook has dealt it.

Most notably, the Board still has a decision pending on whether to reverse Facebook’s indefinite ban on former U.S. president Donald Trump. If it reverses that decision, Facebook users won’t have any recourse to appeal the restoration of Trump’s account.

The only available route would, presumably, be for users to report future Trump content to Facebook for violating its policies — and if Facebook refuses to take that stuff down, users could try to request a FOB review. But, again, there’s no guarantee the FOB will accept any such review requests. (Indeed, if the board chooses to reinstate Trump, that may make it harder for it to accept requests to review Trump content, at least in the short term, in the interests of keeping a diverse case file.)

How to ask for a review after content isn’t removed

To request that the FOB review a piece of content that’s been left up, a user of Facebook/Instagram first has to report the content to Facebook/Instagram.

If the company decides to keep the content up, Facebook says the reporting person will receive an Oversight Board Reference ID (a ten-character string that begins with ‘FB’) in their Support Inbox — which they can use to appeal its ‘no takedown’ decision to the Oversight Board.

There are several hoops to jump through to make an appeal: Following on-screen instructions, Facebook says the user will be taken to the Oversight Board website, where they need to log in with the account to which the reference ID was issued.

They will then be asked to provide responses to a number of questions about their reasons for reporting the content (to “help the board understand why you think Facebook made the wrong decision”).

Once an appeal has been submitted, the Oversight Board will decide whether or not to review it. The board only selects a certain number of “eligible appeals” to review; and Facebook has not disclosed the proportion of requests the Board accepts for review vs submissions it receives — per case or on aggregate. So how much chance of submission success any user has for any given piece of content is an unknown (and probably unknowable) quantity.

Users who have submitted an appeal against content that was left up can check the status of their appeal via the FOB’s website — again by logging in and using the reference ID.

A further limitation is time, as Facebook notes there’s a time limit on appealing decisions to the FOB:

“Bear in mind that there is a time limit on appealing decisions to the Oversight Board. Once the window to appeal a decision has expired, you will no longer be able to submit it,” it writes in its Help Center, without specifying how long users have to get their appeal in (we asked Facebook to confirm this and it’s 15 days). 

#content-moderation, #content-takedowns, #donald-trump, #facebook, #facebook-oversight-board, #freedom-of-expression, #instagram, #oversight-board, #policy, #social, #social-media, #user-generated-content

Twitter won’t let federal archivists host Trump’s tweets on Twitter

Illustration of President Trump’s face and a Twitter logo. (credit: Getty Images | NurPhoto)

The National Archives and Records Administration is a federal agency responsible for preserving historically significant federal records, including tweets from senior government officials. For example, former Trump White House spokeswoman Sarah Huckabee Sanders turned over control of her official Twitter account to NARA when she left office. Leaving tweets on Twitter makes them easily accessible by the public.

But Politico reports that Twitter won’t allow anything like this to happen with former President Donald Trump’s now-banned @realDonaldTrump account.

“Given that we permanently suspended @realDonaldTrump, the content from the account will not appear on Twitter as it did previously or as archived administration accounts do currently, regardless of how NARA decides to display the data it has preserved,” a Twitter spokesman told Politico. “Administration accounts that are archived on the service are accounts that were not in violation of the Twitter Rules.”


#donald-trump, #policy, #twitter

Clarence Thomas plays a poor devil’s advocate in floating First Amendment limits for tech companies

Supreme Court Justice Clarence Thomas flaunted a dangerous ignorance regarding matters digital in an opinion published today. In attempting to explain the legal difficulties of social media platforms, particularly those arising from Twitter’s ban of Trump, he makes an ill-informed, bordering on bizarre, argument as to why such companies may need their First Amendment rights curtailed.

There are several points on which Thomas seems to willfully misconstrue or misunderstand the issues.

The first is in his characterization of Trump’s use of Twitter. You may remember that several people sued after being blocked by Trump, alleging that his use of the platform amounted to creating a “public forum” in a legal sense, meaning it was unlawful to exclude anyone from it for political reasons. (The case, as it happens, was rendered moot after its appeal and dismissed by the court except as Thomas’s temporary soapbox.)

“But Mr. Trump, it turned out, had only limited control of the account; Twitter has permanently removed the account from the platform,” writes Thomas. “[I]t seems rather odd to say something is a government forum when a private company has unrestricted authority to do away with it.”

Does it? Does it seem odd? Because a few paragraphs later, he uses the example of a government agency using a conference room in a hotel to hold a public hearing. They can’t kick people out for voicing their political opinions, certainly, because the room is a de facto public forum. But if someone is loud and disruptive, they can ask hotel security to remove that person, because the room is de jure a privately owned space.

Yet the obvious third example, and the one clearly most relevant to the situation at hand, is skipped. What if it is the government representatives who are being loud and disruptive, to the point where the hotel must make the choice whether to remove them?

It says something that this scenario, so remarkably close a metaphor for what actually happened, is not considered. Perhaps it casts the ostensibly “odd” situation and actors in too clear a light, for Thomas’s other arguments suggest he is not for clarity here but for muddying the waters ahead of a partisan knife fight over free speech.

In his best “I’m not saying, I’m just saying” tone, Thomas presents his reasoning why, if the problem is that these platforms have too much power over free speech, then historically there just happen to be some legal options to limit that power.

Thomas argues first, and worst, that platforms like Facebook and Google may amount to “common carriers,” a term that goes back centuries to actual carriers of cargo, but which is now a common legal concept that refers to services that act as simple distribution – “bound to serve all customers alike, without discrimination.” A telephone company is the most common example, in that it cannot and does not choose what connections it makes, nor what conversations happen over those connections – it moves electric signals from one phone to another.

But as he notes at the outset of his commentary, “applying old doctrines to new digital platforms is rarely straightforward.” And Thomas’s method of doing so is spurious.

“Though digital instead of physical, they are at bottom communications networks, and they ‘carry’ information from one user to another,” he says, and equates telephone companies laying cable with companies like Google laying “information infrastructure that can be controlled in much the same way.”

Now, this is certainly wrong. So wrong in so many ways that it’s hard to know where to start and when to stop.

The idea that companies like Facebook and Google are equivalent to telephone lines is such a reach that it seems almost like a joke. These are companies that have built entire business empires by adding enormous amounts of storage, processing, analysis, and other services on top of the element of pure communication. One might as easily suggest that because a computer is just a simple piece of hardware that moves data around, Apple is a common carrier as well. It’s really not so far a logical leap!

There’s no real need to get into the technical and legal reasons why this opinion is wrong, however, because these grounds have been covered so extensively over the years, particularly by the FCC — which the Supreme Court has deferred to as an expert agency on this matter. If Facebook were a common carrier (or telecommunications service), it would fall under the FCC’s jurisdiction — but it doesn’t, because it isn’t, and really, no one thinks it is. This has been supported over and over, by multiple FCCs and administrations, and the deferral is itself a Supreme Court precedent that has become doctrine.

In fact, and this is really the cherry on top, freshman Justice Kavanaugh in a truly stupefying legal opinion a few years ago argued so far in the other direction that it became wrong in a totally different way! It was Kavanaugh’s considered opinion that the bar for qualifying as a common carrier was actually so high that even broadband providers don’t qualify for it. (This was all in service of taking down net neutrality, a saga we are in danger of resuming soon.) As his erudite colleague Judge Srinivasan explained to him at the time, this approach too is embarrassingly wrong.

Looking at these two opinions, of two sitting conservative Supreme Court Justices, you may find the arguments strangely at odds, yet they are wrong after a common fashion.

Kavanaugh claims that broadband providers, the plainest form of digital common carrier conceivable, are in fact providing all kinds of sophisticated services over and above their functionality as a pipe (they aren’t). Thomas claims that companies actually providing all kinds of sophisticated services are nothing more than pipes.

Simply stated, these men have no regard for the facts but have chosen the definition that best suits their political purposes: for Kavanaugh, thwarting a Democrat-led push for strong net neutrality rules; for Thomas, asserting control over social media companies perceived as having an anti-conservative bias.

The case Thomas uses for his sounding board on these topics was rightly rendered moot — Trump is no longer president and the account no longer exists — but he makes it clear that he regrets this extremely.

“As Twitter made clear, the right to cut off speech lies most powerfully in the hands of private digital platforms,” he concludes. “The extent to which that power matters for purposes of the First Amendment and the extent to which that power could lawfully be modified raise interesting and important questions. This petition, unfortunately, affords us no opportunity to confront them.”

Between the common carrier argument and questioning the form of Section 230 (more of which in this article), Thomas’s hypotheticals break the seals on several legal avenues to restrict the First Amendment rights of digital platforms, as well as legitimizing those (largely on one side of the political spectrum) who claim a grievance along these lines. (Slate legal commentator Mark Joseph Stern, who spotted the opinion early, goes further, calling Thomas’s argument a “paranoid Marxist delusion” and providing some other interesting context.)

This is not to say that social media and tech do not deserve scrutiny on any number of fronts — they exist in an alarming global vacuum of regulatory powers, and hardly anyone would suggest they have been entirely responsible with this freedom. But the arguments of Thomas and Kavanaugh stink of cynical partisan sophistry. This endorsement by Thomas accomplishes nothing legally, but will provide valuable fuel for the bitter fires of contention — though they hardly needed it.

#clarence-thomas, #donald-trump, #facebook, #first-amendment, #google, #government, #lawsuit, #opinion, #section-230, #social-media, #supreme-court, #tc, #trump

Supreme Court tosses ruling that said Trump blocking Twitter critics was unconstitutional

The Supreme Court has vacated a previous ruling that found former President Trump violated the First Amendment by blocking his Twitter foes.

A Manhattan federal appeals court upheld the ruling in 2019, deeming Trump’s actions unconstitutional. The court found that because Trump used Twitter to “conduct official business” and interact with the public, his decision to block users ran afoul of the First Amendment.

“… The First Amendment does not permit a public official who utilizes a social media account for all manner of official purposes to exclude persons from an otherwise open online dialogue because they expressed views with which the official disagrees,” a trio of judges wrote in that decision.

The Supreme Court’s decision to vacate the prior ruling isn’t a total surprise — Trump is no longer president and he’s banned from Twitter for life at this point.

What was unexpected was an accompanying opinion issued by Supreme Court Justice Clarence Thomas which pushed well beyond the issue at hand into novel criticisms of major tech platforms.

Thomas pivoted away from Trump’s Twitter behavior in the 12-page opinion, mounting an argument that the moderation powers of digital platforms like Twitter and Facebook are the real problem. “If the aim is to ensure that speech is not smothered, then the more glaring concern must perforce be the dominant digital platforms themselves,” Thomas wrote.

He went on to raise concerns about “concentrated control” of digital platforms by a handful of decision makers, arguing that digital platforms exercise too much power in making moderation decisions. “Much like with a communications utility, this concentration gives some digital platforms enormous control over speech,” Thomas wrote.

Thomas’s opinion Monday echoed his previous arguments that the protections conferred to digital platforms by Section 230 of the Communications Decency Act should be “pared back” and interpreted far more narrowly.

With Democrats at the wheel in Congress, some Republicans have shifted their criticisms of big tech away from its moderation powers and toward other issues, like how those services affect mental health. But the suite of grievances stirred up over the course of Trump’s four years in office lives on in Supreme Court Justice Clarence Thomas.

In January, Thomas’s wife Ginni Thomas, a fervent Trump supporter, faced criticism for cheering on the pro-Trump crowd that went on to violently invade the U.S. Capitol.

Thomas was not joined by other justices in his opinion, but his interest in tech’s moderation decisions is a signal that the issue is far from dead.

“We will soon have no choice but to address how our legal doctrines apply to highly concentrated, privately owned information infrastructure such as digital platforms,” he warned.

#donald-trump, #government, #social-media, #supreme-court, #tc

Facebook lifts ban on US political advertising

Facebook will lift a ban on political advertising imposed after the US election to curb the spread of misinformation, and it has pledged to investigate whether its political ads systems need a further overhaul.

Advertisers would be able to resume running political ads on March 4, Facebook said in a blog post on Wednesday. It said it had introduced the temporary moratorium “to avoid confusion or abuse following Election Day.”

The social media company said it had received “feedback” about its ads system during the latest election cycle — including, for example, its inability to distinguish between ads from politicians and political groups and social-issue ads from advocacy groups.

#donald-trump, #facebook, #mark-zuckerberg, #policy

Facebook’s Oversight Board already ‘a bit frustrated’ — and it hasn’t made a call on Trump ban yet

The Facebook Oversight Board (FOB) is already feeling frustrated by the binary choices it’s expected to make as it reviews Facebook’s content moderation decisions, according to one of its members, who gave evidence today to a UK House of Lords committee running an enquiry into freedom of expression online.

The FOB is currently considering whether to overturn Facebook’s ban on former US president, Donald Trump. The tech giant banned Trump “indefinitely” earlier this year after his supporters stormed the US Capitol.

The chaotic insurrection on January 6 led to a number of deaths and widespread condemnation of how mainstream tech platforms had stood back and allowed Trump to use their tools as megaphones to whip up division and hate rather than enforcing their rules in his case.

Yet, after finally banning Trump, Facebook almost immediately referred the case to its self-appointed and self-styled Oversight Board for review — opening up the prospect that its Trump ban could be reversed in short order via an exceptional review process that Facebook has fashioned, funded and staffed.

Alan Rusbridger, a former editor of the British newspaper The Guardian — and one of 20 FOB members selected as an initial cohort (the Board’s full headcount will be double that) — avoided making a direct reference to the Trump case today, given the review is ongoing, but he implied that the binary choices it has at its disposal at this early stage aren’t as nuanced as he’d like.

“What happens if — without commenting on any high profile current cases — you didn’t want to ban somebody for life but you wanted to have a ‘sin bin’ so that if they misbehaved you could chuck them back off again?” he said, suggesting he’d like to be able to issue a soccer-style “yellow card” instead.

“I think the Board will want to expand in its scope. I think we’re already a bit frustrated by just saying take it down or leave it up,” he went on. “What happens if you want to… make something less viral? What happens if you want to put an interstitial?

“So I think all these things are things that the Board may ask Facebook for in time. But we have to get our feet under the table first — before we can do what we want.”

“At some point we’re going to ask to see the algorithm, I feel sure — whatever that means,” Rusbridger also told the committee. “Whether we can understand it when we see it is a different matter.”

To many people, Facebook’s Trump ban is uncontroversial — given the risk of further violence posed by letting Trump continue to use its megaphone to foment insurrection. There are also clear and repeat breaches of Facebook’s community standards if you want to be a stickler for its rules.

Among supporters of the ban is Facebook’s former chief security officer, Alex Stamos, who has since been working on wider trust and safety issues for online platforms via the Stanford Internet Observatory.

Stamos was urging both Twitter and Facebook to cut Trump off before everything kicked off, writing in early January: “There are no legitimate equities left and labeling won’t do it.”

But in the wake of big tech moving almost as a unit to finally put Trump on mute, a number of world leaders and lawmakers were quick to express misgivings at the big tech power flex.

Germany’s chancellor called Twitter’s ban on Trump “problematic”, saying it raised troubling questions about the power of the platforms to interfere with speech, while other lawmakers in Europe seized on the unilateral action, saying it underlined the need for proper democratic regulation of tech giants.

The sight of the world’s most powerful social media platforms being able to mute a democratically elected president (even one as divisive and unpopular as Trump) made politicians of all stripes feel queasy.

Facebook’s entirely predictable response was, of course, to outsource this two-sided conundrum to the FOB. After all, that was its whole plan for the Board. The Board would be there to deal with the most headachey and controversial content moderation stuff.

And on that level Facebook’s Oversight Board is doing exactly the job Facebook intended for it.

But it’s interesting that this unofficial ‘supreme court’ is already feeling frustrated by the limited binary choices Facebook asks it to make. (In the Trump case, either reversing the ban entirely or continuing it indefinitely.)

The FOB’s unofficial message seems to be that the tools are simply far too blunt. Although Facebook has never said it will be bound by any wider policy suggestions the Board might make — only that it will abide by the specific individual review decisions. (Which is why a common critique of the Board is that it’s toothless where it matters.)

How aggressive the Board will be in pushing Facebook to be less frustrating very much remains to be seen.

“None of this is going to be solved quickly,” Rusbridger went on to tell the committee in more general remarks on the challenges of moderating speech in the digital era. Getting to grips with the Internet’s publishing revolution could in fact, he implied, take the work of generations — making the customary reference to the long tail of societal disruption that flowed from Gutenberg’s invention of the printing press.

If Facebook was hoping the FOB would kick hard (and thorny-in-its-side) questions around content moderation into long and intellectual grasses it’s surely delighted with the level of beard stroking which Rusbridger’s evidence implies is now going on inside the Board. (If, possibly, slightly less enchanted by the prospect of its appointees asking it if they can poke around its algorithmic black boxes.)

Kate Klonick, an assistant professor at St John’s University Law School, was also giving evidence to the committee — having written an article on the inner workings of the FOB, published recently in the New Yorker, after she was given wide-ranging access by Facebook to observe the process of the body being set up.

The Lords committee was keen to learn more on the workings of the FOB and pressed the witnesses several times on the question of the Board’s independence from Facebook.

Rusbridger batted away concerns on that front — saying “we don’t feel we work for Facebook at all”. Though Board members are paid by Facebook via a trust it set up to put the FOB at arm’s length from the corporate mothership, and the committee didn’t shy away from raising the payment point to query how genuinely independent they can be.

“I feel highly independent,” Rusbridger said. “I don’t think there’s any obligation at all to be nice to Facebook or to be horrible to Facebook.”

“One of the nice things about this Board is occasionally people will say but if we did that that will scupper Facebook’s economic model in such and such a country. To which we answer well that’s not our problem. Which is a very liberating thing,” he added.

Of course it’s hard to imagine a sitting member of the FOB being able to answer the independence question any other way — unless they were simultaneously resigning their commission (which, to be clear, Rusbridger wasn’t).

He confirmed that Board members can serve three terms of three years apiece — so he could have almost a decade of beard-stroking on Facebook’s behalf ahead of him.

Klonick, meanwhile, emphasized the scale of the challenge it had been for Facebook to try to build from scratch a quasi-independent oversight body and create distance between itself and its claimed watchdog.

“Building an institution to be a watchdog institution — it is incredibly hard to transition to institution-building and to break those bonds [between the Board and Facebook] and set up these new people with frankly this huge set of problems and a new technology and a new back end and a content management system and everything,” she said.

Rusbridger had said the Board went through an extensive training process which involved participation from Facebook representatives during the ‘onboarding’. But he went on to describe a moment when the training had finished and the FOB realized some Facebook reps were still joining its calls — saying that at that point the Board felt empowered to tell Facebook to leave.

“This was exactly the type of moment — having watched this — that I knew had to happen,” added Klonick. “There had to be some type of formal break — and it was told to me that this was a natural moment that they had done their training and this was going to be moment of push back and breaking away from the nest. And this was it.”

However, if your measure of independence is not having Facebook literally listening in on the Board’s calls, you do have to query how much Kool-Aid Facebook may have successfully doled out to its chosen and willing participants over the long and intricate process of programming its own watchdog — including to the outside observers it allowed in to watch the set-up.

The committee was also interested in the fact the FOB has so far mostly ordered Facebook to reinstate content its moderators had previously taken down.

In January, when the Board issued its first decisions, it overturned four out of five Facebook takedowns — including in relation to a number of hate speech cases. The move quickly attracted criticism over the direction of travel. After all, the wider critique of Facebook’s business is that it’s far too reluctant to remove toxic content (it only banned Holocaust denial last year, for example). And lo! Here’s its self-styled ‘Oversight Board’ taking decisions to reverse hate speech takedowns…

The unofficial and oppositional ‘Real Facebook Board’ — which is truly independent and heavily critical of Facebook — pounced and decried the decisions as “shocking”, saying the FOB had “bent over backwards to excuse hate”.

Klonick said the reality is that the FOB is not Facebook’s supreme court — but rather it’s essentially just “a dispute resolution mechanism for users”.

If that assessment is true — and it sounds spot on, so long as you recall the fantastically tiny number of users who get to use it — the amount of PR Facebook has been able to generate off of something that should really just be a standard feature of its platform is truly incredible.

Klonick argued that the Board’s early reversals were the result of it hearing from users objecting to content takedowns — which had made it “sympathetic” to their complaints.

“Absolute frustration at not knowing specifically what rule was broken or how to avoid breaking the rule again or what they did to be able to get there or to be able to tell their side of the story,” she said, listing the kinds of things Board members had told her they were hearing from users who had petitioned for a review of a takedown decision against them.

“I think that what you’re seeing in the Board’s decision is, first and foremost, to try to build some of that back in,” she suggested. “Is that the signal that they’re sending back to Facebook — that’s it’s pretty low hanging fruit to be honest. Which is let people know the exact rule, given them a fact to fact type of analysis or application of the rule to the facts and give them that kind of read in to what they’re seeing and people will be happier with what’s going on.

“Or at least just feel a little bit more like there is a process and it’s not just this black box that’s censoring them.”

In his response to the committee’s query, Rusbridger discussed how he approaches review decision-making.

“In most judgements I begin by thinking well why would we restrict freedom of speech in this particular case — and that does get you into interesting questions,” he said, having earlier summed up his school of thought on speech as akin to the ‘fight bad speech with more speech’ Justice Brandeis type view.

“The right not to be offended has been engaged by one of the cases — as opposed to the borderline between being offended and being harmed,” he went on. “That issue has been argued about by political philosophers for a long time and it certainly will never be settled absolutely.

“But if you went along with establishing a right not to be offended that would have huge implications for the ability to discuss almost anything in the end. And yet there have been one or two cases where essentially Facebook, in taking something down, has invoked something like that.”

“Harm as opposed to offence is clearly something you would treat differently,” he added. “And we’re in the fortunate position of being able to hire in experts and seek advisors on the harm here.”

While Rusbridger didn’t sound troubled about the challenges and pitfalls facing the Board when it may have to set the “borderline” between offensive speech and harmful speech itself — being able to (further) outsource expertise presumably helps — he did raise a number of other operational concerns during the session. Including over the lack of technical expertise among current board members (who were purely Facebook’s picks).

Without technical expertise, how can the Board ‘examine the algorithm’, as he suggested it would want to? It won’t be able to understand Facebook’s content distribution machine in any meaningful way.

The Board’s current lack of technical expertise also raises wider questions about its function — and whether its first learned cohort might be played as useful idiots, from Facebook’s self-interested perspective, by helping it gloss over and deflect deeper scrutiny of its algorithmic, money-minting choices.

If you don’t really understand how the Facebook machine functions, technically and economically, how can you conduct any kind of meaningful oversight at all? (Rusbridger evidently gets that — but is also content to wait and see how the process plays out. No doubt the intellectual exercise and insider view is fascinating. “So far I’m finding it highly absorbing,” as he admitted in his evidence opener.)

“People say to me you’re on that Board but it’s well known that the algorithms reward emotional content that polarises communities because that makes it more addictive. Well I don’t know if that’s true or not — and I think as a board we’re going to have to get to grips with that,” he went on to say. “Even if that takes many sessions with coders speaking very slowly so that we can understand what they’re saying.”

“I do think our responsibility will be to understand what these machines are — the machines that are going in rather than the machines that are moderating,” he added. “What their metrics are.”

Both witnesses raised another concern: that the kind of complex, nuanced moderation decisions the Board is making won’t be able to scale — suggesting they’re too specific to generally inform AI-based moderation. Nor will they necessarily be actionable by the staffed moderation system that Facebook currently operates (which gives its thousands of human moderators a fantastically tiny amount of thinking time per content decision).

Despite that, the issue of Facebook’s vast scale versus the Board’s limited and Facebook-defined function — to fiddle at the margins of its content empire — was one overarching point that hung uneasily over the session, without being properly grappled with.

“I think your question about ‘is this easily communicated’ is a really good one that we’re wrestling with a bit,” Rusbridger said, conceding that he’d had to brain up on a whole bunch of unfamiliar “human rights protocols and norms from around the world” to feel qualified to rise to the demands of the review job.

Scaling that level of training to the tens of thousands of moderators Facebook currently employs to carry out content moderation would of course be eye-wateringly expensive. Nor is it on offer from Facebook. Instead it’s hand-picked a crack team of 40 very expensive and learned experts to tackle an infinitesimally smaller number of content decisions.

“I think it’s important that the decisions we come to are understandable by human moderators,” Rusbridger added. “Ideally they’re understandable by machines as well — and there is a tension there because sometimes you look at the facts of a case and you decide it in a particular way with reference to those three standards [Facebook’s community standard, Facebook’s values and “a human rights filter”]. But in the knowledge that that’s going to be quite a tall order for a machine to understand the nuance between that case and another case.

“But, you know, these are early days.”

#alan-rusbridger, #censorship, #content-moderation, #donald-trump, #facebook, #facebook-oversight-board, #free-speech, #tc

TikTok’s forced sale to Oracle is put on hold

The insane saga of a potential forced sale of TikTok’s US operations is reportedly ending — another victim of the transition to methodical and rational policymaking that appears to be the boring new normal under the Presidency of Joe Biden.

Last fall, the U.S. government under President Donald Trump took a stab at “gangster capitalism” by trying to force the sale of TikTok to a group of buyers including Oracle and Walmart.

While the effort was doomed from the start, with TikTok’s parent company ByteDance winning most of the legal challenges to the government effort, a Rubicon had effectively been crossed where the U.S. government appeared willing to spend political capital to stymie the growth of a successful foreign business on its shores for the flimsiest of security reasons.

Now, The Wall Street Journal is reporting that the efforts by the U.S. government to push the deal forward “have been shelved indefinitely”, citing sources familiar with the process.

However, discussions between TikTok and U.S. national security officials are continuing because there are valid concerns around TikTok’s data collection and the potential for manipulation and censorship of content on the app.

In the meantime, the U.S. is taking a look at all of the potential threats to data privacy and security from intrusions by foreign governments or using tech developed overseas, according to Emily Horne, the spokeswoman for the National Security Council.

“We plan to develop a comprehensive approach to securing U.S. data that addresses the full range of threats we face,” Horne told the WSJ. “This includes the risk posed by Chinese apps and other software that operate in the U.S. In the coming months, we expect to review specific cases in light of a comprehensive understanding of the risks we face.”

Last year, then-President Trump ordered a ban on TikTok intending to force the sale of the Chinese-owned, short form video distribution service to a U.S.-owned investment group.

As part of that process, the Committee on Foreign Investment in the U.S. ordered ByteDance to divest of its U.S. operations.

TikTok appealed that order in court in Washington last November as the U.S. was roiled by the presidential election and its aftermath.

That case is still pending, but separate federal court rulings have blocked the U.S. government from shutting TikTok down.

#bytedance, #donald-trump, #oracle, #oracle-corporation, #president, #tc, #tiktok, #u-s-government, #walmart

China launched its national carbon trading market yesterday

Yesterday, China flipped the switch on a nationwide carbon trading market, in what could be one of the most significant steps taken to reduce greenhouse gas emissions in 2021 — if the markets can work effectively.

China is the world’s largest emitter of greenhouse gases and its share of the world’s emissions output continues to climb.

As the Chinese government works to curb its environmental impact, policies like a carbon trading system could spur the adoption of new technologies, increasing demand for goods and services from domestic startups and tech companies around the world.

Carbon markets, implemented in some parts of the U.S. and widely across Europe, put a price on industrial emissions and force companies to offset those emissions by investing in projects that would remove an equivalent portion of greenhouse gases from the atmosphere.
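The settlement logic described above boils down to simple arithmetic. Here's a minimal illustrative sketch — the function name and the figures are hypothetical, not any real market's rules:

```python
# Minimal cap-and-trade sketch (illustrative only).
# A firm emitting more than its allowance must buy allowances (or offsets)
# at the market carbon price; a firm emitting less can sell its surplus.
def settlement_cost(emissions_t, allowance_t, price_per_t):
    """Positive result = cost of buying allowances; negative = revenue from selling surplus."""
    shortfall = emissions_t - allowance_t
    return shortfall * price_per_t

# A plant emitting 1.2m tonnes against a 1.0m tonne allowance, at 41 yuan/tonne:
cost = settlement_cost(1_200_000, 1_000_000, 41)  # 8,200,000 yuan
```

The market price is what makes the cap bite: the higher the price, the more expensive it is to run over the allowance, and the more a cleaner plant earns from its surplus.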

They’re a key component of the 2015 Paris Agreement, but they’re also a controversial one. That’s because if they’re not implemented properly and managed effectively they can be a “massive loophole” for emitters, as Gilles Dufrasne, policy officer at Carbon Market Watch, told Time last year.

This is especially true of China. Corruption in China is endemic and the country has long sacrificed environmental policy and stewardship at the altar of economic growth. China’s not alone in making that calculus, but its decisions have happened at a scale orders of magnitude larger than in almost any other nation (with the exception of the U.S.).

The efficacy of the policy is also affected by the hierarchies that exist within the bureaucracy of the Chinese Communist Party. As ChinaDialogue noted, the measures were issued by the Ministry of Ecology and Environment, and so carry lower legal authority than if they had come from the NDRC, the leading governing body for macroeconomic policy across China and the overseer of the nation’s major economic initiatives.

That said, no country as large as China, which accounts for 28% of the world’s greenhouse gas emissions, has ever implemented a national carbon emissions trading market.

China first started testing regional emissions trading systems back in 2011 in Shenzhen, Shanghai, Beijing, Guangdong, Tianjin, Hubei, Chongqing and Fujian. Using a system that instituted caps on emissions based on carbon intensity (emissions per unit of GDP) rather than an absolute emissions cap, the Chinese government began rolling out these pilots across its power sector and to other industries.
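The distinction between an intensity-based cap and an absolute cap can be sketched as follows (illustrative numbers and a hypothetical function, not the actual Chinese benchmarks):

```python
# Intensity-based cap: the allowance is an emissions-per-unit-of-output
# benchmark times actual output, so the absolute allowance grows as the
# economy grows. An absolute cap, by contrast, is a fixed tonnage.
def intensity_cap(output_units, allowed_tonnes_per_unit):
    return output_units * allowed_tonnes_per_unit

# Same benchmark, growing output -> growing absolute allowance:
year1 = intensity_cap(100.0, 0.5)  # 50.0 tonnes
year2 = intensity_cap(120.0, 0.5)  # 60.0 tonnes
```

This is why critics note that an intensity-based system can coexist with rising total emissions, as long as emissions grow more slowly than GDP.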

After a restructuring in 2018, the plan, which was initially drafted under the auspices of the National Development and Reform Commission, was kicked down to the Ministry of Ecology and the Environment. The devolution of China’s cap-and-trade emissions program came as the United States was withdrawing from the Paris Agreement amid an abdication of climate regulation and initiatives under the presidency of Donald Trump.

Initially intended to begin with trading simulations in 2020, China’s emissions scheme was derailed by the COVID-19 pandemic and pushed back to the second half of that year, with actual trading starting yesterday.

For now, the emissions trading system covers China’s power industry and roughly 2,000 energy generation facilities. That alone represents 30% of the nation’s total emissions and over time the trading system will encompass heavy industry like cement, steel, aluminum, chemicals and oil and gas, according to ChinaDialogue.

Initially, the government is allocating emissions allowances for free and will begin auctioning allowances “at the appropriate time according to the situation.”

That kind of language — and the concerns raised by state-owned enterprises and financial services firms flagging the effect carbon pricing could have on profitability and lending risk — shows that the government in Beijing is still putting more weight on the economic benefits than on the environmental costs of much of its industrial growth.

That said, a survey of market participants cited by ChinaDialogue indicated that prices are expected to start at 41 yuan (US$6.3) per ton of CO2 and rise to 66 yuan per ton in 2025. The price of carbon in China is expected to hit 77 yuan by 2030.

Meanwhile, a commission on carbon prices formed in 2017 and helmed by the economists Joseph Stiglitz and Nicholas Stern indicated that carbon needed to be priced at somewhere between $40 and $80 by 2020 and somewhere in the $50 to $100 range by 2030 if the markets and prices were to have any impact on behavior.

No nation has actually hit those price targets, although the European Union has come the closest — and seen the most reduction in greenhouse gas emissions as a result.
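A rough back-of-envelope comparison makes the gap concrete. Assuming an exchange rate of about 6.5 yuan per US dollar (consistent with the article's 41 yuan ≈ US$6.3), the surveyed Chinese prices can be set against the Stiglitz-Stern commission's target ranges:

```python
# Convert the surveyed Chinese carbon prices to USD and compare them against
# the Stiglitz-Stern commission's target ranges. The exchange rate is an
# assumption (~6.5 yuan per US dollar, implied by 41 yuan ≈ US$6.3).
YUAN_PER_USD = 6.5

expected_yuan = {2021: 41, 2025: 66, 2030: 77}   # survey cited by ChinaDialogue
target_usd = {2020: (40, 80), 2030: (50, 100)}   # commission's ranges

for year, yuan in expected_yuan.items():
    usd = yuan / YUAN_PER_USD
    print(f"{year}: {yuan} yuan/ton ≈ ${usd:.2f}/ton")

# Even the projected 2030 price (~$11.85/ton) sits well below the low end
# of the commission's $50-$100 range for that year.
```

Under that assumed rate, China's market would start nearly an order of magnitude below the prices the commission argued are needed to change behavior.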

Still, the plan from the Chinese government does include public reporting requirements for verified company-level emissions. And the existence of a market, if the government decides to put real prices in place and consequences for flouting the system, could be a huge boon for the monitoring and management equipment startups that are developing tech to track emissions.

As the analysts at ChinaDialogue note:

“The hardest part of carbon pricing is often getting it started. The moment that the Chinese government decides to increase ambition with the national ETS, it can. The mechanism is now in place, and it can be ramped up if the momentum and political will provided by President Xi’s climate ambition continues. In the coming years, this could see an absolute and decreasing cap, more sectors covered, more transparent data provision and more effective cross-government coordination. This is especially so with energy and industrial regulators who will need to see the ETS not as a threat to their turf, but as a measure with significant co-benefits for their own policy objectives.”

#articles, #beijing, #chemicals, #china, #chinese-communist-party, #donald-trump, #energy, #europe, #european-union, #greenhouse-gas-emissions, #oil-and-gas, #president, #shanghai, #shenzhen, #steel, #tc, #united-states, #xi

You can now give Facebook’s Oversight Board feedback on the decision to suspend Trump

Facebook’s “Supreme Court” is now accepting comments on one of its earliest and likely most consequential cases. The Facebook Oversight Board announced Friday that it would begin accepting public feedback on Facebook’s suspension of former President Trump.

Mark Zuckerberg announced Trump’s suspension on January 7, after the then-president of the United States incited his followers to riot at the nation’s Capitol, an event that resulted in a number of deaths and imperiled the peaceful transition of power.

In a post calling for feedback, the Oversight Board describes the two posts that led to Trump’s suspension. One is a version of the video the president shared the day of the Capitol riot in which he sympathizes with rioters and validates their claim that the “election was stolen from us.” In the second post, Trump reiterates those views, falsely bemoaning a “sacred landslide election victory” that was “unceremoniously & viciously stripped away.”

The board says the point of the public comment process is to incorporate “diverse perspectives” from third parties who wish to share research that might inform their decisions, though it seems a lot more likely the board will wind up with a tidal wave of subjective and probably not particularly useful political takes. Nonetheless, the comment process will be open for 10 days and comments will be collected in an appendix for each case. The board will issue a decision on Trump’s Facebook fate within 90 days of January 21, though the verdict could come sooner.

The Oversight Board specifically invites public comments that consider:

Whether Facebook’s decision to suspend President Trump’s accounts for an indefinite period complied with the company’s responsibilities to respect freedom of expression and human rights, if alternative measures should have been taken, and what measures should be taken for these accounts going forward.

How Facebook should assess off-Facebook context in enforcing its Community Standards, particularly where Facebook seeks to determine whether content may incite violence.

How Facebook should treat the expression of political candidates, office holders, and former office holders, considering their varying positions of power, the importance of political opposition, and the public’s right to information.

The accessibility of Facebook’s rules for account-level enforcement (e.g. disabling accounts or account functions) and appeals against that enforcement.

Considerations for the consistent global enforcement of Facebook’s content policies against political leaders, whether at the content-level (e.g. content removal) or account-level (e.g. disabling account functions), including the relevance of Facebook’s “newsworthiness” exemption and Facebook’s human rights responsibilities.

The Oversight Board’s post gets very granular on the Trump suspension, critiquing Facebook for lack of specificity when the company didn’t state exactly which part of its community standards were violated. Between this and the five recent cases, the board appears to view its role as a technical one, in which it examines each case against Facebook’s existing ruleset and then makes recommendations for future policy rather than working backward from its own broader recommendations.

The Facebook Oversight Board announced its first cluster of decisions this week, overturning the company’s own choice to remove potentially objectionable content in four of five cases. None of those cases pertained to content relevant to Trump’s account suspension, but they prove that the Oversight Board isn’t afraid to go against the company’s own thinking — at least when it comes to what gets taken down.


Facebook “Supreme Court” overrules company in 4 of its first 5 decisions

Facebook CEO Mark Zuckerberg in 2017. (credit: Mark Zuckerberg)

The Oversight Board, an independent organization set up to review Facebook’s content moderation decisions, handed down its first batch of rulings on Thursday. The rulings didn’t go well for Facebook’s moderators. Out of five substantive rulings, the board overturned four and upheld one.

Most technology platforms have unfettered discretion to remove content. By contrast, Facebook decided in 2019 to create an independent organization, the Oversight Board, to review Facebook decisions, much as the US Supreme Court reviews decisions by lower courts. The board has an independent funding stream, and its members can’t be fired by Facebook. Facebook has promised to follow the board’s decisions unless doing so would be against the law.

The board’s first batch of five substantive decisions (a sixth case became moot after the user deleted their post) illustrates the difficult challenge facing Facebook’s moderators:



Biden vows to electrify the federal government’s 600,000-vehicle fleet

President Joe Biden speaks before signing an executive order related to American manufacturing in the South Court Auditorium of the White House complex on January 25, 2021 in Washington, DC. (credit: Drew Angerer/Getty Images)

The federal government owns more than 600,000 civilian vehicles—trucks, vans, and passenger vehicles—with a large majority running on gasoline or diesel fuel. On Monday, Joe Biden vowed to change that.

“The federal government owns an enormous fleet of vehicles, which we’re going to replace with clean electric vehicles made right here in America,” Biden said at a press conference to announce a new “Buy American” initiative.

This won’t be easy. In 2019, the most recent year for which data is available, the federal government owned fewer than 3,000 battery electric vehicles—less than one half of one percent of the federal vehicle fleet.



Facebook calls in its Oversight Board to rule on Trump ban

Facebook’s Menlo Park, California, headquarters as seen in 2017. (credit: Jason Doiy | Getty Images)

Facebook’s Oversight Board is getting its highest-profile case yet, as the company kicks its decision to boot former President Donald Trump off its platforms to the largely untested “Supreme Court” of social media for review.

Facebook suspended Trump’s Facebook and Instagram accounts on January 7 in the immediate aftermath of the insurrectionist riots at the US Capitol. “The shocking events of the last 24 hours clearly demonstrate that President Donald Trump intends to use his remaining time in office to undermine the peaceful and lawful transition of power to his elected successor, Joe Biden,” company CEO Mark Zuckerberg said at the time. “We believe the risks of allowing the President to continue to use our service during this period are simply too great. Therefore, we are extending the block we have placed on his Facebook and Instagram accounts indefinitely and for at least the next two weeks until the peaceful transition of power is complete.”

Although that two-week period is now complete, Facebook COO Sheryl Sandberg confirmed to Reuters last week that the company expected to continue the bans indefinitely and had “no plans” to let Trump resume posting content to their platforms.



Daily Crunch: Trump pardons Anthony Levandowski

On his way out the door, President Donald Trump pardons a former Googler, Jack Ma reappears and Wattpad gets acquired. This is your Daily Crunch for January 20, 2021.

The big story: Trump pardons Anthony Levandowski

Although Donald Trump is no longer president of the United States as I write this, he still held the role on Tuesday evening, when he included former Googler Anthony Levandowski (who had been sentenced to 18 months in prison for stealing trade secrets) in his final set of 73 pardons. The pardon had been supported by Founders Fund’s Peter Thiel and Oculus founder Palmer Luckey, among others.

Today, of course, Joe Biden was inaugurated, prompting immediate changes to the White House website. And Amazon wasted no time offering the new president help with rolling out the COVID-19 vaccines.

Also, we have a piece from former White House deputy chief of staff Jim Messina about the relationship between the Biden administration and Silicon Valley.

The tech giants

Alibaba shares jump on Jack Ma’s first appearance in 3 months — Alibaba’s billionaire founder resurfaced as he spoke to 100 rural teachers through a video call.

Valve and five PC games publishers fined $9.4M for illegal geo-blocking — In addition to Valve, the five sanctioned games publishers are: Bandai Namco, Capcom, Focus Home, Koch Media and ZeniMax.

TikTok’s new Q&A feature lets creators respond to fan questions using text or video — The feature is currently only available to select creators who have opted into the test.

Startups, funding and venture capital

Wattpad, the storytelling platform, is selling to South Korea’s Naver for $600M — Naver plans to incorporate at least part of the business into another of its holdings, the publishing platform Webtoon.

SpaceX delivers 60 more Starlink satellites in first launch of 2021, and sets new Falcon 9 rocket reusability record — This puts the total Starlink constellation size at almost 1,000.

Landbot closes $8M Series A for its ‘no code’ chatbot builder — The startup has grown to around 2,200 paying customers.

Advice and analysis from Extra Crunch

Fintech startups and unicorns had a stellar Q4 2020 — The fourth quarter of 2020 was as busy as you imagined.

Dear Sophie: What are Biden’s immigration changes? — The latest edition of Dear Sophie, the advice column that answers immigration-related questions about working at technology companies.

Hello, Extra Crunch community! — Tell us what else you want from Extra Crunch.

(Extra Crunch is our membership program, which aims to democratize information about startups. You can sign up here.)

Everything else

MIT develops method for lab-grown plants that eventually lead to alternatives to forestry and farming — The process would be able to produce wood and fibre in a lab environment.

Reflections on the first all-virtual CES — We’ve long questioned whether in-person trade shows are a thing of the past. This year, we put it to the test.

Remote workers are greener, but their tech still has a real carbon cost — A new study examines tentative carbon costs on the connectivity and data infrastructure that make working from home possible.

The Daily Crunch is TechCrunch’s roundup of our biggest and most important stories. If you’d like to get this delivered to your inbox every day at around 3pm Pacific, you can subscribe here.


Twitch’s Trump ban sustained after leaving office

Photo illustration of the Twitch logo on a smartphone. (credit: Getty Images | Thomas Trutschel)

On Wednesday, an automated alert about Twitch account bans included a somewhat surprising account name: “@DonaldTrump.” The surprise came because Twitch had already “indefinitely suspended” the former president’s official Twitch channel on January 7 in the wake of his January 6 speech inciting a seditious riot at the US Capitol.

Following this Wednesday alert, Twitch confirmed to Ars Technica that this was no accident: Trump’s account is indeed outright banned. Twitch continues to call the ban an “indefinite suspension,” but it has not offered any timeline for its return or steps that its account holders (either Trump himself or any representatives) may take to reverse the decision. Wednesday’s news lines up with a Tuesday claim by DW News reporter Dana Regev, who had hinted at Twitch waiting until after President Joe Biden’s inauguration to make a firmer ruling on the previous ban.

The service has not taken the rare step of outlining the exact reason for the ban, a courtesy generally not afforded to those affected. This lack of clarity emerged in particular when Twitch offered no explanation for banning Guy “DrDisrespect” Beahm, even in the wake of his spreading COVID-19 misinformation.



Trump pardons engineer Google accused of stealing secrets for Uber

Anthony Levandowski exits federal court in San Jose, California, on August 27, 2019. (credit: David Paul Morris/Bloomberg via Getty Images)

On his final full day in office, Donald Trump pardoned Anthony Levandowski, the engineer at the center of Waymo’s epic 2017 trade secret battle with Uber. Last year, Levandowski pleaded guilty to stealing a single confidential Google document; prosecutors agreed to drop other pending charges against him.

Levandowski was a key early member of Google’s self-driving car project, but he quit Google in early 2016 to found his own self-driving startup. Within months, the startup was acquired by Uber for a nine-figure sum, and Levandowski was put in charge of Uber’s self-driving efforts.

But then, Google’s self-driving unit—now known as Waymo—accused Levandowski of stealing trade secrets. According to Waymo, Levandowski had downloaded thousands of confidential documents from Google in his final days as a Google employee. Waymo says it was tipped off to the theft after Uber submitted a design for a lidar circuit board to a third-party vendor—a vendor also used by Waymo. Uber’s design looked almost identical to Waymo’s.



Snapchat permanently bans President Trump’s account

Quite a bit has happened since Snap announced last week that it was indefinitely locking President Trump’s Snapchat account. But after temporary bans from his Facebook, Instagram and YouTube accounts as well as a permanent ban from Twitter, Snap has decided that it will also be making its ban of the President’s Snapchat account permanent.

Though Trump’s social media preferences as a user are clear, Snapchat gave the Trump campaign a particularly effective platform to target young users who are active on the service. A permanent ban will undoubtedly complicate his future business and political ambitions as he finds himself removed from most mainstream social platforms.

Snap says it made the decision in light of the President’s repeated attempts over the past several months to violate the company’s community guidelines.

“Last week we announced an indefinite suspension of President Trump’s Snapchat account, and have been assessing what long term action is in the best interest of our Snapchat community. In the interest of public safety, and based on his attempts to spread misinformation, hate speech, and incite violence, which are clear violations of our guidelines, we have made the decision to permanently terminate his account,” a Snap spokesperson told TechCrunch.

Snap’s decision to permanently ban the President was first reported by Axios.


Trump circumvents Twitter ban to decry ‘unprecedented assault on free speech’

Following a comprehensive ban from Twitter and a number of other online services following last week’s assault on the Capitol by his followers, President Trump managed to put out a tweet in the form of a video address touching on the “calamity at the Capitol”… and, of course, his deplatforming.

In the video, Trump instructs his followers to shun violence, calling it un-American. “No true supporter of mine could ever endorse political violence,” he said, days after calling rioters “great patriots” and telling them “we love you, you’re very special” as they despoiled the House and Senate.

He pivoted after a few minutes to the topic that, after his historic second impeachment, is almost certainly foremost on his mind: being banned from his chief instrument of governance, Twitter.

“I also want to say a few words about the unprecedented assault on free speech we have seen in recent days,” he said, although the bans and other actions are all due to documented breaches of the platforms’ rules. “The efforts to censor, cancel and blacklist our fellow citizens are wrong, and they are dangerous. What is needed now is for us to listen to one another, not to silence one another.”

After having his @realdonaldtrump handle suspended by Twitter, Trump attempted to sockpuppet a few other prominent accounts of allies, but was swiftly shut down. What everyone assumed must be plans to join Parler were scuttled along with the social network itself, which has warned it may be permanently taken offline after Amazon and other internet infrastructure companies refused to host it.

In case you’re wondering how Trump was able to slip this one past Twitter’s pretty decisive ban to begin with, we were curious too.

Twitter tells TechCrunch:

This Tweet is not in violation of the Twitter Rules. As we previously made clear, other official administration accounts, including @WhiteHouse, are permitted to Tweet as long as they do not demonstrably engage in ban evasion or share content that otherwise violates the Twitter Rules.

In other words, while Trump the person was banned, Trump the head of the Executive branch may still have some right, in the remaining week he holds the office, to utilize Twitter as a way of communicating matters of importance to the American people.

This gives a somewhat unfortunate impression of a power move, as Twitter has put itself in the position of determining what is a worthwhile transmission and what is a rabble-rousing incitement to violence. I’ve asked the company to clarify how it is determined whether what Trump does on this account is considered ban evasion.

Meanwhile, almost simultaneous with Trump’s surprise tweet, Twitter founder Jack Dorsey unloaded 13 tweets worth of thoughts about the situation:

I believe this was the right decision for Twitter. We faced an extraordinary and untenable circumstance, forcing us to focus all of our actions on public safety. Offline harm as a result of online speech is demonstrably real, and what drives our policy and enforcement above all.

That said, having to ban an account has real and significant ramifications. While there are clear and obvious exceptions, I feel a ban is a failure of ours ultimately to promote healthy conversation. And a time for us to reflect on our operations and the environment around us.

Jack neither reaches any real conclusions nor illuminates any new plans, but it’s clear he is thinking real hard about this. As he notes, however, it’ll take a lot of work to establish the “one humanity working together” he envisions as a sort of stretch goal for Twitter and the internet in general.
