The FDA should regulate Instagram’s algorithm as a drug

The Wall Street Journal on Tuesday reported Silicon Valley’s worst-kept secret: Instagram harms teens’ mental health. In fact, its impact is so negative that for some teens it introduces suicidal thoughts.

Thirty-two percent of teen girls who feel bad about their bodies report that Instagram makes them feel worse. Of teens with suicidal thoughts, 13% of British and 6% of American users trace those thoughts to Instagram, the WSJ report said. This is Facebook’s internal data. The truth is surely worse.

President Theodore Roosevelt and Congress formed the Food and Drug Administration in 1906 precisely because Big Food and Big Pharma failed to protect the general welfare. As its executives parade at the Met Gala in celebration of the unattainable 0.01% of lifestyles and bodies that we mere mortals will never achieve, Instagram’s unwillingness to do what is right is a clarion call for regulation: The FDA must assert its codified right to regulate the algorithm powering the drug of Instagram.

The FDA should consider algorithms a drug impacting our nation’s mental health: The Federal Food, Drug and Cosmetic Act gives the FDA the right to regulate drugs, defining drugs in part as “articles (other than food) intended to affect the structure or any function of the body of man or other animals.” Instagram’s internal data shows its technology is an article that alters our brains. If this effort fails, Congress and President Joe Biden should create a mental health FDA.

The public needs to understand what Facebook and Instagram’s algorithms prioritize. Our government is equipped to study clinical trials of products that can physically harm the public. Researchers can study what Facebook privileges and the impact those decisions have on our minds. How do we know this? Because Facebook is already doing it — they’re just burying the results.

In November 2020, as Cecilia Kang and Sheera Frenkel report in “An Ugly Truth,” Facebook made an emergency change to its News Feed, putting more emphasis on “News Ecosystem Quality” scores (NEQs). High NEQ sources were trustworthy sources; low were untrustworthy. Facebook altered the algorithm to privilege high NEQ scores. As a result, for five days around the election, users saw a “nicer News Feed” with less fake news and fewer conspiracy theories. But Mark Zuckerberg reversed this change because it led to less engagement and could cause a conservative backlash. The public suffered for it.

Facebook likewise has studied what happens when the algorithm privileges content that is “good for the world” over content that is “bad for the world.” Lo and behold, engagement decreases. Facebook knows that its algorithm has a remarkable impact on the minds of the American public. How can the government let one man decide the standard based on his business imperatives, not the general welfare?

Upton Sinclair memorably uncovered dangerous abuses in “The Jungle,” which led to a public outcry. The free market failed. Consumers needed protection. The 1906 Pure Food and Drug Act for the first time promulgated safety standards, regulating consumable goods impacting our physical health. Today, we need to regulate the algorithms that impact our mental health. Teen depression has risen alarmingly since 2007. Likewise, suicide among those 10 to 24 rose nearly 60% between 2007 and 2018.

It is of course impossible to prove that social media is solely responsible for this increase, but it is absurd to argue it has not contributed. Filter bubbles distort our views and make them more extreme. Bullying online is easier and constant. Regulators must audit the algorithm and question Facebook’s choices.

When it comes to the biggest issue Facebook poses — what the product does to us — regulators have struggled to articulate the problem. Section 230 is correct in its intent and application; the internet cannot function if platforms are liable for every user utterance. And a private company like Facebook loses the trust of its community if it applies arbitrary rules that target users based on their background or political beliefs. Facebook as a company has no explicit duty to uphold the First Amendment, but public perception of its fairness is essential to the brand.

Thus, Zuckerberg has equivocated over the years before belatedly banning Holocaust deniers, Donald Trump, anti-vaccine activists and other bad actors. Deciding what speech is privileged or allowed on its platform, Facebook will always be too slow to react, overcautious and ineffective. Zuckerberg cares only for engagement and growth. Our hearts and minds are caught in the balance.

The most frightening part of “An Ugly Truth,” the passage that got everyone in Silicon Valley talking, was the memo that gave the book its name: Andrew “Boz” Bosworth’s 2016 “The Ugly.”

In the memo, Bosworth, Zuckerberg’s longtime deputy, writes:

So we connect more people. That can be bad if they make it negative. Maybe it costs someone a life by exposing someone to bullies. Maybe someone dies in a terrorist attack coordinated on our tools. And still we connect people. The ugly truth is that we believe in connecting people so deeply that anything that allows us to connect more people more often is de facto good.

Zuckerberg and Sheryl Sandberg made Bosworth walk back his statements when employees objected, but to outsiders, the memo represents the unvarnished id of Facebook, the ugly truth. Facebook’s monopoly, its stranglehold on our social and political fabric, its growth-at-all-costs mantra of “connection,” is not de facto good. As Bosworth acknowledges, Facebook causes suicides and allows terrorists to organize. This much power concentrated in the hands of one corporation, run by one man, is a threat to our democracy and way of life.

Critics of FDA regulation of social media will claim this is a Big Brother invasion of our personal liberties. But what is the alternative? Why would it be bad for our government to demand that Facebook account to the public for its internal calculations? Is it safe for the number of sessions, time spent and revenue growth to be the only results that matter? What about the collective mental health of the country and the world?

Refusing to study the problem does not mean it does not exist. In the absence of action, we are left with a single man deciding what is right. What is the price we pay for “connection”? This is not up to Zuckerberg. The FDA should decide.


Biden nominates another Big Tech enemy, this time to lead the DOJ’s antitrust division

The Biden administration tripled down on its commitment to reining in powerful tech companies Tuesday, nominating committed Big Tech critic Jonathan Kanter to lead the Justice Department’s antitrust division.

Kanter is a lawyer with a long track record of representing smaller companies like Yelp in antitrust cases against Google. He currently practices law at his own firm, which specializes in advocacy for state and federal antitrust enforcement.

“Throughout his career, Kanter has also been a leading advocate and expert in the effort to promote strong and meaningful antitrust enforcement and competition policy,” the White House press release stated. Progressives celebrated the nomination as a win, though some of Biden’s new antitrust hawks have enjoyed support from both political parties.

The Justice Department already has a major antitrust suit against Google in the works. The lawsuit, filed by Trump’s own Justice Department, accuses the company of “unlawfully maintaining monopolies” through anti-competitive practices in its search and search advertising businesses. If confirmed, Kanter would be positioned to steer the DOJ’s big case against Google.

In a 2016 NYT op-ed, Kanter argued that Google is notorious for relying on an anti-competitive “playbook” to maintain its market dominance. Kanter pointed to Google’s long history of releasing free ad-supported products and eventually restricting competition through “discriminatory and exclusionary practices” in a given corner of the market.

Kanter is just the latest high-profile Big Tech critic to be elevated to a major regulatory role under Biden. Last month, Biden named fierce Amazon critic Lina Khan as FTC chair upon her confirmation to the agency. In March, Biden named another noted Big Tech critic, Columbia law professor Tim Wu, to the National Economic Council as a special assistant for tech and competition policy.

All signs point to the Biden White House gearing up for a major federal fight with Big Tech. Congress is working on a set of Big Tech bills, but in lieu of — or in tandem with — legislative reform, the White House can flex its own regulatory muscle through the FTC and DOJ.

In new comments to MSNBC, the White House confirmed that it is also “reviewing” Section 230 of the Communications Decency Act, a potent snippet of law that protects platforms from liability for user-generated content.


Trump sues Twitter and Facebook for banning him, claims “trillions” in damages

A mob of Trump supporters stormed and breached the US Capitol on January 6, 2021. Trump’s incitement of the mob led to his bans from major social networks. (credit: Getty Images | The Washington Post)

Former President Donald Trump today sued Twitter, Facebook, Google subsidiary YouTube, and their CEOs, claiming that all three companies are guilty of “impermissible censorship” that violates “the First Amendment right to free speech.”

Trump’s lawsuits are almost certainly doomed. The First Amendment does not require private companies to host speech—the Constitutional amendment only imposes limits on how the government can restrict speech. In addition to the First Amendment, US law gives online platforms immunity from lawsuits over how they moderate user-submitted content. The law does so via Section 230 of the Communications Decency Act of 1996.

Despite those two titanic legal barriers, Trump’s lawsuits seek reinstatement of his social media accounts along with financial damages from the companies and from their chief executives, namely Twitter CEO Jack Dorsey, Facebook CEO Mark Zuckerberg, and Google CEO Sundar Pichai. Trump’s lawsuits seek class-action status with him as the lead plaintiff, and they claim the CEOs are liable for damages because they are “personally responsible” for their companies’ “unconstitutional censorship” of Trump and other users.



Trump’s new lawsuits against social media companies are going nowhere fast

Trump’s spicy trio of lawsuits against the social media platforms that he believes wrongfully banned him have succeeded in showering the former president with a flurry of media attention, but that’s likely where the story ends.

Like Trump’s quixotic and ultimately empty quest to gut Section 230 of the Communications Decency Act during his presidency, the new lawsuits are all sound and fury with little legal substance to back them up.

The suits allege that Twitter, Facebook and YouTube violated Trump’s First Amendment rights by booting him from their platforms, but the First Amendment is intended to protect citizens from censorship by the government — not private industry. The irony that Trump himself was the uppermost figure in the federal government at the time probably won’t be lost on whoever’s lap this case lands in.

In the lawsuits, which also name Twitter and Facebook chief executives Jack Dorsey and Mark Zuckerberg as well as Google CEO Sundar Pichai (Susan Wojcicki escapes notice once again!), Trump accuses the three companies of engaging in “impermissible censorship resulting from threatened legislative action, a misguided reliance upon Section 230 of the Communications Decency Act, and willful participation in joint activity with federal actors.”

The suit claims that the tech companies colluded with “Democrat lawmakers,” the CDC and Dr. Anthony Fauci, who served in Trump’s own government at the time.

The crux of the argument is that communication between the tech companies, members of Congress and the federal government somehow transforms Facebook, Twitter and YouTube into “state actors” — a leap of epic proportions:

“Defendant Twitter’s status thus rises beyond that of a private company to that of a state actor, and as such, Defendant is constrained by the First Amendment right to free speech in the censorship decisions it makes.”

Trump’s own Supreme Court appointee Brett Kavanaugh issued the court’s opinion on a relevant case two years ago. It examined whether a nonprofit running public access television channels in New York qualified as a “state actor” that would be subject to First Amendment constraints. The court ruled that running the public access channels didn’t transform the nonprofit into a government entity and that it retained a private entity’s rights to make editorial decisions.

“… A private entity… who opens its property for speech by others is not transformed by that fact alone into a state actor,” Justice Kavanaugh wrote in the decision.

It’s not likely that a court would decide that talking to the government or being threatened by the government somehow transforms Twitter, YouTube and Facebook into state actors either.

Trump vs. Section 230 (again)

First Amendment aside — and there’s really not much of an argument there — social media platforms are protected by Section 230 of the Communications Decency Act, a concise snippet of law that shields them from liability not just for the user-generated content they host but for the moderation decisions they make about what content to remove.

In line with Trump’s obsessive disdain for tech’s legal shield, the lawsuits repeatedly rail against Section 230. The suits try to argue that because Congress threatened to revoke tech’s Section 230 protections, the platforms were forced to ban Trump — which somehow makes social media companies part of the government and subject to First Amendment constraints.

Of course, it was Republican lawmakers and Trump’s own administration that made the most frequent threats to repeal Section 230 — not that it changes anything, because this line of argument doesn’t make much sense anyway.

The suit also argues that Congress crafted Section 230 to intentionally censor speech that is otherwise protected by the First Amendment, ignoring that the law was born in 1996, well before ubiquitous social media, and for other purposes altogether.

For the four years of his presidency, Trump’s social media activity — his tweets in particular — informed the events of the day, both nationally and globally. While other world leaders and political figures used social media to communicate or promote their actions, Trump’s Twitter account was usually the action itself.

In the shadow of his social media bans, the former president has failed to re-establish lines of communication to the internet at large. In May, he launched a new blog, “From the Desk of Donald J. Trump,” but the site was taken down just a month later after it failed to attract much interest.

The handful of pro-Trump alternative social platforms are still struggling with app store content moderation requirements at odds with their extreme views on free speech, but that didn’t stop Gettr, the latest, from going ahead with its own rocky launch last week.

Viewed in one light, Trump’s lawsuits are a platform too, his latest method for broadcasting himself to the online world that his transgressions eventually cut him off from. In that sense, they seem to have succeeded, but in all other senses, they won’t.


Trump is suing Twitter, Facebook and Google over censorship claims

In his first press event since leaving office earlier this year, former President Donald Trump announced that he would be launching a volley of class action lawsuits against Twitter, Facebook and Google and their CEOs, claiming that the three companies violated his First Amendment rights.

“We’re demanding an end to the shadow-banning, a stop to the silencing and a stop to the blacklisting, banishing and canceling that you know so well,” Trump said at the press conference, held at his Bedminster, New Jersey golf club.

Following the January 6 attack on the Capitol, social media platforms swiftly revoked then President Trump’s posting privileges. For years, Trump tested the boundaries of platforms’ policies around misinformation and even violent threats, but his role in the events of that day crossed a line. Trump soon found himself without a megaphone with which to reach his many millions of followers across Twitter, Facebook and YouTube.

Trump’s fate on Twitter is known: the former president faces a lifetime ban there. But on Facebook and YouTube, there’s a possibility that his accounts could be restored. Facebook is currently deliberating that decision in a back-and-forth exchange with its new external policymaking body, the Facebook Oversight Board.

Trump will be the lead plaintiff in the suits, which are being filed in the U.S. District Court for the Southern District of Florida. The lawsuits seek “compensatory and punitive damages” and the restoration of Trump’s social media accounts.


Florida’s ban on bans will test First Amendment rights of social media companies

Florida governor Ron DeSantis has signed into law a restriction on social media companies’ ability to ban candidates for state offices and news outlets, and in doing so offered a direct challenge to those companies’ perceived free speech rights. The law is almost certain to be challenged in court as both unconstitutional and in direct conflict with federal rules.

The law, Florida Senate Bill 7072, provides several new checks on tech and social media companies. Among other things:

  • Platforms cannot ban or deprioritize candidates for state office
  • Platforms cannot ban or deprioritize any news outlet meeting certain size requirements
  • Platforms must be transparent about moderation processes and give users notice of moderation actions
  • Users and the state will have the right to sue companies that violate the law

The law establishes rules affecting these companies’ moderation practices; that much is clear. But whether doing so amounts to censorship — actual government censorship, not the general concept of limitation frequently associated with the word — is an open question, if a somewhat obvious one, that will likely be forced by legal action against SB 7072.

While there is a great deal of circumstantial precedent and analysis, the problem of “are moderation practices of social media companies protected by the First Amendment” is as yet unsettled. Legal scholars and existing cases fall strongly on the side of “yes,” but there is no single definitive precedent that Facebook or Twitter can point to.

The First Amendment argument starts with the idea that although social media are very unlike newspapers or book publishers, they are protected in much the same way by the Constitution from government interference. “Free speech” is a term that is interpreted extremely liberally, but if a company spending money is considered a protected expression of ideas, it’s not a stretch to suggest that same company applying a policy of hosting or not hosting content should be as well. If it is, then the government is prohibited from interfering with it beyond very narrow definitions of unprotected speech (think shouting “fire” in a crowded theater). That would sink Florida’s law on constitutional grounds.

The other conflict is with federal law, specifically the much-discussed Section 230, which protects companies from being liable for content they publish (i.e. the creator is responsible instead), and also for the choice to take down content via rules of their own choice. As the law’s co-author Senator Ron Wyden (D-OR) has put it, this gives those companies both a shield and a sword with which to do battle against risky speech on their platforms.

But SB 7072 removes both sword and shield: it would limit who can be moderated, and it also creates a novel cause of action against the companies for their remaining moderation practices.

Federal and state law are often in disagreement, and there is no handbook for how to reconcile them. On one hand, witness raids of state-legalized marijuana shops and farms by federal authorities. On the other, observe how strong consumer protection laws at the state level aren’t preempted by weaker federal ones because to do so would put people at risk.

On the matter of Section 230 it’s not straightforward who is protecting whom. Florida’s current state government claims that it is protecting “real Floridians” against the “Silicon Valley elites.” But no doubt those elites (and let us be candid — that is exactly what they are) will point out that in fact this is a clear-cut case of government overreach, censorship in the literal sense.

These strong legal objections will inform the inevitable lawsuits by the companies affected, which will probably be filed ahead of the law taking effect and aim to have it overturned.

Interestingly, two companies that will not be affected by the law are two of the biggest, most uncompromising corporations in the world: Disney and Comcast. Why, you ask? Because the law has a special exemption for any company “that owns and operates a theme park or entertainment complex” of a certain size.

That’s right, there’s a Mouse-shaped hole in this law — and Comcast, which owns Universal Studios, just happens to fit through as well. Notably this was added in an amendment, suggesting two of the largest employers in the state were unhappy at the idea of new liabilities for any of their digital properties.

This naked pandering to local corporate donors puts proponents of this law at something of an ethical disadvantage in their righteous battle against the elites, but the carve-out may be moot in a few months’ time when the legal challenges, probably being drafted at this moment, call for an injunction against SB 7072.


Appeals court allows parents to sue Snap over 100mph car crash


A California federal appeals court has denied legal immunity to Snap over the 2017 deaths of two teenagers and a 20-year-old whose car crashed into a tree at 113 miles per hour (180 km/h). Parents of two of the boys sued Snap, arguing that Snapchat’s “Speed Filter” encouraged the boys to accelerate their car to more than 100 miles per hour.

The Snapchat Speed Filter in action. (credit: 9th Circuit opinion)

Last year, Snap convinced a federal trial judge that Section 230 of the Communications Decency Act shielded Snap from liability in the case. The once-obscure 1996 law has become a frequent source of controversy as technology giants have used it to disclaim responsibility for harmful content on their platforms.

Snap, maker of the popular Snapchat messaging app, argued that the law gave it immunity in the boys’ death. Snapchat pioneered the concept of image filters that has been widely copied by other apps. In 2017, Snapchat’s offerings included a Speed Filter that displayed a user’s current speed—either on its own or superimposed on the user’s photo. Users could use this filter to show their friends how fast they were moving.



At social media hearing, lawmakers circle algorithm-focused Section 230 reform

Rather than a CEO-slamming sound bite free-for-all, Tuesday’s big tech hearing on algorithms aimed for more of a listening session vibe — and in that sense it mostly succeeded.

The hearing centered on testimony from the policy leads at Facebook, YouTube and Twitter rather than the chief executives of those companies for a change. The resulting few hours didn’t offer any massive revelations but were still probably more productive than squeezing some of the world’s most powerful men for their commitments to “get back to you on that.”

In the hearing, lawmakers bemoaned social media echo chambers and the ways that the algorithms pumping content through platforms are capable of completely reshaping human behavior.

“… This advanced technology is harnessed into algorithms designed to attract our time and attention on social media, and the results can be harmful to our kids’ attention spans, to the quality of our public discourse, to our public health, and even to our democracy itself,” said Chris Coons (D-DE), chair of the Senate Judiciary’s subcommittee on privacy and tech, which held the hearing.

Coons struck a cooperative note, observing that algorithms drive innovation but that their dark side comes with considerable costs.

None of this is new, of course. But Congress is crawling closer to solutions, one repetitive tech hearing at a time. The Tuesday hearing highlighted some zones of bipartisan agreement that could determine the chances of a tech reform bill passing the Senate, which is narrowly controlled by Democrats. Coons expressed optimism that a “broadly bipartisan solution” could be reached.

What would that look like? Probably changes to Section 230 of the Communications Decency Act, which we’ve written about extensively over the years. That law protects social media companies from liability for user-created content and it’s been a major nexus of tech regulation talk, both in the newly Democratic Senate under Biden and the previous Republican-led Senate that took its cues from Trump.

Lauren Culbertson, head of U.S. public policy at Twitter Inc., speaks remotely during a Senate Judiciary Subcommittee hearing in Washington, D.C., on Tuesday, April 27, 2021. (Photographer: Al Drago/Bloomberg via Getty Images)

A broken business model

In the hearing, lawmakers pointed to flaws inherent to how major social media companies make money as the heart of the problem. Rather than criticizing companies for specific failings, they mostly focused on the core business model from which social media’s many ills spring forth.

“I think it’s very important for us to push back on the idea that really complicated, qualitative problems have easy quantitative solutions,” Sen. Ben Sasse (R-NE) said. He argued that because social media companies make money by keeping users hooked to their products, any real solution would have to upend that business model altogether.

“The business model of these companies is addiction,” Josh Hawley (R-MO) echoed, calling social media an “attention treadmill” by design.

Ex-Googler and frequent tech critic Tristan Harris didn’t mince words about how tech companies talk around that central design tenet in his own testimony. “It’s almost like listening to a hostage in a hostage video,” Harris said, likening the engagement-seeking business model to a gun just offstage.

Spotlight on Section 230

One big way lawmakers propose to disrupt those deeply entrenched incentives? Adding algorithm-focused exceptions to the Section 230 protections that social media companies enjoy. A few bills floating around take that approach.

One bill from Sen. John Kennedy (R-LA) and Reps. Paul Gosar (R-AZ) and Tulsi Gabbard (D-HI) would require platforms with 10 million or more users to obtain consent before serving users content based on their behavior or demographic data if they want to keep Section 230 protections. The idea is to revoke 230 immunity from platforms that boost engagement by “funneling information to users that polarizes their views” unless a user specifically opts in.

In another bill, the Protecting Americans from Dangerous Algorithms Act, Reps. Anna Eshoo (D-CA) and Tom Malinowski (D-NJ) propose suspending Section 230 protections and making companies liable “if their algorithms amplify misinformation that leads to offline violence.” That bill would amend Section 230 to reference existing civil rights laws.

Section 230’s defenders argue that any insufficiently targeted changes to the law could disrupt the modern internet as we know it, resulting in cascading negative impacts well beyond the intended scope of reform efforts. An outright repeal of the law is almost certainly off the table, but even small tweaks could completely realign internet businesses, for better or worse.

During the hearing, Hawley made a broader suggestion for companies that use algorithms to chase profits. “Why shouldn’t we just remove section 230 protection from any platform that engages in behavioral advertising or algorithmic amplification?” he asked, adding that he wasn’t opposed to an outright repeal of the law.

Sen. Amy Klobuchar (D-MN), who leads the Senate’s antitrust subcommittee, connected the algorithmic concerns to anti-competitive behavior in the tech industry. “If you have a company that buys out everyone from under them… we’re never going to know if they could have developed the bells and whistles to help us with misinformation because there is no competition,” Klobuchar said.

Subcommittee members Klobuchar and Sen. Mazie Hirono (D-HI) have their own major Section 230 reform bill, the Safe Tech Act, but that legislation is less concerned with algorithms than ads and paid content.

At least one more major bill looking at Section 230 through the lens of algorithms is still on the way. Prominent big tech critic House Rep. David Cicilline (D-RI) is due out soon with a Section 230 bill that could suspend liability protections for companies that rely on algorithms to boost engagement and line their pockets.

“That’s a very complicated algorithm that is designed to maximize engagement to drive up advertising prices to produce greater profits for the company,” Cicilline told Axios last month. “…That’s a set of business decisions for which, it might be quite easy to argue, that a company should be liable for.”


Clarence Thomas blasts Section 230, wants “common-carrier” rules on Twitter

Supreme Court Justice Clarence Thomas arrives for the swearing-in of Justice Brett Kavanaugh in the East Room of the White House on October 8, 2018, in Washington, DC. (credit: Getty Images | Chip Somodevilla)

The US Supreme Court today vacated a 2019 appeals-court ruling that said then-President Donald Trump violated the First Amendment by blocking people on Twitter. The high court declared the case “moot” because Trump is no longer president.

For legal observers, the ruling itself was less interesting than a 12-page concurring opinion filed by Justice Clarence Thomas, who argued that Twitter and similar companies could face some First Amendment restrictions even though they are not government agencies. That’s in contrast to the standard view that the First Amendment’s free speech clause does not prohibit private companies from restricting speech on their platforms.

Thomas also criticized the Section 230 legal protections given to online platforms and argued that free-speech law shouldn’t necessarily prevent lawmakers from regulating those platforms as common carriers. He wrote that “regulation restricting a digital platform’s right to exclude [content] might not appreciably impede the platform from speaking.”


#clarence-thomas, #first-amendment, #policy, #section-230, #twitter

Clarence Thomas plays a poor devil’s advocate in floating First Amendment limits for tech companies

Supreme Court Justice Clarence Thomas flaunted a dangerous ignorance regarding matters digital in an opinion published today. In attempting to explain the legal difficulties of social media platforms, particularly those arising from Twitter’s ban of Trump, he makes an ill-informed, bordering on bizarre, argument as to why such companies may need their First Amendment rights curtailed.

There are several points on which Thomas seems to willfully misconstrue or misunderstand the issues.

The first is in his characterization of Trump’s use of Twitter. You may remember that several people sued after being blocked by Trump, alleging that his use of the platform amounted to creating a “public forum” in a legal sense, meaning it was unlawful to exclude anyone from it for political reasons. (The case, as it happens, was rendered moot after its appeal and dismissed by the court, serving in the end only as Thomas’s temporary soapbox.)

“But Mr. Trump, it turned out, had only limited control of the account; Twitter has permanently removed the account from the platform,” writes Thomas. “[I]t seems rather odd to say something is a government forum when a private company has unrestricted authority to do away with it.”

Does it? Does it seem odd? Because a few paragraphs later, he uses the example of a government agency using a conference room in a hotel to hold a public hearing. They can’t kick people out for voicing their political opinions, certainly, because the room is a de facto public forum. But if someone is loud and disruptive, they can ask hotel security to remove that person, because the room is de jure a privately owned space.

Yet the obvious third example, and the one clearly most relevant to the situation at hand, is skipped. What if it is the government representatives who are being loud and disruptive, to the point where the hotel must make the choice whether to remove them?

It says something that this scenario, so remarkably close a metaphor for what actually happened, is not considered. Perhaps it casts the ostensibly “odd” situation and actors in too clear a light, for Thomas’s other arguments suggest he is not for clarity here but for muddying the waters ahead of a partisan knife fight over free speech.

In his best “I’m not saying, I’m just saying” tone, Thomas presents his reasoning why, if the problem is that these platforms have too much power over free speech, then historically there just happen to be some legal options to limit that power.

Thomas argues first, and worst, that platforms like Facebook and Google may amount to “common carriers,” a term that goes back centuries to actual carriers of cargo, but which is now a common legal concept that refers to services that act as simple distribution – “bound to serve all customers alike, without discrimination.” A telephone company is the most common example, in that it cannot and does not choose what connections it makes, nor what conversations happen over those connections – it moves electric signals from one phone to another.

But as he notes at the outset of his commentary, “applying old doctrines to new digital platforms is rarely straightforward.” And Thomas’s method of doing so is spurious.

“Though digital instead of physical, they are at bottom communications networks, and they ‘carry’ information from one user to another,” he says, and equates telephone companies laying cable with companies like Google laying “information infrastructure that can be controlled in much the same way.”

Now, this is certainly wrong. So wrong in so many ways that it’s hard to know where to start and when to stop.

The idea that companies like Facebook and Google are equivalent to telephone lines is such a reach that it seems almost like a joke. These are companies that have built entire business empires by adding enormous amounts of storage, processing, analysis, and other services on top of the element of pure communication. One might as easily argue that because a computer is just a piece of hardware that moves data around, Apple is a common carrier as well. It’s really not so far a logical leap!

There’s no real need to get into the technical and legal reasons why this opinion is wrong, however, because these grounds have been covered so extensively over the years, particularly by the FCC — which the Supreme Court has deferred to as an expert agency on this matter. If Facebook were a common carrier (or telecommunications service), it would fall under the FCC’s jurisdiction — but it doesn’t, because it isn’t, and really, no one thinks it is. This has been supported over and over, by multiple FCCs and administrations, and the deferral is itself a Supreme Court precedent that has become doctrine.

In fact, and this is really the cherry on top, freshman Justice Kavanaugh in a truly stupefying legal opinion a few years ago argued so far in the other direction that it became wrong in a totally different way! It was Kavanaugh’s considered opinion that the bar for qualifying as a common carrier was actually so high that even broadband providers don’t qualify (this was all in service of taking down net neutrality, a saga we are in danger of resuming soon). As his erudite colleague Judge Srinivasan explained to him at the time, that approach too is embarrassingly wrong.

Looking at these two opinions, of two sitting conservative Supreme Court Justices, you may find the arguments strangely at odds, yet they are wrong after a common fashion.

Kavanaugh claims that broadband providers, the plainest form of digital common carrier conceivable, are in fact providing all kinds of sophisticated services over and above their functionality as a pipe (they aren’t). Thomas claims that companies actually providing all kinds of sophisticated services are nothing more than pipes.

Simply stated, these men have no regard for the facts but have chosen the definition that best suits their political purposes: for Kavanaugh, thwarting a Democrat-led push for strong net neutrality rules; for Thomas, asserting control over social media companies perceived as having an anti-conservative bias.

The case Thomas uses for his sounding board on these topics was rightly rendered moot — Trump is no longer president and the account no longer exists — but he makes it clear that he regrets this extremely.

“As Twitter made clear, the right to cut off speech lies most powerfully in the hands of private digital platforms,” he concludes. “The extent to which that power matters for purposes of the First Amendment and the extent to which that power could lawfully be modified raise interesting and important questions. This petition, unfortunately, affords us no opportunity to confront them.”

Between the common carrier argument and his questioning of Section 230’s protections, Thomas’s hypotheticals break the seals on several legal avenues for restricting the First Amendment rights of digital platforms, as well as legitimizing those (largely on one side of the political spectrum) who claim a grievance along these lines. (Slate legal commentator Mark Joseph Stern, who spotted the opinion early, goes further, calling Thomas’s argument a “paranoid Marxist delusion” and providing some other interesting context.)

This is not to say that social media and tech do not deserve scrutiny on any number of fronts — they exist in an alarming global vacuum of regulatory powers, and hardly anyone would suggest they have been entirely responsible with this freedom. But the arguments of Thomas and Kavanaugh stink of cynical partisan sophistry. This endorsement by Thomas accomplishes nothing legally, but it will provide valuable fuel for the bitter fires of contention — though they hardly needed it.

#clarence-thomas, #donald-trump, #facebook, #first-amendment, #google, #government, #lawsuit, #opinion, #section-230, #social-media, #supreme-court, #tc, #trump

Misinformation Isn’t Just on Facebook and Twitter

Broadcast television and talk radio are just as problematic as social media.

#antitrust-laws-and-competition-issues, #computers-and-the-internet, #fairness-doctrine, #federal-communications-commission, #freedom-of-speech-and-expression, #law-and-legislation, #news-and-news-media, #regulation-and-deregulation-of-industry, #rumors-and-misinformation, #section-230, #social-media, #television, #united-states

Big Tech companies cannot be trusted to self-regulate: We need Congress to act

It’s been two months since Donald Trump was kicked off of social media following the violent insurrection on Capitol Hill in January. While the constant barrage of hate-fueled commentary and disinformation from the former president has come to a halt, we must stay vigilant.

Now is the time to think about how to prevent Trump, his allies and other bad actors from fomenting extremism in the future. It’s time to figure out how we as a society address the misinformation, conspiracy theories and lies that threaten our democracy by destroying our information infrastructure.

As vice president at Color Of Change, my team and I have had countless meetings with leaders of multi-billion-dollar tech companies like Facebook, Twitter and Google, where we had to consistently flag hateful, racist content and disinformation on their platforms. We’ve also raised demands supported by millions of our members to adequately address these systemic issues — calls that are too often met with a lack of urgency and sense of responsibility to keep users and Black communities safe.

The violent insurrection by white nationalists and far-right extremists in our nation’s capital was absolutely fueled and enabled by tech companies who had years to address hate speech and disinformation that proliferated on their social media platforms. Many social media companies relinquished their platforms to far-right extremists, white supremacists and domestic terrorists long ago, and it will take more than an attempted coup to hold them fully accountable for their complicity in the erosion of our democracy — and to ensure it can’t happen again.

To restore our systems of knowledge-sharing and eliminate white nationalist organizing online, Big Tech must move beyond its typical reactive and shallow approach to addressing the harm they cause to our communities and our democracy. But it’s more clear than ever that the federal government must step in to ensure tech giants act.

After six years leading corporate accountability campaigns and engaging with Big Tech leaders, I can definitively say it’s evident that social media companies do have the power, resources and tools to enforce policies that protect our democracy and our communities. However, leaders at these tech giants have demonstrated time and time again that they will choose not to implement and enforce adequate measures to stem the dangerous misinformation, targeted hate and white nationalist organizing on their platforms if it means sacrificing maximum profit and growth.

And they use their massive PR teams to create an illusion that they’re sufficiently addressing these issues. For example, social media companies like Facebook continue to follow a reactive formula of announcing disparate policy changes in response to whatever public relations disaster they’re fending off at the moment. Before the insurrection, the company’s leaders failed to heed the warnings of advocates like Color Of Change about the dangers of white supremacists, far-right conspiracists and racist militias using their platforms to organize, recruit and incite violence. They did not ban Trump, implement stronger content moderation policies or change algorithms to stop the spread of misinformation-superspreader Facebook groups — as we had been recommending for years.

These threats were apparent long before the attack on Capitol Hill. They were obvious as Color Of Change and our allies propelled the #StopHateForProfit campaign last summer, when over 1,000 advertisers pulled millions in ad revenues from the platform. They were obvious when Facebook finally agreed to conduct a civil rights audit in 2018 after pressure from our organization and our members. They were obvious even before the deadly white nationalist demonstration in Charlottesville in 2017.

Only after significant damage had already been done did social media companies take action and concede to some of our most pressing demands, including the call to ban Trump’s accounts, implement disclaimers on voter fraud claims, and move aggressively to remove COVID misinformation as well as posts inciting violence at the polls amid the 2020 election. But even now, these companies continue to shirk full responsibility by, for example, using self-created entities like the Facebook Oversight Board — an illegitimate substitute for adequate policy enforcement — as PR cover while the fate of recent decisions, such as the suspension of Trump’s account, hangs in the balance.

Facebook, Twitter, YouTube and many other Big Tech companies kick into action when their profits, self-interests and reputation are threatened, but always after the damage has been done because their business models are built solely around maximizing engagement. The more polarized content is, the more engagement it gets; the more comments it elicits or times it’s shared, the more of our attention they command and can sell to advertisers. Big Tech leaders have demonstrated they neither have the willpower nor the ability to proactively and successfully self-regulate, and that’s why Congress must immediately intervene.

Congress should enact and enforce federal regulations to rein in the outsized power of Big Tech behemoths, and our lawmakers must create policies that translate to real-life changes in our everyday lives — policies that protect Black and other marginalized communities both online and offline.

We need stronger antitrust enforcement to break up Big Tech monopolies that evade corporate accountability and harm Black businesses and workers; comprehensive privacy and algorithmic discrimination legislation to ensure that profits from our data aren’t being used to fuel our exploitation; expanded broadband access to close the digital divide for Black and low-income communities; restored net neutrality so that internet service providers can’t charge differently based on content or equipment; and reform of disinformation and content moderation rules that makes clear Section 230 does not exempt platforms from complying with civil rights laws.

We’ve already seen some progress following pressure from activists and advocacy groups including Color Of Change. Last year alone, Big Tech companies like Zoom hired chief diversity experts; Google took action to block the Proud Boys website and online store; and major social media platforms like TikTok adopted better, stronger policies on banning hateful content.

But we’re not going to applaud billion-dollar tech companies for doing what they should and could have already done to address the years of misinformation, hate and violence fueled by social media platforms. We’re not going to wait for the next PR stunt or blanket statement to come out or until Facebook decides whether or not to reinstate Trump’s accounts — and we’re not going to stand idly by until more lives are lost.

The federal government and regulatory powers need to hold Big Tech accountable to their commitments by immediately enacting policy change. Our nation’s leaders have a responsibility to protect us from the harms Big Tech is enabling on our democracy and our communities — to regulate social media platforms and change the dangerous incentives in the digital economy. Without federal intervention, tech companies are on pace to repeat history.

#column, #congress, #disinformation, #misinformation, #opinion, #policy, #section-230, #social, #social-media, #social-media-platforms, #tc

Twitter sues Texas AG to stop “retaliatory” content-moderation probe

(credit: Thomas Trutschel / Getty Images)

Twitter is suing Texas Attorney General Ken Paxton, alleging that a probe Paxton launched into its business is an act of retaliation against the platform’s choice to ban the account of former US President Donald Trump.

The suit (PDF) accuses Paxton of using his office to “intimidate, harass, and target Twitter in retaliation for Twitter’s exercise of its First Amendment rights.”

The conflict all goes back to the January 6 events at the US Capitol. At the height of the chaos, while a mob was actively storming the building, Trump took to Twitter to reiterate his false claims of electoral fraud and seemingly egg on the violence. In the following hours, Twitter deleted three tweets and suspended Trump’s account for 12 hours.


#bad-faith, #content-moderation, #ken-paxton, #lawsuits, #policy, #section-230, #texas, #twitter

Proposed Sec. 230 rewrite could have wide-ranging consequences

Cartoon hands hold out a band-aid over the words Section 230. (credit: Aurich Lawson / Getty Images)

A trio of Democratic Senators has taken this administration’s first stab at Section 230 reform with a new bill that would make platforms, including giants such as Facebook and Twitter, liable for certain limited categories of dangerous content. Unfortunately, although the bill’s authors try to thread a tricky needle carefully, critics warn that bad-faith actors could nonetheless easily weaponize the bill as written against both platforms and other users.

The bill (PDF), dubbed the SAFE TECH Act, seeks not to repeal Section 230 (as some Republicans have proposed) but instead to amend it with new definitions of speakers and new exceptions from the law’s infamous liability shield.

“A law meant to encourage service providers to develop tools and policies to support effective moderation has instead conferred sweeping immunity on online providers even when they do nothing to address foreseeable, obvious and repeated misuse of their products and services to cause harm,” said Sen. Mark Warner (D-VA), who introduced the bill. “This bill doesn’t interfere with free speech—it’s about allowing these platforms to finally be held accountable for harmful, often criminal behavior enabled by their platforms to which they have turned a blind eye for too long.”


#bills, #congress, #mark-warner, #policy, #politics, #section-230

The Safe Tech Act is the latest Section 230 reform bill, but its critics warn of unpleasant side effects

The first major Section 230 reform proposal of the Biden era is out. In a new bill, Senate Democrats Mark Warner, Mazie Hirono and Amy Klobuchar propose changes to Section 230 of the Communications Decency Act that would fundamentally change the 1996 law widely credited with cultivating the modern internet.

Section 230 is a legal shield that protects internet companies from the user-generated content they host, from Facebook and TikTok to Amazon reviews and comments sections. The new proposed legislation, known as the SAFE TECH Act, would do a few different things to change how that works.

First, it would fundamentally alter the core language of Section 230 — and given how concise that snippet of language is to begin with, any change is a big change. Under the new language, Section 230 would no longer offer protections in situations where payment was involved.

Here’s the current version:

“No provider or user of an interactive computer service shall be treated as
the publisher or speaker of any information provided by another
information content provider.”

And here are the changes the SAFE TECH Act would make:

No provider or user of an interactive computer service shall be treated as
the publisher or speaker of any speech provided by another
information content provider, except to the extent the provider or user has
accepted payment to make the speech available or, in whole or in part, created
or funded the creation of the speech.

(B) (c)(1)(A) shall be an affirmative defense to a claim alleging that an interactive computer service provider is a publisher or speaker with respect to speech provided by another information content provider that an interactive computer service provider has a burden of proving by a preponderance of the evidence.

That might not sound like much, but it could be a massive change. In a tweet promoting the bill, Sen. Warner called online ads “a key vector for all manner of frauds and scams,” so homing in on platform abuses in advertising is the ostensible goal here. But under the current language, it’s possible that many other kinds of paid services could be affected, from Substack, Patreon and other kinds of premium online content to web hosting.

“A good lawyer could argue that this covers many different types of arrangements that go far beyond paid advertisements,” Jeff Kosseff, a cybersecurity law professor at the U.S. Naval Academy who authored a book about Section 230, told TechCrunch. “Platforms accept payments from a wide range of parties during the course of making speech ‘available’ to the public. The bill does not limit the exception to cases in which platforms accept payments from the speaker.”

Internet companies big and small rely on Section 230 protections to operate, but some of them might have to rethink their businesses if rules proposed in the new bill come to pass. Oregon Senator Ron Wyden, one of Section 230’s original authors, noted that the new bill has some good intentions, but he issued a strong caution against the blowback its unintended consequences could cause.

“Unfortunately, as written, it would devastate every part of the open internet, and cause massive collateral damage to online speech,” Wyden told TechCrunch, likening the bill to a full repeal of the law with added confusion from a cluster of new exceptions.

“Creating liability for all commercial relationships would cause web hosts, cloud storage providers and even paid email services to purge their networks of any controversial speech,” Wyden said.

Fight for the Future Director Evan Greer echoed the sentiment that the bill is well intentioned but shared the same concerns. “… Unfortunately this bill, as written, would have enormous unintended consequences for human rights and freedom of expression,” Greer said.

“It creates a huge carveout in Section 230 that impacts not only advertising but essentially all paid services, such as web hosting and CDNs, as well as small services like Patreon, Bandcamp, and Etsy.”

Given its focus on advertising and instances in which a company has accepted payment, the bill might be both too broad and too narrow at once to offer effective reform. While online advertising, particularly political advertising, has become a hot topic in recent discussions about cracking down on platforms, the vast majority of violent conspiracies, misinformation, and organized hate is the result of organic content, not the stuff that’s paid or promoted. It also doesn’t address the role of algorithms, a particular focus of a narrow Section 230 reform proposal in the House from Reps. Anna Eshoo (D-CA) and Tom Malinowski (D-NJ).

New exceptions

The other part of the SAFE Tech Act, which attracted buy-in from a number of civil rights organizations including the Anti-Defamation League, the Center for Countering Digital Hate and Color Of Change, does address some of those ills. By amending Section 230, the new bill would open internet companies to more civil liability in some cases, giving victims of cyber-stalking, targeted harassment, discrimination and wrongful death the opportunity to file lawsuits against those companies rather than blocking those kinds of suits outright.

The SAFE Tech Act would also create a carve-out allowing individuals to seek court orders in cases when an internet company’s handling of material it hosts could cause “irreparable harm” as well as allowing lawsuits in U.S. courts against American internet companies for human rights abuses abroad.

In a press release, Sen. Warner said the bill was about updating the 1996 law to bring it up to speed with modern needs:

“A law meant to encourage service providers to develop tools and policies to support effective moderation has instead conferred sweeping immunity on online providers even when they do nothing to address foreseeable, obvious and repeated misuse of their products and services to cause harm,” Warner said.

There’s no dearth of ideas about reforming Section 230. Among them: the bipartisan PACT Act from Senators Brian Schatz (D-HI) and John Thune (R-SD), which focuses on moderation transparency and providing less cover for companies facing federal and state regulators, and the EARN IT Act, a broad bill from Sen. Lindsey Graham (R-SC) and Richard Blumenthal (D-CT) that 230 defenders and internet freedom advocates regard as unconstitutional, overly broad and disastrous.

With so many proposed Section 230 reforms already floating around, it’s far from guaranteed that a bill like the SAFE TECH Act will prevail. The only thing that’s certain is we’ll be hearing a lot more about the tiny snippet of law with huge consequences for the modern internet.

#government, #section-230, #section-230-of-the-communications-decency-act, #tc

A Vast Web of Vengeance

Outrageous lies destroyed Guy Babcock’s online reputation. When he went hunting for their source, what he discovered was worse than he could have imagined.

#atas-nadire, #babcock-guy, #canada, #communications-decency-act-of-1996, #computers-and-the-internet, #cyberharassment, #england, #google-inc, #libel-and-slander, #pinterest, #ripoff-report-web-site, #rumors-and-misinformation, #section-230, #suits-and-litigation-civil, #united-states

Changing the Internet Law That Lets Twitter Ban Trump

In a special bonus episode of “The Argument,” Jane Coaston defends the law that made the internet as we know it.

#computers-and-the-internet, #facebook-inc, #freedom-of-speech-and-expression, #politics-and-government, #section-230, #social-media, #trump-donald-j, #twitter, #united-states-politics-and-government

Filing: Amazon warned Parler for months about “more than 100” violent threats

Amazon Web Services (AWS) logo displayed during the 4th edition of the Viva Technology show at Parc des Expositions Porte de Versailles on May 17, 2019, in Paris, France. (credit: Chesnot | Getty Images)

Amazon on Tuesday brought receipts in its response to seemingly defunct social networking platform Parler’s lawsuit against it, detailing AWS’ repeated efforts to get Parler to address explicit threats of violence posted to the service.

In the wake of the violent insurrection at the US Capitol last Wednesday, AWS kicked Parler off its Web-hosting platform at midnight Sunday evening. In response, Parler filed a lawsuit accusing Amazon of breaking a contract for political reasons and colluding with Twitter to drive a competitor offline.

But the ban has nothing to do with “stifling viewpoints” or a “conspiracy” to restrain a competitor, Amazon said in its response filing (PDF). Instead, Amazon said, “This case is about Parler’s demonstrated unwillingness and inability” to remove actively dangerous content, including posts that incite and plan “the rape, torture, and assassination of named public officials and private citizens… AWS suspended Parler’s account as a last resort to prevent further access to such content, including plans for violence to disrupt the impending Presidential transition.”


#amazon, #antitrust, #aws, #insurrection, #lawsuits, #parler, #policy, #section-230, #sedition

Facebook and Twitter could be sued for “censorship” under proposed state law

(credit: Getty Images | Peter Dazeley)

Republican state lawmakers in North Dakota want Facebook and Twitter to face lawsuits from users who have been “censored.”

A bill submitted by the six legislators last week is titled, “an Act to permit civil actions against social media sites for censoring speech.” It says that social media websites with over 1 million users would be “liable in a civil action for damages to the person whose speech is restricted, censored, or suppressed, and to any person who reasonably otherwise would have received the writing, speech, or publication.” Payouts for “censored” users would include “treble damages for compensatory, consequential, and incidental damages.”

Even if passed by the North Dakota legislature, the bill would likely have no effect due to a conflict with federal law. The proposed law “would immediately be deemed void as preempted by Section 230 [of the Communications Decency Act],” because “federal law is supreme over state law where they conflict, and this would create an express conflict,” attorney Akiva Cohen wrote in a Twitter thread about the bill.


#north-dakota, #policy, #section-230, #trump

I’m a free speech champion. I don’t even know what that means anymore

The president of the United States is supposedly the most powerful man in the world. He also can’t post to Twitter. Or Facebook. Or a bunch of other social networks as we discovered over the course of the past week (He still has access to the nuclear launch codes though, so that’s an interesting dynamic to chew on).

The bans last week were exceptional — but so is Trump. There may not be another president this century who pushes the line of public discourse quite like the current occupant of the White House (at least, one can only hope). If the whole Trump crisis was truly exceptional though, it could simply be ignored. Rules, even rules around free speech, have always had exceptions to handle exceptional circumstances. The president provokes a violent protest, he gets banned. A unique moment in American executive leadership, for sure. Yet, apart from the actor, it’s hardly an unusual response from the tech industry or any publisher where violent threats have been banned for decades under Supreme Court precedent.

Why then aren’t we ignoring it? I think we can all feel that something greater is underfoot. The entire information architecture of our world has changed, and that has completely upended the structure of rules around free speech that have governed America in the modern era.

Freedom of speech is deeply entwined with human progressivism, with science and rationality and positivism. The purpose of a marketplace of ideas is for arguments to be in dialogue with each other, to have their own facts and deductions checked, and for bad ideas to be washed out by better, more proven ones. Contentious at times yes, but a positive contention, one that ultimately is meant to elucidate more than provoke.

I’m a free speech “absolutist” because I believe in that human progress, and I believe that the concept of a marketplace of ideas is the best mechanism historically we have ever built as a species for exploring our world and introspecting ourselves. Yet, I also can’t witness the events that transpired last week and just pretend that our information commons is working well.

I get it — that seems contradictory. I understand the argument that I’m supporting free speech but not really supporting it. Yet, there is a reasonable pause to be taken in this moment to ask some deeper, more foundational questions, for something is wrong with the system. I’m struggling with the same context that the ACLU in its official statement is struggling with:

It’s a milquetoast response, a “we condemn but we are also concerned” sort of lukewarm mélange. It’s also a reasonable response to a rapidly changing environment around speech. In the same vein, I’m a staunch defender of the marketplace of ideas, well, a marketplace of ideas, one that unfortunately no longer exists today. Just think about everything that isn’t working:

  • There’s too much information, and it’s impossible for any reasonable human to process it all
  • Much of that flood is garbage and outright fraud, or worse, brilliant pieces of psychological propaganda designed to distract and undermine the very information system it is distributed on
  • We’ve never allowed so many people to gain access to the public square to distribute their missives, drivel and invective with such limited constraints
  • Few ideas are in dialogue anymore. Collegiality is mostly dead, as is constructivist thought. There is no marketplace anymore since the “stores” are no longer in the same public squares but in each of our own individual feeds
  • Coercive incentives from a handful of dominant, monopoly platforms drive wildly damaging communication practices, encouraging the proverbial “clickbait” over any form of careful discussion or debate
  • The vast majority of people seem to love this, given the extremely high user engagement numbers seen on tech platforms

We’ve known this event was coming for decades. Alvin Toffler’s Future Shock, about the inability of humans to process the complexity of the modern, industrialized world, came out in 1970. Cyberpunk literature, and sci-fi more generally, grappled extensively with this coming onslaught in the 1980s and 1990s. As the internet expanded rapidly, books like Nicholas Carr’s The Shallows, published a decade ago, interrogated how the internet prevents us from thinking deeply. Today, in your local bookstore (assuming you still have one and can actually still read texts longer than 1,000 words), you can find a whole wing analyzing the future of media and communications and what the internet is cognitively doing to us.

My absolute belief in “free speech” was predicated on some pretty clear assumptions about how free speech was supposed to work in the United States. Those assumptions, unfortunately, no longer apply.

We can no longer assume that there is a proverbial public square where citizens debate, perhaps even angrily, the issues that confront them. We can no longer assume that information dreck gets filtered by editors, or by publishers, or by readers themselves. We can no longer assume that the people who reach us with their messages are somewhat vetted, and speaking from truth or facts.

We can no longer assume that any part of the marketplace is frankly working at all.

That’s what makes this era so challenging for those of us who rely every day on the right to free speech in our work and in our lives. Without those underlying assumptions, the right to free speech isn’t the bastion of human progressivism and rationality that we expect it to be. Our information commons won’t ensure that the best and highest-quality ideas are going to rise to the top and propel our collective discussion.

I truly believe in free speech in its extensive, American sense. So do many friends who are similarly concerned about the perilous state of our marketplace of ideas. Yet, we all need to confront the reality that is before us: the system is really, truly broken, and just screaming “Free Speech!” is not going to change that.

The way forward is to pivot the conversation around free speech to a broader question about how we improve the information architecture of our world. How do we ensure that creators and the people who generate ideas and analyze them can do so with the right economics? That means empowering writers and filmmakers and novelists and researchers and everyone else to be able to do quality work, over perhaps extended periods of time, without having to upload a new photo or insight every ten minutes to stay “top of mind” lest their income tumble.

How can we align incentives at every layer of our communications to ensure that facts and “truth” will eventually win the day in the asymptote, if not always right away? How do you ensure that the power that comes with mass distribution of information is held by those who embody at least some notion of a public duty to accuracy and reasonableness?

Most importantly, how do we improve the ability of every reader and viewer to process the information they see, and through their independent actions drive the discussion toward rationality? No marketplace can survive without smart and diligent customers, and the market for information is no exception. If people demand lies, the world is going to supply them, in spades, as we have already seen.

Tech can’t solve this alone, but it absolutely can be, and is obligated to be, part of the solution. Platform alternatives with the right incentives in place can completely change the way humanity understands our world and what is happening. That’s an extremely important and intellectually interesting problem that should be enticing to any ambitious engineer and founder to tackle.

I’ll always defend free speech, but I can’t defend the system in the state that we see it today. The only defense then is to work to rebuild this system, to buttress the components that are continuing to work and to repair or replace the ones that aren’t. I don’t believe the descent into irrational hell has to be paved by misinformation. We all have the tools and power to make this system what it needs to be — what it should be.

#first-amendment, #free-speech, #government, #media, #policy, #section-230, #social-networks, #tc, #tech-platforms

Ajit Pai offers mild criticism of Trump incitement, drops Section 230 plan

Ajit Pai backs slowly away from President Trump. (credit: Aurich Lawson / Photo by Gage Skidmore)

Federal Communications Commission Chairman Ajit Pai said he is dropping his plan to help President Trump impose a crackdown on social-media platforms and offered mild criticism of Trump’s incitement of a mob that stormed the US Capitol in a failed bid to overturn the election results.

In October, Pai backed Trump’s proposal to limit the Section 230 legal protections for social-media websites that block or modify content posted by users. At the time, Pai said he would open an FCC rule-making process to declare that companies like Twitter and Facebook do not have “special immunity” for their content-moderation decisions. But Pai hasn’t moved the proposal forward since Trump’s election loss and has now stated in an interview that he won’t finalize the plan.

“The status is that I do not intend to move forward with the notice of proposed rule-making [to reinterpret Section 230] at the FCC,” Pai said in an interview published yesterday by Protocol. “The reason is, in part, because given the results of the election, there’s simply not sufficient time to complete the administrative steps necessary in order to resolve the rule-making. Given that reality, I do not believe it’s appropriate to move forward.” Pai announced shortly after Trump’s election loss that he will leave the FCC on January 20, President-elect Joe Biden’s inauguration day.


#ajit-pai, #facebook, #policy, #section-230, #social-media, #trump, #twitter

McConnell introduces bill tying $2K stimulus checks to Section 230 repeal

Senate Majority Leader Mitch McConnell (R-KY) has thrown a wrench into the expected Congressional override of President Trump’s veto of the National Defense Authorization Act. (credit: Ting Shen/Bloomberg via Getty Images)

Senate Majority Leader Mitch McConnell (R-KY) has thrown a wrench into Congressional approval of an increase in government stimulus relief checks from $600 to $2,000. The House voted overwhelmingly on Monday to increase the payments, as President Trump had advocated for. Instead of voting on the House bill, however, McConnell blocked it and instead introduced a new bill tying higher stimulus payments to Section 230’s full repeal, according to The Verge, which obtained a copy of the bill’s text.

It’s a tangled web, but the move is tied to Trump’s veto of the National Defense Authorization Act, which authorizes $740 billion in defense spending for the upcoming government fiscal year. “No one has worked harder, or approved more money for the military, than I have,” Trump said in a statement about the veto, claiming falsely that the military “was totally depleted” when he took office in 2017. “Your failure to terminate the very dangerous national security risk of Section 230 will make our intelligence virtually impossible to conduct without everyone knowing what we are doing at every step.”

Section 230 has nothing to do with military intelligence; it’s a 1996 law designed to protect Internet platforms. At its highest level, the short snippet of law basically does two things. First, it grants Internet service providers, including online platforms, broad immunity from being held legally liable for content third-party users share. Second, it grants those same services legal immunity from the decisions they make around content moderation—no matter how much or how little they choose to do.


#fact-checking, #mitch-mcconnell, #policy, #president-donald-trump, #section-230, #social-media, #twitter

Section 230 is threatened in new bill tying liability shield repeal to $2,000 checks

Tech got dragged into yet another irrelevant Congressional scuffle this week after President Trump agreed to sign a bipartisan pandemic relief package but continued to press for additional $2,000 checks that his party opposed during negotiations.

In tweets and other comments, Trump tied a push for the boosted relief payments to his entirely unrelated demand to repeal Section 230 of the Communications Decency Act, a critical but previously obscure law that protects internet companies from legal liability for user-generated content.

The political situation was complicated further after Republicans in Georgia’s two extremely high stakes runoff races sided with Trump over the additional checks rather than the majority of Republicans in Congress.

In a move that’s more a political maneuver than a real stab at tech regulation, Senate Majority Leader Mitch McConnell introduced a new bill late Tuesday linking the $2,000 payments Republicans previously blocked to an outright repeal of Section 230 — a proposal that’s sure to be doomed in Congress.

McConnell’s bill humors the president’s eclectic cluster of demands while creating an opportunity for his party to look unified, sort of, in the face of the Georgia situation. The proposal also tosses in a study on voter fraud, not because it’s relevant but because it’s another pet issue that Trump dragged into the whole mess.

Over the course of 2020, Trump has repeatedly returned to the idea of revoking Section 230 protections as a cudgel he can wield against tech companies, particularly Twitter, when the platform’s rules result in his own tweets being downranked or paired with misinformation warnings.

If the latest development sounds confusing that’s because it is. Section 230 and the stimulus legislation have nothing at all to do with one another. And we were just talking about Section 230 in relation to another completely unrelated bit of legislation, a huge annual defense spending bill called the NDAA.

Last week Trump decided to veto that bill, which enjoyed broad bipartisan support because it funds the military and does other mostly uncontroversial stuff, on the grounds that it didn’t include his totally unrelated demand to strip tech companies of their Section 230 protections. Trump’s move was pretty much out of left field, but it opened the door for Democrats to leverage their cooperation in a two-thirds majority to override Trump’s veto for other stuff they want right now, namely those $2,000 stimulus checks for Americans. Sen. Bernie Sanders is attempting to do just that.

Unfortunately, McConnell’s move here is mostly a cynical one, to the detriment of Americans in financial turmoil. An outright repeal of Section 230 is a position without much, if any, support among Democrats. And while closely Trump-aligned Republicans have flirted with the idea of stripping online platforms of the legal shield altogether, some flavor of reform is what’s been on the table and what’s likely to get hashed out in 2021.

For lawmakers who understand the far-reaching implications of the law, reform rather than a straight up repeal was always a more likely outcome. In the extraordinarily unlikely event that Section 230 gets repealed through this week’s strange series of events, many of the websites, apps and online services that people rely on would be thrown into chaos. Without Section 230’s liability protections, websites from Yelp to Fox News would be legally responsible for any user-generated reviews and comments they host. If an end to comments sections doesn’t sound so bad, imagine an internet without Amazon reviews, tweets and many other byproducts of the social internet.

The thing is, it’s not going to happen. McConnell doesn’t want Americans to receive the additional $2,000 checks and Democrats aren’t going to be willing to secure the funds by agreeing to a totally unrelated last-minute proposal to throw out the rules of the internet, particularly with regulatory pressure on tech mounting and more serious 230 reform efforts still underway. The proposed bill is also not even guaranteed to come up for a vote in the waning days of this Congressional session.

The end result will be that McConnell humors the president by offering him what he wanted, kind of, Democrats look bad for suddenly opposing much-needed additional stimulus money and Americans in the midst of a deadly and financially devastating crisis probably don’t end up with more money in their pockets. Not great.

#government, #section-230, #section-230-of-the-communications-decency-act, #senate, #tc, #the-battle-over-big-tech, #trump-administration

Computer repairman suing Twitter for defamation, seeks $500 million

Extreme close-up of the Twitter logo on the screen of a smartphone. (credit: Tom Raftery | Flickr)

The former owner of a computer repair shop in Delaware is suing Twitter for defamation, alleging that the platform’s choice to moderate a New York Post story that cited him as a source is tantamount to labeling him personally a “hacker.”

Twitter’s “actions and statements had the specific intent to communicate to the world” that John Paul Mac Isaac “is a hacker,” the suit (PDF) alleges, eventually forcing him to shut down his Delaware business. Mac Isaac is seeking $500 million in punitive damages from the suit, as well as whatever “further relief” the court deems appropriate.

The alleged defamation ties to a specific October episode in a fall that was, frankly, full of strange episodes. On October 14, the New York Post ran a story alleging that President-elect Joe Biden’s son, Hunter Biden, had connected his father with Ukrainian energy firm Burisma in 2014. These allegations were based on emails the Post said it got from Trump attorney and former New York mayor Rudy Giuliani, who in turn allegedly obtained them from a laptop that Hunter Biden dropped off at Mac Isaac’s computer repair shop in 2019.


#defamation, #lawsuits, #policy, #section-230, #twitter

House overrides Trump veto, defying demand to repeal Section 230

(credit: Spencer Platt/Getty Images)

The House of Representatives has voted to override Donald Trump’s veto of the National Defense Authorization Act (NDAA) by a vote of 322 to 87—easily exceeding the required two-thirds majority. The measure now goes to the Senate, where it must also pass by a two-thirds margin to overcome Trump’s opposition.

Every year, Congress passes the NDAA to fund the military—this year’s bill provides $740 billion for the Pentagon. Thanks to broad public support for the military, the NDAA is widely seen as a “must pass” measure. This makes it a tempting vehicle for attaching unrelated proposals that might not otherwise win Congressional approval.

In recent months, Donald Trump has been calling for Congress, the Federal Communications Commission, and other government agencies to modify or repeal Section 230 of the Communications Decency Act, a 1996 law that shields websites from liability for content uploaded by their users. Trump sees repeal of Section 230 as a way to retaliate against Facebook and Twitter for their perceived bias against him. But so far, Trump’s campaign against Section 230 has not gotten traction.


#donald-trump, #ndaa, #policy, #section-230

Trump vetoes $740B defense bill, citing “failure to terminate” Section 230

The Washington, DC skyline, including the US Capitol, Washington Monument, and Lincoln Memorial, as seen from the Arlington, VA, side of the Potomac at night. Which is the time of day Congress is apparently going to be working until. (credit: Melodie Yvonne | Getty Images)

As was threatened, so has it come to pass: President Donald Trump has vetoed funding for the US military because the massive defense spending bill did not include a provision to repeal Section 230.

The National Defense Authorization Act authorizes $740 billion in defense spending for the upcoming government fiscal year. The NDAA usually moves through Congress with broad bipartisan support, and this year’s is no exception. Both chambers supported the bill by wide, veto-proof margins—the House approved by a vote of 335 to 78, and the Senate approved it 84 to 13.

Trump, however, said in early December he would veto the bill if it did not include an outright repeal of Section 230, and today, with the bill on his desk, he followed through on that threat.


#because-2020, #bills, #donald-trump, #ndaa, #policy, #section-230, #vetoes

Trump vetoes major defense bill, citing Section 230

Following through on his previous threat, President Trump has vetoed the $740 billion National Defense Authorization Act (NDAA), a major bill that allocates military funds each year.

Through tweets in early December, Trump said he would sink the NDAA if it wasn’t altered to include language “terminating” Section 230 of the Communications Decency Act, an essential and previously obscure internet law that the president has had in his crosshairs for the better part of the year.

“Your failure to terminate the very dangerous national security risk of Section 230 will make our intelligence virtually impossible to conduct without everyone knowing what we are doing at every step,” Trump said in a statement on the veto.

“The Act fails even to make any meaningful changes to Section 230 of the Communications Decency Act, despite bipartisan calls for repealing that provision.” Trump also stated that Section 230 “facilitates the spread of foreign disinformation online,” a threat that the president, who frequently spreads dangerous misinformation online, has historically expressed little concern for.

Section 230 became a hot topic in 2020 as lawmakers, states and the federal government made major moves to rein in the tech industry’s biggest, most powerful companies. The law protects internet companies from liability for the content they host and is widely credited with opening the doors for internet companies big and small to grow their online business over the years.

Trump’s position on Section 230 and the NDAA was never particularly tenable. While the NDAA is a massive piece of legislation, the kind that rolls up many disparate things, altering it to somehow repeal Section 230 was never on the table. It’s also hard to overstate the unpopularity of the position the president has staked out here. The NDAA funds many parts of the military beyond combat, and this year’s bill includes pay raises for the troops and additional health support for Vietnam veterans.

Trump’s views on Section 230 are similarly extreme, even relative to many other members of his party. While there is support for changing Section 230 on both sides of the aisle, Congress is far from a consensus on what needs to change and a complex bipartisan reform effort is ongoing. Throwing Section 230 out altogether is very unlikely to be the end result of whatever kind of reform Congress comes up with in the coming year.

The House plans to convene on Monday to override the president’s veto, which would require a two-thirds majority in both houses of Congress. The House approved the NDAA earlier this month with a veto-proof 335-78 vote, including broad support from the vast majority of House Republicans. The Senate passed the legislation along to the president with a similarly strong bipartisan 84-13 vote in favor of the bill on December 11.

#government, #section-230, #tc, #the-battle-over-big-tech

Trump’s odd new attack on Section 230 is probably doomed

Trump’s crusade against a key internet law known as Section 230 tends to pop up in unlikely places. His Twitter feed on Thanksgiving, for one. Or at times you’d think the nation would be hearing from its leader on the matter at hand: a worsening pandemic that’s killed nearly 270,000 people in the United States.

His latest threat to the law, which is widely regarded as the foundation for the modern internet, is unlikelier still. Now, Trump wants to veto the National Defense Authorization Act (NDAA), a bill that allocates military funds each year, if it doesn’t somehow “terminate” Section 230 of the Communications Decency Act.

In a tweet, Trump mysteriously called the law a “serious threat to our National Security & Election Integrity” and claimed that only big tech companies benefit from it, which is not true. Big tech’s lobbying group made the opposite argument in response to the president’s new threat.

“Repealing Section 230 is itself a threat to national security,” Internet Association Interim President and CEO Jon Berroya said in a statement. “The law empowers online platforms to remove harmful and dangerous content, including terrorist content and misinformation.”

Section 230, which protects internet companies from liability for the content they host, is currently at the center of a complex bipartisan reform effort — one that’s nowhere near a consensus, much less an agreement that Section 230 should be scrapped outright.

President Trump’s threat to block the NDAA stakes out a deeply unpopular position. The sweeping defense budget bill includes all kinds of funding for popular programs that benefit U.S. troops and veterans, making a veto over a totally unrelated demand a strange gamble indeed. The fact that Trump’s latest anti-230 tactic comes during a lame-duck session gives his threat even less bite.

In light of that, most of Congress has gone about business as usual so far. But close Trump ally Sen. Josh Hawley (R-MO) did signal his support for Trump’s position on Wednesday. “The NDAA does NOT contain any reform to Section 230 but DOES contain Elizabeth Warren’s social engineering amendment to unilaterally rename bases & war memorials w/ no public input or process,” Hawley tweeted. “I cannot support it.”

If history is any lesson, Trump isn’t afraid to make an empty threat, eventually pivoting to something else that catches his attention. But Section 230 — previously a fairly arcane piece of legislation that attracted little mainstream attention — has rankled Trump for the better part of the year, even inspiring an executive order back in May.

That executive order gets at the real reason behind Trump’s ire: He believes that social media companies, Twitter in particular, have unfairly censored him. While Twitter has continued to allow Trump to remain on its platform even as he flouts the rules, the company now limits the reach of his most dangerous or misleading tweets — false claims about the election results, for example — and pairs them with warning labels.

Paradoxically, if Trump got his way, an outright repeal of Section 230 would open online platforms up to an insurmountable level of legal liability, either sinking social media companies outright or forcing them to severely restrict their users’ speech.

It’s possible that the president could dig his heels in, pushing the defense spending bill into President-elect Biden’s term. But it’s more likely that Trump will back off of his unusual demand, which so far has yet to attract much support or even acknowledgement from his own party. At the moment, Congress is preoccupied with work on a second pandemic stimulus bill that would offer more financial support to the country.

Sen. Ron Wyden (D-OR), who co-authored Section 230, remains unworried that a repeal could get stuffed into the multi-hundred billion dollar defense bill in the eleventh hour.

“I’d like to start for the Blazers, but it’s not going to happen either,” Wyden told TechCrunch. “It is pathetic that Trump refuses to help unemployed workers, while he spends his time tweeting unhinged election conspiracies and demanding Congress repeal the foundation of free speech online.”

#government, #section-230, #section-230-of-the-communications-decency-act, #tc, #trump-administration

Trump to Congress: Repeal Section 230 or I’ll veto military funding

Donald Trump speaks from the White House on Thanksgiving Day. (credit: Erin Schaff – Pool/Getty Images)

President Donald Trump has long been an outspoken foe of big technology companies. And in recent months, he has focused his ire on Section 230, a provision of the 1996 Communications Decency Act that shields online platforms from liability for content posted by their users.

In May, Trump called on the Federal Communications Commission to reinterpret the law—though it’s not clear the agency has the power to do that. Since then, he has tweeted about the issue incessantly.

On Tuesday evening, Trump ratcheted up his campaign against Section 230. In a tweet, he called the law “a serious threat to our National Security & Election Integrity.” He warned that “if the very dangerous & unfair Section 230 is not completely terminated as part of the National Defense Authorization Act (NDAA), I will be forced to unequivocally VETO the Bill.”


#donald-trump, #policy, #section-230

Jack Dorsey and Mark Zuckerberg will face Congress again, this time about the election

After giving in to the looming threat of subpoenas, two of tech’s most high-profile CEOs will again be grilled by Congress.

On Tuesday, the Senate Judiciary Committee will host Twitter’s Jack Dorsey and Facebook’s Mark Zuckerberg for what’s likely to be another multi-hour airing of assorted grievances. In this round, Republican lawmakers called the hearing to press the tech titans on “Censorship, Suppression, and the 2020 Election.” The hearing, which was scheduled before the election, was apparently inspired by the platforms’ decisions to limit the reach of a dubious New York Post story presenting leaked information purporting to implicate now President-elect Joe Biden and his son Hunter in a corrupt political influence scheme in Ukraine.

If the last hearing is any indication, and it likely is, Tuesday’s tech vs. Congress showdown will be less about cornering the two tech platform CEOs on the stated topic than about airing Republicans’ ongoing complaints of anti-conservative bias, punctuated by bipartisan soliloquies on lawmakers’ various pet topics. While that hearing, held last month in the Senate Commerce Committee, was ostensibly about Section 230 reform, the pressing policy issue barely came up.

Tuesday will be the first post-election Congressional appearance from social media leaders, so we can also expect a war of competing political realities. In one, President Trump, unfairly assailed by tech and the media alike, is somehow still a contender for the presidency. In the other reality (the real one), President-elect Joe Biden won the election decisively but his victory remains mired in social media misinformation. The latter scenario has played out in spite of a mixed bag of special tools and rules devised by Twitter and Facebook to rein in looming post-election conspiracies.

If you’re interested in subjecting yourself to Tuesday’s proceedings, you can watch the hearing live on the committee’s own page or on C-SPAN Tuesday at 7AM PT. If you’re not, and we can’t exactly suggest it, circle back after things are over and we’ll catch you up. But before we leave you, one question: How does YouTube’s Susan Wojcicki keep staying out of these things?

#congress, #facebook, #government, #section-230, #tc, #twitter

Court tosses constitutional challenge to Trump order on social media

Photoshopped image of Attorney General Bill Barr rolling a giant boulder labeled Section 230 up a mountain. (credit: Aurich Lawson / Getty Images)

A federal court in California has tossed out a lawsuit from the voting rights group Rock the Vote. The lawsuit argued that Donald Trump’s May executive order attacking social media platforms violated the group’s First Amendment rights.

Trump’s May executive order was a strange document. Trump was angry about social media companies’ treatment of him and other conservatives. But US law doesn’t actually give the president much power to directly punish private technology companies. So while the May order included a lot of overheated rhetoric, the order’s operative sections were largely toothless.

The order asked the Federal Communications Commission and the Federal Trade Commission to take actions against social media companies. However, these are independent agencies that ultimately make decisions independent of the president. The FTC has signaled it won’t take action on Trump’s suggestions. The FCC has begun a rulemaking process to rethink Section 230, which provides legal protections for sites that host third-party content. But the FCC is just at the beginning of that process. We’re far from any legally binding changes in regulations, and it’s not clear if the FCC even has the authority to re-interpret Section 230.


#donald-trump, #policy, #rock-the-vote, #section-230

AOL founder Steve Case, involved early in Section 230, says it’s time to change it

AOL founder Steve Case was there in Dulles, Virginia, just outside of Washington, D.C., when in 1996 the Communications Decency Act was passed as part of a major overhaul of U.S. telecommunications laws that President Bill Clinton signed into law. Soon after, in its first test, a provision of that act which states that, “[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider,” would famously save AOL’s bacon, too.

That wasn’t coincidental. In a wide-ranging call earlier today with Case — who has become an influential investor over the last 15 years through his Washington, D.C.-based firm Revolution and its early-stage, growth-stage, and seed-stage funds — he talked about his involvement in Section 230’s creation, and why he thinks it’s time to change it.

We’ll have more from our interview with Case tomorrow. In the meantime, here he talks about the legal protections for online platforms that took center stage, or at least were supposed to, during the Senate’s latest Big Tech hearing yesterday.

In that early birthing stage of the internet, [we were all] figuring out what the rules of the road were, and the 230 provision was something I was involved in. I do think the first lawsuit related to it was related to AOL. But 25 years later, it’s fair to take a fresh look at it — [it’s] appropriate to take a fresh look at it. I’ve not recently spent enough time digging in to really have a strong point of view in terms of exactly what to change, but I think it’s fair to say that what made sense in those early days when very few people were online maybe doesn’t make as much sense now when the entire world is online and the impact these platforms have is so significant.

At the same time, I think you have to be super careful. I think that’s what the CEOs testifying [yesterday] were trying to emphasize. [It was] ‘We get that there’s a desire to relook at it. We also get that because of the election season, it’s become a highly politicized issue. Let’s engage in this discussion, and perhaps there are some things that need to be modified to reflect the current reality ... let’s don’t do it just in the heat of a political moment.’

When we started AOL 35 years ago, only 3% of people were connected. They were only online about an hour a week, and it was still illegal, actually, for consumers or businesses to be on the internet, [so] I spent a lot of time on commercializing the internet, opening it up to consumers and businesses, figuring out what the right rules of the road were in terms of things like taxes on e-commerce. And generally, we were able to convince regulators and government leaders that a light touch for the internet made sense, because it was a new idea, and it wasn’t clear exactly how it was going to develop.

But now, it’s not a new idea. And now it has a profound impact on people’s lives and our communities and countries. And so I’m not surprised that there’s more focus on it, [though] it’s a little too bad that there’s so much attention right this moment because in an election season, things tend to get a little bit hot on both sides.

Putting that aside, I think there are legitimate issues that the policymakers need to be looking at and are starting to look at, not just in Washington, DC, but more broadly in Brussels. And I think having more of a dialogue between the innovators and the policymakers is actually going to be critical in this internet third wave, because the sectors up for grabs are the most important aspects of our lives — things like health care and education and food and agriculture. And that’s really going to require not just innovation from a technology standpoint, but thoughtfulness from a policy standpoint.

I understand entrepreneurs who get frustrated by regulations kind of slowing down the pace of [innovation]. I get that. Obviously, some of the businesses that we back have suffered from that. But at the same time, you can’t not expect the government — which is elected by the people — to serve the people, including protecting the people.


Senate hauls Zuckerberg, Dorsey in to hearing to yell at them about tweets

(Image: Twitter CEO Jack Dorsey, and his COVID beard, testifying remotely before the Senate Commerce, Science, and Transportation Committee on October 28, 2020. Credit: Michael Reynolds | Pool | Getty Images)

The Senate Commerce Committee met for a hearing Wednesday meant to probe some of the most seemingly intractable tech questions of our time: Is the liability shield granted to tech firms under Section 230 of the Communications Decency Act helpful or harmful, and does it need amending?

Section 230 is a little slice of law with enormously broad implications for the entire Internet and all the communication we do online. At a basic level, it means that if you use an Internet service such as Facebook or YouTube to say something obscene or unlawful, then you, not the Internet service, are the one responsible for having said the thing. The Internet service, meanwhile, has legal immunity from whatever you said. The law also allows space for Internet services to moderate user content how they wish—heavily, lightly, or not at all.

Since Section 230 became law in 1996, the Internet has scaled up from something that perhaps 15 percent of US households could access to something that almost every teenager and adult has in their pocket. Those questions of scale and ubiquity have changed our media and communications landscape, and Democrats and Republicans alike have questioned what Section 230 should look like going forward. What we do with the law—and where we go from here—is a matter of major import not just for big social media firms such as Facebook, Google, and Twitter, but for the future of every other platform from Reddit to Ars to your favorite cooking blog—and every nascent site, app, and platform yet to come.


Watch Facebook, Google, and Twitter’s CEOs defend the law that made social media possible to Congress

The CEOs of Twitter, Facebook and Google will appear before the Senate Commerce Committee on Wednesday in big tech’s latest showdown with Congress.

The Senate hearing will have a narrower, more policy-centric scope than other recent high profile tech hearings, focusing specifically on Section 230 of the Communications Decency Act. That short law might sound obscure, but it’s the key legal shield that protects internet companies from liability for the user-generated content they host, from Facebook posts and tweets to Yelp reviews and comments sections.

Recent big tech hearings have meandered, seldom forcing the leaders of some of the world’s most powerful companies into revealing much. But the cumulative pressure of federal antitrust action, a high-stakes election less than a week away and a number of legislative proposals that could dismantle the law that made their businesses possible will likely set a different tone — and hopefully offer more substance.

You can follow a livestream of the hearing starting at 10:00 AM ET on Wednesday, October 28. We’ll be following the testimony and all things Section 230, so check back for our coverage of the day’s key takeaways.


FCC cites Title II in defense of helping Trump’s attack on social media

(Image: A computer keyboard. Credit: Getty Images | Peter Dazeley)

The Federal Communications Commission’s top lawyer today explained the FCC’s theory of why it can grant President Donald Trump’s request for a new interpretation of a law that provides legal protection to social media platforms like Twitter and Facebook.

Critics of FCC Chairman Ajit Pai’s plan from both the left and right say the FCC has no authority to reinterpret Section 230 of the Communications Decency Act, which gives legal immunity to online platforms that block or modify content posted by users. FCC General Counsel Thomas Johnson said those critics are wrong in a blog post published on the FCC website today.

Johnson noted that the Communications Decency Act was passed by Congress as part of the Telecommunications Act of 1996, which was an update to the Communications Act of 1934 that established the FCC and provided it with regulatory authority. Johnson also pointed to Section 201(b) of the Communications Act, which gave the FCC power to “prescribe such rules and regulations as may be necessary in the public interest to carry out the provisions of this Act.”


Ajit Pai says he’ll help Trump impose crackdown on Twitter and Facebook

(Image: FCC Chairman Ajit Pai speaking at a press conference on October 1, 2018, in Washington, DC. Credit: Getty Images | Mark Wilson)

Federal Communications Commission Chairman Ajit Pai is backing President Donald Trump’s proposal to limit legal protections for social media websites that block or modify content posted by users. Pai’s views on the matter were unknown until today when he issued a statement saying that he will open a rule-making process to clarify that the First Amendment does not give social media companies “special immunity.”

“Social media companies have a First Amendment right to free speech,” Pai said. “But they do not have a First Amendment right to a special immunity denied to other media outlets, such as newspapers and broadcasters.”

Trump’s attempt to punish social media websites like Twitter and Facebook for alleged anti-conservative bias landed at the FCC because Trump had the National Telecommunications and Information Administration (NTIA) petition the FCC to issue a new interpretation of Section 230 of the Communications Decency Act. This US law says that providers and users of interactive computer services shall not be held liable for “any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.” The law also says that no provider or user of an interactive computer service “shall be treated as the publisher or speaker of any information provided by another information content provider.”


With ‘absurd’ timing, FCC announces intention to revisit Section 230

FCC Chairman Ajit Pai has announced his intention to pursue a reform of Section 230 of the Communications Act, which among other things limits the liability of internet platforms for content they host. Commissioner Rosenworcel described the timing — immediately after conservative outrage at Twitter and Facebook limiting the reach of an article relating to Hunter Biden — as “absurd.” But it’s not necessarily the crackdown the Trump administration clearly desires.

In a statement, Chairman Pai explained that “members of all three branches of the federal government have expressed serious concerns about the prevailing interpretation of the immunity set forth in Section 230,” and that there is broad support for changing the law — in fact there are already several bills under consideration that would do so.

At issue is the legal protections for platforms when they decide what content to allow and what to block. Some say they are clearly protected by the First Amendment (this is how it is currently interpreted), while others assert that some of those choices amount to violations of users’ right to free speech.

Though Pai does not mention specific recent circumstances in which internet platforms have been accused of having partisan bias in one direction or the other, it is difficult to imagine they — and the constant needling of the White House — did not factor into the decision.

A long road with an ‘unfortunate detour’

In fact the push to reform Section 230 has been progressing for years, with the limitations of the law and the FCC’s interpretation of its pertinent duties discussed candidly by the very people who wrote the original bill and thus have considerable insight into its intentions and shortcomings.

In June Commissioner Starks disparaged pressure from the White House to revisit the FCC’s interpretation of the law, saying that the First Amendment protections are clear and that Trump’s executive order “seems inconsistent with those core principles.” That said, he proposed that the FCC take the request to reconsider the law seriously.

“And if, as I suspect it ultimately will, the petition fails at a legal question of authority,” he said, “I think we should say it loud and clear, and close the book on this unfortunate detour. Let us avoid an upcoming election season that can use a pending proceeding to, in my estimation, intimidate private parties.”

The latter part of his warning seems especially prescient given the choice by the Chairman to open proceedings less than three weeks before the election, and the day after Twitter and Facebook exercised their authority as private platforms to restrict the distribution of articles which, as Twitter belatedly explained, clearly broke guidelines on publishing private information. (The New York Post article had screenshots of unredacted documents with what appeared to be Hunter Biden’s personal email and phone number, among other things.)

Commissioner Rosenworcel did not mince words, saying “The timing of this effort is absurd. The FCC has no business being the President’s speech police.” Starks echoed her, saying “We’re in the midst of an election… the FCC shouldn’t do the President’s bidding here.” (Trump has repeatedly called for the “repeal” of Section 230, which is just part of a much larger and important set of laws.)

Considering the timing and the utter impossibility of reaching any kind of meaningful conclusion before the election — rulemaking is at a minimum a months-long process — it is hard to see Pai’s announcement as anything but a pointed warning to internet platforms. Platforms which, it must be stressed, the FCC has essentially no regulatory powers over.

Foregone conclusion

The Chairman telegraphed his desired outcome clearly in the announcement, saying “Many advance an overly broad interpretation that in some cases shields social media companies from consumer protection laws in a way that has no basis in the text of Section 230… Social media companies have a First Amendment right to free speech. But they do not have a First Amendment right to a special immunity denied to other media outlets, such as newspapers and broadcasters.”

Whether the FCC has anything to do with regulating how these companies exercise that right remains to be seen, but it’s clear that Pai thinks the agency should, and doesn’t. With the makeup of the FCC currently 3:2 in favor of the conservative faction, this rulemaking may be a foregone conclusion; the net neutrality debacle showed that these Commissioners are willing to ignore and twist facts in order to justify the end they choose, and there’s no reason to think this rulemaking will be any different.

The process will be just as drawn out and public as previous ones, however, which means that a cavalcade of comments may yet again indicate that the FCC ignores public opinion, experts, and lawmakers alike in its decision to invent or eliminate its roles as it sees fit. Be ready to share your feedback with the FCC, but no need to fire up the outrage just yet — chances are this rulemaking won’t even exist in draft form until after the election, at which point there may be something of a change in the urgency of this effort to reinterpret the law to the White House’s liking.