Facebook and Twitter CEOs to testify before Congress in November on how they handled the election

Shortly after voting to move forward with a pair of subpoenas, the Senate Judiciary Committee has reached an agreement that will see the CEOs of two major social platforms testify voluntarily in November. The hearing will be the second major congressional appearance by tech CEOs arranged this month.

Twitter’s Jack Dorsey and Facebook’s Mark Zuckerberg will answer questions at the hearing, set for November 17 — two weeks after election day. The Republican-led committee is chaired by South Carolina Senator Lindsey Graham, who set the agenda to include the “platforms’ censorship and suppression of New York Post articles.”

According to a new press release from the committee, lawmakers also plan to use the proceedings as a high-profile post-mortem on how Twitter and Facebook fared on and after election day — an issue that lawmakers on both sides will undoubtedly be happy to dig into.

Republicans are eager to press the tech CEOs on how their respective platforms handled a dubious story from the New York Post purporting to report on hacked materials from presidential candidate Joe Biden’s son, Hunter Biden. They view the incident as evidence of their ongoing claims of anti-conservative political bias in platform policy decisions.

While Republicans on the Senate committee led the decision to pressure Zuckerberg and Dorsey into testifying, the committee’s Democrats, who sat out the vote on the subpoenas, will likely bring to the table their own questions about content moderation, as well.

 

#congress, #facebook, #government, #jack-dorsey, #mark-zuckerberg, #policy, #tc, #twitter


Senate subpoenas could force Zuckerberg and Dorsey to testify on New York Post controversy

The Senate Judiciary Committee voted in favor of issuing subpoenas for Facebook’s Mark Zuckerberg and Twitter’s Jack Dorsey Thursday, meaning that there might be two big tech CEO hearings on the horizon.

Republicans on the committee declared their interest in a hearing on “the platforms’ censorship of New York Post articles” after social networks limited the reach of a dubious story purporting to contain hacked materials implicating Hunter Biden, Joe Biden’s son, in impropriety involving a Ukrainian energy firm. Fox News reportedly passed on the story due to doubts about its credibility.

Tech’s decision to take action against the New York Post story was bound to ignite Republicans in Congress, who have long claimed, with scant evidence, that social platforms deliberately censor conservative voices due to political bias. The Senate Judiciary Committee is chaired by Lindsey Graham (R-SC), a close Trump ally who is now in a much closer than expected race with Democratic challenger Jaime Harrison.

According to a motion filed by Graham, the hearing would address:

(1) the suppression and/or censorship of two news articles from the New York Post titled “Smoking-gun email reveals how Hunter Biden introduced Ukrainian businessman to VP dad” and “Emails reveal how Hunter Biden tried to cash in big on behalf of family with Chinese firm,” (2) any other content moderation policies, practices, or actions that may interfere with or influence elections for federal office, and (3) any other recent determinations to temporarily reduce distribution of material pending factchecker review and/or block and mark material as potentially unsafe.

Earlier in October, the Senate Commerce Committee successfully leveraged subpoena power to secure testimony from Dorsey, Zuckerberg and Alphabet’s Sundar Pichai at its own hearing focused on Section 230, the critical law that shields online platforms from liability for user-created content.

The hearing isn’t scheduled yet, nor have the companies publicly agreed to attend. But lawmakers have now established a precedent for successfully dragging tech’s reluctant leaders under oath, making it more difficult for some of the world’s wealthiest and most powerful men to avoid Congress from here on out.

#congress, #facebook, #government, #tc, #twitter


Hacker says he correctly guessed Trump’s Twitter password—it was “maga2020!”

(Illustration credit: Aurich Lawson)

A security researcher reportedly logged in to President Trump’s Twitter account last week by guessing the password—it was “maga2020!”—and then alerted the US government that Trump needed to upgrade his Twitter security practices.

Security researcher Victor Gevers reportedly guessed Trump’s password on the fifth attempt and was dismayed that the president had not enabled two-step authentication. The news was reported today by de Volkskrant, a Dutch newspaper, and the magazine Vrij Nederland. Both reports had quotes from Gevers, while Vrij Nederland also published a screenshot that Gevers says he took when he had access to the @realdonaldtrump account.

Gevers reportedly gained access to Trump’s Twitter account on Friday last week. He says he tried passwords such as “MakeAmericaGreatAgain” and “Maga2020” before hitting on the correct password of “maga2020!” Gevers is a well-known security researcher and has been quoted in several Ars articles on other security topics going back to 2017. He is a researcher at the nonprofit GDI Foundation and chair of the Dutch Institute for Vulnerability Disclosure.


#biz-it, #policy, #trump, #twitter


FCC cites Title II in defense of helping Trump’s attack on social media

(Image credit: Getty Images | Peter Dazeley)

The Federal Communications Commission’s top lawyer today explained the FCC’s theory of why it can grant President Donald Trump’s request for a new interpretation of a law that provides legal protection to social media platforms like Twitter and Facebook.

Critics of FCC Chairman Ajit Pai’s plan from both the left and right say the FCC has no authority to reinterpret Section 230 of the Communications Decency Act, which gives legal immunity to online platforms that block or modify content posted by users. FCC General Counsel Thomas Johnson said those critics are wrong in a blog post published on the FCC website today.

Johnson noted that the Communications Decency Act was passed by Congress as part of the Telecommunications Act of 1996, which was an update to the Communications Act of 1934 that established the FCC and provided it with regulatory authority. Johnson also pointed to Section 201(b) of the Communications Act, which gave the FCC power to “prescribe such rules and regulations as may be necessary in the public interest to carry out the provisions of this Act.”


#facebook, #fcc, #policy, #section-230, #social-media, #trump, #twitter


Social Media and the Hunter Biden Report

In trying to insulate their platforms from the spread of dubious information, Facebook, Twitter and YouTube have ignited a different kind of firestorm.

#biden-hunter, #biden-joseph-r-jr, #facebook-inc, #new-york-post, #presidential-election-of-2020, #social-media, #twitter, #youtube-com


FCC trying to help Trump win election with Twitter crackdown, Democrats say

FCC Chairman Ajit Pai on December 14, 2017, in Washington, DC, the day of the FCC’s vote to repeal net neutrality rules. (Credit: Getty Images | Alex Wong)

Federal Communications Commission Chairman Ajit Pai has turned the FCC into “a political appendage of President Trump’s campaign” by aiding Trump’s battle against social media websites, two House Democrats said yesterday.

“Chairman Pai’s decision to start a Section 230 rulemaking is a blatant attempt to help a flailing President Trump,” said Energy and Commerce Chairman Frank Pallone Jr. (D-N.J.) and Communications and Technology Subcommittee Chairman Mike Doyle (D-Penn.). “The timing and hurried nature of this decision makes clear it’s being done to influence social media companies’ behavior leading up to an election, and it is shocking to watch this supposedly independent regulatory agency jump at the opportunity to become a political appendage of President Trump’s campaign.”

On Thursday last week, Pai announced that he is backing President Trump’s proposal to limit legal protections for social media websites that block or modify content posted by users. Pai said he will propose a new interpretation of Section 230 of the Communications Decency Act, limiting the legal immunity websites like Facebook and Twitter are granted when they block or screen content. Trump claims the companies are biased against conservatives, and he wants to post on social media without the platforms adding fact checks or limiting the reach of posts that violate their rules.


#ajit-pai, #facebook, #fcc, #policy, #trump, #twitter


The Real Divide in America Is Between Political Junkies and Everyone Else

Most Americans view politics as two camps bickering endlessly and fruitlessly over unimportant issues.

#biden-joseph-r-jr, #democratic-party, #republican-party, #trump-donald-j, #twitter, #united-states, #united-states-politics-and-government


Who regulates social media?

Social media platforms have repeatedly found themselves in the United States government’s crosshairs over the last few years, as it has been progressively revealed just how much power they really wield, and to what purposes they’ve chosen to wield it. But unlike, say, a firearm or drug manufacturer, there is no designated authority who says what these platforms can and can’t do. So who regulates them? You might say everyone and no one.

Now, it must be made clear at the outset that these companies are by no means “unregulated,” in that no legal business in this country is unregulated. For instance, Facebook, certainly a social media company, received a record $5 billion fine last year for failing to comply with rules set by the FTC, but not because the company violated any social media regulations — there aren’t any.

Facebook and others are bound by the same rules that most companies must follow, such as generally agreed-upon definitions of fair business practices, truth in advertising, and so on. But industries like medicine, energy, alcohol, and automotive have additional rules, indeed entire agencies, specific to them; not so for social media companies.

I say “social media” rather than “tech” because the latter is much too broad a concept to have a single regulator. Although Google and Amazon (and Airbnb, and Uber, and so on) need new regulation as well, they may require a different specialist, like an algorithmic accountability office or online retail antitrust commission. (Inasmuch as tech companies act within regulated industries, such as Google in broadband, they are already regulated as such.)

Social media can be roughly defined as platforms where people sign up to communicate and share messages and media, and that’s quite broad enough already without adding in things like ad marketplaces, competition quashing and other serious issues.

Who, then, regulates these social media companies? For the purposes of the U.S., there are four main directions from which meaningful limitations or policing may emerge, but each one has serious limitations, and none was actually created for the task.

1. Federal regulators

Image Credits: Andrew Harrer/Bloomberg

The Federal Communications Commission and Federal Trade Commission are what people tend to think of when “social media” and “regulation” are used in a sentence together. But one is a specialist — not the right kind, unfortunately — and the other a generalist.

The FCC, unsurprisingly, is primarily concerned with communication, but due to the laws that created it and grant it authority, it has almost no authority over what is being communicated. The sabotage of net neutrality has complicated this somewhat, but even the faction of the Commission dedicated to the backwards stance adopted during this administration has not argued that the messages and media you post are subject to their authority. They have indeed called for regulation of social media and big tech — but are for the most part unwilling and unable to do so themselves.

The Commission’s mandate is explicitly the cultivation of a robust and equitable communications infrastructure, which these days primarily means fixed and mobile broadband (though increasingly satellite services as well). The applications and businesses that use that broadband, though they may be affected by the FCC’s decisions, are generally speaking none of the agency’s business, and it has repeatedly said so.

The only potentially relevant exception is the much-discussed Section 230 of the Communications Decency Act (an amendment to the sprawling Communications Act), which waives liability for companies when illegal content is posted to their platforms, as long as those companies make a “good faith” effort to remove it in accordance with the law.

But this part of the law doesn’t actually grant the FCC authority over those companies or define good faith, and there’s an enormous risk of stepping into unconstitutional territory, because a government agency telling a company what content it must keep up or take down runs full speed into the First Amendment. That’s why although many think Section 230 ought to be revisited, few take Trump’s feeble executive actions along these lines seriously.

The agency did announce that it will be reviewing the prevailing interpretation of Section 230, but until there is some kind of established statutory authority or Congress-mandated mission for the FCC to look into social media companies, it simply can’t.

The FTC is a different story. As watchdog over business practices at large, it has a similar responsibility towards Twitter as it does towards Nabisco. It doesn’t have rules about what a social media company can or can’t do any more than it has rules about how many flavors of Cheez-It there should be. (There are industry-specific “guidelines” but these are more advisory about how general rules have been interpreted.)

On the other hand, the FTC is very much the force that comes into play should Facebook misrepresent how it shares user data, or Nabisco overstate the amount of real cheese in its crackers. The agency’s most relevant responsibility to the social media world is that of enforcing the truthfulness of material claims.

You can thank the FTC for the now-familiar, carefully worded statements that avoid any real claims or responsibilities: “We take security very seriously” and “we think we have the best method” and that sort of thing — so pretty much everything that Mark Zuckerberg says. Companies and executives are trained to do this to avoid tangling with the FTC: “Taking security seriously” isn’t enforceable, but saying “user data is never shared” certainly is.

In some cases this can still have an effect, as in the $5 billion fine recently dropped into Facebook’s lap (though for many reasons that was actually not very consequential). It’s important to understand that the fine was for breaking binding promises the company had made — not for violating some kind of social-media-specific regulations, because again, there really aren’t any.

The last point worth noting is that the FTC is a reactive agency. Although it certainly has guidelines on the limits of legal behavior, it doesn’t have rules that, when violated, result in a statutory fine or charges. Instead, complaints filter up through its many reporting systems and it builds a case against a company, often with the help of the Justice Department. That makes it slow to respond compared with the lightning-fast tech industry, and the companies or victims involved may have moved beyond the point of crisis while a complaint is being formalized there. Equifax’s historic breach and its minimal consequences are an instructive case.

So: While the FCC and FTC do provide important guardrails for the social media industry, it would not be accurate to say they are its regulators.

2. State legislators

States are increasingly battlegrounds for the frontiers of tech, including social media companies. This is likely due to frustration with partisan gridlock in Congress that has left serious problems unaddressed for years or decades. Two good examples of states that lost their patience are California’s new privacy rules and Illinois’s Biometric Information Privacy Act (BIPA).

The California Consumer Privacy Act (CCPA) was arguably born out of the ashes of other attempts at a national level to make companies more transparent about their data collection policies, like the ill-fated Broadband Privacy Act.

Californian officials decided that if the feds weren’t going to step up, there was no reason the state shouldn’t at least look after its own. By convention, state laws that offer consumer protections are generally given priority over weaker federal laws — this is so a state isn’t prohibited from taking measures for its citizens’ safety while the slower machinery of Congress grinds along.

The resulting law, very briefly stated, creates formal requirements for disclosures of data collection, methods for opting out of them, and also grants authority for enforcing those laws. The rules may seem like common sense when you read them, but they’re pretty far out there compared to the relative freedom tech and social media companies enjoyed previously. Unsurprisingly, they have vocally opposed the CCPA.

BIPA has a somewhat similar origin, in that a particularly far-sighted state legislature created a set of rules in 2008 limiting companies’ collection and use of biometric data like fingerprints and facial recognition. It has proven to be a huge thorn in the side of Facebook, Microsoft, Amazon, Google, and others that have taken for granted the ability to analyze a user’s biological metrics and use them for pretty much whatever they want.

Many lawsuits have been filed alleging violations of BIPA, and while few have produced notable punishments, they have been invaluable in forcing the companies to admit on the record exactly what they’re doing, and how. Sometimes it’s quite surprising! The optics are terrible, and tech companies have lobbied (fortunately, with little success) to have the law replaced or weakened.

What’s crucially important about both of these laws is that they force companies to, in essence, choose between universally meeting a new, higher standard for something like privacy, or establishing a tiered system whereby some users get more privacy than others. The thing about the latter choice is that once people learn that users in Illinois and California are getting “special treatment,” they start asking why Mainers or Puerto Ricans aren’t getting it as well.

In this way state laws exert outsize influence, forcing companies to make changes nationally or globally because of decisions that technically only apply to a small subset of their users. You may think of these states as being activists (especially if their attorneys general are proactive), or simply ahead of the curve, but either way they are making their mark.

This is not ideal, however, because taken to the extreme, it produces a patchwork of state laws created by local authorities that may conflict with one another or embody different priorities. That, at least, is the doomsday scenario predicted almost universally by companies in a position to lose out.

State laws act as a test bed for new policies, but tend to only emerge when movement at the federal level is too slow. Although they may hit the bullseye now and again, like with BIPA, it would be unwise to rely on a single state or any combination among them to miraculously produce, like so many simian legislators banging on typewriters, a comprehensive regulatory structure for social media. Unfortunately, that leads us to Congress.

3. Congress

Image: Bryce Durbin/TechCrunch

What can be said about the ineffectiveness of Congress that has not already been said, again and again? Even in the best of times few would trust these people to establish reasonable, clear rules that reflect reality. Congress simply is not the right tool for the job, because of its stubborn and willful ignorance on almost all issues of technology and social media, its countless conflicts of interest, and its painful sluggishness — sorry, deliberation — in actually writing and passing any bills, let alone good ones.

Companies oppose state laws like the CCPA while calling for national rules because they know that it will take forever and there’s more opportunity to get their finger in the pie before it’s baked. National rules, in addition to coming far too late, are much more likely to be watered down and riddled with loopholes by industry lobbyists. (This is indicative of the influence these companies wield over their own regulation, but it’s hardly official.)

But Congress isn’t a total loss. In moments of clarity it has established expert agencies like those in the first item, which have Congressional oversight but are otherwise independent, empowered to make rules, and kept technically — if somewhat limply — nonpartisan.

Unfortunately, the question of social media regulation is too recent for Congress to have empowered a specialist agency to address it. Social media companies don’t fit neatly into any of the categories that existing specialists regulate, something that is plainly evident in the present attempt to stretch Section 230 beyond the breaking point just to put someone on the beat.

Laws at the federal level are not to be relied on for regulation of this fast-moving industry, as the current state of things shows more than adequately. And until a dedicated expert agency or something like it is formed, it’s unlikely that anything spawned on Capitol Hill will do much to hold back the Facebooks of the world.

4. European regulators

Of course, however central it considers itself to be, the U.S. is only a part of a global ecosystem of various and shifting priorities, leaders, and legal systems. But in a sort of inside-out version of state laws punching above their weight, laws that affect a huge part of the world except the U.S. can still have a major effect on how companies operate here.

The most obvious example is the General Data Protection Regulation or GDPR, a set of rules, or rather an augmentation of existing rules dating to 1995, that has begun to change the way some social media companies do business.

But this is only the latest step in a fantastically complex, decades-long process that must harmonize the national laws and needs of the E.U. member states in order to provide the clout it needs to compel adherence to the international rules. Red tape seldom bothers tech companies, which rely on bottomless pockets to plow through or in-born agility to dance away.

Although the tortoise may eventually in this case overtake the hare in some ways, at present the GDPR’s primary hindrance is not merely the complexity of its rules, but the lack of decisive enforcement of them. Each country’s data protection authority acts as a node in a network that must reach consensus in order to bring the hammer down, a process that grinds slow and exceedingly fine.

When the blow finally lands, though, it may be a heavy one, outlawing entire practices at an industry-wide level rather than simply extracting pecuniary penalties these immensely rich entities can shrug off. There is space for optimism as cases escalate and involve heavy hitters like antitrust laws in efforts that grow to encompass the entire “big tech” ecosystem.

The rich tapestry of European regulations is really too complex of a topic to address here in the detail it deserves, and also reaches beyond the question of who exactly regulates social media. Europe’s role in that question of, if you will, speaking slowly and carrying a big stick promises to produce results on a grand scale, but for the purposes of this article it cannot really be considered an effective policing body.

(TechCrunch’s E.U. regulatory maven Natasha Lomas contributed to this section.)

5. No one? Really?

As you can see, the regulatory ecosystem in which social media swims is more or less free of predators. The most dangerous are the small, agile ones — state legislatures — that can take a bite before the platforms have had a chance to brace for it. The other regulators are either too slow, too compromised, or too involved (or some combination of the three) to pose a real threat. For this reason it may be necessary to introduce a new, but familiar, species: the expert agency.

As noted above, the FCC is the most familiar example of one of these, though its role is so fragmented that one could be forgiven for forgetting that it was originally created to ensure the integrity of the telephone and telegraph system. Why, then, is it the expert agency for orbital debris? That’s a story for another time.

Image Credit: Bryce Durbin/TechCrunch

What is clearly needed is the establishment of an independent expert agency or commission in the U.S., at the federal level, that has statutory authority to create and enforce rules pertaining to the handling of consumer data by social media platforms.

Like the FCC (and somewhat like the E.U.’s DPAs), this should be officially nonpartisan — though like the FCC it will almost certainly vacillate in its allegiance — and should have specific mandates on what it can and can’t do. For instance, it would be improper and unconstitutional for such an agency to say this or that topic of speech should be disallowed from Facebook or Twitter. But it would be able to say that companies need to have a reasonable and accessible definition of the speech they forbid, and likewise a process for auditing and contesting takedowns. (The details of how such an agency would be formed and shaped are well beyond the scope of this article.)

Even the likes of the FAA lag behind industry changes, such as the upsurge in drones that necessitated a hasty revisit of existing rules, or the huge increase in commercial space launches. But that’s a feature, not a bug. These agencies are designed not to act unilaterally based on the wisdom and experience of their leaders, but are required to perform or solicit research, consult with the public and industry alike, and create evidence-based policies involving, or at least addressing, a minimum of sufficiently objective data.

Sure, that didn’t really work with net neutrality, but I think you’ll find that industries have been unwilling to capitalize on this temporary abdication of authority by the FCC because they see that the Commission’s current makeup is fighting a losing battle against voluminous evidence, public opinion, and common sense. They see the writing on the wall and understand that under this system it can no longer be ignored.

With an analogous authority for social media, the evidence could be made public, the intentions for regulation plain, and the shareholders — that is to say, users — could make their opinions known in a public forum that isn’t owned and operated by the very companies they aim to rein in.

Without such an authority these companies and their activities — the scope of which we can only guess at — will remain in a blissful limbo, picking and choosing by which rules to abide and against which to fulminate and lobby. We must help them decide, and weigh our own priorities against theirs. They have already abused the naive trust of their users across the globe — perhaps it’s time we asked them to trust us for once.

#facebook, #fcc, #ftc, #gdpr, #government, #instagram, #regulation, #social, #social-media, #social-networks, #tc, #twitter


The Facebook-Twitter-Trump Wars Are Actually About Something Else

Our institutions have failed to rein in Donald Trump. So people look to Big Tech.

#censorship, #computers-and-the-internet, #facebook-inc, #rumors-and-misinformation, #social-media, #trump-donald-j, #twitter


Where Liberal Power Lies

And why conservatives fear the creep of authoritarianism, too.

#ahmari-sohrab, #conservatism-us-politics, #dorsey-jack, #facebook-inc, #freedom-of-speech-and-expression, #new-york-post, #presidential-election-of-2020, #social-media, #trump-donald-j, #twitter, #united-states-politics-and-government, #zuckerberg-mark-e


Daily Crunch: Twitter walks back New York Post decision

A New York Post story forces social platforms to make (and in Twitter’s case, reverse) some difficult choices, Sony announces a new 3D display and fitness startup Future raises $24 million. This is your Daily Crunch for October 16, 2020.

The big story: Twitter walks back New York Post decision

A recent New York Post story about a cache of emails and other data supposedly originating from a laptop belonging to Joe Biden’s son Hunter looked suspect from the start, and more holes have emerged over time. But it’s also put the big social media platforms in an awkward position, as both Facebook and Twitter took steps to limit the ability of users to share the story.

Twitter, in particular, took a more aggressive stance, blocking links to and images of the Post story because it supposedly violated the platform’s “hacked materials policy.” This led to predictable complaints from Republican politicians, and even Twitter’s CEO Jack Dorsey said that blocking links in direct messages without an explanation was “unacceptable.”

As a result, the company said it’s changing the aforementioned hacked materials policy. It will no longer remove hacked content unless it’s been shared directly by hackers or those “acting in direct concert with them.” Otherwise, it will label tweets to provide context. As of today, it’s also allowing users to share links to the Post story.

The tech giants

Sony’s $5,000 3D display (probably) isn’t for you — The company is targeting creative professionals with its new Spatial Reality Display.

EU’s Google-Fitbit antitrust decision deadline pushed into 2021 — EU regulators now have until January 8, 2021 to take a decision.

Startups, funding and venture capital

Elon Musk’s Las Vegas Loop might only carry a fraction of the passengers it promised — Planning files reviewed by TechCrunch seem to show that The Boring Company’s Loop system will not be able to move anywhere near the number of people the company agreed to.

Future raises $24M Series B for its $150/mo workout coaching app amid at-home fitness boom — Future offers a pricey subscription that virtually teams users with a real-life fitness coach.

Lawmatics raises $2.5M to help lawyers market themselves — The San Diego startup is building marketing and CRM software for lawyers.

Advice and analysis from Extra Crunch

How COVID-19 and the resulting recession are impacting female founders — The sharp decline in available capital is slowing the pace at which women are founding new companies in the COVID-19 era.

Startup founders set up hacker homes to recreate Silicon Valley synergy — Hacker homes feel like a nostalgic attempt to recreate some of the synergies COVID-19 wiped out.

Private equity firms can offer enterprise startups a viable exit option — The IPO-or-acquisition question isn’t always an either/or proposition.

(Reminder: Extra Crunch is our subscription membership program, which aims to democratize information about startups. You can sign up here.)

Everything else

FAA streamlines commercial launch rules to keep the rockets flying — With rockets launching in greater numbers and variety, and from more providers, it makes sense to get a bit of the red tape out of the way.

We need universal digital ad transparency now — Fifteen researchers propose a new standard for advertising disclosures.

The Daily Crunch is TechCrunch’s roundup of our biggest and most important stories. If you’d like to get this delivered to your inbox every day at around 3pm Pacific, you can subscribe here.

#daily-crunch, #policy, #social, #twitter


Twitter is now allowing users to share that controversial New York Post story

Twitter has taken another step back from its initial decision to block users from sharing links to or images of a New York Post story reporting on emails and other data supposedly originating on a laptop belonging to Democratic presidential nominee Joe Biden’s son Hunter.

The story, which alleged that Hunter Biden had set up a meeting between a Ukrainian energy firm and his father back when Biden was vice president, looked shaky from the start, and more holes have emerged over time. Both Facebook and Twitter took action to slow its spread — but Twitter seemed to take the more aggressive stance, not just including warning labels whenever someone shared the story, but actually blocking links.

These moves have drawn a range of criticism. There have been predictable cries of censorship from Republican politicians and pundits, but there have also been suggestions that Facebook and Twitter inadvertently drew more attention to the story. And even Twitter’s CEO Jack Dorsey suggested that it was “unacceptable” to block links in DMs without an explanation.

Casey Newton, on the other hand, argued that the platforms had successfully slowed the story’s spread: “The truth had time to put its shoes on before Rudy Giuliani’s shaggy-dog story about a laptop of dubious origin made it all the way around the world.”

Twitter initially justified its approach by citing its hacked materials policy, then later said it was blocking the Post article for including “personal and private information — like email addresses and phone numbers — which violate our rules.”

The controversy did prompt Twitter to revise its hacked materials policy, so that content and links obtained through dubious means will now come with a label, rather than being removed entirely, unless it’s being shared directly by hackers or those “acting in concert with them.”

And now, as first reported by The New York Times, Twitter is also allowing users to share links to the Post story itself (something I’ve confirmed through my own Twitter account).

Why the reversal? Again, the official justification for blocking the link was to prevent the spread of private information, so the company said that the story has now spread so widely, online and in the press, that the information can no longer be considered private.

#policy, #social, #twitter


In Reversal, Twitter Is No Longer Blocking New York Post Article

The latest change underlined how rapidly social media platforms are shifting their positions in the days leading up to the election.

#computers-and-the-internet, #corporate-social-responsibility, #facebook-inc, #news-and-news-media, #presidential-election-of-2020, #republican-party, #rumors-and-misinformation, #social-media, #twitter, #united-states-politics-and-government


Twitter abruptly changes hacked-materials policy after blocking Biden story

(Image credit: Getty Images | Peter Dazeley)

Twitter has changed its policy on sharing hacked materials after facing criticism of its decision to block users from tweeting links to a New York Post article that contained Hunter Biden emails allegedly retrieved from a computer left at a repair shop.

On Wednesday, Twitter said it blocked links to the Post story because it included private information and violated Twitter’s hacked materials policy, which prohibits sharing links to or images of hacked content. But on late Thursday night, Twitter legal executive Vijaya Gadde wrote in a thread that the company has “decided to make changes to the [hacked materials] policy and how we enforce it” after receiving “significant feedback.”

Twitter enacted the policy in 2018 “to discourage and mitigate harms associated with hacks and unauthorized exposure of private information,” Gadde wrote. “We tried to find the right balance between people’s privacy and the right of free expression, but we can do better.” Twitter will thus change its hacked materials policy to “no longer remove hacked content unless it is directly shared by hackers or those acting in concert with them.” Twitter will also “label Tweets to provide context instead of blocking links from being shared on Twitter.”


#hunter-biden, #new-york-post, #policy, #twitter


We need universal digital ad transparency now

Dear Mr. Zuckerberg, Mr. Dorsey, Mr. Pichai and Mr. Spiegel: We need universal digital ad transparency now!

The negative social impacts of discriminatory ad targeting and delivery are well-known, as are the social costs of disinformation and exploitative ad content. The prevalence of these harms has been demonstrated repeatedly by our research. At the same time, the vast majority of digital advertisers are responsible actors who are only seeking to connect with their customers and grow their businesses.

Many advertising platforms acknowledge the seriousness of the problems with digital ads, but they have taken different approaches to confronting those problems. While we believe that platforms need to continue to strengthen their vetting procedures for advertisers and ads, it is clear that this is not a problem advertising platforms can solve by themselves, as they themselves acknowledge. The vetting being done by the platforms alone is not working; public transparency of all ads, including ad spend and targeting information, is needed so that advertisers can be held accountable when they mislead or manipulate users.

Our research has shown:

  • Advertising platform system design allows advertisers to discriminate against users based on their gender, race and other sensitive attributes.
  • Platform ad delivery optimization can be discriminatory, regardless of whether advertisers attempt to set inclusive ad audience preferences.
  • Ad delivery algorithms may be causing polarization and make it difficult for political campaigns to reach voters with diverse political views.
  • Sponsors spent more than $1.3 billion on digital political ads, yet disclosure is vastly inadequate. Current voluntary archives do not prevent intentional or accidental deception of users.

While it doesn’t take the place of strong policies and rigorous enforcement, we believe transparency of ad content, targeting and delivery can effectively mitigate many of the potential harms of digital ads. Many of the largest advertising platforms agree; Facebook, Google, Twitter and Snapchat all have some form of an ad archive. The problem is that many of these archives are incomplete, poorly implemented, hard to access by researchers and have very different formats and modes of access. We propose a new standard for universal ad disclosure that should be met by every platform that publishes digital ads. If all platforms commit to the universal ad transparency standard we propose, it will mean a level playing field for platforms and advertisers, data for researchers and a safer internet for everyone.

The public deserves full transparency of all digital advertising. We want to acknowledge that what we propose will be a major undertaking for platforms and advertisers. However, we believe that the social harms currently being borne by users everywhere vastly outweigh the burden universal ad transparency would place on ad platforms and advertisers. Users deserve real transparency about all ads they are bombarded with every day. We have created a detailed description of what data should be made transparent that you can find here.

We researchers stand ready to do our part. The time for universal ad transparency is now.

Signed by:

Jason Chuang, Mozilla
Kate Dommett, University of Sheffield
Laura Edelson, New York University
Erika Franklin Fowler, Wesleyan University
Michael Franz, Bowdoin College
Archon Fung, Harvard University
Sheila Krumholz, Center for Responsive Politics
Ben Lyons, University of Utah
Gregory Martin, Stanford University
Brendan Nyhan, Dartmouth College
Nate Persily, Stanford University
Travis Ridout, Washington State University
Kathleen Searles, Louisiana State University
Rebekah Tromble, George Washington University
Abby Wood, University of Southern California

#advertising-tech, #column, #digital-advertising, #digital-marketing, #facebook, #google, #media, #online-advertising, #opinion, #snapchat, #social, #targeted-advertising, #tc, #twitter


Twitter changes its hacked materials policy in wake of New York Post controversy

Twitter has announced an update to its hacked materials policy — saying it will no longer remove hacked content unless it’s directly shared by hackers or those “acting in concert with them”.

Instead of blocking such content/links from being shared on its service, it says it will label tweets to “provide context”.

Wider Twitter rules against posting private information, synthetic and manipulated media, and non-consensual nudity all still apply — so it could still, for example, remove links to hacked material if the content being linked to violates other policies. But just tweeting a link to hacked materials isn’t an automatic takedown anymore.

The move comes hard on the heels of the company’s decision to restrict sharing of a New York Post article this week — which reported on claims that laptop hardware left at a repair shop contained emails and other data belonging to Hunter Biden, the son of U.S. presidential candidate Joe Biden.

The decision by Twitter to restrict sharing of the Post article attracted vicious criticism from high-profile Republican voices — with the likes of Senator Josh Hawley tweeting that the company is “now censoring journalists”.

Twitter’s hacked materials policy does explicitly allow “reporting on a hack, or sharing press coverage of hacking” but the company subsequently clarified that it had acted because the Post article contained “personal and private information — like email addresses and phone numbers — which violate our rules”. (Plus the Post wasn’t reporting on a hack, but rather on the claimed discovery of a cache of emails, and on the emails themselves.)

At the same time the Post article itself is highly controversial. The scenario of how the data came to be in the hands of a random laptop repair shop which then chose to hand it over to a key Trump ally stretches credibility — bearing the hallmarks of an election-targeting disinformation operation, as we explained on Wednesday.

Given questions over the quality of the Post’s fact-checking and journalistic standards in this case, Twitter’s decision to restrict sharing of the article actually appears to have helped reduce the spread of disinformation — even as it attracted flak to the company for censoring ‘journalism’.

(It has also since emerged that the hard drive in question was manufactured shortly before the laptop was claimed to have been dropped off at the shop. So the most likely scenario is that Hunter Biden’s iCloud was hacked and doctored emails were planted on the drive, where the data could be ‘discovered’ and leaked to the press in a ham-fisted attempt to influence the U.S. presidential election. But Twitter is clearly uncomfortable that enforcing its policy led to accusations of censoring journalists.)

In a tweet thread explaining the change to its policy, Twitter’s legal, policy and trust & safety lead, Vijaya Gadde, writes: “We want to address the concerns that there could be many unintended consequences to journalists, whistleblowers and others in ways that are contrary to Twitter’s purpose of serving the public conversation.”

She also notes that when the hacked materials policy was first introduced, in 2018, Twitter had fewer tools for policy enforcement than it does now, saying: “We’ve recently added new product capabilities, such as labels to provide people with additional context. We are no longer limited to Tweet removal as an enforcement action.”

Twitter began adding contextual labels to policy-breaching tweets by US president Donald Trump earlier this year, rather than remove his tweets altogether. It has continued to expand usage of these contextual signals — such as by adding fact-checking labels to certain conspiracy theory tweets — giving itself a ‘more speech to counteract bad speech’ enforcement tool vs the blunt instrument of tweet takedowns/account bans (which it has also applied recently to the toxic conspiracy theory group, QAnon).

“We believe that labeling Tweets and empowering people to assess content for themselves better serves the public interest and public conversation. The Hacked Material Policy is being updated to reflect these new enforcement capabilities,” Gadde also says, adding: “Content moderation is incredibly difficult, especially in the critical context of an election. We are trying to act responsibly & quickly to prevent harms, but we’re still learning along the way.”

The updated policy is clearly not a free-for-all, given all other Twitter Rules against hacked material apply (such as doxxing). Though there’s a question of whether tweets linking to the Post article would still be taken down under the updated policy if the story did indeed contain personal info (which remains against Twitter’s policy).

At the same time, the new ‘third way’ policy for hacked materials does leave Twitter’s platform open to being a conduit for the spread of political disinformation (just with a little contextual friction) in instances where it’s been credulously laundered by the press. (Albeit Twitter can justifiably point the finger of blame at poor journalistic standards at that point.)

The new policy also raises the question of how Twitter will determine whether or not a person is working ‘in concert’ with hackers. Just spitballing here, but if — say — on the eve of the election Trump were to share some highly dubious information smearing his key political rival, which he said he’d been handed by Russian president Vladimir Putin, would Twitter step in and remove it?

We can only hope we don’t have to find out.

#content-moderation, #disinformation, #hacking, #new-york-post, #policy, #security, #social, #twitter


Facebook and Twitter Dodge a 2016 Repeat, and Ignite a 2020 Firestorm

The companies have said they would do more to stop misinformation and hacked materials from spreading. This is what that effort looks like.

#biden-hunter, #censorship, #computers-and-the-internet, #cyberattacks-and-hackers, #facebook-inc, #freedom-of-speech-and-expression, #new-york-post, #news-and-news-media, #presidential-election-of-2020, #rumors-and-misinformation, #social-media, #twitter, #united-states-politics-and-government


Twitter is investigating widespread outage reports

If you’re reading this, you probably didn’t get here from Twitter. The service has been experiencing widespread reports of outages for at least an hour. The issue has impacted a range of different activities on the site, ranging from newsfeeds to the ability to tweet. The company has acknowledged the ongoing problem, noting that it is investigating on its official status page:

Update – We are continuing to monitor as our teams investigate. More updates to come.
Oct 15, 22:31 UTC
Investigating – We are currently investigating this issue. More updates to come.
Oct 15, 21:56 UTC

Twitter responded to our request for comment, stating, “We know people are having trouble Tweeting and using Twitter. We’re working to fix this issue as quickly as possible. We’ll share more when we have it and Tweet from @TwitterSupport when we can – stay tuned.”

We’ll update as we hear more.

#apps, #outage, #twitter


As Twitter and Facebook Clamp Down, Republicans Claim ‘Election Interference’

Conservatives said they would subpoena the chief executives of the social networks, which had blocked an unsubstantiated New York Post article.

#biden-hunter, #blackburn-marsha, #cruz-ted, #dorsey-jack, #facebook-inc, #freedom-of-speech-and-expression, #hawley-josh-d-1979, #house-committee-on-the-judiciary, #jordan-jim-1964, #new-york-post, #news-and-news-media, #presidential-election-of-2020, #social-media, #trump-donald-j, #twitter, #zuckerberg-mark-e


With ‘absurd’ timing, FCC announces intention to revisit Section 230

FCC Chairman Ajit Pai has announced his intention to pursue a reform of Section 230 of the Communications Act, which among other things limits the liability of internet platforms for content they host. Commissioner Rosenworcel described the timing — immediately after conservative outrage at Twitter and Facebook limiting the reach of an article relating to Hunter Biden — as “absurd.” But it’s not necessarily the crackdown the Trump administration clearly desires.

In a statement, Chairman Pai explained that “members of all three branches of the federal government have expressed serious concerns about the prevailing interpretation of the immunity set forth in Section 230,” and that there is broad support for changing the law — in fact there are already several bills under consideration that would do so.

At issue is the legal protections for platforms when they decide what content to allow and what to block. Some say they are clearly protected by the First Amendment (this is how it is currently interpreted), while others assert that some of those choices amount to violations of users’ right to free speech.

Though Pai does not mention specific recent circumstances in which internet platforms have been accused of having partisan bias in one direction or the other, it is difficult to imagine they — and the constant needling of the White House — did not factor into the decision.

A long road with an ‘unfortunate detour’

In fact the push to reform Section 230 has been progressing for years, with the limitations of the law and the FCC’s interpretation of its pertinent duties discussed candidly by the very people who wrote the original bill and thus have considerable insight into its intentions and shortcomings.

In June Commissioner Starks disparaged pressure from the White House to revisit the FCC’s interpretation of the law, saying that the First Amendment protections are clear and that Trump’s executive order “seems inconsistent with those core principles.” That said, he proposed that the FCC take the request to reconsider the law seriously.

“And if, as I suspect it ultimately will, the petition fails at a legal question of authority,” he said, “I think we should say it loud and clear, and close the book on this unfortunate detour. Let us avoid an upcoming election season that can use a pending proceeding to, in my estimation, intimidate private parties.”

The latter part of his warning seems especially prescient given the choice by the Chairman to open proceedings less than three weeks before the election, and the day after Twitter and Facebook exercised their authority as private platforms to restrict the distribution of articles which, as Twitter belatedly explained, clearly broke guidelines on publishing private information. (The New York Post article had screenshots of unredacted documents with what appeared to be Hunter Biden’s personal email and phone number, among other things.)

Commissioner Rosenworcel did not mince words, saying “The timing of this effort is absurd. The FCC has no business being the President’s speech police.” Starks echoed her, saying “We’re in the midst of an election… the FCC shouldn’t do the President’s bidding here.” (Trump has repeatedly called for the “repeal” of Section 230, which is just part of a much larger and important set of laws.)

Considering the timing and the utter impossibility of reaching any kind of meaningful conclusion before the election — rulemaking is at a minimum a months-long process — it is hard to see Pai’s announcement as anything but a pointed warning to internet platforms. Platforms which, it must be stressed, the FCC has essentially no regulatory powers over.

Foregone conclusion

The Chairman telegraphed his desired outcome clearly in the announcement, saying “Many advance an overly broad interpretation that in some cases shields social media companies from consumer protection laws in a way that has no basis in the text of Section 230… Social media companies have a First Amendment right to free speech. But they do not have a First Amendment right to a special immunity denied to other media outlets, such as newspapers and broadcasters.”

Whether the FCC has anything to do with regulating how these companies exercise that right remains to be seen, but it’s clear that Pai thinks the agency should, and doesn’t. With the makeup of the FCC currently 3:2 in favor of the conservative faction, it may be said that this rulemaking is a foregone conclusion; the net neutrality debacle showed that these commissioners are willing to ignore and twist facts in order to justify the end they choose, and there’s no reason to think this rulemaking will be any different.

The process will be just as drawn out and public as previous ones, however, which means that a cavalcade of comments may once again show the FCC ignoring public opinion, experts, and lawmakers alike as it invents or eliminates its roles as it sees fit. Be ready to share your feedback with the FCC, but no need to fire up the outrage just yet — chances are this rulemaking won’t even exist in draft form until after the election, at which point there may be something of a change in the urgency of this effort to reinterpret the law to the White House’s liking.

#facebook, #fcc, #government, #policy, #section-230, #social-media, #twitter


YouTube bans videos promoting conspiracy theories like QAnon that target individuals

YouTube today joined social media platforms like Facebook and Twitter in taking more direct action to prohibit the distribution of conspiracy theories like QAnon.

The company announced that it is expanding its hate and harassment policies to ban videos “that [target] an individual or group with conspiracy theories that have been used to justify real-world violence,” according to a statement.

YouTube specifically pointed to videos that harass or threaten someone by claiming they are complicit in the false conspiracy theories promulgated by adherents to QAnon.

YouTube isn’t going as far as either of the other major social media outlets in establishing an outright ban on videos or articles that promote the outlandish conspiracies, instead focusing on the material that targets individuals.

“As always, context matters, so news coverage on these issues or content discussing them without targeting individuals or protected groups may stay up,” the company said in a statement. “We will begin enforcing this updated policy today, and will ramp up in the weeks to come.”

It’s the latest step in social media platforms’ efforts to combat the spread of disinformation and conspiracy theories that are increasingly linked to violence and terrorism in the real world.

In 2019, the FBI for the first time identified fringe conspiracy theories like QAnon as a domestic terrorism threat. Adherents of the conspiracy theory falsely claim that famous celebrities and Democratic politicians are part of a secret, Satanic, child-molesting cabal plotting to undermine Donald Trump.

In July, Twitter banned 7,000 accounts associated with the conspiracy theory, and last week Facebook announced a ban on the distribution of QAnon related materials or propaganda across its platforms.

These actions by the social media platforms may be too little, too late, considering how widely the conspiracy theories have spread… and the damage they’ve already done thanks to incidents like the attack on a pizza parlor in Washington DC that landed the gunman in prison.

The recent steps at YouTube followed earlier efforts to stem the distribution of conspiracy theories by making changes to its recommendation algorithm to avoid promoting conspiracy-related materials.

However, as TechCrunch noted previously, it was over the course of 2018 and the past year that QAnon conspiracies really took root; the theory is now a shockingly mainstream political belief system that has its own congressional candidates.

So much for YouTube’s vaunted 70% drop in views coming from the company’s search and discovery systems. The company said that when it looked at QAnon content, it saw the number of views coming from non-subscribed recommendations dropping by over 80% since January 2019.

YouTube noted that it may take additional steps going forward as it looks to combat conspiracy theories that lead to real-world violence.

“Due to the evolving nature and shifting tactics of groups promoting these conspiracy theories, we’ll continue to adapt our policies to stay current and remain committed to taking the steps needed to live up to this responsibility,” the company said.

#donald-trump, #facebook, #federal-bureau-of-investigation, #social-media-platforms, #tc, #twitter, #youtube

0

Trump’s Tweets on Troop Withdrawals Unnerve Pentagon

The president’s demands to draw down forces in Afghanistan, Somalia and Syria seek to fulfill a campaign promise. But officials warn rapid troop reductions could bolster adversaries.

#afghanistan, #al-qaeda, #china, #defense-department, #islamic-state-in-iraq-and-syria-isis, #milley-mark-a, #obrien-robert-c-1952, #presidential-election-of-2020, #russia, #shabab, #somalia, #syria, #targeted-killings, #terrorism, #trump-donald-j, #twitter, #united-states-africa-command, #united-states-defense-and-military-forces, #united-states-international-relations, #united-states-politics-and-government

0

Pew: Most prolific Twitter users tend to be Democrats, but majority of users still rarely tweet

A new study from Pew Research Center, released today, digs into the different ways that U.S. Democrats and Republicans use Twitter. Based on data collected between Nov. 11, 2019 and Sept. 14, 2020, the study finds that members of both parties tweet fairly infrequently, but a majority of Twitter’s most prolific users tend to swing left.

The report updates Pew’s 2019 study with similar findings. At that time, Pew found that 10% of U.S. adults on Twitter were responsible for 80% of all tweets from U.S. adults.

Today, those figures have changed. During the study period, the most active 10% of users produced 92% of all tweets by U.S. adults.

And of these highly active users, 69% identify as Democrats or Democratic-leaning independents.

In addition, the 10% most active Democrats typically produce roughly twice the number of tweets per month (157) compared with the most active Republicans (79).

Image Credits: Pew Research Center

These highly-active users don’t represent how most Twitter users tweet, however.

Regardless of party affiliation, the majority of Twitter users post very infrequently, Pew found.

The median U.S. adult Twitter user posted just once per month during the study period. The median Democrat posted just once per month, while the median Republican posted even less often than that.

The typical adult also has very few followers, with the median Democrat having 32 followers while the median Republican has 21. Democrats, however, tend to follow more accounts than Republicans do, at 126 vs. 71, respectively.

Image Credits: Pew Research Center

The new study additionally examined other differences in how members of the two parties use the platform, beyond frequency of tweeting.

For starters, it found 60% of the Democrats on Twitter describe themselves as very or somewhat liberal, compared with 43% of Democrats who don’t use Twitter. Among Republicans, the shares were closer: 60% of those on Twitter identify as conservative, vs. 62% of those who aren’t on the platform.

Pew also found that the two Twitter accounts followed by the largest share of U.S. adults were those belonging to former President Barack Obama (@BarackObama) and President Donald Trump (@RealDonaldTrump).

Not surprisingly, more Democrats followed Obama — 42% of Democrats did, vs. just 12% of Republicans. Trump, meanwhile, was followed by 35% of Republicans and just 13% of Democrats.

Other top political accounts saw similar trends. For instance, Rep. Alexandria Ocasio-Cortez (@AOC) is followed by 16% of Democrats and 3% of Republicans. Fox News personalities Tucker Carlson (@TuckerCarlson) and Sean Hannity (@seanhannity), meanwhile, are both followed by 12% of Republicans but just 1% of Democrats.


This is perhaps a more important point than Pew’s study indicates, as it demonstrates that even though Twitter’s original goal was to build a “public town square” of sorts, where conversations could take place in the open, Twitter users have built the same isolated bubbles around themselves as they have elsewhere on social media.

Because Twitter’s main timeline only shows tweets and retweets from people you follow, users are only hearing their side of the conversation amplified back to them.

This problem is not unique to Twitter, of course. Facebook, for years, has been heavily criticized for delivering two different versions of reality to its users. An article from The WSJ in 2016 demonstrated how stark this contrast could be, when it showed a “blue” feed and “red” feed, side-by-side.

The problem has been exacerbated even more in recent months, as users from both parties exit mainstream platforms like Twitter and isolate themselves further. On the conservative side, users have fled to free speech-favoring and fact check-eschewing platforms like Gab and Parler. The new social network Telepath, on the other hand, favors left-leaning users by aggressively blocking misinformation — often from conservative news outlets — and banning identity-based attacks.

One other area Pew’s new study examined was the two parties’ use of hashtags on Twitter.

It found that no single hashtag was used by more than 5% of U.S. adults on Twitter during the study period. The starkest partisan gap came with the #BlackLivesMatter hashtag, which was tweeted by 4% of Democrats on Twitter and just 1% of Republicans.

Other common hashtags used across both parties included #covid19, #coronavirus, #mytwitteranniversary, #newprofilepic, #sweepstakes, #contest and #giveaway.

Image Credits: Pew Research Center

It’s somewhat concerning, too, that hashtags were used in such a small percentage of tweets.

While their use has fallen out of favor somewhat — using a hashtag can seem “uncool” — the idea with hashtags was to allow users a quick way to tap into the global conversation around a given topic. But this decline in user adoption indicates there are now fewer tweets that can connect users to an expanded array of views.

Twitter today somewhat addresses this problem through its “Explore” section, highlighting trends, and users can investigate tweets using its keyword search tools. But if Twitter really wants to burst users’ bubbles, it may need to develop a new product — one that offers a different way to connect users to the variety of conversations taking place around a term, whether hashtagged or not.

 

 

 

#hashtag, #pew-research-center, #social, #social-media, #twitter, #united-states, #us-politics

0

Hunter Biden Allegation Prompts Pushback from Facebook, Twitter

Joe Biden’s campaign rejected assertions made in a published report that were based on unverified material from Trump allies. Facebook and Twitter found the story dubious enough to limit access to it on their platforms.

#bannon-stephen-k, #biden-hunter, #biden-joseph-r-jr, #computers-and-the-internet, #facebook-inc, #giuliani-rudolph-w, #new-york-post, #news-and-news-media, #presidential-election-of-2020, #trump-ukraine-whistle-blower-complaint-and-impeachment-inquiry, #trump-donald-j, #twitter, #united-states-politics-and-government

0

Facebook, Twitter limit controversial story about Joe Biden’s son


(credit: Thomas Trutschel / Getty Images)

Facebook and Twitter today are facing criticism from all sides after taking rare action to suppress an apparent attempt at blatant disinformation being spread three weeks before the election.

Both social media platforms are deprecating or outright blocking the sharing of a link to a story the New York Post published this morning about Democratic presidential candidate Joe Biden. Although Twitter and Facebook have both acted in the past to deplatform fringe actors, today’s action marks one of the extremely rare times either has taken action against a story from a relatively mainstream outlet.

The story

The story at the root of all the drama appears to be an attempt to duplicate the effect the Comey memo had on the 2016 presidential election by suggesting there’s a scandal in the Biden camp. The New York Post claimed to have received copies of emails that were obtained from a laptop that Biden’s son Hunter dropped off at a Delaware computer repair shop in 2019. These emails, which the Post called a “smoking gun,” allegedly indicate that Hunter Biden connected his father with Ukrainian energy firm Burisma in 2014.


#disinformation, #elections, #facebook, #joe-biden, #misinformation, #policy, #rudy-giuliani, #twitter

0

Twitter hack probe leads to call for cybersecurity rules for social media giants

An investigation into this summer’s Twitter hack by the New York State Department of Financial Services (NYSDFS) has ended with a stinging rebuke for how easily Twitter let itself be duped by a “simple” social engineering technique — and with a wider call for key social media platforms to be regulated on security.

In the report, the NYSDFS points, by way of contrasting example, to how quickly regulated cryptocurrency companies acted to prevent the Twitter hackers scamming even more people — arguing this demonstrates that tech innovation and regulation aren’t mutually exclusive.

Its point is that the biggest social media platforms have huge societal power (with all the associated consumer risk) but no regulated responsibilities to protect users.

The report concludes this is a problem U.S. lawmakers need to get on and tackle stat — recommending that an oversight council be established (to “designate systemically important social media companies”) and an “appropriate” regulator appointed to ‘monitor and supervise’ the security practices of mainstream social media platforms.

“Social media companies have evolved into an indispensable means of communications: more than half of Americans use social media to get news, and connect with colleagues, family, and friends. This evolution calls for a regulatory regime that reflects social media as critical infrastructure,” the NYSDFS writes, before going on to point out there is still “no dedicated state or federal regulator empowered to ensure adequate cybersecurity practices to prevent fraud, disinformation, and other systemic threats to social media giants”.

“The Twitter Hack demonstrates, more than anything, the risk to society when systemically important institutions are left to regulate themselves,” it adds. “Protecting systemically important social media against misuse is crucial for all of us — consumers, voters, government, and industry. The time for government action is now.”

We’ve reached out to Twitter for comment on the report.

Among the key findings from the Department’s investigation is that the hackers broke into Twitter’s systems by calling employees and claiming to be from Twitter’s IT department. With this simple social engineering method, they were able to trick four employees into handing over their log-in credentials. From there they were able to access the Twitter accounts of high-profile politicians, celebrities and entrepreneurs, including Barack Obama, Kim Kardashian West, Jeff Bezos, Elon Musk, and a number of cryptocurrency companies, using the hijacked accounts to tweet out a crypto scam to millions of users.

Twitter has previously confirmed that a “phone spear phishing” attack was used to gain credentials.

Per the report, the hackers’ “double your bitcoin” scam messages, which contained links to make a payment in bitcoins, enabled them to steal more than $118,000 worth of bitcoins from Twitter users.

A considerably larger sum was prevented from being stolen as a result of swift action taken by regulated crypto companies — namely Coinbase, Square, Gemini Trust Company and Bitstamp — which the Department said blocked scores of attempted transfers by the fraudsters.

“This swift action blocked over 6,000 attempted transfers worth approximately $1.5 million to the Hackers’ bitcoin addresses,” the report notes.

Twitter is also called out for not having a cybersecurity chief in post at the time of the hack — after failing to replace Michael Coates, who left in March. (Last month it announced Rinki Sethi had been hired as CISO).

“Despite being a global social media platform boasting over 330 million average monthly users in 2019, Twitter lacked adequate cybersecurity protection,” the NYSDFS writes. “At the time of the attack, Twitter did not have a chief information security officer, adequate access controls and identity management, and adequate security monitoring — some of the core measures required by the Department’s first-in-the-nation cybersecurity regulation.”

European Union data protection law already bakes in security requirements as part of a comprehensive privacy and security framework (with major penalties possible for security breaches). However, an investigation by the Irish DPC into a 2018 Twitter security incident has yet to conclude, after a draft decision failed to gain the backing of the other EU data watchdogs this August — triggering a further delay to the pan-EU regulatory process.

#crypto, #hack, #policy, #regulation, #security, #social, #social-media, #twitter

0

Riled Up: Misinformation Stokes Calls for Violence on Election Day

Baseless claims are circulating online about a Democrat-led coup, inflaming tensions in an already turbulent election season.

#biden-joseph-r-jr, #computers-and-the-internet, #demonstrations-protests-and-riots, #facebook-inc, #fringe-groups-and-movements, #presidential-election-of-2020, #rumors-and-misinformation, #social-media, #trump-donald-j, #twitter, #united-states-defense-and-military-forces, #united-states-politics-and-government, #youtube-com

0

WordPress can now turn blog posts into tweetstorms automatically

Earlier this year, WordPress.com introduced an easier way to post your Twitter threads, also known as tweetstorms, to your blog via an “unroll” option for Twitter embeds. Today, the company is addressing the flip side of tweetstorm publication — it’s making it possible to turn an existing WordPress blog post into a tweetstorm with just a couple of clicks.

The new feature will allow you to tweet out every word of your post, as well as the accompanying images and videos, the company says. These will be automatically inserted into the thread where they belong alongside your text.

To use the tweetstorm feature, a WordPress user will first click on the Jetpack icon on the top right of the page, then connect their Twitter account to their WordPress site, if that hasn’t been done already.

Image Credits: WordPress.com

 

The option also supports multiple Twitter accounts, if you want to post your tweetstorms in several places.

Once Twitter is connected, you’ll select the account or accounts where you want to tweet, then choose the newly added option to share the post as a Twitter thread instead of a single post with a link.

Image Credits: WordPress.com

In the box provided, you’ll write an introductory message for your tweetstorm, so Twitter users will know what your Twitter thread will be discussing.

When you then click on the “publish” button, the blog post will be shared as a tweetstorm automatically.

Image Credits: WordPress.com

The feature was also designed with a few thoughtful touches to make the tweetstorm feel more natural, as if it had been written directly on Twitter.

For starters, WordPress says it will pay attention to the blog post’s formatting in order to determine where to separate the tweets. Instead of packing the first tweet with as many words as possible, it places the break at the end of the first sentence, for example. And when a paragraph is too long for a single tweet, it’s automatically split out into as many tweets as needed, instead of being cut off. A list block, meanwhile, will be formatted as a list on Twitter.
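
For a sense of what that splitting logic involves, here is a minimal sketch in Python. It is purely illustrative: the 280-character cap is Twitter’s public limit, but the sentence-splitting rules and function names are assumptions, not WordPress’s actual implementation.

```python
import re

TWEET_LIMIT = 280  # Twitter's per-tweet character limit


def split_paragraph(paragraph, limit=TWEET_LIMIT):
    """Split one paragraph into tweet-sized chunks, breaking at sentence
    boundaries where possible rather than packing each tweet to the limit."""
    sentences = re.split(r"(?<=[.!?])\s+", paragraph.strip())
    tweets, current = [], ""
    for sentence in sentences:
        candidate = f"{current} {sentence}".strip() if current else sentence
        if len(candidate) <= limit:
            current = candidate
            continue
        if current:
            tweets.append(current)
        # A single sentence longer than the limit gets hard-wrapped.
        while len(sentence) > limit:
            tweets.append(sentence[:limit])
            sentence = sentence[limit:]
        current = sentence
    if current:
        tweets.append(current)
    return tweets


def post_to_thread(paragraphs):
    """Turn a list of blog-post paragraphs into an ordered list of tweets,
    treating paragraph breaks as natural places to split."""
    thread = []
    for paragraph in paragraphs:
        thread.extend(split_paragraph(paragraph))
    return thread


# Example: a short post becomes a thread of appropriately sized tweets.
print(post_to_thread([
    "Short opening paragraph. It fits in one tweet.",
    "A much longer paragraph. " + "More filler text to pad it out. " * 15,
]))
```

A production version would also have to slot images, videos and list blocks into the right positions in the thread, as described above, and account for how Twitter counts links and media against the character limit.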

To help writers craft a blog post that will work as a tweetstorm, you can choose to view where the tweets will be split in the social preview feature. This allows WordPress users to better shape the post to fit Twitter’s character limit as they write.

Image Credits: WordPress.com

At the end of the published tweetstorm, Twitter followers will be able to click a link to read the post on the WordPress site.

This addresses a common complaint with Twitter threads. While it’s useful to post longer thoughts to social media for attention, reading through paragraphs of content directly on Twitter can be difficult. But as tweetstorms grew in popularity, tools to solve this problem emerged. The most popular is a Twitter bot called @ThreadReaderApp, which lets users read a thread in a long-form format by mentioning the account by name within the thread along with the keyword “unroll.”

With the launch of the new WordPress feature, however, Twitter users won’t have to turn to third-party utilities — they can just click through on the link provided to read the content as a blog post. This, in turn, could help turn Twitter followers into blog subscribers, allowing the WordPress writer to increase their overall reach.

WordPress had announced last month that the tweetstorm feature would arrive with the Jetpack 9.0 release in early October.

The feature is now publicly available, the company says.

#automattic, #blog, #social, #social-media, #tweetstorm, #twitter, #wordpress

0

Changing how retweets work, Twitter seeks to slow down election misinformation

Twitter on Friday announced a major set of changes to the way its platform works as the social network braces for the most contentious, uncertain and potentially high-stakes election in modern U.S. history.

In what will likely be the most noticeable change, Twitter will try a new tactic to discourage users from retweeting posts without adding their own commentary. Starting on October 20 in a “global” change, the platform will prompt anyone who goes to retweet something to share a quote tweet instead. The change will stay in place through the “end of election week,” when Twitter will decide if the change needs to stick around for longer.

Gif via Twitter

“Though this adds some extra friction for those who simply want to Retweet, we hope it will encourage everyone to not only consider why they are amplifying a Tweet, but also increase the likelihood that people add their own thoughts, reactions and perspectives to the conversation,” Twitter said of the change, which some users may see on Twitter for the web starting on Friday.

Twitter has been experimenting with changes that add friction to the platform in recent months. Last month, the company announced that it would roll out a test feature prompting users to click through a link before retweeting it to the platform at large. The change marks a major shift in thinking for social platforms, which grew aggressively by prioritizing engagement above all other measures.

The company also clarified its policy on election results, and now a candidate for office “may not claim an election win before it is authoritatively called.” Twitter will look to state election officials or projected results from at least two national news sources to make that determination.

Twitter stopped short of saying it will remove those posts, but said it will add a misleading-information label, pointing users toward its hub for vetted election information, to any content claiming premature victory. The company does plan to remove any tweets “meant to incite interference with the election process or with the implementation of election results,” including ones that incite violence.

Next week, Twitter will also implement new restrictions on misleading tweets it labels, showing users a pop-up prompt linking to credible information when they go to view the tweet. Twitter applies these labels to tweets that spread misinformation about COVID-19, elections and voting, and anything that contains manipulated media, like deepfakes or otherwise misleading edited videos.

The company will also take additional measures against misleading tweets that get a label when they’re from a U.S. political figure, candidate or campaign. To see a tweet with one of its labels, a user will have to tap through a warning. Labeled tweets will have likes, normal retweets and replies disabled.

These new measures will also apply to labeled tweets from anyone with more than 100,000 followers or tweets that are getting viral traction. “We expect this will further reduce the visibility of misleading information, and will encourage people to reconsider if they want to amplify these Tweets,” Twitter said in its announcement.

Twitter warning on labeled tweet

Image via Twitter
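
Taken together, the measures described above amount to a simple gating rule for labeled tweets. The sketch below only illustrates that reported logic; the follower threshold comes from Twitter’s announcement, while the data structure, field names and function are hypothetical, not Twitter’s code.

```python
from dataclasses import dataclass

FOLLOWER_THRESHOLD = 100_000  # figure cited in Twitter's announcement


@dataclass
class LabeledTweet:
    author_is_us_political_figure: bool  # U.S. political figure, candidate or campaign
    author_followers: int
    gaining_viral_traction: bool


def extra_measures(tweet: LabeledTweet) -> dict:
    """Return the additional measures applied to a tweet that already carries a
    misleading-information label, per the policy described above. Hypothetical
    structure, for illustration only."""
    high_reach = (
        tweet.author_is_us_political_figure
        or tweet.author_followers > FOLLOWER_THRESHOLD
        or tweet.gaining_viral_traction
    )
    return {
        # Viewers must tap through a warning before the tweet is shown.
        "warning_interstitial": high_reach,
        # Likes, normal retweets and replies are disabled; quote tweets remain.
        "engagement_disabled": high_reach,
    }


# Example: a labeled tweet from an account with 250,000 followers.
print(extra_measures(LabeledTweet(False, 250_000, False)))
# -> {'warning_interstitial': True, 'engagement_disabled': True}
```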

Twitter will also turn off recommendations in the timeline in an effort to “slow down” how fast tweets can reach people from accounts they don’t follow. The company calls the decision a “worthwhile sacrifice to encourage more thoughtful and explicit amplification.” The company will also only allow trending content that comes with additional context to show up in the “for you” recommendation tab in an effort to slow the spread of misinformation.

The company acknowledges that it plays a “critical role” in protecting the U.S. election, adding that it has staffed up dedicated teams to monitor the platform and “respond rapidly” on election night and in the potentially uncertain period until authoritative election results are clear.

#2020-election, #misinformation, #tc, #twitter

0

Twitter Will Turn Off Some Features to Fight Election Misinformation

The platform is trying to address growing concern that falsehoods could lead to instability. Most of the changes will start on Oct. 20.

#dorsey-jack, #facebook-inc, #google-inc, #online-advertising, #rumors-and-misinformation, #social-media, #trump-donald-j, #twitter, #united-states-politics-and-government

0

Take a Social Media Break Until You’ve Voted

Democracy would be better off for it.

#absentee-voting, #facebook-inc, #google-inc, #news-and-news-media, #political-advertising, #rumors-and-misinformation, #social-media, #twitter, #united-states, #united-states-politics-and-government, #youtube-com

0

Twitter tests a new way to find accounts to follow

Twitter is testing a new way to follow accounts. The company announced today it’s rolling out a new feature, “Suggested Follows,” that will pop up a list of other accounts you may want to follow on the profile page of someone you had just followed. The feature will be tested on Android devices, for the time being.

The feature offers a tweak to how following currently works on mobile. At present, when you tap “Follow” on a user’s profile page, you’re presented with a small list of suggested accounts you may also want to follow.

Twitter explains that its accounts suggestions are based on a number of factors, and are often personalized. But in the case of suggested follows, it uses algorithms to determine what accounts may be related to the profile you’ve just visited, or if people who follow that user tend to follow certain other users.

That’s why, for example, when you follow someone whose profile notes they work at a particular company, the suggested follows may then include others who also work there. Or why, when you follow a celebrity of some sort, you may be presented with other high-profile accounts as suggestions.

Before, however, you would have to tap on the suggestions one by one if you wanted to follow them. Twitter’s new test instead groups a larger number of suggestions that you can follow with just one tap. You can then opt to remove any accounts you don’t want to follow after first adding the full group.

This could make it easier for users to gain follows, if their account is related somehow to another account that’s seeing a high number of follows. It could also help Twitter newcomers to build out their networks, while helping existing users expand their own.
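
The co-follow heuristic Twitter describes (“people who follow that user tend to follow certain other users”) can be illustrated with a toy sketch. Everything here is hypothetical: the graph, function name and ranking are stand-ins, not Twitter’s actual recommendation system.

```python
from collections import Counter


def suggest_follows(just_followed, follow_graph, already_following, limit=5):
    """Toy co-follow heuristic: rank accounts by how many of the just-followed
    account's followers also follow them. `follow_graph` maps each user to the
    set of accounts they follow. Illustrative only."""
    counts = Counter()
    for user, follows in follow_graph.items():
        if just_followed in follows:
            for account in follows:
                if account != just_followed and account not in already_following:
                    counts[account] += 1
    return [account for account, _ in counts.most_common(limit)]


# Example usage with a tiny hypothetical follow graph.
graph = {
    "alice": {"techcrunch", "nasa", "nytimes"},
    "bob": {"techcrunch", "nasa"},
    "carol": {"techcrunch", "bbc"},
}
print(suggest_follows("techcrunch", graph, already_following={"nytimes"}))
# -> ['nasa', 'bbc']  (nasa is co-followed twice, bbc once)
```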

Some Twitter users may have already seen the feature in action, before today.

Twitter says the test is taking place on Android. The company didn’t note if or when the feature would expand to iOS.

 

#tc, #twitter

0

Trump is already breaking platform rules again with false claim that COVID-19 is ‘far less lethal’ than the flu

Facebook and Twitter took action against a post from President Trump Tuesday that claimed that COVID-19 is “far less lethal” than the flu. Trump made the tweet and posted the same message to Facebook just hours after arriving back at the White House following a multi-day stay at Walter Reed medical center, where the president was treated after testing positive for COVID-19.

Facebook took down Trump’s post outright Tuesday, stating that it “[removes] incorrect information about the severity of COVID-19, and have now removed this post.” Twitter hid the tweet behind a warning saying that it broke the platform’s rules about spreading misleading or harmful COVID-19 misinformation.

“We placed a public interest notice on this Tweet for violating our COVID-19 Misleading Information Policy by making misleading health claims about COVID-19,” a Twitter spokesperson said.

Taking down one of the president’s posts is rare but it wasn’t a first for Facebook. In August, Facebook removed a video Trump shared in which he claimed that children are “almost immune” to COVID-19. The clip originally aired on Fox News.

On Twitter, Trump’s tweet will have “significantly limited” engagement, meaning that it can’t be retweeted (except as a quote tweet), liked or replied to, but it will remain up because it’s in the public interest. By the time Twitter took action on the tweet it had more than 59,000 retweets and 186,000 likes.

Facebook and Twitter both created new policies to address the spread of pandemic-related misinformation earlier this year. In the pandemic’s earlier days, the false claim that COVID is comparable to the flu was a common refrain from Trump and his allies, who wished to downplay the severity of the virus.  But after months of the virus raging through communities around the U.S., the claim that COVID-19 is like the flu is an even more glaring lie.

While much about the virus remains poorly understood, it can follow an aggressive and unpredictable trajectory in patients, attacking vital organs beyond the lungs and leaving people who contracted it with long-lasting health effects that are not yet thoroughly studied or understood. Trump’s own physician has said the president “may not be out of the woods yet” in his own fight with the virus.

In recent months, the president’s social media falsehoods had shifted more toward lies about the safety of vote-by-mail, the system many Americans will rely on to cast votes as the pandemic rages on.

But less than a day after a multi-day hospital stay during which he was given supplemental oxygen and three experimental treatments, it’s clear Trump’s own diagnosis with the virus doesn’t mean he intends to treat the health threat that’s upended the economy and claimed more than 200,000 lives with any seriousness at all.

Instead, Trump is poised to continue waging a political war against platforms like Twitter and Facebook — if the results of the election give him the chance. Trump has already expressed interest in dismantling Section 230, a key legal provision that protects platforms from liability for user-generated content. He tweeted “REPEAL SECTION 230!!!” Tuesday after Twitter and Facebook took action against his posts saying the flu is worse than COVID-19.

#2020-election, #coronavirus, #covid-19, #donald-trump, #facebook, #misinformation, #pandemic, #tc, #trump-administration, #twitter

0

The next big tech hearing is scheduled for October 28

A day after the Senate Commerce Committee moved forward with plans to subpoena the CEOs of Twitter, Facebook and Google, it looks like some of the most powerful leaders in tech will testify willingly.

Twitter announced late Friday that Jack Dorsey would appear virtually before the committee on October 28, just days before the U.S. election. While Twitter is the only company that’s openly agreed to the hearing so far, Politico reports that Sundar Pichai and Mark Zuckerberg also plan to appear.

Members of both parties on the committee planned to use the hearings to examine Section 230, the key legal shield that protects online platforms from liability for the content their users create.

As we’ve discussed previously, the political parties approach Section 230 from very different perspectives. Democrats see threatening changes to Section 230 as a way to force platforms to take toxic content like misinformation and harassment more seriously.

Many Republicans believe tech companies should be stripped of Section 230 protections because platforms have an anti-conservative bias — a claim that the facts don’t bear out.

Twitter had some choice words about that perspective, calling claims of political bias an “unsubstantiated allegation that we have refuted on many occasions to Congress” and noting that those accusations have been “widely disproven” by researchers.

“We do not enforce our policies on the basis of political ideology,” the company added.

It sounds like the company and members of the Senate have very different agendas. Twitter indicated that it plans to use the hearing’s timing to steer the conversation toward the election. Politico also reports that the scope of the hearing will be broadened to include “data privacy and media consolidation” — not just Section 230.

A spokesperson tweeting on the company’s public policy account insisted that the hearing “must be constructive,” addressing how tech companies can protect the integrity of the vote.

“At this critical time, we’re committed to keeping our focus squarely on what matters the most to our company: joint efforts to protect our shared democratic conversation from harm — from both foreign and domestic threats,” a Twitter spokesperson wrote.

Regardless of the approach, dismantling Section 230 could prove catastrophic for the way the internet as we know it works, so the stakes are high, both for tech companies and for regular internet users.

#congress, #regulation, #section-230, #section-230-of-the-communications-decency-act, #senate-hearings, #tc, #twitter

0

Daily Crunch: Twitter confronts image-cropping concerns

Twitter addresses questions of bias in its image-cropping algorithms, we take a look at Mario Kart Live and the stock market takes a hit after President Trump’s COVID-19 diagnosis. This is your Daily Crunch for October 2, 2020.

The big story: Twitter confronts image-cropping concerns

Last month, (white) PhD student Colin Madland highlighted potential algorithmic bias on Twitter and Zoom — in Twitter’s case, because its automatic image cropping seemed to consistently highlight Madland’s face over that of a Black colleague.

Today, Twitter said it has been looking into the issue: “While our analyses to date haven’t shown racial or gender bias, we recognize that the way we automatically crop photos means there is a potential for harm.”

Does that mean it will stop automatically cropping images? The company said it’s “exploring different options” and added, “We hope that giving people more choices for image cropping and previewing what they’ll look like in the tweet composer may help reduce the risk of harm.”

The tech giants

Nintendo’s new RC Mario Kart looks terrific — Mario Kart Live (with a real-world race car) makes for one hell of an impressive demo.

Tesla delivers 139,300 vehicles in Q3, beating expectations — Tesla’s numbers in the third quarter marked a 43% improvement from the same period last year.

Zynga completes its acquisition of hyper-casual game maker Rollic — CEO Frank Gibeau told me that this represents Zynga’s first move into the world of hyper-casual games.

Startups, funding and venture capital

Elon Musk says an update for SpaceX’s Starship spacecraft development program is coming in 3 weeks —  Starship is a next-generation, fully reusable spacecraft that the company is developing with the aim of replacing all of its launch vehicles.

Paired picks up $1M funding and launches its relationship app for couples — Paired combines audio tips from experts with “fun daily questions and quizzes” that partners answer together.

With $2.7M in fresh funding, Sora hopes to bring virtual high school to the mainstream — Long before the coronavirus, Sora was toying with the idea of live, virtual high school.

Advice and analysis from Extra Crunch

Spain’s startup ecosystem: 9 investors on remote work, green shoots and 2020 trends — While main hubs Madrid and Barcelona bump heads politically, tech ecosystems in each city have been developing with local support.

Which neobanks will rise or fall? — Neobanks have led the $3.6 billion in venture capital funding for consumer fintech startups this year.

Asana’s strong direct listing lights alternative path to public market for SaaS startups — Despite rising cash burn and losses, Wall Street welcomed the productivity company.

Everything else

American stocks drop in wake of president’s COVID-19 diagnosis — The news is weighing heavily on all major American indices, but heaviest on tech shares.

Digital vote-by-mail applications in most states are inaccessible to people with disabilities — According to an audit by Deque, most states don’t actually have an accessible digital application.

The Daily Crunch is TechCrunch’s roundup of our biggest and most important stories. If you’d like to get this delivered to your inbox every day at around 3pm Pacific, you can subscribe here.

#daily-crunch, #social, #twitter

0

Twitter is building ‘Birdwatch,’ a system to fight misinformation by adding more context to tweets

Twitter is developing a new product called “Birdwatch,” which the company confirms is an attempt at addressing misinformation across its platform by providing more context for tweets, in the form of notes. Tweets can be added to “Birdwatch” — meaning flagged for moderation — from the tweet’s drop-down menu, where other blocking and reporting tools are found today. A small binoculars icon will also appear on tweets published to the Twitter Timeline. When the button is clicked, users are directed to a screen where they can view the tweet’s history of notes.

Based on screenshots of Birdwatch unearthed through reverse engineering techniques, a new tab called “Birdwatch Notes” will be added to Twitter’s sidebar navigation, alongside other existing features like Lists, Topics, Bookmarks and Moments.

This section will allow you to keep track of your own contributions, aka your “Birdwatch Notes.”

The feature was first uncovered this summer in early stages of development by reverse engineer Jane Manchun Wong, who found the system through Twitter’s website. At the time, Birdwatch didn’t have a name, but it clearly showed an interface for flagging tweets, voting on whether or not the tweet was misleading, and adding a note with further explanations.

Twitter updated its web app a few days after her discovery, limiting further investigation.

This week, however, a very similar interface was again discovered in Twitter’s code, this time on iOS.

According to social media consultant Matt Navarra, who tweeted several more screenshots of the feature on mobile, Birdwatch allows users to attach notes to a tweet. These notes can be viewed when clicking on the binoculars button on the tweet itself.

In other words, additional context about the statements made in the tweet would be open to the public.

What’s less clear is whether everyone on Twitter will be given access to annotate tweets with additional context, or whether this permission will require approval, or only be open to select users or fact checkers.

Twitter early adopter and hashtag inventor Chris Messina openly wondered if Birdwatch could be some sort of “citizen’s watch” system for policing disinformation on Twitter. It turns out, he was right.

According to line items he found within Twitter’s code, these annotations — the “Birdwatch Notes” — are referred to as “contributions,” which does seem to imply a crowdsourced system. (After all, a user would contribute to a shared system, not to a note they were writing for only themselves to see.)

Image Credits: Chris Messina

Crowdsourcing moderation wouldn’t be new to Twitter. For several years, Twitter’s live-streaming app Periscope has relied on crowdsourcing techniques to moderate comments on its real-time streams in order to clamp down on abuse.

There is still much we don’t know about how Birdwatch will work from a non-technical perspective, however. We don’t know if everyone will have the same abilities to annotate tweets, how attempts to troll this system will be handled, or what would happen to a tweet if it got too many negative dings, for example.

In more recent months, Twitter has tried to take a harder stance on tweets that contain misleading, false or incendiary statements. It has even gone so far as to apply fact-check labels to some of Trump’s tweets and has hidden others behind a notice warning users that the tweet has violated Twitter’s rules. But scaling moderation across all of Twitter is a task the company has not been well-prepared for, as it built for scale first, then tried to figure out policies and procedures around harmful content after the fact.

Reached for comment, Twitter declined to offer details regarding its plans for Birdwatch, but did confirm the feature was designed to combat the spread of misinformation.

“We’re exploring a number of ways to address misinformation and provide more context for tweets on Twitter,” a Twitter spokesperson told TechCrunch. “Misinformation is a critical issue and we will be testing many different ways to address it,” they added.

 

#misinformation, #social, #tc, #twitter

0

Twitter may let users choose how to crop image previews after bias scrutiny

In an interesting development in the wake of a bias controversy over its cropping algorithm, Twitter has said it’s considering giving users decision-making power over how tweet previews look, saying it wants to decrease its reliance on machine learning-based image cropping.

Yes, you read that right. A tech company is affirming that automating certain decisions may not, in fact, be the smart thing to do — tacitly acknowledging that removing human agency can generate harm.

As we reported last month, the microblogging platform found its image cropping algorithm garnering critical attention after Ph.D. student Colin Madland noticed the algorithm only showed his own (white male) image in preview — repeatedly cropping out the image of a Black faculty member.

Ironically enough he’d been discussing a similar bias issue with Zoom’s virtual backgrounds.

Twitter responded to the criticism at the time by saying it had tested for bias before shipping the machine learning model and had “not found evidence of racial or gender bias”. But it added: “It’s clear from these examples that we’ve got more analysis to do. We’ll continue to share what we learn, what actions we take, and will open source our analysis so others can review and replicate.”

It’s now followed up with additional details about its testing processes in a blog post where it suggests it could move away from using an algorithm for preview crops in the future.

Twitter also concedes it should have published details of its bias testing process before launching the algorithmic cropping tool — in order that its processes could have been externally interrogated. “This was an oversight,” it admits.

Explaining how the model works, Twitter writes: “The image cropping system relies on saliency, which predicts where people might look first. For our initial bias analysis, we tested pairwise preference between two demographic groups (White-Black, White-Indian, White-Asian and male-female). In each trial, we combined two faces into the same image, with their order randomized, then computed the saliency map over the combined image. Then, we located the maximum of the saliency map, and recorded which demographic category it landed on. We repeated this 200 times for each pair of demographic categories and evaluated the frequency of preferring one over the other.”
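
That description translates fairly directly into a procedure. The sketch below is an illustrative reconstruction only: `saliency_model` stands in for Twitter’s proprietary model, images are simplified to NumPy arrays, and none of the names come from Twitter’s code.

```python
import random

import numpy as np


def pairwise_saliency_test(group_a_images, group_b_images, saliency_model, trials=200):
    """Pair one face from each group side by side (order randomized), compute a
    saliency map over the composite, and record which half the single most
    salient pixel falls in. `saliency_model` should map an H x W image array to
    an H x W array of saliency scores."""
    wins = {"A": 0, "B": 0}
    for _ in range(trials):
        face_a = random.choice(group_a_images)
        face_b = random.choice(group_b_images)
        # Randomize left/right placement so position doesn't bias the result.
        if random.random() < 0.5:
            left, right, left_group = face_a, face_b, "A"
        else:
            left, right, left_group = face_b, face_a, "B"
        composite = np.concatenate([left, right], axis=1)  # side-by-side image
        scores = saliency_model(composite)
        # The location of the saliency maximum decides the "preferred" half.
        _, max_x = np.unravel_index(np.argmax(scores), scores.shape)
        preferred_left = max_x < left.shape[1]
        winner = left_group if preferred_left else ("B" if left_group == "A" else "A")
        wins[winner] += 1
    # Frequency of preferring one group over the other.
    return {group: count / trials for group, count in wins.items()}


# Example with random noise "faces" and an identity saliency model
# (scores equal pixel values), which should land near 50/50.
rng = np.random.default_rng(0)
faces_a = [rng.random((64, 64)) for _ in range(10)]
faces_b = [rng.random((64, 64)) for _ in range(10)]
print(pairwise_saliency_test(faces_a, faces_b, saliency_model=lambda img: img))
```

With random inputs and the identity model the frequencies hover around 0.5 each; the interesting question is what they look like when the production saliency model is run over real face photos, which is the analysis Twitter says it performed.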

“While our analyses to date haven’t shown racial or gender bias, we recognize that the way we automatically crop photos means there is a potential for harm. We should’ve done a better job of anticipating this possibility when we were first designing and building this product. We are currently conducting additional analysis to add further rigor to our testing, are committed to sharing our findings, and are exploring ways to open-source our analysis so that others can help keep us accountable,” it adds.

On the possibility of moving away from algorithmic image cropping in favor of letting humans have a say, Twitter says it’s “started exploring different options to see what will work best across the wide range of images people tweet every day”.

“We hope that giving people more choices for image cropping and previewing what they’ll look like in the tweet composer may help reduce the risk of harm,” it adds, suggesting tweet previews could in future include visual controls for users.

Such a move, rather than injecting ‘friction’ into the platform (which would presumably be the typical techie concern about adding another step to the tweeting process), could open up new creative/tonal possibilities for Twitter users by providing another layer of nuance that wraps around tweets. Say by enabling users to create ‘easter egg’ previews that deliberately conceal a key visual detail until someone clicks through; or which zero-in on a particular element to emphasize a point in the tweet.

Given the popularity of joke ‘half and half’ images that play with messaging app WhatsApp’s preview crop format — which requires a click to predictably expand the view — it’s easy to see similar visual jokes and memes being fired up on Twitter, should it provide users with the right tools.

The bottom line is that giving humans more agency means you’re inviting creativity — and letting diversity override bias. Which should be a win-win. So it’s great to see Twitter entertaining the idea of furloughing one of its algorithms. (Dare we suggest the platform also takes a close and critical look at the algorithmic workings around ‘top tweets’, ‘trending tweets’, and the ‘popular/relevant’ content its algos sometimes choose to inject, unasked, into users’ timelines, all of which can generate a smorgasbord of harms.)

Returning to image cropping, Twitter says that as a general rule it will be committed to “the ‘what you see is what you get’ principles of design” — aka, “the photo you see in the tweet composer is what it will look like in the tweet” — while warning there will likely still be some exceptions, such as for images that aren’t a standard size.

In those cases it says it will experiment with how such images are presented, aiming to do so in a way that “doesn’t lose the creator’s intended focal point or take away from the integrity of the photo”. Again, it will do well to show any algorithmic workings in public.