What the public wants in COVID news vs. what the press provides

Dice-shaped cubes with letters spelling out "fake" and "fact." (credit: Anton Melnyk)

Misinformation posing as news is a long-standing problem, and the ease of publishing on the Internet has only made it worse. But the COVID-19 pandemic seems to have raised it to new levels, driving lots of attention to rumors, errors, and outright falsehoods. Given the magnitude of the threat, you might expect a premium to be placed on ensuring the accuracy of any pandemic information. But that doesn’t seem to be the case.

It’s unlikely there is a single explanation for why that’s the case. But researchers based in Paris have looked into the dynamics of pandemic news and found a potential contributor: unreliable news sources were better at producing content that matched what readers were looking for.

Supply and demand

The researchers behind the new work treated the news ecosystem as a function of supply and demand. The audience—in this case, the Italian public—is interested in obtaining answers to specific questions or details on a topic. News sources attempt to satisfy that demand. Complicating this relationship, the news ecosystem includes organizations that don’t produce quality information. Poor reporting can be the result of carelessness or of an agenda separate from providing news.


#behavioral-science, #media, #misinformation, #science

Musk has “huge responsibility” to fight health misinfo on Twitter, WHO says

Tesla and SpaceX CEO Elon Musk presents a vaccine production device during a meeting on September 2, 2020, in Berlin, Germany. Musk met with vaccine maker CureVac, with which Tesla has a partnership to build devices for producing RNA vaccines. (credit: Getty | Filip Singer)

Elon Musk has a “huge responsibility” to combat dangerous, potentially life-threatening health misinformation on Twitter, the World Health Organization said Tuesday.

The United Nations’ health agency commented on Monday’s news that the tech billionaire has struck a deal to purchase Twitter for $44 billion. WHO officials stressed how damaging misinformation and disinformation can be when they are widely spread in digital spaces like Twitter.

“In cases like this pandemic, good information is life-saving,” Mike Ryan, executive director of the WHO’s Health Emergencies Programme, said. “In some cases, [it’s] more life-saving than having a vaccine in the sense that bad information sends you to some very, very bad places.”


#covid-19, #covid-19-vaccines, #disinformation, #elon-musk, #misinformation, #public-health, #science, #twitter, #vaccines, #who

Largest trial to date finds ivermectin is worthless against COVID

A box of Ivermectina medicine manufactured by Vitamedic in Brazil. (credit: Getty | SOPA Images)

The largest clinical trial to date on the use of the antiparasitic drug ivermectin against COVID-19 concluded that the drug is completely ineffective at treating the pandemic disease, according to results published in The New England Journal of Medicine late Wednesday.

The double-blind, randomized, placebo-controlled clinical trial was primarily designed to test if ivermectin could reduce the need for hospitalization among 1,358 COVID-19 patients who were at high risk of severe disease. Ivermectin did not, according to the international team of researchers behind the trial, dubbed TOGETHER. “We did not find a significantly or clinically meaningful lower risk of medical admission to a hospital or prolonged emergency department observation with ivermectin,” the researchers reported.

The folks with TOGETHER also found that the drug failed to reduce all other secondary outcomes of COVID-19, including the time to recovery, time to viral clearance on PCR test, time spent in the hospital, the need for mechanical ventilation, the duration of mechanical ventilation, death, or the time to death. “We found no important effects of treatment with ivermectin on the secondary outcomes,” the researchers wrote.


#anti-parasitic, #clinical-trial, #covid-19, #fda, #infectious-disease, #ivermectin, #misinformation, #pandemic, #sars-cov-2, #science

Anti-vaccine doctor behind COVID misinfo pleads guilty to Jan. 6 riot charge

Pro-Trump supporters storm the US Capitol following a rally with President Donald Trump on January 6, 2021, in Washington, DC. (credit: Getty | Samuel Corum)

Dr. Simone Gold, a prominent anti-vaccine doctor who founded a group notorious for widely peddling COVID-19 misinformation, pleaded guilty on Thursday to joining the insurrectionists who violently attacked the US Capitol building on January 6, 2021.

Gold is the founder of America’s Frontline Doctors (AFLDS) and has spent the pandemic downplaying COVID-19, promoting unproven treatments, such as hydroxychloroquine and ivermectin, and casting doubt on the safety and effectiveness of COVID-19 vaccines.

According to her guilty plea, Gold entered a restricted area around the Capitol on January 6, joining part of the mob outside the East Rotunda door. There she stood directly in front of a law enforcement officer as the officer was assaulted and dragged to the ground, the plea notes. Shortly after, she entered the Rotunda with rioters and began giving a speech against COVID-19 vaccine mandates and government-imposed lockdowns, while co-defendant John Strand video-recorded her remarks. Multiple law enforcement officers had to intervene before Gold stopped her speech; she and Strand then left the area.


#americas-frontline-doctors, #covid-19, #guilty, #insurrection, #january-6, #misinformation, #riot, #science, #simone-gold

Kansas lawmakers attack medical board for probing ivermectin cases

Ivermectin tablets arranged in Jakarta, Indonesia, on Thursday, Sept. 2, 2021. The US Food and Drug Administration warned Americans against taking ivermectin, a drug usually used on animals, as a treatment or prevention for COVID-19. (credit: Getty | Bloomberg)

The Kansas medical board is facing attacks from state lawmakers for investigating doctors who have prescribed the antiparasitic drug ivermectin to treat or prevent COVID-19. The drug, which is most often used in animals as a dewormer, is both unproven and not recommended for use against COVID-19 in people.

Nevertheless, state lawmakers proposed a budget amendment that would strip the state medical board of funds to conduct such investigations. For now, the budget committee has settled on language that the medical board should “proceed with caution” in any such investigations—language intended to have a chilling effect. But the committee has signaled that it could revisit the plan to defund investigations, depending on the fate of a separate Senate bill.

That Senate bill is SB 381, which would specifically authorize doctors to prescribe off-label and unproven COVID-19 treatments—namely hydroxychloroquine sulfate and ivermectin. And it would force pharmacists to dispense the drugs, even if doing so is against their professional judgement. Additionally, the proposed legislation would bar medical and pharmacy boards from investigating doctors and pharmacists for the practice and require the boards to review any prior disciplinary actions that are related.


#covid-19, #hydroxychloroquine, #ivermectin, #kansas, #medical-board, #misinformation, #off-label, #prescribing, #science

Joe “just conversations” Rogan defends misinformation like a classic grifter

Joe Rogan on July 9, 2021, in Las Vegas, NV. (credit: Getty | Icon Sportswire)

Long before the pandemic took the lives of more than 5.6 million people and created a lucrative market for COVID grifts, misinformation, and snake oil, there was Goop.

The aspirational lifestyle brand and its lustrous “contextual commerce” products are helmed by actor Gwyneth Paltrow, who has used her fame, wealth, and enviable genetics to peddle all manner of wellness pseudoscience and quackery. With the manipulative mantra of “empowering” women to seize control of their health and destinies, Paltrow’s Goop has touted extremely questionable—if not downright dangerous—products. Perhaps the most notorious is the jade egg, a $66 egg-shaped rock Goop advised women to shove up their vaginas while claiming it could treat medical conditions, “detox” lady bits, and invigorate mystical life forces (of course).

But let’s not forget the $135 “Implant O’Rama” enema device intended to squirt scalding coffee into your colon, the $90 luxury vitamins that almost certainly do nothing, or the $85 “medicine bag” of small, polished rocks that Goop suggests have magical wellness properties. Then there was the bee-sting therapy—no, not therapy for bee stings but therapy imparted from bee stings. Paltrow personally endorsed the practice, which was blamed for the death of a 55-year-old Spanish woman in 2018.


#covid-19, #joe-rogan, #misinformation, #paltrow, #science

Spotify publicly posts content policy as Rogan responds

Joe Rogan. (credit: Dylan Buell/Getty Images)

Spotify publicly posted its platform policies for the first time on Sunday following artists’ outrage over COVID-related episodes of Joe Rogan’s podcast.

The policies, which previously weren’t known to the public, offer podcasters and musicians wide latitude over what they can stream on Spotify. They’re similar to the approaches used by other platforms. Spotify does not allow hatred and incitement of violence, deception, graphic depictions of violence, sexually explicit material, and illegal content. The streaming service also says it forbids “content that promotes dangerous false or dangerous deceptive medical information that may cause offline harm or poses a direct threat to public health.”

“These are rules of the road to guide all of our creators—from those we work with exclusively to those whose work is shared across multiple platforms,” CEO Daniel Ek said in a blog post.


#content-policy, #covid, #joe-rogan, #misinformation, #platforms, #policy, #spotify

Spotify support buckles under complaints from angry Neil Young fans

Neil Young’s fans aren’t happy that the rocker’s music is no longer available on Spotify. (credit: Dave J Hogan/Getty Images)

Neil Young was mad. Now his fans are, too, and they’re telling Spotify about it.

Earlier this week, Young had asked the music-streaming service to remove his music from its library in response to COVID misinformation aired on Joe Rogan’s podcast, which is available only on Spotify. “I want you to let Spotify know immediately TODAY that I want all my music off their platform,” Young wrote on his website. “They can have Rogan or Young. Not both.”

Spotify complied with the request, which ultimately came from Warner Brothers, Young’s label. Though the loss of Young’s music likely represents a small percentage of overall streams on Spotify, Young pointed out that “Spotify represents 60% of the streaming of my music to listeners around the world.” 


#covid-19, #joe-rogan, #misinformation, #neil-young, #policy, #spotify

Tracking Facebook connections between parent groups and vaccine misinfo

(credit: Getty | Joe Amon)

Misinformation about the pandemic and the health measures that are effective against SARS-CoV-2 has been a significant problem in the US. It’s led to organized resistance against everything from mask use to vaccines and has undoubtedly ended up killing people.

Plenty of factors have contributed to this surge of misinformation, but social media clearly helps enable its spread. While the companies behind major networks have taken some actions to limit the spread of misinformation, internal documents indicate that a lot more could be done.

Taking more effective action, however, would benefit from more clearly identifying what the problems are. And, to that end, a recent analysis of the network of vaccine misinformation provides information that might be helpful. It finds that most of the worst misinformation sources are probably too small to stand out as being in need of moderation. The analysis also shows that the pandemic has brought mainstream parenting groups noticeably closer to groups devoted to conspiracy theories.


#computer-science, #medicine, #misinformation, #pandemic, #science, #vaccines

Doctors are fighting back against fringe doctors pushing COVID misinformation

A box and container of ivermectin. (credit: Getty | Bloomberg)

More doctors across the country are pushing back against fringe members of the medical community for spreading COVID-19 misinformation and touting unproven treatments.

Over the weekend, nearly 100 physicians in Alaska signed onto a letter urging the state medical board to investigate doctors in the state who have promoted vaccine skepticism and pushed unproven treatments, namely the antiparasitic drug ivermectin and the antimalaria drug hydroxychloroquine.

Merijeanne Moore, a private-practice psychiatrist in Anchorage, told the Anchorage Daily News that she wrote the letter in response to an event last month called the Alaska Early Treatment Medical Summit. The event featured prominent out-of-state vaccine skeptics as well as at least two Anchorage doctors steeped in vaccine skepticism and misinformation.


#covid-19, #covid-19-vaccines, #infectious-disease, #ivermectin, #medical-license, #misinformation, #pandemic, #public-health, #science

38% of US adults believe government is faking COVID-19 death toll

A man walks through “In America: Remember,” a public art installation commemorating all the Americans who have died due to COVID-19, on the National Mall on September 21, 2021, in Washington, DC. (credit: Getty Images / Drew Angerer)

From the very beginning, misinformation has plagued the global response to the COVID-19 pandemic, undermining efforts to stop the spread of the disease and save lives. New survey data from the Kaiser Family Foundation (KFF) spotlights just how monstrous the problem of misinformation is.

Among a nationally representative sample of US adults, 78 percent reported that they had heard at least one of eight common COVID-19 falsehoods and either said the falsehood is true or said they’re not sure if it’s true or false.

The most common falsehood that people marked as true was that “the government is exaggerating the number of COVID-19 deaths.” Thirty-eight percent of respondents said they had heard this falsehood and that it is true. An additional 22 percent said they had heard it but weren’t sure if it is true or false.


#covid-19, #fox, #infectious-disease, #misinformation, #newsmax, #oan, #pandemic, #public-health, #science

Rodgers is wrong—NFL says league docs never talked to him about vaccine

Quarterback Aaron Rodgers of the Green Bay Packers trots off the field following the NFL game at State Farm Stadium on October 28, 2021, in Glendale, Arizona. (credit: Getty | Christian Petersen)

Fallout continues for NFL MVP Aaron Rodgers, who tossed out a smorgasbord of COVID-19 vaccine misinformation and nearly every line from the 2021 anti-vaccine playbook in the course of a single 45-minute interview Friday.

Since then, the Green Bay Packers’ quarterback has lost his position as a spokesperson for Wisconsin-based healthcare organization Prevea Health. Insurance giant State Farm has also significantly cut back on ads that include him. And, now, the NFL is disputing his claim that league doctors provided him with bunk vaccine information.

Rodgers—who is unvaccinated and tested positive for COVID-19 last week—appeared on The Pat McAfee Show Friday afternoon to address the growing scandal around his vaccination status. He also took the opportunity to rail against COVID-19 vaccines, NFL health policies, and the “woke mob.”


#aaron-rodgers, #covid-19, #misinformation, #nfl, #science, #vaccine

Brazil’s Bolsonaro accused of “crimes against humanity” over COVID response

President of Brazil Jair Bolsonaro coughs during a press conference in Brasília on October 19. (credit: Getty | Andressa Anholete)

A Brazilian Senate committee investigating the country’s response to the COVID-19 pandemic has recommended that President Jair Bolsonaro face nine criminal charges, including “crimes against humanity,” for his role in the public health crisis.

In a lengthy report released Wednesday, the 11-member committee said that Bolsonaro allowed the pandemic coronavirus to spread freely through the country in a failed attempt to achieve herd immunity, leading to the deaths of hundreds of thousands of people. The report also took aim at Bolsonaro’s promotion of ineffective treatments, such as hydroxychloroquine. The committee blames the president’s policies for the deaths of more than 300,000 Brazilians.

In addition to crimes against humanity, the committee accused Bolsonaro of quackery, malfeasance, inciting crime, improper use of public funds, and forgery. In all, the committee called for indictments of 66 people, including Bolsonaro and three of his sons, as well as two companies.


#brazil, #covid-19, #crimes-against-humanity, #misinformation, #science

Anti-vaccine school in Florida tells kids to stay home if they get a COVID shot

The Centner Academy private school building in Miami’s Design District on April 27, 2021. (credit: Getty | Chandan Khanna)

An anti-vaccine private school in Miami, Florida, is requiring students who receive a COVID-19 vaccine to stay home for 30 days after each shot, according to local news outlet WSVN.

In a letter to parents, the school once again spread vaccine misinformation, falsely claiming that COVID-19 vaccines can cause “potential transmission or shedding onto others.” No COVID-19 vaccines in use in the US include a live virus; there are only mRNA-based vaccines (Pfizer and Moderna) and a nonreplicating viral vector-based vaccine (J&J) in use here. These COVID-19 vaccines do not cause “shedding” or pose any risk of transmission of SARS-CoV-2 to others.

The school, the Centner Academy, is well-known for its anti-vaccine rhetoric and vaccine misinformation. The academy notes that it is against all vaccine mandates and does not require any immunizations for its students, citing “freedom of choice.” Without evidence, it links routine, safe, and life-saving childhood vaccinations to the rise of a variety of health conditions such as diabetes and offers to help parents obtain exemptions from state vaccine requirements. Like many anti-vaccine groups, Centner plays up fears of harms and falsely suggests that there have been insufficient safety studies on vaccines. Centner’s tuition ranges from $15,000 to $30,000 per year.


#anti-vaccine, #centner, #covid-19, #florida, #infectious-disease, #misinformation, #pandemic, #public-health, #science

“Hacker X”—the American who built a pro-Trump fake news empire—unmasks himself

A shadowy figure holds a mask of Donald Trump. (credit: Aurich Lawson | Getty Images)

This is the story of the mastermind behind one of the largest “fake news” operations in the US.

For two years, he ran websites and Facebook groups that spread bogus stories, conspiracy theories, and propaganda. Under him was a dedicated team of writers and editors paid to produce deceptive content—from outright hoaxes to political propaganda—with the supreme goal of tipping the 2016 election to Donald Trump.

Through extensive efforts, he built a secret network of self-reinforcing sites from the ground up. He devised a strategy that got prominent personalities—including Trump—to retweet misleading claims to their followers. And he fooled unwary American citizens, including the hacker’s own father, into regarding fake news sources more highly than the mainstream media.


#2016-election, #biz-it, #donald-trump, #fake-news, #features, #hacker-x, #hillary-clinton, #misinformation, #policy, #politics, #tech, #vaccine-misinformation

There will soon be no more ads denying climate change on Google

A fire engine drives into air thick with smoke along Juniper Hills Road as the Bobcat Fire advances north into the Antelope Valley in 2020. (credit: Robert Gauthier | Getty Images)

Late Thursday, Google announced that it is demonetizing content that makes misleading or false claims about climate change. As a result, content that calls into question or denies the scientific consensus around anthropogenic climate change will not have Google advertising alongside it. In addition, Google will no longer run any advertising that “contradicts well-established scientific consensus around the existence and causes of climate change.”

With a worldwide public-health crisis taking place in the midst of natural disasters that are fueled, at least in part, by human-caused climate change, the amount of misinformation and outright falsehoods is not only frustrating but dangerous. Google has been criticized for its role in the spread of misinformation, and lots of people are unhappy with it. People paying for ads don’t want their ads appearing alongside misinformation-filled videos, and content producers don’t want to see their product interrupted by error-filled ads.

Message received, says Google. “We’ve heard directly from a growing number of our advertising and publisher partners who have expressed concerns about ads that run alongside or promote inaccurate claims about climate change,” Google said. “Advertisers simply don’t want their ads to appear next to this content. And publishers and creators don’t want ads promoting these claims to appear on their pages or videos.”


#advertising, #climate-change, #google, #misinformation, #policy, #science

Fact-checking works to undercut misinformation in many countries

(credit: Gordon Jolly / Flickr)

In the wake of the flood of misinformation that’s drowning the US, lots of organizations have turned to fact-checks. Many newsrooms set up dedicated fact-check groups, and some independent organizations were formed to provide the service. We get live fact-checking of political debates, and Facebook will now tag material it deems misinformation with links to a fact-check.

Obviously, given how many people are still afraid of COVID-19 vaccines, there are limits to how much fact-checking can accomplish. But might it be effective outside the overheated misinformation environment in the US? A new study tests out the efficacy of fact-checking in a set of countries that are both geographically and culturally diverse, and it finds that fact-checking is generally more effective at shaping public understanding than misinformation is.

Checking in with different countries

The two researchers behind the new work, Ethan Porter and Thomas Wood, identified three countries that are outside the usual group of rich, industrialized nations where most population surveys occur. These were Argentina, Nigeria, and South Africa. As a bit of a control for the typical surveys, they also ran their study in the UK. All four of these countries have professional fact-checking organizations that assisted with the work and were able to recruit 2,000 citizens for the study.


#behavioral-science, #fact-checking, #human-behavior, #misinformation, #science

Reddit’s teach-the-controversy stance on COVID vaccines sparks wider protest

Photo illustration with a hand holding a mobile phone and a Reddit logo in the background. (credit: Getty Images | SOPA Images)

Over 135 subreddits have gone dark this week in protest of Reddit’s refusal to ban communities that spread misinformation about the COVID pandemic and vaccines.

Subreddits that went private include two with 10 million or more subscribers, namely r/Futurology and r/TIFU. The PokemonGo community is one of 15 other subreddits with at least 1 million subscribers that went private; another 15 subreddits with at least 500,000 subscribers also went private. They’re all listed in a post on r/VaxxHappened, which has been coordinating opposition to Reddit management’s stance on pandemic misinformation. More subreddits are being added as they join the protest.

“Futurology has gone private to protest Reddit’s inaction on COVID-19 misinformation,” a message on that subreddit says. “Reddit won’t enforce their policies against misinformation, brigading, and spamming. Misinformation subreddits such as NoNewNormal and r/conspiracy must be shut down. People are dying from misinformation.”


#covid, #misinformation, #policy, #reddit, #vaccine

Facebook will reportedly launch its own advisory group for election policy decisions

Facebook is looking to create a standalone advisory committee for election-related policy decisions, according to a new report from The New York Times. The company has reportedly approached a number of policy experts and academics it is interested in recruiting for the group, which could give the company cover for some of its most consequential choices.

The group, which the Times characterizes as a commission, would potentially be empowered to weigh in on issues like election misinformation and political advertising — two of Facebook’s biggest policy headaches. Facebook reportedly plans for the commission to be in place for the 2022 U.S. midterm elections and could announce its formation as soon as this fall.

Facebook’s election commission could be modeled after the Oversight Board, the company’s first experiment in quasi-independent external decision making. The Oversight Board began reviewing cases in October of last year, but didn’t gear up in time to impact the flood of election misinformation that swept the platform during the U.S. presidential election. Initially, the board could only make policy rulings based on material that was already removed from Facebook.

The company touts the independence of the Oversight Board, and while it does operate independently, Facebook created the group and appointed its four original co-chairs. The Oversight Board is able to set policy precedents and make binding per-case moderation rulings, but ultimately its authority comes from Facebook itself, which at any point could decide to ignore the board’s decisions.

A similar external policy-setting body focused on elections would be very politically useful for Facebook. The company is a frequent target for both Republicans and Democrats, with the former claiming Facebook censors conservatives disproportionately and the latter calling attention to Facebook’s long history of incubating conspiracies and political misinformation.

Neither side was happy when Facebook decided to suspend political advertising after the election — a gesture that failed to address the exponential spread of organic misinformation. Facebook asked the Oversight Board to review its decision to suspend former President Trump, though the board ultimately kicked its most controversial case back to the company itself.

#content-moderation, #facebook, #facebook-oversight-board, #misinformation, #oversight-board, #political-advertising, #presidential-election, #social, #social-media, #tc, #united-states

YouTube has removed 1 million videos for dangerous COVID-19 misinformation

YouTube has removed 1 million videos for dangerous COVID-19 misinformation since February 2020, according to YouTube’s Chief Product Officer Neal Mohan.

Mohan shared the statistic in a blog post outlining how the company approaches misinformation on its platform. “Misinformation has moved from the marginal to the mainstream,” he wrote. “No longer contained to the sealed-off worlds of Holocaust deniers or 9-11 truthers, it now stretches into every facet of society, sometimes tearing through communities with blistering speed.”

At the same time, the YouTube executive argued that “bad content” accounts for only a small percentage of YouTube content overall. “Bad content represents only a tiny percentage of the billions of videos on YouTube (about .16-.18% of total views turn out to be content that violates our policies),” Mohan wrote. He added that YouTube removes almost 10 million videos each quarter, “the majority of which don’t even reach 10 views.”

Facebook recently made a similar argument about content on its platform. The social network published a report last week that claimed that the most popular posts are memes and other non-political content. And, faced with criticism over its handling of COVID-19 and vaccine misinformation, the company has argued that vaccine misinformation isn’t representative of the kind of content most users see.

Both Facebook and YouTube have come under particular scrutiny for their policies around health misinformation during the pandemic. Both platforms have well over a billion users, which means that even a small fraction of content can have a far-reaching impact. And both platforms have so far declined to disclose details about how vaccine and health misinformation spreads or how many users are encountering it. Mohan also said that removing misinformation is only one aspect of the company’s approach. YouTube is also working on “ratcheting up information from trusted sources and reducing the spread of videos with harmful misinformation.”

Editor’s note: This post originally appeared on Engadget.

#column, #covid-19, #misinformation, #tc, #tceng, #youtube

A mathematician walks into a bar (of disinformation)

Disinformation, misinformation, infotainment, algowars — if the debates over the future of media over the past few decades have meant anything, they’ve at least left a pungent imprint on the English language. There’s been a lot of invective and fear over what social media is doing to us, from our individual psychologies and neurologies to wider concerns about the strength of democratic societies. As Joseph Bernstein put it recently, the shift from “wisdom of the crowds” to “disinformation” has indeed been an abrupt one.

What is disinformation? Does it exist, and if so, where is it and how do we know we are looking at it? Should we care about what the algorithms of our favorite platforms show us as they strive to squeeze the prune of our attention? It’s just those sorts of intricate mathematical and social science questions that got Noah Giansiracusa interested in the subject.

Giansiracusa, a professor at Bentley University in Boston, is trained in mathematics (focusing his research in areas like algebraic geometry), but he’s also had a penchant for looking at social topics through a mathematical lens, such as connecting computational geometry to the Supreme Court. Most recently, he’s published a book called How Algorithms Create and Prevent Fake News to explore some of the challenging questions around the media landscape today and how technology is exacerbating and ameliorating those trends.

I hosted Giansiracusa on a Twitter Space recently, and since Twitter hasn’t made it easy to listen to these talks afterwards (ephemerality!), I figured I’d pull out the most interesting bits of our conversation for you and posterity.

This interview has been edited and condensed for clarity.

Danny Crichton: How did you decide to research fake news and write this book?

Noah Giansiracusa: One thing I noticed is there’s a lot of really interesting sociological, political science discussion of fake news and these types of things. And then on the technical side, you’ll have things like Mark Zuckerberg saying AI is going to fix all these problems. It just seemed like, it’s a little bit difficult to bridge that gap.

Everyone’s probably heard this recent quote of Biden saying, “they’re killing people,” in regard to misinformation on social media. So we have politicians speaking about these things where it’s hard for them to really grasp the algorithmic side. Then we have computer science people that are really deep in the details. So I’m kind of sitting in between; I’m not a real hardcore computer science person. So I think it’s a little easier for me to just step back and get the bird’s eye view.

At the end of the day, I just felt I kind of wanted to explore some more interactions with society where things get messy, where the math is not so clean.

Crichton: Coming from a mathematical background, you’re entering this contentious area where a lot of people have written from a lot of different angles. What are people getting right in this area, and where have they perhaps missed some nuance?

Giansiracusa: There’s a lot of incredible journalism, I was blown away at how a lot of journalists really were able to deal with pretty technical stuff. But I would say one thing that maybe they didn’t get wrong, but kind of struck me was, there’s a lot of times when an academic paper comes out, or even an announcement from Google or Facebook or one of these tech companies, and they’ll kind of mention something, and the journalist will maybe extract a quote, and try to describe it, but they seem a little bit afraid to really try to look and understand it. And I don’t think it’s that they weren’t able to, it really seems like more of an intimidation and a fear.

One thing I’ve experienced a ton as a math teacher is people are so afraid of saying something wrong and making a mistake. And this goes for journalists who have to write about technical things, they don’t want to say something wrong. So it’s easier to just quote a press release from Facebook or quote an expert.

One thing that’s so fun and beautiful about pure math, is you don’t really worry about being wrong, you just try ideas and see where they lead and you see all these interactions. When you’re ready to write a paper or give a talk, you check the details. But most of math is this creative process where you’re exploring, and you’re just seeing how ideas interact. My training as a mathematician you think would make me apprehensive about making mistakes and to be very precise, but it kind of had the opposite effect.

Second, a lot of these algorithmic things, they’re not as complicated as they seem. I’m not sitting there implementing them, I’m sure to program them is hard. But just the big picture, all these algorithms nowadays, so much of these things are based on deep learning. So you have some neural net, doesn’t really matter to me as an outsider what architecture they’re using, all that really matters is, what are the predictors? Basically, what are the variables that you feed this machine learning algorithm? And what is it trying to output? Those are things that anyone can understand.

Crichton: One of the big challenges I think of analyzing these algorithms is the lack of transparency. Unlike, say, the pure math world which is a community of scholars working to solve problems, many of these companies can actually be quite adversarial about supplying data and analysis to the wider community.

Giansiracusa: It does seem there’s a limit to what anyone can deduce just by kind of being from the outside.

So a good example is with YouTube, teams of academics wanted to explore whether the YouTube recommendation algorithm sends people down these conspiracy theory rabbit holes of extremism. The challenge is that because this is the recommendation algorithm, it’s using deep learning, it’s based on hundreds and hundreds of predictors based on your search history, your demographics, the other videos you’ve watched and for how long — all these things. It’s so customized to you and your experience, that all the studies I was able to find use incognito mode.

So they’re basically a user who has no search history, no information and they’ll go to a video and then click the first recommended video then the next one. And let’s see where the algorithm takes people. That’s such a different experience than an actual human user with a history. And this has been really difficult. I don’t think anyone has figured out a good way to algorithmically explore the YouTube algorithm from the outside.

Honestly, the only way I think you could do it is just kind of like an old school study where you recruit a whole bunch of volunteers and sort of put a tracker on their computer and say, “Hey, just live life the way you normally do with your histories and everything and tell us the videos that you’re watching.” So it’s been difficult to get past this fact that a lot of these algorithms, almost all of them, I would say, are so heavily based on your individual data. We don’t know how to study that in the aggregate.

And it’s not just that me or anyone else on the outside who has trouble because we don’t have the data. It’s even people within these companies who built the algorithm and who know how the algorithm works on paper, but they don’t know how it’s going to actually behave. It’s like Frankenstein’s monster: they built this thing, but they don’t know how it’s going to operate. So the only way I think you can really study it is if people on the inside with that data go out of their way and spend time and resources to study it.

Crichton: There are a lot of metrics used around evaluating misinformation and determining engagement on a platform. Coming from your mathematical background, do you think those measures are robust?

Giansiracusa: People try to debunk misinformation. But in the process, they might comment on it, they might retweet it or share it, and that counts as engagement. So a lot of these measurements of engagement, are they really looking at positive or just all engagement? You know, it kind of all gets lumped together?

This happens in academic research, too. Citations are the universal metric of how successful research is. Well, really bogus things like Wakefield’s original autism and vaccines paper got tons of citations. A lot of them were people citing it because they thought it was right, but a lot of it was scientists who were debunking it; they cite it in their paper to say, we demonstrate that this theory is wrong. But somehow a citation is a citation. So it all counts towards the success metric.

So I think that’s a bit of what’s happening with engagement. If I post something on my comments saying, “Hey, that’s crazy,” how does the algorithm know if I’m supporting it or not? They could use some AI language processing to try but I’m not sure if they are, and it’s a lot of effort to do so.
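
To make the lumping problem concrete, here is a minimal sketch (hypothetical data and field names, not any platform’s real API) of how a raw engagement count treats a debunking comment or quote-tweet exactly like an endorsement, while a stance-aware tally would separate the two:

```python
from collections import Counter

# Hypothetical interactions with a single post; the "stance" field is what a
# platform would need (but typically lacks) to tell support from debunking.
interactions = [
    {"type": "retweet", "stance": "endorse"},
    {"type": "comment", "stance": "debunk"},   # e.g., "Hey, that's crazy"
    {"type": "comment", "stance": "debunk"},
    {"type": "like",    "stance": "endorse"},
    {"type": "retweet", "stance": "debunk"},   # quote-tweeting a correction
]

# What a naive engagement metric reports: every interaction counts the same.
naive_engagement = len(interactions)

# What a stance-aware metric would report instead.
by_stance = Counter(i["stance"] for i in interactions)

print("naive engagement:", naive_engagement)      # 5
print("engagement by stance:", dict(by_stance))   # {'endorse': 2, 'debunk': 3}
```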

Crichton: Lastly, I want to talk a bit about GPT-3 and the concern around synthetic media and fake news. There’s a lot of fear that AI bots will overwhelm media with disinformation — how scared or not scared should we be?

Giansiracusa: Because my book really grew out of a class, I wanted to try to stay impartial, and just kind of inform people and let them reach their own decisions. I decided to try to cut through that debate and really let both sides speak. I think the newsfeed algorithms and recognition algorithms do amplify a lot of harmful stuff, and that is devastating to society. But there’s also a lot of amazing progress in using algorithms productively and successfully to limit fake news.

There’s these techno-utopians, who say that AI is going to fix everything, we’ll have truth-telling, and fact-checking and algorithms that can detect misinformation and take it down. There’s some progress, but that stuff is not going to happen, and it never will be fully successful. It’ll always need to rely on humans. But the other thing we have is kind of irrational fear. There’s this kind of hyperbolic AI dystopia where algorithms are so powerful, kind of like singularity type of stuff that they’re going to destroy us.

When deep fakes were first hitting the news in 2018, and GPT-3 had been released a couple years ago, there was a lot of fear that, “Oh shit, this is gonna make all our problems with fake news and understanding what’s true in the world much, much harder.” And I think now that we have a couple of years of distance, we can see that they’ve made it a little harder, but not nearly as significantly as we expected. And the main issue is kind of more psychological and economic than anything.

So the original authors of GPT-3 have a research paper that introduces the algorithm, and one of the things they did was a test where they pasted some text in and expanded it to an article, and then they had some volunteers evaluate and guess which is the algorithmically-generated one and which article is the human-generated one. They reported that they got very, very close to 50% accuracy, which means barely above random guesses. So that sounds, you know, both amazing and scary.

But if you look at the details, they were extending something like a one-line headline to a paragraph of text. If you tried to do a full Atlantic-length or New Yorker-length article, you’re gonna start to see the discrepancies; the thought is going to meander. The authors of this paper didn’t mention this, they just kind of did their experiment and said, “Hey, look how successful it is.”

So it looks convincing, they can make these impressive articles. But here’s the main reason, at the end of the day, why GPT-3 hasn’t been so transformative as far as fake news and misinformation and all this stuff is concerned. It’s because fake news is mostly garbage. It’s poorly written, it’s low quality, it’s so cheap and fast to crank out, you could just pay your 16-year-old nephew to just crank out a bunch of fake news articles in minutes.

It’s not so much that math helped me see this. It’s just that somehow, the main thing we’re trying to do in mathematics is to be skeptical. So you have to question these things and be a little skeptical.

#algorithms, #artificial-intelligence, #deep-learning, #disinformation, #facebook, #government, #gpt-3, #machine-learning, #media, #misinformation, #policy, #social-media, #youtube

Facebook releases a glimpse of its most popular posts, but we don’t learn much

Facebook is out with a new report collecting the most popular posts on the platform, responding to critics who believe the company is deliberately opaque about its top-performing content.

Facebook’s new “widely viewed content reports” will come out quarterly, reflecting the most viewed News Feed posts in the U.S. every three months — not exactly the kind of real-time data monitoring that might prove useful for observing emerging trends.

With the new data set, Facebook hopes to push back against criticism that its algorithms operate within a black box. But like its often misleading blogged rebuttals and the other sets of cherry-picked data it shares, the company’s latest gesture at transparency is better than nothing, but not particularly useful.

So what do we learn? According to the new data set, 87% of posts that people viewed in the U.S. during Q2 of this year didn’t include an outside link. That’s notable but not very telling since Facebook still has an incredibly massive swath of people sharing and seeing links on a daily basis.

YouTube is predictably the top domain by Facebook’s chosen metric of “content viewers,” which it defines as any account that saw a piece of content on the News Feed, though we don’t get anything in the way of potentially helpful granular data there. Amazon, GoFundMe, TikTok, and others are also in the top 10; no surprises there either.

Things get weirder when Facebook starts breaking down its most viewed links. The top five links include a website for alumni of the Green Bay Packers football team, a random online CBD marketplace, and reppnforchrist.com, an apparently prominent portal for Christianity-themed graphic T-shirts. The subscription page for the Epoch Times, a site well known for spreading pro-Trump conspiracies and other disinformation, comes in at No. 10, though it was beaten by a Tumblr link to two cats walking with their tails intertwined.

Image Credits: Facebook

Yahoo and ABC News are the only prominent national media outlets that make the top 20 when the data is sliced and diced in this particular way. Facebook also breaks down which posts the most people viewed during the period with a list of mostly benign if odd memes, including one that reads “If your VAGINA [cat emoji] or PENIS [eggplant emoji] was named after the last TV show/Move u watched what would it be.”

If you’re wondering why Facebook chose to collect and present this set of data in this specific way, it’s because the company is desperately trying to prove a point: That its platform isn’t overrun by the political conspiracies and controversial right-wing personalities that make headlines.

The dataset is Facebook’s latest argument in its long feud with New York Times reporter Kevin Roose, who created a Twitter account that surfaces Facebook’s most engaging posts on a daily basis, as measured through the Facebook-owned social monitoring tool CrowdTangle.

By the metric of engagement, Facebook’s list of top-performing posts in the U.S. is regularly dominated by far-right personalities and sites like Newsmax, which pushes election conspiracies that Facebook would prefer to distance itself from.

The company argues that Facebook posts with the most interactions don’t accurately represent the top content on the platform. Facebook insists that reach data, which measures how many people see a given post, is a superior metric, but there’s no reason that engagement data isn’t just as relevant if not more so.

“The content that’s seen by the most people isn’t necessarily the content that also gets the most engagement,” Facebook wrote, in a dig clearly aimed at Roose.

The platform wants to de-emphasize political content across the board, which isn’t surprising given its track record of amplifying Russian disinformation, violent far-right militias and the Stop the Steal movement, which culminated in deadly violence at the U.S. Capitol in January.

As The New York Times previously reported, Facebook actually scrapped plans to make its reach data widely available through a public dashboard over fears that even that version of its top-performing posts wouldn’t reflect well on the company.

Instead, the company opted to offer a taste of that data in a quarterly report, and the result shows plenty of junk content but less in the way of politics. Facebook’s cursory gesture of transparency notwithstanding, it’s worth remembering that nothing is stopping the company from letting people see a leaderboard of its most popular content at any given time — in real time, even! — beyond its own fear of bad press.

#amazon, #computing, #facebook, #misinformation, #new-york-times, #news-feed, #social, #social-media, #software, #tc, #the-new-york-times, #united-states

Twitter asks users to flag COVID-19 and election misinformation

Twitter introduced a new test feature Tuesday that allows users to report misinformation they run into on the platform, flagging it to the company as “misleading.” The test will roll out starting today to most users in the U.S., Australia and South Korea.

In the new test, Twitter users will be able to expand the three-dot contextual menu in the upper right corner of a tweet and select “report tweet,” where they’ll be met with the new option to flag a misleading tweet. The next menu offers users a choice to specify that a tweet is misleading about “politics,” “health,” or “something else.” If they select politics, they can specify whether the misleading political tweet pertains to elections; if they choose health, they can flag a misleading tweet about COVID-19 specifically.

Twitter has added a way for users to report election-related misinformation before, though previously those options were temporary features linked to global elections. Back in 2019, the platform rolled out the option to report misleading tweets about voting to help safeguard elections in Europe and India.

The intention is to give users a way to surface tweets that violate Twitter’s existing policies around election and pandemic-related misinformation, two topics it focuses policy and enforcement efforts around. The user reporting system will work in tandem with Twitter’s proactive systems for identifying potentially dangerous misinformation, which rely on a combination of human and automated moderation. For now, users won’t receive any updates from the company on what happens to misleading tweets they report, though those updates could be added in the future.

While the new reporting feature will be available very broadly, the company describes the test as an “experiment,” not a finished feature. Twitter will observe how people on the platform use the new misinformation reporting tool to see if user reporting can be an effective tool for identifying potentially harmful misleading tweets, though the company isn’t on a set timeline for when to fully implement or remove the test feature.

For now, Twitter doesn’t seem very worried about users abusing the feature, since the new user reporting option will plug directly into its established moderation system. Still, the idea of users pointing the company toward “misleading” tweets is sure to spark new cries of censorship from corners of the platform already prone to spreading misinformation.

While the option to flag tweets as misleading is new, the feature will feed reported tweets into Twitter’s existing enforcement flow, where its established rules around health and political misinformation are implemented through a blend of human and algorithmic moderation.

That process will also sort reported tweets for review based on priority. Tweets from users with large followings or tweets generating an unusually high level of engagement will go to the front of the review line, as will tweets that pertain to elections and COVID-19, Twitter’s two areas of emphasis when it comes to policing misinformation.
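
As a rough illustration of that kind of triage (a toy sketch with a made-up scoring scheme, not Twitter’s actual system), a review queue could rank reports by follower count, engagement, and whether the tweet touches one of the two priority topics:

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Report:
    priority: float                      # lower value = reviewed sooner
    tweet_id: str = field(compare=False)

def priority_score(followers: int, engagement: int, topic: str) -> float:
    """Made-up scoring: big accounts, viral tweets, and priority topics jump the line."""
    score = followers / 1_000_000 + engagement / 10_000
    if topic in {"elections", "covid-19"}:
        score += 10
    return -score  # heapq pops the smallest value, so negate to review the highest score first

queue = []
heapq.heappush(queue, Report(priority_score(50, 12, "other"), "t1"))
heapq.heappush(queue, Report(priority_score(2_000_000, 150_000, "covid-19"), "t2"))
heapq.heappush(queue, Report(priority_score(300_000, 90_000, "elections"), "t3"))

print(heapq.heappop(queue).tweet_id)  # "t2": large account, viral, and a COVID-19 topic
```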

The new test is Twitter’s latest effort to lean more on its own community to identify misinformation. Twitter’s most ambitious experiment along those lines is Birdwatch, a crowdsourced way for users to append contextual notes and fact-checks to tweets that can be upvoted or downvoted, Reddit-style. For now, Birdwatch is just a pilot program, but it’s clear the company is interested in decentralizing moderation — an experiment far thornier than just adding a new way to report tweets.

#australia, #covid-19, #disinformation, #election-misinformation, #misinformation, #pandemic, #political-misinformation, #social, #social-media, #south-korea, #tc, #twitter, #united-states

Senators press Facebook for answers about why it cut off misinformation researchers

Facebook’s decision to close accounts connected to a misinformation research project last week prompted a broad outcry from the company’s critics — and now Congress is getting involved.

A handful of lawmakers criticized the decision at the time, slamming Facebook for being hostile toward efforts to make the platform’s opaque algorithms and ad targeting methods more transparent. Researchers believe that studying those hidden systems is crucial work for gaining insight on the flow of political misinformation.

The company specifically punished two researchers with NYU’s Cybersecurity for Democracy project who work on Ad Observer, an opt-in browser tool that allows researchers to study how Facebook targets ads to different people based on their interests and demographics.

In a new letter, a trio of Democratic senators is pressing Facebook for more answers. Senators Amy Klobuchar (D-MN), Chris Coons (D-DE), and Mark Warner (D-VA) wrote to Facebook CEO Mark Zuckerberg asking for a full explanation of why the company terminated the researcher accounts and how those accounts violated the platform’s terms of service and compromised user privacy. The lawmakers sent the letter on Friday.

“While we agree that Facebook must safeguard user privacy, it is similarly imperative that Facebook allow credible academic researchers and journalists like those involved in the Ad Observatory project to conduct independent research that will help illuminate how the company can better tackle misinformation, disinformation, and other harmful activity that is proliferating on its platforms,” the senators wrote.

Lawmakers have long urged the company to be more transparent about political advertising and misinformation, particularly after Facebook was found to have distributed election disinformation in 2016. Those concerns were only heightened by the platform’s substantial role in spreading election misinformation leading up to the insurrection at the U.S. Capitol, where Trump supporters attempted to overturn the vote.

In a blog post defending its decision, Facebook cited compliance with the FTC as one of the reasons the company severed the accounts. But the FTC called Facebook’s bluff last week in a letter to Zuckerberg, noting that nothing about the agency’s guidance for the company would preclude it from encouraging research in the public interest.

“Indeed, the FTC supports efforts to shed light on opaque business practices, especially around surveillance-based advertising,” Samuel Levine, the FTC’s acting director for the Bureau of Consumer Protection, wrote.

#amy-klobuchar, #computing, #congress, #facebook, #federal-trade-commission, #mark-zuckerberg, #misinformation, #nyu, #political-advertising, #privacy, #social, #social-media, #software, #tc, #technology, #trump

Deep dive into stupid: Meet the growing group that rejects germ theory

This thriving Facebook group says viruses don’t cause disease and the pandemic isn’t real. (credit: Facebook)

Listen up, sheeple: COVID-19 doesn’t exist. Viruses don’t cause disease, and they aren’t contagious. Those doctors and health experts who say otherwise don’t know what they’re talking about; the real experts are on Facebook. And they’re saying it loud and clear: The pandemic is caused by your own deplorable life choices, like eating meat or pasta. Any “COVID” symptoms you might experience are actually the result of toxic lifestyle exposures—and you have only yourself to blame.

As utterly idiotic and abhorrent as all of the above is, it’s not an exaggeration of the messages being spread by a growing group of Darwin-award finalists on the Internet—that is, germ theory denialists. Yes, you read that correctly: Germ theory denialists—also known as people who don’t believe that pathogenic viruses and bacteria can cause disease.

As an extension of their rejection of basic scientific and clinical data collected over centuries, they deny the existence of the devastating pandemic that has sickened upwards of 200 million people worldwide, killing more than 4 million.


#facebook, #germ-theory, #misinformation, #public-health, #science, #vaccines

Twitter partners with AP and Reuters to address misinformation on its platform

Twitter announced today it’s partnering with news organizations The Associated Press (AP) and Reuters to expand its efforts focused on highlighting reliable news and information on its platform. Through the new agreements, Twitter’s Curation team will be able to leverage the expertise of the partnered organizations to add more context to the news and trends that circulate across Twitter, as well as to aid the company’s use of public service announcements during high-visibility events, misinformation labels and more.

Currently, the Curation team works to add additional information to content that includes Top Trends and other news on Twitter’s Explore tab. The team is also involved with how certain search results are ranked, to ensure that content from high-quality sources appears at the top of search results when certain keywords or hashtags are searched for on Twitter.

The team may also be involved with the prompts that appear in the Explore tab on the Home Timeline related to major events, like public health emergencies (such as the pandemic) or other events, like elections. And they may help with the misinformation labels that appear on tweets that are allowed to remain visible on Twitter, but are labeled with informative context from authoritative sources. These include tweets that violate Twitter’s rules around manipulated media, election integrity, or COVID-19.

However, the team operates separately from Twitter’s Trust and Safety team, which determines when tweets violate Twitter’s guidelines and when punitive action, like removal or bans, must be taken. Twitter confirmed that neither the AP nor Reuters will be involved in those sorts of enforcement decisions.

Image Credits: Twitter

By working more directly with AP and Reuters, who also partner with Facebook on fact checks, Twitter says it will be able to increase the speed and scale at which it’s able to add this additional information to tweets and elsewhere on its platform. In particular, that means at times when news is breaking and facts are in dispute as a story emerges, Twitter’s own team will be able to quickly turn to these more trusted sources to improve how contextual information is added to the conversations taking place on Twitter.

This could also be useful in stopping misinformation from going viral, instead of waiting until after the fact to correct misleading tweets.

Twitter’s new crowdsourced fact-checking system Birdwatch will also leverage feedback from AP and Reuters to help determine the quality of information shared by Birdwatch participants.

The work will see the Curation team working with the news organizations not just to add context to stories and conversations, but also to help identify which stories need context added, Twitter told us. This added context could appear in many different places on Twitter, including on tweets, search, in Explore, and in curated selections, called Twitter Moments.

Twitter has often struggled with handling misinformation on its platform due to its real-time nature and its use by high-profile figures who attempt to manipulate the truth for their own ends. To date, it has experimented with many features to slow or stop the spread of misinformation, from disabling one-click retweets to adding fact checks to banning accounts, and more. Birdwatch is the latest effort to add context to tweets, but the system is a decentralized attempt at handling misinformation — not one that relies on trusted partners.

“AP has a long history of working closely with Twitter, along with other platforms, to expand the reach of factual journalism,” noted Tom Januszewski, vice president of Global Business Development at AP, in a statement about the new agreement. “This work is core to our mission. We are particularly excited about leveraging AP’s scale and speed to add context to online conversations, which can benefit from easy access to the facts,” he said.

“Trust, accuracy and impartiality are at the heart of what Reuters does every day, providing billions of people with the information they need to make smart decisions,” added Hazel Baker, the head of UGC Newsgathering at Reuters. “Those values also drive our commitment to stopping the spread of misinformation. We’re excited to partner with Twitter to leverage our deep global and local expertise to serve the public conversation with reliable information,” Baker said.

Initially, the collaborations will focus on English-language content on Twitter, but the company says it expects the work to grow over time to support more languages and timezones. We’re told that, during this initial phase, Twitter will evaluate new opportunities to onboard collaborators that can support additional languages.

#ap-news, #misinformation, #reuters, #tc, #the-associated-press, #twitter

ActiveFence comes out of the shadows with $100M in funding and tech that detects online harm

Online abuse, disinformation, fraud and other malicious content is growing and getting more complex to track. Today, a startup called ActiveFence, which has quietly built a tech platform to suss out threats as they are being formed and planned so that trust and safety teams can combat them on platforms more easily, is coming out of the shadows to announce significant funding on the back of a surge of large organizations using its services.

The startup, co-headquartered in New York and Tel Aviv, has raised $100 million, funding that it will use to continue developing its tools and to continue expanding its customer base. To date, ActiveFence says that its customers include companies in social media, audio and video streaming, file sharing, gaming, marketplaces and other technologies — it has yet to disclose any specific names but says that its tools collectively cover “billions” of users. Governments and brands are two other categories that it is targeting as it continues to expand. It has been around since 2018 and is growing at around 100% annually.

The $100 million being announced today actually covers two rounds: its most recent Series B led by CRV and Highland Europe, as well as a Series A it never announced led by Grove Ventures and Norwest Venture Partners. Vintage Investment Partners, Resolute Ventures and other unnamed backers also participated. It’s not disclosing valuation but I understand it’s between $300 million and $400 million. (I’ll update this if we learn more.)

The increased presence of social media and online chatter on other platforms has put a strong spotlight on how those forums are used by bad actors to spread malicious content. ActiveFence’s particular approach is a set of algorithms that tap into innovations in AI, such as natural language processing, to map relationships between conversations. It crawls the obvious as well as the less obvious and harder-to-reach parts of the internet — some 3 million sources in all — to pick up on the chatter where a lot of malicious content and campaigns are typically born, before they become higher-profile issues. It’s built both on big data analytics and on the understanding that the long tail of content online has value if it can be tapped effectively.

“We take a fundamentally different approach to trust, safety and content moderation,” Noam Schwartz, the co-founder and CEO, said in an interview. “We are proactively searching the darkest corners of the web and looking for bad actors in order to understand the sources of malicious content. Our customers then know what’s coming. They don’t need to wait for the damage, or for internal research teams to identify the next scam or disinformation campaign. We work with some of the most important companies in the world, but even tiny, super niche platforms have risks.”

The insights that ActiveFence gathers are then packaged up in an API that its customers can then feed into whatever other systems they use to track or mitigate traffic on their own platforms.
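To make that hand-off concrete, here is a minimal, hypothetical sketch of a customer-side integration: poll a vendor threat feed and route high-severity findings into an internal moderation queue. The endpoint, field names and the forward_to_moderation_queue helper are invented for illustration; ActiveFence has not published its API schema, and the sketch assumes the third-party requests library.

```python
# Hypothetical sketch: pulling early-warning threat intel from a vendor API
# and handing flagged items to an internal moderation queue. The endpoint and
# field names are invented for illustration; this is not ActiveFence's API.
import requests

THREAT_FEED_URL = "https://api.example-vendor.com/v1/threats"  # placeholder URL
API_KEY = "YOUR_API_KEY"  # placeholder credential


def fetch_new_threats(since_timestamp: str) -> list:
    """Fetch threat reports created after the given ISO-8601 timestamp."""
    response = requests.get(
        THREAT_FEED_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        params={"since": since_timestamp},
        timeout=10,
    )
    response.raise_for_status()
    return response.json().get("threats", [])


def forward_to_moderation_queue(threat: dict) -> None:
    """Stand-in for whatever internal triage system a platform already runs."""
    print(f"queued for review: {threat.get('summary')} "
          f"(severity={threat.get('severity')})")


if __name__ == "__main__":
    for threat in fetch_new_threats("2021-08-01T00:00:00Z"):
        # Only escalate items the vendor scores as high severity.
        if threat.get("severity", "low") == "high":
            forward_to_moderation_queue(threat)
```

The point of the design is that the vendor’s crawling and analysis stay outside the platform; the customer only consumes the resulting signals and decides what to do with them in its own systems.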

ActiveFence is not the only company building technology to help platform operators, governments and brands to have a better picture of what is going on in the wider online world. Factmata has built algorithms to better understand and track sentiments online; Primer (which also recently raised a big round) also uses NLP to help its customers track online information, with its customers including government organizations that used its technology to track misinformation during election campaigns; Bolster (formerly called RedMarlin) is another.

Some of the bigger platforms have also gotten more proactive in bringing tracking technology and talent in-house: Facebook acquired Bloomsbury AI several years ago for this purpose; Twitter has acquired Fabula (and is working on bigger efforts like Birdwatch to build better tools); and earlier this year Discord picked up Sentropy, another online abuse tracker. In some cases, companies that more regularly compete against each other for eyeballs and dollars are even teaming up to collaborate on efforts.

Indeed, it may well be that ultimately there will be multiple efforts and multiple companies doing good work in this area, not unlike other corners of the security world, where problems often need more than one hammer thrown at them before they crack. In this particular case, the growth of the startup to date, and its effectiveness in identifying early warning signs, is one reason why investors have been interested in ActiveFence.

“We are pleased to support ActiveFence in this important mission,” commented Izhar Armony, the lead investor from CRV, in a statement. “We believe they are ready for the next phase of growth and that they can maintain leadership in the dynamic and fast growing trust and safety market.”

“ActiveFence has emerged as a clear leader in the developing online trust and safety category. This round will help the company to accelerate the growth momentum we witnessed in the past few years,” said Dror Nahumi, general partner at Norwest Venture Partners, in a statement.

#big-data, #enterprise, #europe, #funding, #government, #misinformation, #security, #tc

How much COVID misinformation is on Facebook? Its execs don’t want to know

Enlarge (credit: KJ Parish)

For years, misinformation has flourished on Facebook. Falsehoods, misrepresentations, and outright lies posted on the site have shaped the discourse on everything from national politics to public health.

But despite their role in facilitating communications for billions of people, Facebook executives refused to commit resources to understand the extent to which COVID-19-related misinformation pervaded its platform, according to a report in The New York Times.

Early in the pandemic, a group of data scientists at Facebook met with executives to propose a project that would determine how many users saw misleading or false information about COVID. It wasn’t a small task—they estimated that the process could take a year or more to complete—but it would give the company a solid understanding of the extent to which misinformation spread on its platform.

Read 7 remaining paragraphs | Comments

#covid-19, #covid-19-vaccine, #facebook, #misinformation, #policy, #vaccine-misinformation

Tennessee has gone “anti-vaccine,” state vaccine chief says after being fired

Grown women comfort a masked child with a rolled up sleeve.

Enlarge / US first lady Jill Biden (L) comforts Adriana Lyttle, 12, as she receives her vaccine at a COVID-19 vaccination site at Ole Smoky Distillery in Nashville, Tennessee. (credit: Getty | Tom Brenner)

The Tennessee state government on Monday fired its top vaccination official, Dr. Michelle Fiscus, who says that state leaders have “bought into the anti-vaccine misinformation campaign.”

In a fiery statement published late Monday by The Tennessean, Fiscus warns that as the delta variant continues to spread in the undervaccinated state, more Tennesseans “will continue to become sick and die from this vaccine-preventable disease because they choose to listen to the nonsense spread by ignorant people.”

Fiscus is just the latest public health official to quit or lose their position amid the devastating pandemic, many aspects of which have become tragically politicized. Fiscus wrote that, as the now-former medical director for vaccine-preventable diseases and immunization programs at the Tennessee Department of Health, she is the 25th immunization director to leave their position amid the pandemic. With only 64 territorial immunization directors in the country, her firing brings the nationwide turnover in immunization directors to nearly 40 percent during the health crisis.

Read 8 remaining paragraphs | Comments

#anti-vaccine, #covid-19, #infectious-disease, #misinformation, #public-health, #science, #tennessee, #vaccines

Twitter tests more attention-grabbing misinformation labels

Twitter is considering changes to the way it contextualizes misleading tweets that the company doesn’t believe are dangerous enough to be removed from the platform outright.

The company announced the test in a tweet Thursday with an image of the new misinformation labels. Within the limited test, those labels will appear with color-coded backgrounds now, making them much more visible in the feed while also giving users a way to quickly parse the information from visual cues. Some users will begin to see the change this week.

Tweets that Twitter deems “misleading” will get a red background with a short explanation and a notice that users can’t reply to, like or share the content. Yellow labels will appear on content that isn’t as actively misleading. In both cases, Twitter has made it more clear that you can click the labels to find verified information about the topic at hand (in this case, the pandemic).

“People who come across the new labels as a part of this limited test should expect a more communicative impact from the labels themselves both through copy, symbols and colors used to distill clear context about not only the label, but the information or content they are engaging with,” a Twitter spokesperson told TechCrunch.

Image Credits: Twitter

Twitter found that even tiny shifts in design could impact how people interacted with labeled tweets. In a test the company ran with a pink variation of the label, users clicked through more often to the authoritative information that Twitter provided, but they also quote-tweeted the labeled content more, furthering its spread. Twitter says that it tested many variations of the written copy, colors and symbols that made their way into the new misinformation labels.

The changes come after a long public feedback period that convinced the company that misinformation labels needed to stand out better in a sea of tweets. Facebook’s own misinformation labels have also faced criticism for blending in too easily and failing to create much friction for potentially dangerous information on the platform.

Twitter first created content labels as a way to flag “manipulated media” — photos and videos altered to deliberately mislead people, like the doctored deepfake of Nancy Pelosi that went viral back in 2019. Last May, Twitter expanded its use of labels to address the wave of Covid-19 misinformation that swept over social media early in the pandemic.

A month ago, the company rolled out new labels specific to vaccine misinformation and introduced a strike-based system into its rules. The idea is for Twitter to build a toolkit it can use to respond in a proportional way to misinformation depending on the potential for real-world harm.

“… We know that even within the space of our policies, not all misleading claims are equally harmful,” a Twitter spokesperson said. “For example, telling someone to drink bleach in order to cure COVID is a more immediate and severe harm than sharing a viral image of a shark swimming on a flooded highway and claiming that’s footage from a hurricane. (That’s a real thing that happens every hurricane season.)”

Labels are just one of the content moderation options that Twitter developed over the course of the last couple of years, along with warnings that require a click-through and pop-up messages designed to subtly steer people away from impulsively sharing inflammatory tweets.

When Twitter decides not to remove content outright, it turns to an a la carte menu of potential content enforcement options:

  • Apply a label and/or warning message to the Tweet;
  • Show a warning to people before they share or like the Tweet;
  • Reduce the visibility of the Tweet on Twitter and/or prevent it from being recommended;
  • Turn off likes, replies, and Retweets; and/or
  • Provide a link to additional explanations or clarifications, such as in a curated landing page or relevant Twitter policies.

In most scenarios, the company will opt for all of the above.
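Purely as an illustration of how that menu and the “all of the above” default could be encoded, here is a hypothetical sketch; the harm tiers, the examples and the mapping are assumptions for illustration, not Twitter’s actual enforcement logic.

```python
# Hypothetical sketch of a proportional enforcement menu, loosely modeled on
# the options listed above. Severity tiers and mappings are illustrative
# assumptions, not Twitter's real policy engine.
from enum import Enum, auto


class Action(Enum):
    APPLY_LABEL = auto()         # label and/or warning message on the tweet
    SHARE_WARNING = auto()       # warn people before they share or like it
    REDUCE_VISIBILITY = auto()   # down-rank and stop recommending it
    DISABLE_ENGAGEMENT = auto()  # turn off likes, replies, and retweets
    LINK_TO_CONTEXT = auto()     # link to a curated page or relevant policy


def choose_actions(harm_level: str) -> set:
    """Map an assessed level of potential harm to a set of responses."""
    if harm_level == "severe":    # e.g., "drink bleach to cure COVID"
        return set(Action)        # apply all of the above
    if harm_level == "moderate":
        return {Action.APPLY_LABEL, Action.REDUCE_VISIBILITY,
                Action.LINK_TO_CONTEXT}
    if harm_level == "low":       # e.g., the recycled hurricane-shark photo
        return {Action.APPLY_LABEL, Action.LINK_TO_CONTEXT}
    return set()                  # no enforcement for unrated content


print(sorted(a.name for a in choose_actions("severe")))
```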

“While there is no single answer to addressing the unique challenges presented by the range of types of misinformation, we believe investing in a multi-prong approach will allow us to be nimble and shift with the constantly changing dynamic of the public conversation,” the spokesperson said.

#disinformation, #misinformation, #social, #social-media, #tc, #twitter

Twitter starts rolling out Birdwatch fact checks inside tweets

Twitter is looking to crowdsource its way out of its misinformation woes with its new product, Birdwatch, which taps a network of engaged tweeters to add notes to misleading tweets. Today, Twitter announced that it is starting to roll out Birdwatch notes to pilot participants across iOS, Android and desktop.

The company launched a pilot version of the program back in January, describing the effort as a way to add context to misinformation in real time.

“We believe this approach has the potential to respond quickly when misleading information spreads, adding context that people trust and find valuable,” Product VP Keith Coleman wrote in a blog post at the time. “Eventually we aim to make notes visible directly on Tweets for the global Twitter audience, when there is consensus from a broad and diverse set of contributors.”

That time is apparently now for an early set of Birdwatch pilot participants.

Twitter says that once Birdwatch notes are added to a tweet, users will have the opportunity to rate whether the feedback is helpful or not. If none of the notes is deemed helpful, the Birdwatch card itself will disappear, but if any are deemed helpful, they’ll pop up directly inside the tweet.
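As described, the visibility rule is simple enough to sketch; the note structure and rating fields below are hypothetical stand-ins, since the article doesn’t spell out Birdwatch’s internal data model.

```python
# Hypothetical sketch of the visibility rule described above: if any note on a
# tweet has been rated helpful, surface the notes; otherwise hide the card.
def birdwatch_card_visible(notes: list) -> bool:
    """Return True if at least one note has more helpful than unhelpful ratings."""
    return any(
        note.get("helpful_ratings", 0) > note.get("unhelpful_ratings", 0)
        for note in notes
    )


notes = [
    {"text": "This photo is from 2017, not this week.",
     "helpful_ratings": 12, "unhelpful_ratings": 3},
]
print(birdwatch_card_visible(notes))  # True -> show notes directly on the tweet
```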

There have been an awful lot of questions about how and whether Birdwatch will work inside the current social media framework. Using community feedback differs from the more centralized efforts of platforms like Facebook, which have tapped independent fact-checking organizations. Twitter is clearly aiming to decentralize this effort as much as it can and put power in the hands of Birdwatch contributors, but with the audiences of individual tweeters currently responsible for judging the helpfulness, and therefore the visibility, of fact checks, it’s clear this is going to be a pretty messy solution at times.

#android, #computing, #crowdsource, #crowdsourcing, #deception, #keith-coleman, #misinformation, #operating-systems, #social, #social-media, #software, #twitter

Dunning-Kruger meets fake news

A silhouetted figure goes fishing in a complex collage.

Enlarge (credit: Aurich Lawson | Getty Images)

The Dunning-Kruger effect is perhaps both one of the most famous biases in human behavior—and one of the most predictable. It posits that people who don’t understand a topic also lack sufficient knowledge to recognize that they don’t understand it. Instead, they know just enough to convince themselves they’re completely on top of the topic, with results ranging from hilarious to painful.

Inspired by the widespread sharing of news articles that are blatantly false, a team of US-based researchers looked into whether Dunning-Kruger might be operating in the field of media literacy. Not surprisingly, people do, in fact, overestimate their ability to identify misleading news. But the details are complicated, and there’s no obvious route to overcoming this bias in any case.

Evaluating the news

Media literacy has the potential to limit the rapid spread of misinformation. Assuming people care about the accuracy of the things they like or share—something that’s far from guaranteed—a stronger media literacy would help people evaluate if something was likely to be accurate before pressing that share button. Evaluating the credibility of sources is an essential part of that process.

Read 13 remaining paragraphs | Comments

#behavioral-science, #biology, #dunning-kruger, #misinformation, #science

Indivisible is training an army of volunteers to neutralize political misinformation

The grassroots Democratic organization Indivisible is launching its own team of stealth fact-checkers to push back against misinformation — an experiment in what it might look like to train up a political messaging infantry and send them out into the information trenches.

Called the “Truth Brigade,” the corps of volunteers will learn best practices for countering popular misleading narratives on the right. They’ll coordinate with the organization on a biweekly basis to unleash a wave of progressive messaging that aims to drown out political misinformation and boost Biden’s legislative agenda in the process.

Considering the scope of the misinformation that remains even after social media’s big January 6 cleanup, the project will certainly have its work cut out for it.

“This is an effort to empower volunteers to step into a gap that is being created by very irresponsible behavior by the social media platforms,” Indivisible co-founder and co-executive director Leah Greenberg told TechCrunch. “It is absolutely frustrating that we’re in this position of trying to combat something that they ultimately have a responsibility to address.”

Greenberg co-founded Indivisible with her husband following the 2016 election. The organization grew out of the viral success the pair had when they and two other former House staffers published a handbook to Congressional activism. The guide took off in the flurry of “resist”-era activism on the left calling on Americans to push back on Trump and his agenda.

Indivisible’s Truth Brigade project blossomed out of a pilot program in Colorado spearheaded by Jody Rein, a senior organizer concerned about what she was seeing in her state. Since that pilot began last fall, the program has grown into 2,500 volunteers across 45 states.

The messaging will largely center around Biden’s ambitious legislative packages: the American Rescue Plan, the voting rights bill HR1 and the forthcoming infrastructure package. Rather than debunking political misinformation about those bills directly, the volunteer team will push back with personalized messages promoting the legislation and dispelling false claims within their existing social spheres on Facebook and Twitter.

The coordinated networks at Indivisible will cross-promote those pieces of semi-organic content using tactics parallel to what a lot of disinformation campaigns do to send their own content soaring. (In the case of groups that conceal their origins, Facebook calls this “coordinated inauthentic behavior.”) Since the posts are part of a volunteer push and not targeted advertising, they won’t be labeled, though some might contain hashtags that connect them back to the Truth Brigade campaign.

Volunteers are trained to serve up progressive narratives in a “truth sandwich” that’s careful to not amplify the misinformation it’s meant to push back against. For Indivisible, training volunteers to avoid giving political misinformation even more oxygen is a big part of the effort.

“What we know is that actually spreads disinformation and does the work of some of these bad actors for them,” Greenberg said. “We are trying to get folks to respond not by engaging in that fight — that’s really doing their work for them — but by trying to advance the kind of narrative that we actually want people to buy into.”

She cites the social media outrage cycle perpetuated by Georgia Rep. Marjorie Taylor Greene as a harbinger of what Democrats will again be up against in 2022. Taylor Greene is best known for endorsing QAnon, getting yanked off of her Congressional committee assignments and comparing mask requirements to the Holocaust — comments that inspired some Republicans to call for her ouster from the party.

Political figures like Greene regularly rile up the left with outlandish claims and easily debunked conspiracies. Greenberg believes they suck up a lot of energy that could be better spent resisting the urge to rage-retweet and spreading progressive political messages instead.

“It’s not enough to just fact check [and] it’s not enough to just respond, because then fundamentally we’re operating from a defensive place,” Greenberg said.

“We want to be proactively spreading positive messages that people can really believe in and grab onto and that will inoculate them from some of this.”

For Indivisible, the project is a long-term experiment that could pave the way for a new kind of online grassroots political campaign beyond targeted advertising — one that hopes to boost the signal in a sea of noise.

#articles, #biden, #disinformation, #energy, #government, #misinformation, #operating-systems, #policy, #president, #social-media, #social-media-platforms, #tc, #trump, #twitter

Facebook changes misinfo rules to allow posts claiming Covid-19 is man-made

Facebook made a few noteworthy changes to its misinformation policies this week, including the news that the company will now allow claims that Covid was created by humans — a theory that contradicts the previously prevailing assumption that humans picked up the virus naturally from animals.

“In light of ongoing investigations into the origin of COVID-19 and in consultation with public health experts, we will no longer remove the claim that COVID-19 is man-made from our apps,” a Facebook spokesperson told TechCrunch. “We’re continuing to work with health experts to keep pace with the evolving nature of the pandemic and regularly update our policies as new facts and trends emerge.”

The company is adjusting its rules about pandemic misinformation in light of ongoing international investigations lending legitimacy to the theory that the virus could have escaped from a lab. While that theory clearly has enough credibility to be investigated at this point, it is often interwoven with demonstrably false misinformation about fake cures, 5G towers causing Covid and, most recently, the false claim that the AstraZeneca vaccine implants recipients with a Bluetooth chip.

Earlier this week, President Biden ordered a multi-agency intelligence report evaluating if the virus could have accidentally leaked out of a lab in Wuhan, China. Biden called this possibility one of two “likely scenarios.”

“… Shortly after I became President, in March, I had my National Security Advisor task the Intelligence Community to prepare a report on their most up-to-date analysis of the origins of COVID-19, including whether it emerged from human contact with an infected animal or from a laboratory accident,” Biden said in an official White House statement, adding that there isn’t sufficient evidence to make a final determination.

Claims that the virus was man-made or lab-made have circulated widely since the pandemic’s earliest days, even as the scientific community largely maintained that the virus probably made the jump from an infected animal to a human via natural means. But many questions remain about the origins of the virus and the U.S. has yet to rule out the possibility that the virus emerged from a Chinese lab — a scenario that would be a bombshell for international relations.

Prior to the Covid policy change, Facebook announced that it would finally implement harsher punishments against individuals who repeatedly peddle misinformation. The company will now throttle the News Feed reach of all posts from accounts that are found to habitually share known misinformation, restrictions it previously put in place for Pages, Groups, Instagram accounts and websites that repeatedly break the same rules.

#astrazeneca, #biden, #china, #covid-19, #facebook, #government, #misinformation, #president, #social, #tc, #united-states, #white-house

Facebook is testing pop-up messages telling people to read a link before they share it

Years after popping open a Pandora’s box of bad behavior, social media companies are trying to figure out subtle ways to reshape how people use their platforms.

Following Twitter’s lead, Facebook is trying out a new feature designed to encourage users to read a link before sharing it. The test will reach 6 percent of Facebook’s Android users globally in a gradual rollout that aims to encourage “informed sharing” of news stories on the platform.

Users can still easily click through to share a given story, but the idea is that by adding friction to the experience, people might rethink their original impulses to share the kind of inflammatory content that currently dominates on the platform.

Twitter introduced prompts urging users to read a link before retweeting it last June and the company quickly found the test feature to be successful, expanding it to more users.

Facebook began trying out more prompts like this last year. Last June, the company rolled out pop-up messages to warn users before they share any content that’s more than 90 days old, in an effort to cut down on misleading stories taken out of their original context.

At the time, Facebook said it was looking at other pop-up prompts to cut down on some kinds of misinformation. A few months later, Facebook rolled out similar pop-up messages that noted the date and the source of any links they share related to COVID-19.
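Taken together, these prompts amount to lightweight pre-share checks. Below is a hedged sketch of what such checks might look like; the 90-day threshold comes from Facebook’s own announcement, while the field names, the read-before-share wording and the COVID source/date notice format are assumptions for illustration.

```python
# Hypothetical sketch of pre-share friction checks like the ones described:
# unread links, stale articles (the 90-day threshold is from Facebook's own
# announcement), and COVID-related links that get a source/date notice.
from datetime import datetime, timezone


def pre_share_prompts(link: dict, user_opened_link: bool) -> list:
    prompts = []
    if not user_opened_link:
        prompts.append("You haven't opened this article. Read it before sharing?")

    published = datetime.fromisoformat(link["published_at"])
    age_days = (datetime.now(timezone.utc) - published).days
    if age_days > 90:
        prompts.append(f"This article is {age_days} days old and may lack current context.")

    if link.get("topic") == "covid-19":
        prompts.append(f"Source: {link['source']}, published {published.date()}.")
    return prompts


link = {"published_at": "2020-04-01T00:00:00+00:00",
        "source": "example.com", "topic": "covid-19"}
for prompt in pre_share_prompts(link, user_opened_link=False):
    print(prompt)
```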

The approach demonstrates Facebook’s preference for passively nudging people away from misinformation and toward its own verified resources on hot-button issues like COVID-19 and the 2020 election.

While the jury is still out on how much of an impact this kind of gentle behavioral shaping can make on the misinformation epidemic, both Twitter and Facebook have also explored prompts that discourage users from posting abusive comments.

Pop-up messages that give users a sense that their bad behavior is being observed might be where more automated moderation is headed on social platforms. While users would probably be far better served by social media companies scrapping their misinformation and abuse-ridden existing platforms and rebuilding them more thoughtfully from the ground up, small behavioral nudges will have to do.

#android, #facebook, #misinformation, #social, #social-media, #tc, #twitter

At social media hearing, lawmakers circle algorithm-focused Section 230 reform

Rather than a CEO-slamming sound bite free-for-all, Tuesday’s big tech hearing on algorithms aimed for more of a listening session vibe — and in that sense it mostly succeeded.

The hearing centered on testimony from the policy leads at Facebook, YouTube and Twitter rather than the chief executives of those companies for a change. The resulting few hours didn’t offer any massive revelations but was still probably more productive than squeezing some of the world’s most powerful men for their commitments to “get back to you on that.”

In the hearing, lawmakers bemoaned social media echo chambers and the ways that the algorithms pumping content through platforms are capable of completely reshaping human behavior.

“… This advanced technology is harnessed into algorithms designed to attract our time and attention on social media, and the results can be harmful to our kids’ attention spans, to the quality of our public discourse, to our public health, and even to our democracy itself,” said Chris Coons (D-DE), chair of the Senate Judiciary’s subcommittee on privacy and tech, which held the hearing.

Coons struck a cooperative note, observing that algorithms drive innovation but that their dark side comes with considerable costs.

None of this is new, of course. But Congress is crawling closer to solutions, one repetitive tech hearing at a time. The Tuesday hearing highlighted some zones of bipartisan agreement that could determine the chances of a tech reform bill passing the Senate, which is narrowly controlled by Democrats. Coons expressed optimism that a “broadly bipartisan solution” could be reached.

What would that look like? Probably changes to Section 230 of the Communications Decency Act, which we’ve written about extensively over the years. That law protects social media companies from liability for user-created content and it’s been a major nexus of tech regulation talk, both in the newly Democratic Senate under Biden and the previous Republican-led Senate that took its cues from Trump.

Lauren Culbertson, head of U.S. public policy at Twitter

Lauren Culbertson, head of U.S. public policy at Twitter Inc., speaks remotely during a Senate Judiciary Subcommittee hearing in Washington, D.C., U.S., on Tuesday, April 27, 2021. Photographer: Al Drago/Bloomberg via Getty Images

A broken business model

In the hearing, lawmakers pointed to flaws inherent to how major social media companies make money as the heart of the problem. Rather than criticizing companies for specific failings, they mostly focused on the core business model from which social media’s many ills spring forth.

“I think it’s very important for us to push back on the idea that really complicated, qualitative problems have easy quantitative solutions,” Sen. Ben Sasse (R-NE) said. He argued that because social media companies make money by keeping users hooked to their products, any real solution would have to upend that business model altogether.

“The business model of these companies is addiction,” Josh Hawley (R-MO) echoed, calling social media an “attention treadmill” by design.

Ex-Googler and frequent tech critic Tristan Harris didn’t mince words about how tech companies talk around that central design tenet in his own testimony. “It’s almost like listening to a hostage in a hostage video,” Harris said, likening the engagement-seeking business model to a gun just offstage.

Spotlight on Section 230

One big way lawmakers propose to disrupt those deeply entrenched incentives? Adding algorithm-focused exceptions to the Section 230 protections that social media companies enjoy. A few bills floating around take that approach.

One bill from Sen. John Kennedy (R-LA) and Reps. Paul Gosar (R-AZ) and Tulsi Gabbard (D-HI) would require platforms with 10 million or more users to obtain consent before serving users content based on their behavior or demographic data if they want to keep Section 230 protections. The idea is to revoke 230 immunity from platforms that boost engagement by “funneling information to users that polarizes their views” unless a user specifically opts in.

In another bill, the Protecting Americans from Dangerous Algorithms Act, Reps. Anna Eshoo (D-CA) and Tom Malinowski (D-NJ) propose suspending Section 230 protections and making companies liable “if their algorithms amplify misinformation that leads to offline violence.” That bill would amend Section 230 to reference existing civil rights laws.

Section 230’s defenders argue that any insufficiently targeted changes to the law could disrupt the modern internet as we know it, resulting in cascading negative impacts well beyond the intended scope of reform efforts. An outright repeal of the law is almost certainly off the table, but even small tweaks could completely realign internet businesses, for better or worse.

During the hearing, Hawley made a broader suggestion for companies that use algorithms to chase profits. “Why shouldn’t we just remove section 230 protection from any platform that engages in behavioral advertising or algorithmic amplification?” he asked, adding that he wasn’t opposed to an outright repeal of the law.

Sen. Klobuchar, who leads the Senate’s antitrust subcommittee, connected the algorithmic concerns to anti-competitive behavior in the tech industry. “If you have a company that buys out everyone from under them… we’re never going to know if they could have developed the bells and whistles to help us with misinformation because there is no competition,” Klobuchar said.

Subcommittee members Klobuchar and Sen. Mazie Hirono (D-HI) have their own major Section 230 reform bill, the Safe Tech Act, but that legislation is less concerned with algorithms than ads and paid content.

At least one more major bill looking at Section 230 through the lens of algorithms is still on the way. Prominent big tech critic House Rep. David Cicilline (D-RI) is due out soon with a Section 230 bill that could suspend liability protections for companies that rely on algorithms to boost engagement and line their pockets.

“That’s a very complicated algorithm that is designed to maximize engagement to drive up advertising prices to produce greater profits for the company,” Cicilline told Axios last month. “…That’s a set of business decisions for which, it might be quite easy to argue, that a company should be liable for.”

#anna-eshoo, #behavioral-advertising, #biden, #communications-decency-act, #congress, #josh-hawley, #misinformation, #operating-systems, #section-230, #section-230-of-the-communications-decency-act, #senate, #senator, #social-media, #tc, #tristan-harris, #tulsi-gabbard, #twitter

The next era of moderation will be verified

Since the dawn of the internet, knowing (or, perhaps more accurately, not knowing) who is on the other side of the screen has been one of the biggest mysteries and thrills. In the early days of social media and online forums, anonymous usernames were the norm and meant you could pretend to be whoever you wanted to be.

As exciting and liberating as this freedom was, the problems quickly became apparent — predators of all kinds have used this cloak of anonymity to prey upon unsuspecting victims, harass anyone they dislike or disagree with, and spread misinformation without consequence.

For years, the conversation around moderation has been focused on two key pillars. First, what rules to write: What content is deemed acceptable or forbidden, how do we define these terms, and who makes the final call on the gray areas? And second, how to enforce them: How can we leverage both humans and AI to find and flag inappropriate or even illegal content?

While these continue to be important elements to any moderation strategy, this approach only flags bad actors after an offense. There is another equally critical tool in our arsenal that isn’t getting the attention it deserves: verification.

Most people think of verification as the “blue checkmark” — a badge of honor bestowed upon the elite and celebrities among us. However, verification is becoming an increasingly important tool in moderation efforts to combat nefarious issues like harassment and hate speech.

That blue checkmark is more than just a signal showing who’s important — it also confirms that a person is who they say they are, which is an incredibly powerful means to hold people accountable for their actions.

One of the biggest challenges that social media platforms face today is the explosion of fake accounts, with the Brad Pitt impersonator on Clubhouse being one of the more recent examples. Bots and sock puppets spread lies and misinformation like wildfire, and they propagate more quickly than moderators can ban them.

This is why Instagram began implementing new verification measures last year to combat this exact issue. By verifying users’ real identities, Instagram said it “will be able to better understand when accounts are attempting to mislead their followers, hold them accountable, and keep our community safe.”

The urgency to implement verification is also bigger than just stopping the spread of questionable content. It can also help companies ensure they’re staying on the right side of the law.

Following an exposé revealing illegal content was being uploaded to Pornhub’s site, the company banned posts from nonverified users and deleted all content uploaded from unverified sources (more than 80% of the videos hosted on its platform). It has since implemented new measures to verify its users to prevent this kind of issue from infiltrating its systems again in the future.

Companies of all kinds should be looking at this case as a cautionary tale — if there had been verification from the beginning, the systems would have been in a much better place to identify bad actors and keep them out.

However, it’s important to remember that verification is not a single tactic, but rather a collection of solutions that must be used dynamically in concert to be effective. Bad actors are savvy and continually updating their methods to circumvent systems. Using a single-point solution to verify users — such as through a photo ID — might sound sufficient on its face, but it’s relatively easy for a motivated fraudster to overcome.

At Persona, we’ve detected increasingly sophisticated fraud attempts ranging from using celebrity photos and data to create accounts to intricate photoshopping of IDs and even using deepfakes to mimic a live selfie.

That’s why it’s critical for verification systems to take multiple signals into account when verifying users, including actively collected customer information (like a photo ID), passive signals (their IP address or browser fingerprint), and third-party data sources (like phone and email risk lists). By combining multiple data points, a valid but stolen ID won’t pass through the gates because signals like location or behavioral patterns will raise a red flag that this user’s identity is likely fraudulent or at the very least warrants further investigation.
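A simplified sketch of that kind of multi-signal scoring is below; the specific signals, weights and thresholds are illustrative assumptions, not Persona’s (or any vendor’s) actual model.

```python
# Hypothetical sketch of multi-signal identity verification: combine an active
# document check with passive and third-party risk signals. Weights and the
# decision threshold are illustrative, not any vendor's real scoring model.
def assess_identity(signals: dict) -> str:
    risk = 0.0
    if not signals.get("id_document_valid", False):
        risk += 0.5   # active check: the submitted ID failed verification
    if signals.get("ip_country") != signals.get("id_country"):
        risk += 0.2   # passive signal: location doesn't match the document
    if signals.get("device_fingerprint_seen_on_banned_account", False):
        risk += 0.3   # passive signal: device reused by a banned account
    if signals.get("email_on_risk_list", False) or signals.get("phone_on_risk_list", False):
        risk += 0.2   # third-party data: contact info appears on risk lists

    if risk >= 0.5:
        return "reject"          # e.g., a valid but stolen ID used from the wrong place
    if risk >= 0.2:
        return "manual_review"   # warrants further investigation
    return "approve"


print(assess_identity({"id_document_valid": True, "ip_country": "US",
                       "id_country": "DE",
                       "device_fingerprint_seen_on_banned_account": False}))
# -> "manual_review": the document checks out, but the location mismatch raises a flag
```

The design point is the same one the column makes: no single signal decides the outcome; a valid document alone doesn’t clear a user if the passive and third-party signals disagree.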

This kind of holistic verification system will enable social and user-generated-content platforms to not only deter and flag bad actors but also prevent them from repeatedly entering your platform under new usernames and emails, a common tactic of trolls and account abusers who have previously been banned.

Beyond individual account abusers, a multisignal approach can help manage an arguably bigger problem for social media platforms: coordinated disinformation campaigns. Any issue involving groups of bad actors is like battling the multiheaded Hydra — you cut off one head only to have two more grow back in its place.

Yet killing the beast is possible when you have a comprehensive verification system that can help surface groups of bad actors based on shared properties (e.g., location). While these groups will continue to look for new ways in, multifaceted verification that is tailored for the end user can help keep them from running rampant.

Historically, identity verification systems like Jumio or Trulioo were designed for specific industries, like financial services. But we’re starting to see the rise in demand for industry-agnostic solutions like Persona to keep up with these new and emerging use cases for verification. Nearly every industry that operates online can benefit from verification, even ones like social media, where there isn’t necessarily a financial transaction to protect.

It’s not a question of if verification will become a part of the solution for challenges like moderation, but rather a question of when. The technology and tools exist today, and it’s up to social media platforms to decide that it’s time to make this a priority.

#column, #misinformation, #opinion, #privacy, #security, #social, #social-media, #social-media-platforms, #verification