Fact-checking works to undercut misinformation in many countries

(credit: Gordon Jolly / Flickr)

In the wake of the flood of misinformation that’s drowning the US, lots of organizations have turned to fact-checks. Many newsrooms set up dedicated fact-check groups, and some independent organizations were formed to provide the service. We get live fact-checking of political debates, and Facebook will now tag material it deems misinformation with links to a fact-check.

Obviously, given how many people are still afraid of COVID-19 vaccines, there are limits to how much fact-checking can accomplish. But might it be effective outside the overheated misinformation environment in the US? A new study tests out the efficacy of fact-checking in a set of countries that are both geographically and culturally diverse, and it finds that fact-checking is generally more effective at shaping public understanding than misinformation is.

Checking in with different countries

The two researchers behind the new work, Ethan Porter and Thomas Wood, identified three countries that are outside the usual group of rich, industrialized nations where most population surveys occur. These were Argentina, Nigeria, and South Africa. As a bit of a control for the typical surveys, they also ran their study in the UK. All four of these countries have professional fact-checking organizations that assisted with the work and were able to recruit 2,000 citizens for the study.


#behavioral-science, #fact-checking, #human-behavior, #misinformation, #science

Reddit’s teach-the-controversy stance on COVID vaccines sparks wider protest

Photo illustration with a hand holding a mobile phone and a Reddit logo in the background. (credit: Getty Images | SOPA Images)

Over 135 subreddits have gone dark this week in protest of Reddit’s refusal to ban communities that spread misinformation about the COVID pandemic and vaccines.

Subreddits that went private include two with 10 million or more subscribers, namely r/Futurology and r/TIFU. The PokemonGo community is one of 15 other subreddits with at least 1 million subscribers that went private; another 15 subreddits with at least 500,000 subscribers also went private. They’re all listed in a post on “r/VaxxHappened,” which has been coordinating opposition to Reddit management’s stance on pandemic misinformation. More subreddits are being added as they join the protest.

“Futurology has gone private to protest Reddit’s inaction on COVID-19 misinformation,” a message on that subreddit says. “Reddit won’t enforce their policies against misinformation, brigading, and spamming. Misinformation subreddits such as NoNewNormal and r/conspiracy must be shut down. People are dying from misinformation.”


#covid, #misinformation, #policy, #reddit, #vaccine

Facebook will reportedly launch its own advisory group for election policy decisions

Facebook is looking to create a standalone advisory committee for election-related policy decisions, according to a new report from The New York Times. The company has reportedly approached a number of policy experts and academics it is interested in recruiting for the group, which could give the company cover for some of its most consequential choices.

The group, which the Times characterizes as a commission, would potentially be empowered to weigh in on issues like election misinformation and political advertising — two of Facebook’s biggest policy headaches. Facebook reportedly plans for the commission to be in place for the 2022 U.S. midterm elections and could announce its formation as soon as this fall.

Facebook’s election commission could be modeled after the Oversight Board, the company’s first experiment in quasi-independent external decision making. The Oversight Board began reviewing cases in October of last year, but didn’t gear up in time to impact the flood of election misinformation that swept the platform during the U.S. presidential election. Initially, the board could only make policy rulings based on material that was already removed from Facebook.

The company touts the independence of the Oversight Board, and while it does operate independently, Facebook created the group and appointed its four original co-chairs. The Oversight Board is able to set policy precedents and make binding per-case moderation rulings, but ultimately its authority comes from Facebook itself, which at any point could decide to ignore the board’s decisions.

A similar external policy-setting body focused on elections would be very politically useful for Facebook. The company is a frequent target for both Republicans and Democrats, with the former claiming Facebook censors conservatives disproportionately and the latter calling attention to Facebook’s long history of incubating conspiracies and political misinformation.

Neither side was happy when Facebook decided to suspend political advertising after the election — a gesture that failed to address the exponential spread of organic misinformation. Facebook asked the Oversight Board to review its decision to suspend former President Trump, though the board ultimately kicked its most controversial case back to the company itself.

#content-moderation, #facebook, #facebook-oversight-board, #misinformation, #oversight-board, #political-advertising, #presidential-election, #social, #social-media, #tc, #united-states

YouTube has removed 1 million videos for dangerous COVID-19 misinformation

YouTube has removed 1 million videos for dangerous COVID-19 misinformation since February 2020, according to YouTube’s Chief Product Officer Neal Mohan.

Mohan shared the statistic in a blog post outlining how the company approaches misinformation on its platform. “Misinformation has moved from the marginal to the mainstream,” he wrote. “No longer contained to the sealed-off worlds of Holocaust deniers or 9-11 truthers, it now stretches into every facet of society, sometimes tearing through communities with blistering speed.”

At the same time, the YouTube executive argued that “bad content” accounts for only a small percentage of YouTube content overall. “Bad content represents only a tiny percentage of the billions of videos on YouTube (about .16-.18% of total views turn out to be content that violates our policies),” Mohan wrote. He added that YouTube removes almost 10 million videos each quarter, “the majority of which don’t even reach 10 views.”

Facebook recently made a similar argument about content on its platform. The social network published a report last week that claimed that the most popular posts are memes and other non-political content. And, faced with criticism over its handling of COVID-19 and vaccine misinformation, the company has argued that vaccine misinformation isn’t representative of the kind of content most users see.

Both Facebook and YouTube have come under particular scrutiny for their policies around health misinformation during the pandemic. Both platforms have well over a billion users, which means that even a small fraction of content can have a far-reaching impact. And both platforms have so far declined to disclose details about how vaccine and health misinformation spreads or how many users are encountering it. Mohan also said that removing misinformation is only one aspect of the company’s approach. YouTube is also working on “ratcheting up information from trusted sources and reducing the spread of videos with harmful misinformation.”

Editor’s note: This post originally appeared on Engadget.

#column, #covid-19, #misinformation, #tc, #tceng, #youtube

A mathematician walks into a bar (of disinformation)

Disinformation, misinformation, infotainment, algowars — if the debates over the future of media the past few decades have meant anything, they’ve at least left a pungent imprint on the English language. There’s been a lot of invective and fear over what social media is doing to us, from our individual psychologies and neurologies to wider concerns about the strength of democratic societies. As Joseph Bernstein put it recently, the shift from “wisdom of the crowds” to “disinformation” has indeed been an abrupt one.

What is disinformation? Does it exist, and if so, where is it and how do we know we are looking at it? Should we care about what the algorithms of our favorite platforms show us as they strive to squeeze the prune of our attention? It’s just those sorts of intricate mathematical and social science questions that got Noah Giansiracusa interested in the subject.

Giansiracusa, a professor at Bentley University in Boston, is trained in mathematics (focusing his research in areas like algebraic geometry), but he’s also had a penchant for looking at social topics through a mathematical lens, such as connecting computational geometry to the Supreme Court. Most recently, he’s published a book called How Algorithms Create and Prevent Fake News to explore some of the challenging questions around the media landscape today and how technology is exacerbating and ameliorating those trends.

I hosted Giansiracusa on a Twitter Space recently, and since Twitter hasn’t made it easy to listen to these talks afterwards (ephemerality!), I figured I’d pull out the most interesting bits of our conversation for you and posterity.

This interview has been edited and condensed for clarity.

Danny Crichton: How did you decide to research fake news and write this book?

Noah Giansiracusa: One thing I noticed is there’s a lot of really interesting sociological, political science discussion of fake news and these types of things. And then on the technical side, you’ll have things like Mark Zuckerberg saying AI is going to fix all these problems. It just seemed like, it’s a little bit difficult to bridge that gap.

Everyone’s probably heard this recent quote of Biden saying, “they’re killing people,” in regards to misinformation on social media. So we have politicians speaking about these things where it’s hard for them to really grasp the algorithmic side. Then we have computer science people that are really deep in the details. So I’m kind of sitting in between, I’m not a real hardcore computer science person. So I think it’s a little easier for me to just step back and get the bird’s eye view.

At the end of the day, I just felt I kind of wanted to explore some more interactions with society where things get messy, where the math is not so clean.

Crichton: Coming from a mathematical background, you’re entering this contentious area where a lot of people have written from a lot of different angles. What are people getting right in this area, and where have people perhaps missed some nuance?

Giansiracusa: There’s a lot of incredible journalism, I was blown away at how a lot of journalists really were able to deal with pretty technical stuff. But I would say one thing that maybe they didn’t get wrong, but kind of struck me was, there’s a lot of times when an academic paper comes out, or even an announcement from Google or Facebook or one of these tech companies, and they’ll kind of mention something, and the journalist will maybe extract a quote, and try to describe it, but they seem a little bit afraid to really try to look and understand it. And I don’t think it’s that they weren’t able to, it really seems like more of an intimidation and a fear.

One thing I’ve experienced a ton as a math teacher is people are so afraid of saying something wrong and making a mistake. And this goes for journalists who have to write about technical things, they don’t want to say something wrong. So it’s easier to just quote a press release from Facebook or quote an expert.

One thing that’s so fun and beautiful about pure math, is you don’t really worry about being wrong, you just try ideas and see where they lead and you see all these interactions. When you’re ready to write a paper or give a talk, you check the details. But most of math is this creative process where you’re exploring, and you’re just seeing how ideas interact. My training as a mathematician you think would make me apprehensive about making mistakes and to be very precise, but it kind of had the opposite effect.

Second, a lot of these algorithmic things, they’re not as complicated as they seem. I’m not sitting there implementing them, I’m sure to program them is hard. But just the big picture, all these algorithms nowadays, so much of these things are based on deep learning. So you have some neural net, doesn’t really matter to me as an outsider what architecture they’re using, all that really matters is, what are the predictors? Basically, what are the variables that you feed this machine learning algorithm? And what is it trying to output? Those are things that anyone can understand.
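To make that framing concrete, here is a minimal sketch of the "predictors in, prediction out" picture he describes; the feature names, training rows and policy-violation label are invented for illustration and are not any platform's actual model.

```python
# Minimal sketch of the "predictors in, prediction out" framing described
# above. Feature names and training rows are invented toy data; no real
# platform's model looks like this.
from sklearn.linear_model import LogisticRegression

# Each row: [shares_in_first_hour, account_age_days, has_external_link]
X_train = [
    [500, 10, 1],
    [3, 2000, 0],
    [250, 30, 1],
    [1, 3500, 0],
]
y_train = [1, 0, 1, 0]  # 1 = flagged as policy-violating in this toy data

model = LogisticRegression().fit(X_train, y_train)

new_post = [[400, 15, 1]]
print(model.predict_proba(new_post)[0][1])  # estimated probability of a violation
```

The architecture behind the model matters less, from the outside, than the two questions the sketch makes explicit: which variables go in, and what the system is asked to predict.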

Crichton: One of the big challenges I think of analyzing these algorithms is the lack of transparency. Unlike, say, the pure math world which is a community of scholars working to solve problems, many of these companies can actually be quite adversarial about supplying data and analysis to the wider community.

Giansiracusa: It does seem there’s a limit to what anyone can deduce just by kind of being from the outside.

So a good example is with YouTube, teams of academics wanted to explore whether the YouTube recommendation algorithm sends people down these conspiracy theory rabbit holes of extremism. The challenge is that because this is the recommendation algorithm, it’s using deep learning, it’s based on hundreds and hundreds of predictors based on your search history, your demographics, the other videos you’ve watched and for how long — all these things. It’s so customized to you and your experience, that all the studies I was able to find use incognito mode.

So they’re basically a user who has no search history, no information and they’ll go to a video and then click the first recommended video then the next one. And let’s see where the algorithm takes people. That’s such a different experience than an actual human user with a history. And this has been really difficult. I don’t think anyone has figured out a good way to algorithmically explore the YouTube algorithm from the outside.

Honestly, the only way I think you could do it is just kind of like an old school study where you recruit a whole bunch of volunteers and sort of put a tracker on their computer and say, “Hey, just live life the way you normally do with your histories and everything and tell us the videos that you’re watching.” So it’s been difficult to get past this fact that a lot of these algorithms, almost all of them, I would say, are so heavily based on your individual data. We don’t know how to study that in the aggregate.
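For illustration, the history-less audits he describes boil down to something like the sketch below; get_top_recommendation is a hypothetical placeholder, since those studies typically scrape the watch page from a fresh, logged-out browser session rather than calling any official endpoint.

```python
# Rough sketch (not the studies' actual code) of the "incognito mode" audit:
# start from a seed video with no watch history and keep following the top
# recommendation. get_top_recommendation() is a hypothetical placeholder;
# in practice it would scrape the watch page in a clean browser session.
def get_top_recommendation(video_id: str) -> str:
    raise NotImplementedError("fetch the first recommended video for this ID")


def follow_recommendation_chain(seed_video: str, depth: int = 10) -> list[str]:
    chain = [seed_video]
    for _ in range(depth):
        chain.append(get_top_recommendation(chain[-1]))
    return chain

# Whatever chain this produces reflects a viewer with no history at all,
# which is exactly why it can differ from any real person's experience.
```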

And it’s not just that me or anyone else on the outside who has trouble because we don’t have the data. It’s even people within these companies who built the algorithm and who know how the algorithm works on paper, but they don’t know how it’s going to actually behave. It’s like Frankenstein’s monster: they built this thing, but they don’t know how it’s going to operate. So the only way I think you can really study it is if people on the inside with that data go out of their way and spend time and resources to study it.

Crichton: There are a lot of metrics used around evaluating misinformation and determining engagement on a platform. Coming from your mathematical background, do you think those measures are robust?

Giansiracusa: People try to debunk misinformation. But in the process, they might comment on it, they might retweet it or share it, and that counts as engagement. So a lot of these measurements of engagement, are they really looking at positive or just all engagement? You know, it kind of all gets lumped together?

This happens in academic research, too. Citations are the universal metric of how successful research is. Well, really bogus things like Wakefield’s original autism and vaccines paper got tons of citations. A lot of them were people citing it because they thought it was right, but a lot of it was scientists who were debunking it; they cite it in their paper to say, we demonstrate that this theory is wrong. But somehow a citation is a citation. So it all counts towards the success metric.

So I think that’s a bit of what’s happening with engagement. If I post something on my comments saying, “Hey, that’s crazy,” how does the algorithm know if I’m supporting it or not? They could use some AI language processing to try but I’m not sure if they are, and it’s a lot of effort to do so.
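A toy example of the lumping he is describing; the interaction records below are invented, and separating the two tallies in practice would require the kind of stance detection he mentions.

```python
# Toy illustration (invented records) of how engagement metrics lump
# endorsement and debunking together: both count toward the same total.
interactions = [
    {"type": "share",   "stance": "endorse"},
    {"type": "comment", "stance": "debunk"},   # "Hey, that's crazy"
    {"type": "quote",   "stance": "debunk"},
    {"type": "like",    "stance": "endorse"},
]

raw_engagement = len(interactions)  # what an engagement dashboard typically reports
endorsing_only = sum(1 for i in interactions if i["stance"] == "endorse")

print(raw_engagement)   # 4: the debunking replies still inflate the total
print(endorsing_only)   # 2: the number you'd need stance detection to recover
```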

Crichton: Lastly, I want to talk a bit about GPT-3 and the concern around synthetic media and fake news. There’s a lot of fear that AI bots will overwhelm media with disinformation — how scared or not scared should we be?

Giansiracusa: Because my book really grew out of a class, I wanted to try to stay impartial, and just kind of inform people and let them reach their own decisions. I decided to try to cut through that debate and really let both sides speak. I think the newsfeed algorithms and recommendation algorithms do amplify a lot of harmful stuff, and that is devastating to society. But there’s also a lot of amazing progress of using algorithms productively and successfully to limit fake news.

There’s these techno-utopians, who say that AI is going to fix everything, we’ll have truth-telling, and fact-checking and algorithms that can detect misinformation and take it down. There’s some progress, but that stuff is not going to happen, and it never will be fully successful. It’ll always need to rely on humans. But the other thing we have is kind of irrational fear. There’s this kind of hyperbolic AI dystopia where algorithms are so powerful, kind of like singularity type of stuff that they’re going to destroy us.

When deep fakes were first hitting the news in 2018, and GPT-3 had been released a couple years ago, there was a lot of fear that, “Oh shit, this is gonna make all our problems with fake news and understanding what’s true in the world much, much harder.” And I think now that we have a couple of years of distance, we can see that they’ve made it a little harder, but not nearly as significantly as we expected. And the main issue is kind of more psychological and economic than anything.

So the original authors of GPT-3 have a research paper that introduces the algorithm, and one of the things they did was a test where they pasted some text in and expanded it to an article, and then they had some volunteers evaluate and guess which is the algorithmically-generated one and which article is the human-generated one. They reported that they got very, very close to 50% accuracy, which means barely above random guesses. So that sounds, you know, both amazing and scary.
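As a quick sanity check on that baseline: on a two-way human-versus-machine guess, random guessing lands at roughly 50% accuracy, so scores near that level mean evaluators effectively could not tell the difference. A tiny simulation (with an arbitrary trial count) makes the point:

```python
# Why "close to 50%" means "close to chance": a coin-flip guesser on a
# balanced human-vs-machine test is right about half the time. The trial
# count is arbitrary, purely to anchor the baseline the result is compared to.
import random

random.seed(0)
trials = 100_000
correct = sum(random.random() < 0.5 for _ in range(trials))  # random guesser
print(correct / trials)  # ~0.5; accuracy near this level means raters can't tell
```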

But if you look at the details, they were extending like a one-line headline to a paragraph of text. If you tried to do a full Atlantic-length or New Yorker-length article, you’re gonna start to see the discrepancies, the thought is going to meander. The authors of this paper didn’t mention this, they just kind of did their experiment and said, “Hey, look how successful it is.”

So it looks convincing, they can make these impressive articles. But here’s the main reason, at the end of the day, why GPT-3 hasn’t been so transformative as far as fake news and misinformation and all this stuff is concerned. It’s because fake news is mostly garbage. It’s poorly written, it’s low quality, it’s so cheap and fast to crank out, you could just pay your 16-year-old nephew to just crank out a bunch of fake news articles in minutes.

It’s not so much that math helped me see this. It’s just that somehow, the main thing we’re trying to do in mathematics is to be skeptical. So you have to question these things and be a little skeptical.

#algorithms, #artificial-intelligence, #deep-learning, #disinformation, #facebook, #government, #gpt-3, #machine-learning, #media, #misinformation, #policy, #social-media, #youtube

Facebook releases a glimpse of its most popular posts, but we don’t learn much

Facebook is out with a new report collecting the most popular posts on the platform, responding to critics who believe the company is deliberately opaque about its top-performing content.

Facebook’s new “widely viewed content reports” will come out quarterly, reflecting the most viewed News Feed posts in the U.S. every three months — not exactly the kind of real-time data monitoring that might prove useful for observing emerging trends.

With the new data set, Facebook hopes to push back against criticism that its algorithms operate within a black box. But like its often misleading blogged rebuttals and the other sets of cherry-picked data it shares, the company’s latest gesture at transparency is better than nothing, but not particularly useful.

So what do we learn? According to the new data set, 87% of posts that people viewed in the U.S. during Q2 of this year didn’t include an outside link. That’s notable but not very telling since Facebook still has an incredibly massive swath of people sharing and seeing links on a daily basis.

YouTube is predictably the top domain by Facebook’s chosen metric of “content viewers,” which it defines as any account that saw a piece of content on the News Feed, though we don’t get anything in the way of potentially helpful granular data there. Amazon, GoFundMe, TikTok and others are also in the top 10; no surprises there either.

Things get weirder when Facebook starts breaking down its most viewed links. The top five links include a website for alumni of the Green Bay Packers football team, a random online CBD marketplace and reppnforchrist.com, an apparently prominent portal for Christianity-themed graphic T-shirts. The subscription page for the Epoch Times, a site well known for spreading pro-Trump conspiracies and other disinformation, comes in at No. 10, though it was beaten by a Tumblr link to two cats walking with their tails intertwined.

Image Credits: Facebook

Yahoo and ABC News are the only prominent national media outlets that make the top 20 when the data is sliced and diced in this particular way. Facebook also breaks down which posts the most people viewed during the period with a list of mostly benign if odd memes, including one that reads “If your VAGINA [cat emoji] or PENIS [eggplant emoji] was named after the last TV show/Move u watched what would it be.”

If you’re wondering why Facebook chose to collect and present this set of data in this specific way, it’s because the company is desperately trying to prove a point: That its platform isn’t overrun by the political conspiracies and controversial right-wing personalities that make headlines.

The dataset is Facebook’s latest argument in its long feud with New York Times reporter Kevin Roose, who created a Twitter account that surfaces Facebook’s most engaging posts on a daily basis, as measured through the Facebook-owned social monitoring tool CrowdTangle.

By the metric of engagement, Facebook’s list of top-performing posts in the U.S. is regularly dominated by far-right personalities and sites like Newsmax, which pushes election conspiracies that Facebook would prefer to distance itself from.

The company argues that Facebook posts with the most interactions don’t accurately represent the top content on the platform. Facebook insists that reach data, which measures how many people see a given post, is a superior metric, but there’s no reason that engagement data isn’t just as relevant if not more so.

“The content that’s seen by the most people isn’t necessarily the content that also gets the most engagement,” Facebook wrote, in a dig clearly aimed at Roose.

The platform wants to de-emphasize political content across the board, which isn’t surprising given its track record of amplifying Russian disinformation, violent far-right militias and the Stop the Steal movement, which culminated in deadly violence at the U.S. Capitol in January.

As The New York Times previously reported, Facebook actually scrapped plans to make its reach data widely available through a public dashboard over fears that even that version of its top-performing posts wouldn’t reflect well on the company.

Instead, the company opted to offer a taste of that data in a quarterly report, and the result shows plenty of junk content, but less in the way of politics. Facebook’s cursory gesture of transparency notwithstanding, it’s worth remembering that nothing is stopping the company from letting people see a leaderboard of its most popular content at any given time — in real time, even! — beyond its own fear of bad press.

#amazon, #computing, #facebook, #misinformation, #new-york-times, #news-feed, #social, #social-media, #software, #tc, #the-new-york-times, #united-states

Twitter asks users to flag COVID-19 and election misinformation

Twitter introduced a new test feature Tuesday that allows users to report misinformation they run into on the platform, flagging it to the company as “misleading.” The test will roll out starting today to most users in the U.S., Australia and South Korea.

In the new test, Twitter users will be able to expand the three-dot contextual menu in the upper right corner of a tweet to select “report tweet,” where they’ll be met with the new option to flag a misleading tweet. The next menu offers users a choice to specify that a tweet is misleading about “politics,” “health” or “something else.” If they select politics, they can specify if the misleading political tweet pertains to elections, and if they choose health they can flag a misleading tweet about COVID-19 specifically.
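One way to picture that flow is as a small nested structure; the labels below are paraphrased from the description above, and the exact wording and nesting in Twitter's interface may differ.

```python
# Paraphrased model of the reporting flow described above; the structure and
# labels are assumptions for illustration, not Twitter's implementation.
REPORT_MENU = {
    "Report Tweet": {
        "It's misleading": {
            "Politics": ["Elections"],
            "Health": ["COVID-19"],
            "Something else": [],
        }
    }
}

# One path a user might take through the menus:
path = ("Report Tweet", "It's misleading", "Health", "COVID-19")
```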

Twitter has added a way for users to report election-related misinformation before, though previously those options were temporary features linked to global elections. Back in 2019, the platform rolled out the option to report misleading tweets about voting to help safeguard elections in Europe and India.

The intention is to give users a way to surface tweets that violate Twitter’s existing policies around election and pandemic-related misinformation, two topics it focuses policy and enforcement efforts around. The user reporting system will work in tandem with Twitter’s proactive systems for identifying potentially dangerous misinformation, which rely on a combination of human and automated moderation. For now, users won’t receive any updates from the company on what happens to misleading tweets they report, though those updates could be added in the future.

While the new reporting feature will be available very broadly, the company describes the test as an “experiment,” not a finished feature. Twitter will observe how people on the platform use the new misinformation reporting tool to see if user reporting can be an effective tool for identifying potentially harmful misleading tweets, though the company isn’t on a set timeline for when to fully implement or remove the test feature.

For now, Twitter doesn’t seem very worried about users abusing the feature, since the new user reporting option will plug directly into its established moderation system. Still, the idea of users pointing the company toward “misleading” tweets is sure to spark new cries of censorship from corners of the platform already prone to spreading misinformation.

While the option to flag tweets as misleading is new, the feature will feed reported tweets into Twitter’s existing enforcement flow, where its established rules around health and political misinformation are implemented through a blend of human and algorithmic moderation.

That process will also sort reported tweets for review based on priority. Tweets from users with large followings or tweets generating an unusually high level of engagement will go to the front of the review line, as will tweets that pertain to elections and COVID-19, Twitter’s two areas of emphasis when it comes to policing misinformation.
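A rough sketch of that kind of triage ordering, assuming a simple sort key built from topic, audience size and engagement; the field names and values are invented, and this is not Twitter's actual review system.

```python
# Illustrative triage of reported tweets (invented data, not Twitter's code):
# priority topics, high-follower accounts, and fast-moving tweets are
# reviewed first, per the description above.
reports = [
    {"id": 1, "topic": "covid-19", "followers": 2_000_000, "engagement": 50_000},
    {"id": 2, "topic": "other",    "followers": 300,       "engagement": 12},
    {"id": 3, "topic": "election", "followers": 8_000,     "engagement": 9_000},
]

PRIORITY_TOPICS = {"covid-19", "election"}

def review_priority(report: dict) -> tuple:
    return (report["topic"] in PRIORITY_TOPICS, report["followers"], report["engagement"])

review_queue = sorted(reports, key=review_priority, reverse=True)
print([r["id"] for r in review_queue])  # [1, 3, 2]: priority topics and big accounts first
```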

The new test is Twitter’s latest effort to lean more on its own community to identify misinformation. Twitter’s most ambitious experiment along those lines is Birdwatch, a crowdsourced way for users to append contextual notes and fact-checks to tweets that can be upvoted or downvoted, Reddit-style. For now, Birdwatch is just a pilot program, but it’s clear the company is interested in decentralizing moderation — an experiment far thornier than just adding a new way to report tweets.

#australia, #covid-19, #disinformation, #election-misinformation, #misinformation, #pandemic, #political-misinformation, #social, #social-media, #south-korea, #tc, #twitter, #united-states

Senators press Facebook for answers about why it cut off misinformation researchers

Facebook’s decision to close accounts connected to a misinformation research project last week prompted a broad outcry from the company’s critics — and now Congress is getting involved.

A handful of lawmakers criticized the decision at the time, slamming Facebook for being hostile toward efforts to make the platform’s opaque algorithms and ad targeting methods more transparent. Researchers believe that studying those hidden systems is crucial work for gaining insight on the flow of political misinformation.

The company specifically punished two researchers with NYU’s Cybersecurity for Democracy project who work on Ad Observer, an opt-in browser tool that allows researchers to study how Facebook targets ads to different people based on their interests and demographics.

In a new letter, embedded below, a trio of Democratic senators are pressing Facebook for more answers. Senators Amy Klobuchar (D-MN), Chris Coons (D-DE) and Mark Warner (D-VA) wrote to Facebook CEO Mark Zuckerberg asking for a full explanation on why the company terminated the researcher accounts and how they violated the platform’s terms of service and compromised user privacy. The lawmakers sent the letter on Friday.

“While we agree that Facebook must safeguard user privacy, it is similarly imperative that Facebook allow credible academic researchers and journalists like those involved in the Ad Observatory project to conduct independent research that will help illuminate how the company can better tackle misinformation, disinformation, and other harmful activity that is proliferating on its platforms,” the senators wrote.

Lawmakers have long urged the company to be more transparent about political advertising and misinformation, particularly after Facebook was found to have distributed election disinformation in 2016. Those concerns were only heightened by the platform’s substantial role in spreading election misinformation leading up to the insurrection at the U.S. Capitol, where Trump supporters attempted to overturn the vote.

In a blog post defending its decision, Facebook cited compliance with the FTC as one of the reasons the company severed the accounts. But the FTC called Facebook’s bluff last week in a letter to Zuckerberg, noting that nothing about the agency’s guidance for the company would preclude it from encouraging research in the public interest.

“Indeed, the FTC supports efforts to shed light on opaque business practices, especially around surveillance-based advertising,” Samuel Levine, the FTC’s acting director for the Bureau of Consumer Protection, wrote.

#amy-klobuchar, #computing, #congress, #facebook, #federal-trade-commission, #mark-zuckerberg, #misinformation, #nyu, #political-advertising, #privacy, #social, #social-media, #software, #tc, #technology, #trump

Deep dive into stupid: Meet the growing group that rejects germ theory

This thriving Facebook group says viruses don’t cause disease and the pandemic isn’t real. (credit: Facebook)

Listen up, sheeple: COVID-19 doesn’t exist. Viruses don’t cause disease, and they aren’t contagious. Those doctors and health experts who say otherwise don’t know what they’re talking about; the real experts are on Facebook. And they’re saying it loud and clear: The pandemic is caused by your own deplorable life choices, like eating meat or pasta. Any “COVID” symptoms you might experience are actually the result of toxic lifestyle exposures—and you have only yourself to blame.

As utterly idiotic and abhorrent as all of the above is, it’s not an exaggeration of the messages being spread by a growing group of Darwin-award finalists on the Internet—that is, germ theory denialists. Yes, you read that correctly: Germ theory denialists—also known as people who don’t believe that pathogenic viruses and bacteria can cause disease.

As an extension of their rejection of basic scientific and clinical data collected over centuries, they deny the existence of the devastating pandemic that has sickened upwards of 200 million people worldwide, killing more than 4 million.


#facebook, #germ-theory, #misinformation, #public-health, #science, #vaccines

Twitter partners with AP and Reuters to address misinformation on its platform

Twitter announced today it’s partnering with news organizations The Associated Press (AP) and Reuters to expand its efforts focused on highlighting reliable news and information on its platform. Through the new agreements, Twitter’s Curation team will be able to leverage the expertise of the partnered organizations to add more context to the news and trends that circulate across Twitter, as well as aid with the company’s use of public service announcements during high-visibility events, misinformation labels and more.

Currently, the Curation team works to add additional information to content that includes Top Trends and other news on Twitter’s Explore tab. The team is also involved with how certain search results are ranked, to ensure that content from high-quality sources appears at the top of search results when certain keywords or hashtags are searched for on Twitter.

The team may also be involved with the prompts that appear in the Explore tab and on the Home Timeline related to major events, like public health emergencies (such as the pandemic) or other events, like elections. And they may help with the misinformation labels that appear on tweets that are allowed to remain visible on Twitter, but are labeled with informative context from authoritative sources. These include tweets that violate Twitter’s rules around manipulated media, election integrity, or COVID-19.

However, the team operates separately from Twitter’s Trust and Safety team, which determines when tweets violate Twitter’s guidelines and punitive action, like removal or bans, must be taken. Twitter confirmed that neither the AP nor Reuters will be involved in those sorts of enforcement decisions.

Image Credits: Twitter

By working more directly with AP and Reuters, who also partner with Facebook on fact checks, Twitter says it will be able to increase the speed and scale at which it’s able to add this additional information to tweets and elsewhere on its platform. In particular, that means in times when news is breaking and facts are in dispute as a story emerges, Twitter’s own team will be able to quickly turn to these more trusted sources to improve how contextual information is added to the conversations taking place on Twitter.

This could also be useful in stopping misinformation from going viral, instead of waiting until after the fact to correct misleading tweets.

Twitter’s new crowdsourced fact-checking system Birdwatch will also leverage feedback from AP and Reuters to help determine the quality of information shared by Birdwatch participants.

The work will see the Curation team working with the news organizations not just to add context to stories and conversations, but also to help identify which stories need context added, Twitter told us. This added context could appear in many different places on Twitter, including on tweets, search, in Explore, and in curated selections, called Twitter Moments.

Twitter has often struggled with handling misinformation on its platform due to its real-time nature and use by high-profile figures who attempt to manipulate the truth for their own ends. To date, it has experimented with many features to slow or stop the spread of misinformation, from disabling one-click retweets to adding fact checks to banning accounts, and more. Birdwatch is the latest effort to add context to tweets, but the system is a decentralized attempt at handling misinformation — not one that relies on trusted partners.

“AP has a long history of working closely with Twitter, along with other platforms, to expand the reach of factual journalism,” noted Tom Januszewski, vice president of Global Business Development at AP, in a statement about the new agreement. “This work is core to our mission. We are particularly excited about leveraging AP’s scale and speed to add context to online conversations, which can benefit from easy access to the facts,” he said.

“Trust, accuracy and impartiality are at the heart of what Reuters does every day, providing billions of people with the information they need to make smart decisions,” added Hazel Baker, the head of UGC Newsgathering at Reuters. “Those values also drive our commitment to stopping the spread of misinformation. We’re excited to partner with Twitter to leverage our deep global and local expertise to serve the public conversation with reliable information,” Baker said.

Initially, the collaborations will focus on English-language content on Twitter, but the company says it expects the work to grow over time to support more languages and timezones. We’re told that, during this initial phase, Twitter will evaluate new opportunities to onboard collaborators that can support additional languages.

#ap-news, #misinformation, #reuters, #tc, #the-associated-press, #twitter

ActiveFence comes out of the shadows with $100M in funding and tech that detects online harm

Online abuse, disinformation, fraud and other malicious content is growing and getting more complex to track. Today, a startup called ActiveFence, which has quietly built a tech platform to suss out threats as they are being formed and planned, to make it easier for trust and safety teams to combat them on platforms, is coming out of the shadows to announce significant funding on the back of a surge of large organizations using its services.

The startup, co-headquartered in New York and Tel Aviv, has raised $100 million, funding that it will use to continue developing its tools and to continue expanding its customer base. To date, ActiveFence says that its customers include companies in social media, audio and video streaming, file sharing, gaming, marketplaces and other technologies — it has yet to disclose any specific names but says that its tools collectively cover “billions” of users. Governments and brands are two other categories that it is targeting as it continues to expand. It has been around since 2018 and is growing at around 100% annually.

The $100 million being announced today actually covers two rounds: its most recent Series B led by CRV and Highland Europe, as well as a Series A it never announced led by Grove Ventures and Norwest Venture Partners. Vintage Investment Partners, Resolute Ventures and other unnamed backers also participated. It’s not disclosing valuation but I understand it’s between $300 million and $400 million. (I’ll update this if we learn more.)

The increased presence of social media and online chatter on other platforms has put a strong spotlight on how those forums are used by bad actors to spread malicious content. ActiveFence’s particular approach is a set of algorithms that tap into innovations in AI (natural language processing) and map relationships between conversations. It crawls the obvious, less obvious, and harder-to-reach parts of the internet to pick up on the chatter where a lot of malicious content and campaigns are typically born — some 3 million sources in all — before they become higher-profile issues. It’s built both on the concept of big data analytics and on the understanding that the long tail of content online has value if it can be tapped effectively.

“We take a fundamentally different approach to trust, safety and content moderation,” Noam Schwartz, the co-founder and CEO, said in an interview. “We are proactively searching the darkest corners of the web and looking for bad actors in order to understand the sources of malicious content. Our customers then know what’s coming. They don’t need to wait for the damage, or for internal research teams to identify the next scam or disinformation campaign. We work with some of the most important companies in the world, but even tiny, super niche platforms have risks.”

The insights that ActiveFence gathers are then packaged up in an API that its customers can then feed into whatever other systems they use to track or mitigate traffic on their own platforms.

ActiveFence is not the only company building technology to help platform operators, governments and brands to have a better picture of what is going on in the wider online world. Factmata has built algorithms to better understand and track sentiments online; Primer (which also recently raised a big round) also uses NLP to help its customers track online information, with its customers including government organizations that used its technology to track misinformation during election campaigns; Bolster (formerly called RedMarlin) is another.

Some of the bigger platforms have also gotten more proactive in bringing tracking technology and talent in-house: Facebook acquired Bloomsbury AI several years ago for this purpose; Twitter has acquired Fabula (and is working on bigger efforts like Birdwatch to build better tools), and earlier this year Discord picked up Sentropy, another online abuse tracker. In some cases, companies that more regularly compete against each other for eyeballs and dollars are even teaming up to collaborate on efforts.

Indeed, it may well be that ultimately there will exist multiple efforts and multiple companies doing good work in this area, not unlike other corners of the world of security, which might need more than one hammer thrown at problems to crack them. In this particular case, the growth of the startup to date, and its effectiveness in identifying early warning signs, is one reason why investors have been interested in ActiveFence.

“We are pleased to support ActiveFence in this important mission,” commented Izhar Armony, the lead investor from CRV, in a statement. “We believe they are ready for the next phase of growth and that they can maintain leadership in the dynamic and fast growing trust and safety market.”

“ActiveFence has emerged as a clear leader in the developing online trust and safety category. This round will help the company to accelerate the growth momentum we witnessed in the past few years,” said Dror Nahumi, general partner at Norwest Venture Partners, in a statement.

#big-data, #enterprise, #europe, #funding, #government, #misinformation, #security, #tc

How much COVID misinformation is on Facebook? Its execs don’t want to know

(credit: KJ Parish)

For years, misinformation has flourished on Facebook. Falsehoods, misrepresentations, and outright lies posted on the site have shaped the discourse on everything from national politics to public health.

But despite their role in facilitating communications for billions of people, Facebook executives refused to commit resources to understand the extent to which COVID-19-related misinformation pervaded its platform, according to a report in The New York Times.

Early in the pandemic, a group of data scientists at Facebook met with executives to propose a project that would determine how many users saw misleading or false information about COVID. It wasn’t a small task—they estimated that the process could take up to a year or more to complete—but it would give the company a solid understanding of the extent to which misinformation spread on its platform.


#covid-19, #covid-19-vaccine, #facebook, #misinformation, #policy, #vaccine-misinformation

Tennessee has gone “anti-vaccine,” state vaccine chief says after being fired

US first lady Jill Biden (L) comforts Adriana Lyttle, 12, as she receives her vaccine at a COVID-19 vaccination site at Ole Smoky Distillery in Nashville, Tennessee. (credit: Getty | Tom Brenner)

The Tennessee state government on Monday fired its top vaccination official, Dr. Michelle Fiscus, who says that state leaders have “bought into the anti-vaccine misinformation campaign.”

In a fiery statement published late Monday by The Tennessean, Fiscus warns that as the delta variant continues to spread in the undervaccinated state, more Tennesseans “will continue to become sick and die from this vaccine-preventable disease because they choose to listen to the nonsense spread by ignorant people.”

Fiscus is just the latest public health official to quit or lose their position amid the devastating pandemic, many aspects of which have become tragically politicized. Fiscus wrote that, as the now-former medical director for vaccine-preventable diseases and immunization programs at the Tennessee Department of Health, she is the 25th immunization director to leave their position amid the pandemic. With only 64 territorial immunization directors in the country, her firing brings the nationwide turnover in immunization directors to nearly 40 percent during the health crisis.


#anti-vaccine, #covid-19, #infectious-disease, #misinformation, #public-health, #science, #tennessee, #vaccines

Twitter tests more attention-grabbing misinformation labels

Twitter is considering changes to the way it contextualizes misleading tweets that the company doesn’t believe are dangerous enough to be removed from the platform outright.

The company announced the test in a tweet Thursday with an image of the new misinformation labels. Within the limited test, those labels will appear with color-coded backgrounds now, making them much more visible in the feed while also giving users a way to quickly parse the information from visual cues. Some users will begin to see the change this week.

Tweets that Twitter deems “misleading” will get a red background with a short explanation and a notice that users can’t reply to, like or share the content. Yellow labels will appear on content that isn’t as actively misleading. In both cases, Twitter has made it more clear that you can click the labels to find verified information about the topic at hand (in this case, the pandemic).

“People who come across the new labels as a part of this limited test should expect a more communicative impact from the labels themselves both through copy, symbols and colors used to distill clear context about not only the label, but the information or content they are engaging with,” a Twitter spokesperson told TechCrunch.

Image Credits: Twitter

Twitter found that even tiny shifts in design could impact how people interacted with labeled tweets. In a test the company ran with a pink variation of the label, users clicked through to the authoritative information that Twitter provided more but they also quote-tweeted the content itself more, furthering its spread. Twitter says that it tested many variations on the written copy, colors and symbols that made their way into the new misinformation labels.

The changes come after a long public feedback period that convinced the company that misinformation labels needed to stand out better in a sea of tweets. Facebook’s own misinformation labels have also faced criticism for blending in too easily and failing to create much friction for potentially dangerous information on the platform.

Twitter first created content labels as a way to flag “manipulated media” — photos and videos altered to deliberately mislead people, like the doctored deepfake of Nancy Pelosi that went viral back in 2019. Last May, Twitter expanded its use of labels to address the wave of Covid-19 misinformation that swept over social media early in the pandemic.

A month ago, the company rolled out new labels specific to vaccine misinformation and introduced a strike-based system into its rules. The idea is for Twitter to build a toolkit it can use to respond in a proportional way to misinformation depending on the potential for real-world harm.

“… We know that even within the space of our policies, not all misleading claims are equally harmful,” a Twitter spokesperson said. “For example, telling someone to drink bleach in order to cure COVID is a more immediate and severe harm than sharing a viral image of a shark swimming on a flooded highway and claiming that’s footage from a hurricane. (That’s a real thing that happens every hurricane season.)”

Labels are just one of the content moderation options that Twitter developed over the course of the last couple of years, along with warnings that require a click-through and pop-up messages designed to subtly steer people away from impulsively sharing inflammatory tweets.

When Twitter decides not to remove content outright, it turns to an a la carte menu of potential content enforcement options:

  • Apply a label and/or warning message to the Tweet
  • Show a warning to people before they share or like the Tweet;
  • Reduce the visibility of the Tweet on Twitter and/or prevent it from being recommended;
  • Turn off likes, replies, and Retweets; and/or
  • Provide a link to additional explanations or clarifications, such as in a curated landing page or relevant Twitter policies.

In most scenarios, the company will opt for all of the above.

“While there is no single answer to addressing the unique challenges presented by the range of types of misinformation, we believe investing in a multi-prong approach will allow us to be nimble and shift with the constantly changing dynamic of the public conversation,” the spokesperson said.

#disinformation, #misinformation, #social, #social-media, #tc, #twitter

Twitter starts rolling out Birdwatch fact checks inside tweets

Twitter is looking to crowdsource its way out of misinformation woes with its new product Birdwatch, which taps a network of engaged tweeters to add notes to misleading tweets. Today, Twitter announced that it is starting to roll out Birdwatch notes to pilot participants across iOS, Android and desktop.

The company launched a pilot version of the program back in January, describing the effort as a way to add context to misinformation in real time.

“We believe this approach has the potential to respond quickly when misleading information spreads, adding context that people trust and find valuable,” Product VP Keith Coleman wrote in a blog post at the time. “Eventually we aim to make notes visible directly on Tweets for the global Twitter audience, when there is consensus from a broad and diverse set of contributors.”

That time is apparently now for an early set of Birdwatch pilot participants.

Twitter says that once Birdwatch notes are added to a tweet, users will have the opportunity to rate whether the feedback is helpful or not. If none of the notes are deemed helpful, the Birdwatch card itself will disappear, but if any notes are deemed helpful, they’ll pop up directly inside the tweet.
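In pseudocode terms, that display rule sounds roughly like the sketch below; the data shapes and the ratings threshold are assumptions for illustration, not Twitter's implementation.

```python
# Sketch of the display rule described above (not Twitter's code). Assumes a
# note counts as helpful while its helpful ratings outnumber its unhelpful
# ones; if no note clears that bar, the Birdwatch card disappears.
def visible_notes(notes: list[dict]) -> list[dict]:
    return [n for n in notes if n["helpful"] > n["not_helpful"]]

notes = [
    {"text": "Claim contradicts current CDC guidance", "helpful": 12, "not_helpful": 3},
    {"text": "Source appears to be satire", "helpful": 1, "not_helpful": 9},
]

surfaced = visible_notes(notes)
if surfaced:
    print([n["text"] for n in surfaced])  # these notes appear inside the tweet
else:
    print("no helpful notes -> the Birdwatch card is hidden")
```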

There have been an awful lot of questions about how and whether Birdwatch will work inside the current social media framework. Using community feedback differs from more centralized efforts used by platforms like Facebook that have tapped independent fact-checking organizations. Twitter is clearly aiming to decentralize this effort as much as it can and put power in the hands of Birdwatch contributors, but with audiences of individual tweeters currently responsible for deeming the helpfulness and visibility of fact checks, it’s clear this is going to be a pretty messy solution at times.

#android, #computing, #crowdsource, #crowdsourcing, #deception, #keith-coleman, #misinformation, #operating-systems, #social, #social-media, #software, #twitter

Dunning-Kruger meets fake news

A silhouetted figure goes fishing in a complex collage. (credit: Aurich Lawson | Getty Images)

The Dunning-Kruger effect is perhaps both one of the most famous biases in human behavior—and the most predictable. It posits that people who don’t understand a topic also lack sufficient knowledge to recognize that they don’t understand it. Instead, they know just enough to convince themselves they’re completely on top of the topic, with results ranging from hilarious to painful.

Inspired by the widespread sharing of news articles that are blatantly false, a team of US-based researchers looked into whether Dunning-Kruger might be operating in the field of media literacy. Not surprisingly, people do, in fact, overestimate their ability to identify misleading news. But the details are complicated, and there’s no obvious route to overcoming this bias in any case.

Evaluating the news

Media literacy has the potential to limit the rapid spread of misinformation. Assuming people care about the accuracy of the things they like or share—something that’s far from guaranteed—a stronger media literacy would help people evaluate if something was likely to be accurate before pressing that share button. Evaluating the credibility of sources is an essential part of that process.


#behavioral-science, #biology, #dunning-kruger, #misinformation, #science

Indivisible is training an army of volunteers to neutralize political misinformation

The grassroots Democratic organization Indivisible is launching its own team of stealth fact-checkers to push back against misinformation — an experiment in what it might look like to train up a political messaging infantry and send them out into the information trenches.

Called the “Truth Brigade,” the corps of volunteers will learn best practices for countering popular misleading narratives on the right. They’ll coordinate with the organization on a biweekly basis to unleash a wave of progressive messaging that aims to drown out political misinformation and boost Biden’s legislative agenda in the process.

Considering the scope of the misinformation that remains even after social media’s big January 6 cleanup, the project will certainly have its work cut out for it.

“This is an effort to empower volunteers to step into a gap that is being created by very irresponsible behavior by the social media platforms,” Indivisible co-founder and co-executive director Leah Greenberg told TechCrunch. “It is absolutely frustrating that we’re in this position of trying to combat something that they ultimately have a responsibility to address.”

Greenberg co-founded Indivisible with her husband following the 2016 election. The organization grew out of the viral success the pair had when they and two other former House staffers published a handbook to Congressional activism. The guide took off in the flurry of “resist”-era activism on the left calling on Americans to push back on Trump and his agenda.

Indivisible’s Truth Brigade project blossomed out of a pilot program in Colorado spearheaded by Jody Rein, a senior organizer concerned about what she was seeing in her state. Since that pilot began last fall, the program has grown into 2,500 volunteers across 45 states.

The messaging will largely center around Biden’s ambitious legislative packages: the American Rescue Plan, the voting rights bill HR1 and the forthcoming infrastructure package. Rather than debunking political misinformation about those bills directly, the volunteer team will push back with personalized messages promoting the legislation and dispelling false claims within their existing social spheres on Facebook and Twitter.

The coordinated networks at Indivisible will cross-promote those pieces of semi-organic content using tactics parallel to what a lot of disinformation campaigns do to send their own content soaring. (In the case of groups that make overt efforts to conceal their origins, Facebook calls this “coordinated inauthentic behavior.”) Since the posts are part of a volunteer push and not targeted advertising, they won’t be labeled, though some might contain hashtags that connect them back to the Truth Brigade campaign.

Volunteers are trained to serve up progressive narratives in a “truth sandwich” that’s careful to not amplify the misinformation it’s meant to push back against. For Indivisible, training volunteers to avoid giving political misinformation even more oxygen is a big part of the effort.

“What we know is that actually spreads disinformation and does the work of some of these bad actors for them,” Greenberg said. “We are trying to get folks to respond not by engaging in that fight — that’s really doing their work for them — but by trying to advance the kind of narrative that we actually want people to buy into.”

She cites the social media outrage cycle perpetuated by Georgia Rep. Marjorie Taylor Greene as a harbinger of what Democrats will again be up against in 2022. Taylor Greene is best known for endorsing QAnon, getting yanked off of her Congressional committee assignments and comparing mask requirements to the Holocaust — comments that inspired some Republicans to call for her ouster from the party.

Political figures like Greene regularly rile up the online left with outlandish claims and easily debunked conspiracies. Greenberg believes they suck up a lot of energy that could be better spent resisting the urge to rage-retweet and spreading progressive political messages instead.

“It’s not enough to just fact check [and] it’s not enough to just respond, because then fundamentally we’re operating from a defensive place,” Greenberg said.

“We want to be proactively spreading positive messages that people can really believe in and grab onto and that will inoculate them from some of this.”

For Indivisible, the project is a long-term experiment that could pave the way for a new kind of online grassroots political campaign beyond targeted advertising — one that hopes to boost the signal in a sea of noise.

#articles, #biden, #disinformation, #energy, #government, #misinformation, #operating-systems, #policy, #president, #social-media, #social-media-platforms, #tc, #trump, #twitter

Facebook changes misinfo rules to allow posts claiming Covid-19 is man-made

Facebook made a few noteworthy changes to its misinformation policies this week, including the news that the company will now allow claims that Covid was created by humans — a theory that contradicts the previously prevailing assumption that humans picked up the virus naturally from animals.

“In light of ongoing investigations into the origin of COVID-19 and in consultation with public health experts, we will no longer remove the claim that COVID-19 is man-made from our apps,” a Facebook spokesperson told TechCrunch. “We’re continuing to work with health experts to keep pace with the evolving nature of the pandemic and regularly update our policies as new facts and trends emerge.”

The company is adjusting its rules about pandemic misinformation in light of international investigations lending legitimacy to the theory that the virus could have escaped from a lab. While that theory clearly has enough credibility to be investigated at this point, it is often interwoven with demonstrably false misinformation about fake cures, 5G towers causing Covid and, most recently, the false claim that the AstraZeneca vaccine implants recipients with a Bluetooth chip.

Earlier this week, President Biden ordered a multi-agency intelligence report evaluating if the virus could have accidentally leaked out of a lab in Wuhan, China. Biden called this possibility one of two “likely scenarios.”

“… Shortly after I became President, in March, I had my National Security Advisor task the Intelligence Community to prepare a report on their most up-to-date analysis of the origins of COVID-19, including whether it emerged from human contact with an infected animal or from a laboratory accident,” Biden said in an official White House statement, adding that there isn’t sufficient evidence to make a final determination.

Claims that the virus was man-made or lab-made have circulated widely since the pandemic’s earliest days, even as the scientific community largely maintained that the virus probably made the jump from an infected animal to a human via natural means. But many questions remain about the origins of the virus and the U.S. has yet to rule out the possibility that the virus emerged from a Chinese lab — a scenario that would be a bombshell for international relations.

Prior to the Covid policy change, Facebook announced that it would finally implement harsher punishments against individuals who repeatedly peddle misinformation. The company will now throttle the News Feed reach of all posts from accounts that are found to habitually share known misinformation, restrictions it previously put in place for Pages, Groups, Instagram accounts and websites that repeatedly break the same rules.

#astrazeneca, #biden, #china, #covid-19, #facebook, #government, #misinformation, #president, #social, #tc, #united-states, #white-house

Facebook is testing pop-up messages telling people to read a link before they share it

Years after popping open a Pandora’s box of bad behavior, social media companies are trying to figure out subtle ways to reshape how people use their platforms.

Following Twitter’s lead, Facebook is trying out a new feature designed to encourage users to read a link before sharing it. The test will reach 6 percent of Facebook’s Android users globally in a gradual rollout that aims to encourage “informed sharing” of news stories on the platform.

Users can still easily click through to share a given story, but the idea is that by adding friction to the experience, people might rethink their original impulses to share the kind of inflammatory content that currently dominates on the platform.

Twitter introduced prompts urging users to read a link before retweeting it last June; the company quickly found the test feature to be successful and expanded it to more users.

Facebook began trying out more prompts like this last year. Last June, the company rolled out pop-up messages to warn users before they share any content that’s more than 90 days old in an effort to cut down on misleading stories taken out of their original context.

At the time, Facebook said it was looking at other pop-up prompts to cut down on some kinds of misinformation. A few months later, Facebook rolled out similar pop-up messages that noted the date and the source of any links they share related to COVID-19.
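
Mechanically, this kind of friction boils down to a pre-share check that decides which prompt, if any, to display. Below is a minimal sketch in Python; the function name, prompt identifiers and the 90-day threshold are assumptions drawn from the features described above, not Facebook’s actual code.

```python
from datetime import datetime, timedelta
from typing import Optional

STALE_CONTENT_AGE = timedelta(days=90)  # the "old news" threshold described above

def choose_share_prompt(link_opened: bool,
                        published_at: datetime,
                        is_covid_related: bool,
                        now: Optional[datetime] = None) -> Optional[str]:
    """Decide which friction prompt, if any, to show before a share goes through."""
    now = now or datetime.utcnow()
    if not link_opened:
        # The user is sharing an article they never clicked through to read.
        return "unread_link_prompt"
    if now - published_at > STALE_CONTENT_AGE:
        # Old stories often resurface stripped of their original context.
        return "stale_content_prompt"
    if is_covid_related:
        # Surface the date and source alongside official health resources.
        return "covid_context_prompt"
    return None  # no extra friction; the share proceeds normally
```

In this sketch, calling choose_share_prompt with link_opened=False returns "unread_link_prompt", mirroring the read-before-you-share nudge both Twitter and Facebook describe.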

The approach demonstrates Facebook’s preference for passively nudging people away from misinformation and toward its own verified resources on hot-button issues like COVID-19 and the 2020 election.

While the jury is still out on how much of an impact this kind of gentle behavioral shaping can make on the misinformation epidemic, both Twitter and Facebook have also explored prompts that discourage users from posting abusive comments.

Pop-up messages that give users a sense that their bad behavior is being observed might be where more automated moderation is headed on social platforms. While users would probably be far better served by social media companies scrapping their misinformation and abuse-ridden existing platforms and rebuilding them more thoughtfully from the ground up, small behavioral nudges will have to do.

#android, #facebook, #misinformation, #social, #social-media, #tc, #twitter

At social media hearing, lawmakers circle algorithm-focused Section 230 reform

Rather than a CEO-slamming sound bite free-for-all, Tuesday’s big tech hearing on algorithms aimed for more of a listening session vibe — and in that sense it mostly succeeded.

The hearing centered on testimony from the policy leads at Facebook, YouTube and Twitter rather than the chief executives of those companies for a change. The resulting few hours didn’t offer any massive revelations but was still probably more productive than squeezing some of the world’s most powerful men for their commitments to “get back to you on that.”

In the hearing, lawmakers bemoaned social media echo chambers and the ways that the algorithms pumping content through platforms are capable of completely reshaping human behavior.

“… This advanced technology is harnessed into algorithms designed to attract our time and attention on social media, and the results can be harmful to our kids’ attention spans, to the quality of our public discourse, to our public health, and even to our democracy itself,” said Chris Coons (D-DE), chair of the Senate Judiciary’s subcommittee on privacy and tech, which held the hearing.

Coons struck a cooperative note, observing that algorithms drive innovation but that their dark side comes with considerable costs.

None of this is new, of course. But Congress is crawling closer to solutions, one repetitive tech hearing at a time. The Tuesday hearing highlighted some zones of bipartisan agreement that could determine the chances of a tech reform bill passing the Senate, which is narrowly controlled by Democrats. Coons expressed optimism that a “broadly bipartisan solution” could be reached.

What would that look like? Probably changes to Section 230 of the Communications Decency Act, which we’ve written about extensively over the years. That law protects social media companies from liability for user-created content and it’s been a major nexus of tech regulation talk, both in the newly Democratic Senate under Biden and the previous Republican-led Senate that took its cues from Trump.

Lauren Culbertson, head of U.S. public policy at Twitter Inc., speaks remotely during a Senate Judiciary Subcommittee hearing in Washington, D.C., on Tuesday, April 27, 2021. (Photo: Al Drago/Bloomberg via Getty Images)

A broken business model

In the hearing, lawmakers pointed to flaws inherent to how major social media companies make money as the heart of the problem. Rather than criticizing companies for specific failings, they mostly focused on the core business model from which social media’s many ills spring forth.

“I think it’s very important for us to push back on the idea that really complicated, qualitative problems have easy quantitative solutions,” Sen. Ben Sasse (R-NE) said. He argued that because social media companies make money by keeping users hooked to their products, any real solution would have to upend that business model altogether.

“The business model of these companies is addiction,” Josh Hawley (R-MO) echoed, calling social media an “attention treadmill” by design.

Ex-Googler and frequent tech critic Tristan Harris didn’t mince words about how tech companies talk around that central design tenet in his own testimony. “It’s almost like listening to a hostage in a hostage video,” Harris said, likening the engagement-seeking business model to a gun just offstage.

Spotlight on Section 230

One big way lawmakers propose to disrupt those deeply entrenched incentives? Adding algorithm-focused exceptions to the Section 230 protections that social media companies enjoy. A few bills floating around take that approach.

One bill from Sen. John Kennedy (R-LA) and Reps. Paul Gosar (R-AZ) and Tulsi Gabbard (D-HI) would require platforms with 10 million or more users to obtain consent before serving users content based on their behavior or demographic data if they want to keep Section 230 protections. The idea is to revoke 230 immunity from platforms that boost engagement by “funneling information to users that polarizes their views” unless a user specifically opts in.

In another bill, the Protecting Americans from Dangerous Algorithms Act, Reps. Anna Eshoo (D-CA) and Tom Malinowski (D-NJ) propose suspending Section 230 protections and making companies liable “if their algorithms amplify misinformation that leads to offline violence.” That bill would amend Section 230 to reference existing civil rights laws.

Section 230’s defenders argue that any insufficiently targeted changes to the law could disrupt the modern internet as we know it, resulting in cascading negative impacts well beyond the intended scope of reform efforts. An outright repeal of the law is almost certainly off the table, but even small tweaks could completely realign internet businesses, for better or worse.

During the hearing, Hawley made a broader suggestion for companies that use algorithms to chase profits. “Why shouldn’t we just remove section 230 protection from any platform that engages in behavioral advertising or algorithmic amplification?” he asked, adding that he wasn’t opposed to an outright repeal of the law.

Sen. Amy Klobuchar (D-MN), who leads the Senate’s antitrust subcommittee, connected the algorithmic concerns to anti-competitive behavior in the tech industry. “If you have a company that buys out everyone from under them… we’re never going to know if they could have developed the bells and whistles to help us with misinformation because there is no competition,” Klobuchar said.

Subcommittee members Klobuchar and Sen. Mazie Hirono (D-HI) have their own major Section 230 reform bill, the Safe Tech Act, but that legislation is less concerned with algorithms than ads and paid content.

At least one more major bill looking at Section 230 through the lens of algorithms is still on the way. Prominent big tech critic House Rep. David Cicilline (D-RI) is due out soon with a Section 230 bill that could suspend liability protections for companies that rely on algorithms to boost engagement and line their pockets.

“That’s a very complicated algorithm that is designed to maximize engagement to drive up advertising prices to produce greater profits for the company,” Cicilline told Axios last month. “…That’s a set of business decisions for which, it might be quite easy to argue, that a company should be liable for.”

#anna-eshoo, #behavioral-advertising, #biden, #communications-decency-act, #congress, #josh-hawley, #misinformation, #operating-systems, #section-230, #section-230-of-the-communications-decency-act, #senate, #senator, #social-media, #tc, #tristan-harris, #tulsi-gabbard, #twitter

The next era of moderation will be verified

Since the dawn of the internet, knowing (or, perhaps more accurately, not knowing) who is on the other side of the screen has been one of the biggest mysteries and thrills. In the early days of social media and online forums, anonymous usernames were the norm and meant you could pretend to be whoever you wanted to be.

As exciting and liberating as this freedom was, the problems quickly became apparent — predators of all kinds have used this cloak of anonymity to prey upon unsuspecting victims, harass anyone they dislike or disagree with, and spread misinformation without consequence.

For years, the conversation around moderation has been focused on two key pillars. First, what rules to write: What content is deemed acceptable or forbidden, how do we define these terms, and who makes the final call on the gray areas? And second, how to enforce them: How can we leverage both humans and AI to find and flag inappropriate or even illegal content?

While these continue to be important elements to any moderation strategy, this approach only flags bad actors after an offense. There is another equally critical tool in our arsenal that isn’t getting the attention it deserves: verification.

Most people think of verification as the “blue checkmark” — a badge of honor bestowed upon the elite and celebrities among us. However, verification is becoming an increasingly important tool in moderation efforts to combat nefarious issues like harassment and hate speech.

That blue checkmark is more than just a signal showing who’s important — it also confirms that a person is who they say they are, which is an incredibly powerful means to hold people accountable for their actions.

One of the biggest challenges that social media platforms face today is the explosion of fake accounts, with the Brad Pitt impersonator on Clubhouse being one of the more recent examples. Bots and sock puppets spread lies and misinformation like wildfire, and they propagate more quickly than moderators can ban them.

This is why Instagram began implementing new verification measures last year to combat this exact issue. By verifying users’ real identities, Instagram said it “will be able to better understand when accounts are attempting to mislead their followers, hold them accountable, and keep our community safe.”

The urgency to implement verification is also bigger than just stopping the spread of questionable content. It can also help companies ensure they’re staying on the right side of the law.

Following an exposé revealing illegal content was being uploaded to Pornhub’s site, the company banned posts from nonverified users and deleted all content uploaded from unverified sources (more than 80% of the videos hosted on its platform). It has since implemented new measures to verify its users to prevent this kind of issue from infiltrating its systems again in the future.

Companies of all kinds should be looking at this case as a cautionary tale — if there had been verification from the beginning, the systems would have been in a much better place to identify bad actors and keep them out.

However, it’s important to remember that verification is not a single tactic, but rather a collection of solutions that must be used dynamically in concert to be effective. Bad actors are savvy and continually updating their methods to circumvent systems. Using a single-point solution to verify users — such as through a photo ID — might sound sufficient on its face, but it’s relatively easy for a motivated fraudster to overcome.

At Persona, we’ve detected increasingly sophisticated fraud attempts ranging from using celebrity photos and data to create accounts to intricate photoshopping of IDs and even using deepfakes to mimic a live selfie.

That’s why it’s critical for verification systems to take multiple signals into account when verifying users, including actively collected customer information (like a photo ID), passive signals (their IP address or browser fingerprint), and third-party data sources (like phone and email risk lists). By combining multiple data points, a valid but stolen ID won’t pass through the gates because signals like location or behavioral patterns will raise a red flag that this user’s identity is likely fraudulent or at the very least warrants further investigation.
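
To make that concrete, here is a rough sketch of how several such signals might be folded into one decision. The signal names, weights and thresholds are hypothetical and for illustration only; they are not Persona’s actual scoring model.

```python
from dataclasses import dataclass

@dataclass
class VerificationSignals:
    id_document_valid: bool       # actively collected: did the photo ID pass document checks?
    ip_matches_id_region: bool    # passive: does IP geolocation roughly match the ID's region?
    fingerprint_flagged: bool     # passive: has this browser fingerprint been tied to prior abuse?
    on_risk_list: bool            # third-party: do the phone/email appear on known risk lists?

def assess_identity(s: VerificationSignals) -> str:
    """Combine independent signals into a coarse decision.

    Weights and thresholds are invented for illustration; a production
    system would be probabilistic and continuously re-tuned.
    """
    risk = 0
    if not s.id_document_valid:
        risk += 3
    if not s.ip_matches_id_region:
        risk += 1   # a valid but stolen ID will often trip this kind of check
    if s.fingerprint_flagged:
        risk += 2
    if s.on_risk_list:
        risk += 2

    if risk == 0:
        return "verified"
    if risk <= 2:
        return "manual_review"   # warrants further investigation
    return "rejected"
```

The point of the design is that no single signal is decisive: a forged or stolen document only passes if every other signal also looks clean.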

This kind of holistic verification system will enable social and user-generated-content platforms to not only deter and flag bad actors but also prevent them from repeatedly entering your platform under new usernames and emails, a common tactic of trolls and account abusers who have previously been banned.

Beyond individual account abusers, a multisignal approach can help manage an arguably bigger problem for social media platforms: coordinated disinformation campaigns. Any issue involving groups of bad actors is like battling the multiheaded Hydra — you cut off one head only to have two more grow back in its place.

Yet killing the beast is possible when you have a comprehensive verification system that can help surface groups of bad actors based on shared properties (e.g., location). While these groups will continue to look for new ways in, multifaceted verification that is tailored for the end user can help keep them from running rampant.

Historically, identity verification systems like Jumio or Trulioo were designed for specific industries, like financial services. But we’re starting to see the rise in demand for industry-agnostic solutions like Persona to keep up with these new and emerging use cases for verification. Nearly every industry that operates online can benefit from verification, even ones like social media, where there isn’t necessarily a financial transaction to protect.

It’s not a question of if verification will become a part of the solution for challenges like moderation, but rather a question of when. The technology and tools exist today, and it’s up to social media platforms to decide that it’s time to make this a priority.

#column, #misinformation, #opinion, #privacy, #security, #social, #social-media, #social-media-platforms, #verification

US privacy, consumer, competition and civil rights groups urge ban on ‘surveillance advertising’

Ahead of another big tech vs Congress ‘grab your popcorn’ grilling session, scheduled for March 25 — when US lawmakers will once again question the CEOs of Facebook, Google and Twitter on the unlovely topic of misinformation — a coalition of organizations across the privacy, antitrust, consumer protection and civil rights spaces has called for a ban on “surveillance advertising”, further amplifying the argument that “big tech’s toxic business model is undermining democracy”.

The close to 40-strong coalition behind this latest call to ban ‘creepy ads’ which rely on the mass tracking and profiling of web users in order to target them with behavioral ads includes the American Economic Liberties Project, the Campaign for a Commercial Free Childhood, the Center for Digital Democracy, the Center for Humane Technology, Epic.org, Fair Vote, Media Matters for America, the Tech Transparency Project and The Real Facebook Oversight Board, to name a few.

“As leaders across a broad range of issues and industries, we are united in our concern for the safety of our communities and the health of democracy,” they write in the open letter. “Social media giants are eroding our consensus reality and threatening public safety in service of a toxic, extractive business model. That’s why we’re joining forces in an effort to ban surveillance advertising.”

The coalition is keen to point out that less toxic non-tracking alternatives (like contextual ads) exist, while arguing that greater transparency and oversight of adtech infrastructure could help clean up a range of linked problems, from junk content and rising conspiracism to ad fraud and denuded digital innovation.

“There is no silver bullet to remedy this crisis – and the members of this coalition will continue to pursue a range of different policy approaches, from comprehensive privacy legislation to reforming our antitrust laws and liability standards,” they write. “But here’s one thing we all agree on: It’s time to ban surveillance advertising.”

“Big Tech platforms amplify hate, illegal activities, and conspiracism — and feed users increasingly extreme content — because that’s what generates the most engagement and profit,” they warn.

“Their own algorithmic tools have boosted everything from white supremacist groups and Holocaust denialism to COVID-19 hoaxes, counterfeit opioids and fake cancer cures. Echo chambers, radicalization, and viral lies are features of these platforms, not bugs — central to the business model.”

The coalition also warns about surveillance advertising’s impact on the traditional news business, noting that shrinking revenues for professional journalism do further harm to the (genuine) information ecosystem democracies need to thrive.

The potshots are well rehearsed at this point although it’s an oversimplification to blame the demise of traditional news on tech giants so much as ‘giant tech’: aka the industrial disruption wrought by the Internet making so much information freely available. But dominance of the programmatic adtech pipeline by a couple of platform giants clearly doesn’t help. (Australia’s recent legislative answer to this problem is still too new to assess for impacts but there’s a risk its news media bargaining code will merely benefit big media and big tech while doing nothing about the harms of either industry profiting off of outrage.)

“Facebook and Google’s monopoly power and data harvesting practices have given them an unfair advantage, allowing them to dominate the digital advertising market, siphoning up revenue that once kept local newspapers afloat. So while Big Tech CEOs get richer, journalists get laid off,” the coalition warns, adding: “Big Tech will continue to stoke discrimination, division, and delusion — even if it fuels targeted violence or lays the groundwork for an insurrection — so long as it’s in their financial interest.”

Among a laundry list of harms the coalition is linking to the dominant ad-based online business models of tech giants Facebook and Google is the funding of what they describe as “insidious misinformation sites that promote medical hoaxes, conspiracy theories, extremist content, and foreign propaganda”.

“Banning surveillance advertising would restore transparency and accountability to digital ad placements, and substantially defund junk sites that serve as critical infrastructure in the disinformation pipeline,” they argue, adding: “These sites produce an endless drumbeat of made-to-go-viral conspiracy theories that are then boosted by bad-faith social media influencers and the platforms’ engagement-hungry algorithms — a toxic feedback loop fueled and financed by surveillance advertising.”

Other harms they point to are the risks posed to public health by platforms’ amplification of junk/bogus content such as COVID-19 conspiracy theories and vaccine misinformation; the risk of discrimination through unfairly selective and/or biased ad targeting, such as job ads that illegally exclude women or ethnic minorities; and the perverse economic incentives for ad platforms to amplify extremist/outrageous content in order to boost user engagement with content and ads, thereby fuelling societal division and driving partisanship as a byproduct of the fact platforms benefit financially from more content being spread.

The coalition also argues that the surveillance advertising system is “rigging the game against small businesses” because it embeds platform monopolies — which is a neat counterpoint to tech giants’ defensive claim that creepy ads somehow level the playing field for SMEs vs larger brands.

“While Facebook and Google portray themselves as lifelines for small businesses, the truth is they’re simply charging monopoly rents for access to the digital economy,” they write, arguing that the duopoly’s “surveillance-driven stranglehold over the ad market leaves the little guys with no leverage or choice” — opening them up to exploitation by big tech.

The current market structure — with Facebook and Google controlling close to 60% of the US ad market — is thus stifling innovation and competition, they further assert.

“Instead of being a boon for online publishers, surveillance advertising disproportionately benefits Big Tech platforms,” they go on, noting that Facebook made $84.2BN in 2020 ad revenue and Google made $134.8BN off advertising “while the surveillance ad industry ran rife with allegations of fraud”.

The campaign being kicked off is by no means the first call for a ban on behavioral advertising, but the number of signatories backing this one is a sign of the scale of the momentum building against a data-harvesting business model that has shaped the modern era and allowed a couple of startups to metamorphose into society- and democracy-denting giants.

That looks important as US lawmakers are now paying close attention to big tech impacts — and have a number of big tech antitrust cases actively on the table. Even so, it was European privacy regulators that were among the first to sound the alarm over microtargeting’s abusive impacts and risks for democratic societies.

Back in 2018, in the wake of the Facebook data misuse and voter targeting scandal involving Cambridge Analytica, the UK’s ICO called for an ethical pause on the use of online ad tools for political campaigning — penning a report entitled Democracy Disrupted? Personal information and political influence.

It’s no small irony that the self-same regulator has so far declined to take any action against the adtech industry’s unlawful use of people’s data — despite warning in 2019 that behavioral advertising is out of control.

The ICO’s ongoing inaction seems likely to have fed into the UK government’s decision that a dedicated unit is required to oversee big tech.

In recent years the UK has singled out the online ad space for antitrust concern — saying it will establish a pro-competition regulator to tackle big tech’s dominance, following a market study of the digital advertising sector carried out in 2019 by its Competition and Markets Authority which reported substantial concerns over the power of the adtech duopoly.

Last month, meanwhile, the European Union’s lead data protection supervisor urged not a pause but a ban on targeted advertising based on tracking internet users’ digital activity — calling on regional lawmakers to incorporate the lever into a major reform of digital services rules which is intended to boost operators’ accountability, among other goals.

The European Commission’s proposal had avoided going so far. But negotiations over the Digital Services Act and Digital Markets Act are ongoing.

Last year the European Parliament also backed a tougher stance on creepy ads. Again, though, the Commission’s framework for tackling online political ads does not suggest anything so radical — with EU lawmakers pushing for greater transparency instead.

It remains to be seen what US lawmakers will do but with US civil society organizations joining forces to amplify an anti-ad-targeting message there’s rising pressure to clean up the toxic adtech in its own backyard.

Commenting in a statement on the coalition’s website, Zephyr Teachout, an associate professor of law at Fordham Law School, said: “Facebook and Google possess enormous monopoly power, combined with the surveillance regimes of authoritarian states and the addiction business model of cigarettes. Congress has broad authority to regulate their business models and should use it to ban them from engaging in surveillance advertising.”

“Surveillance advertising has robbed newspapers, magazines, and independent writers of their livelihoods and commoditized their work — and all we got in return were a couple of abusive monopolists,” added David Heinemeier Hansson, creator of Ruby on Rails, in another supporting statement. “That’s not a good bargain for society. By banning this practice, we will return the unique value of writing, audio, and video to the people who make it rather than those who aggregate it.”

With US policymakers paying increasingly close attention to adtech, it’s interesting to see Google is accelerating its efforts to replace support for individual-level tracking with what it’s branded as a ‘privacy-safe’ alternative (FLoC).

Yet the tech it’s proposed via its Privacy Sandbox will still enable groups (cohorts) of web users to be targeted by advertisers, with ongoing risks for discrimination, the targeting of vulnerable groups of people and societal-scale manipulation — so lawmakers will need to pay close attention to the detail of the ‘Privacy Sandbox’ rather than Google’s branding.
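
For a sense of how cohort-style grouping works in principle, here is a toy sketch rather than Google’s actual algorithm (FLoC reportedly used a locality-sensitive hash over browsing history plus minimum-cohort-size checks). Each visited domain votes on the bits of a small identifier, so users with similar histories tend to share an ID, which is exactly why critics treat that ID as a compact behavioral summary.

```python
import hashlib

def toy_cohort_id(visited_domains, bits=8):
    """Collapse a browsing history into a small cohort identifier (illustration only).

    Each domain votes on each output bit, so users with overlapping histories
    tend to land in the same cohort. Real proposals add mitigations, such as
    suppressing cohorts with too few members before exposing the ID.
    """
    votes = [0] * bits
    for domain in visited_domains:
        digest = hashlib.sha256(domain.encode("utf-8")).digest()
        for i in range(bits):
            bit = (digest[i // 8] >> (i % 8)) & 1
            votes[i] += 1 if bit else -1
    return sum(1 << i for i, v in enumerate(votes) if v > 0)
```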

“This is, in a word, bad for privacy,” warned the EFF, writing about the proposal back in 2019. “A flock name would essentially be a behavioral credit score: a tattoo on your digital forehead that gives a succinct summary of who you are, what you like, where you go, what you buy, and with whom you associate.”

“FLoC is the opposite of privacy-preserving technology,” it added. “Today, trackers follow you around the web, skulking in the digital shadows in order to guess at what kind of person you might be. In Google’s future, they will sit back, relax, and let your browser do the work for them.”

#advertising-tech, #behavioral-ads, #facebook, #google, #microtargeting, #misinformation, #online-ads, #policy, #privacy, #surveillance

Distraction, not partisanship, drives sharing of misinformation

(Image credit: Lewis Ogden / Flickr)

We don’t need a study to know that misinformation is rampant on social media; we just need to do a search for “vaccines” or “climate change” to confirm that. A more compelling question is why. It’s clear that, at a minimum, there are contributions from organized disinformation campaigns, rampant political partisans, and questionable algorithms. But beyond those, there are still a lot of people who choose to share stuff that even a cursory examination would show was garbage. What’s driving them?

That was the question that motivated a small international team of researchers, who decided to take a look at how a group of US residents decided on which news to share. Their results suggest that some of the standard factors that people point to when explaining the tsunami of misinformation—inability to evaluate information and partisan biases—aren’t having as much influence as most of us think. Instead, a lot of the blame gets directed at people just not paying careful attention.

You shared that?

The researchers ran a number of fairly similar experiments to get at the details of misinformation sharing. This involved panels of US-based participants recruited either through Mechanical Turk or via a survey population that provided a more representative sample of the US. Each panel had several hundred to over 1,000 individuals, and the results were consistent across different experiments, so there was a degree of reproducibility to the data.

#behavioral-science, #human-behavior, #misinformation, #science, #social-media

In expanded crackdown, Facebook increases penalties for rule-breaking groups and their members

Facebook this morning announced it will increase the penalties against its rule-breaking Facebook Groups and their members, alongside other changes designed to reduce the visibility of groups’ potentially harmful content. The company says it will now remove civic and political groups from its recommendations in markets outside the U.S., and will further restrict the reach of groups and members who continue to violate its rules.

The changes follow what has been a steady, but slow and sometimes ineffective crackdown on Facebook Groups that produce and share harmful, polarizing or even dangerous content.

Ahead of the U.S. elections, Facebook implemented a series of new rules designed to penalize those who violated its Community Standards or spread misinformation via Facebook Groups. These rules largely assigned more responsibility to Groups themselves, and penalized individuals who broke rules. Facebook also stopped recommending health groups, to push users to official sources for health information, including for information about Covid-19.

This January, Facebook made a more significant move against potentially dangerous groups. It announced it would remove civic and political groups, as well as newly created groups, from its recommendations in the U.S. following the insurrection at the U.S. Capitol on Jan. 6, 2021. (Previously, it had temporarily limited these groups ahead of the U.S. elections.)

As The WSJ reported when this policy became permanent, Facebook’s internal research had found that Facebook groups in the U.S. were polarizing users and inflaming the calls for violence that spread after the elections. The researchers said roughly 70% of the top 100 most active civic Facebook Groups in the U.S. had issues with hate, misinformation, bullying and harassment that should make them non-recommendable, leading to the January 2021 crackdown.

Today, that same policy is being rolled out to Facebook’s global user base, not just its U.S. users.

That means in addition to health groups, users worldwide won’t be “recommended” civic or political groups when browsing Facebook. It’s important, however, to note that recommendations are only one of many ways users find Facebook Groups. Users can also find them in search, through links people post, through invites and friends’ private messages.

In addition, Facebook says groups that have gotten in trouble for violating Facebook’s rules will now be shown lower in recommendations — a sort of downranking penalty Facebook often uses to reduce the visibility of News Feed content.

The company will also increase the penalties against rule-violating groups and their individual members through a variety of other enforcement actions.

Image Credits: Facebook

For example, users who attempt to join groups that have a history of breaking Facebook’s Community Standards will be alerted to the group’s violations through a warning message (shown above), which may cause the user to reconsider joining.

The rule-violating groups will have their invite notifications limited, and current members will begin to see less of the groups’ content in their News Feed, as the content will be shown further down. These groups will also be demoted in Facebook’s recommendations.

When a group hosts a substantial number of members who have violated Facebook policies or participated in other groups that were shut down for Facebook Community Standards violations, the group itself will have to temporarily approve all members’ new posts. And if the admin or moderator repeatedly approves rule-breaking content, Facebook will then take the entire group down.

This rule aims to address problems around groups that re-form after being banned, only to restart their bad behavior unchecked.

The final change being announced today applies to group members.

When someone has repeated violations in Facebook Groups, they’ll be temporarily stopped from posting or commenting in any group, won’t be allowed to invite others to join groups, and won’t be able to create new groups. This measure aims to slow down the reach of bad actors, Facebook says.
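
Facebook hasn’t published exact thresholds or durations for these member-level penalties, so the escalation logic can only be sketched with placeholder values, roughly like this:

```python
from datetime import timedelta
from typing import Dict, Optional

REPEAT_THRESHOLD = 3  # hypothetical number of violations that counts as "repeated"

def member_restrictions(group_violations: int) -> Optional[Dict[str, object]]:
    """Map a member's history of group-rule violations to temporary restrictions.

    Facebook says repeat violators are temporarily blocked from posting,
    commenting, inviting and creating groups, but hasn't said how many
    violations trigger this or for how long; the values here are placeholders.
    """
    if group_violations < REPEAT_THRESHOLD:
        return None  # no account-wide group restrictions yet
    return {
        "can_post_in_groups": False,
        "can_comment_in_groups": False,
        "can_invite_to_groups": False,
        "can_create_groups": False,
        # Hypothetical timeout that grows with the violation count, capped at 30 days.
        "duration": timedelta(days=min(30, 7 * (group_violations - REPEAT_THRESHOLD + 1))),
    }
```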

The new policies give Facebook a way to more transparently document a group’s bad behavior that led to its final shutdown. This “paper trail,” of sorts, also helps Facebook duck accusations of bias when it comes to its enforcement actions —  a charge often raised by Facebook critics on the right, who believe social networks are biased against conservatives.

But the problem with these policies is that they’re still ultimately hand slaps for those who break Facebook’s rules — not all that different from what users today jokingly refer to as “Facebook jail”. When individuals or Facebook Pages violate Facebook’s Community Standards, they’re temporarily prevented from interacting on the site or using specific features. Facebook is now trying to replicate that formula, with modifications, for Facebook Groups and their members.

There are other issues, as well. For one, these rules rely on Facebook to actually enforce them, and it’s unclear how well it will be able to do so. For another, they ignore one of the key means of group discovery: search. Facebook claims it downranks low-quality results here, but results of its efforts are decidedly mixed.

For example, though Facebook made sweeping statements about banning QAnon content across its platform in a misinformation crackdown last fall, it’s still possible to search for and find QAnon-adjacent content — like groups that aren’t titled QAnon but cater to QAnon-styled “patriots” and conspiracies.

Similarly, searches for terms like “antivax” or “covid hoax” can also direct users to problematic groups — like the one for people who “aren’t anti-vax in general,” but are “just anti-RNA,” as the group’s title explains; or the “parents against vaccines” group; or the “vaccine haters” group that proposes it’s spreading the “REAL vaccine information.” (We surfaced these on Tuesday, ahead of Facebook’s announcement.)

 

Clearly, these are not official health resources, and would not otherwise be recommended per Facebook policies — but they are easy to surface through Facebook search. The company, however, takes stronger measures against Covid-19 and Covid vaccine misinformation — it says it will remove Pages, groups, and accounts that repeatedly share debunked claims, and otherwise downranks them.

Facebook, to be clear, is fully capable of using stronger technical means of blocking access to content.

It banned “stop the steal” and other conspiracies following the U.S. elections, for example. And even today, a search for “stop the steal” groups simply returns a blank page saying no results were found.

Image Credits: Facebook fully blocks “stop the steal”

So why should a search for a banned topic like “QAnon” return anything at all?

Why should “covid hoax”? (See below.)

Image Credits: Facebook group search results for “covid hoax”

 

If Facebook wanted to broaden its list of problematic search terms, and return blank pages for other types of harmful content, it could. In fact, if it wanted to maintain a block list of URLs that are known to spread false information, it could do that, too. It could prevent users from re-sharing any post that included those links. It could make those posts default to non-public. It could flag users who violate its rules repeatedly, or some subset of those rules, as users who no longer get to set their posts to public…ever.
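
As a concrete illustration of one of those levers, a URL block list check before a re-share takes only a few lines; the domains and function below are placeholders, not a real Facebook list or API.

```python
from urllib.parse import urlparse

# Placeholder block list; a real one would be centrally maintained and updated.
BLOCKED_DOMAINS = {"example-hoax-site.test", "known-disinfo.test"}

def can_reshare(post_urls):
    """Return False if any link in the post points at a block-listed domain."""
    for url in post_urls:
        host = (urlparse(url).hostname or "").lower()
        if host in BLOCKED_DOMAINS or any(host.endswith("." + d) for d in BLOCKED_DOMAINS):
            return False
    return True
```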

In other words, Facebook could do many, many things if it truly wanted to have a significant impact on the spread of misinformation, toxicity, and polarizing or otherwise harmful content on its platform. Instead, it continues inching forward with temporary punishments, often aimed only at “repeated” violations, such as the ones announced today. These are, arguably, more penalties than it had before — but also maybe not enough.

#facebook, #facebook-groups, #misinformation, #news-feed, #social, #social-media, #social-networks

Facebook to label all COVID-19 vaccine posts with pointer to official info

Facebook will soon label all posts discussing the coronavirus vaccination with a pointer to official information about COVID-19, it said today.

It also revealed it has implemented some new “temporary” measures aimed at limiting the spread of vaccine misinformation/combating vaccine hesitancy — saying it’s reducing the distribution of content from users that have violated its policies on COVID-19 and vaccine misinformation; or “that have repeatedly shared content debunked as False or Altered by our third-party fact-checking partners”.

It’s also reducing distribution of any COVID-19 or vaccine content that fact-checking partners have rated as “Missing Context”, per the blog post.

Admins of groups whose admins or members have violated its COVID-19 policies will also be required to temporarily approve all posts within their group, it said. (It’s not clear what happens if a group only has one admin and they have violated its policies.)

Facebook will also “further elevate information from authoritative sources when people seek information about COVID-19 or vaccines”, it added.

It’s not clear why users who repeatedly violate Facebook’s COVID-19 policies do not face at least a period of suspension. (We’ve asked the company for clarity on its policies.)

“We’re continuing to expand our efforts to address COVID-19 vaccine misinformation by adding labels to Facebook and Instagram posts that discuss the vaccines,” Facebook said in the Newsroom post today.

“These labels contain credible information about the safety of COVID-19 vaccines from the World Health Organization. For example, we’re adding a label on posts that discuss the safety of COVID-19 vaccines that notes COVID-19 vaccines go through tests for safety and effectiveness before they’re approved.”

The incoming COVID-19 information labels are rolling out globally in English, Spanish, Indonesian, Portuguese, Arabic and French (with additional languages touted “in the coming weeks”), per Facebook.

As well as soon rolling out labels “on all posts generally about COVID-19 vaccines” — pointing users to its COVID-19 Information Center — Facebook said it would add additional “targeted” labels about “COVID-19 vaccine subtopics”. So it sounds like it may respond directly to specific anti-vaxxer misinformation it’s seeing spreading on its platform.

“We will also add an additional screen when someone goes to share a post on Facebook and Instagram with an informational COVID-19 vaccine label. It will provide more information so people have the context they need to make informed decisions about what to share,” Facebook added.
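
Facebook hasn’t said how it detects which posts “discuss the vaccines,” but at its simplest the labeling step is topic detection plus an attached pointer. The keyword matching below is purely illustrative (real systems would rely on trained, multilingual classifiers); the label text is the example Facebook gives in its announcement.

```python
from typing import Optional

# Crude keyword spotting, for illustration only; production systems would
# rely on trained, multilingual classifiers rather than a hand-made term list.
VACCINE_TERMS = ("vaccine", "vaccination", "vaxx", "mrna", "astrazeneca", "pfizer")

def label_for_post(text: str) -> Optional[dict]:
    """Attach an informational label to posts that appear to discuss COVID-19 vaccines."""
    lowered = text.lower()
    if any(term in lowered for term in VACCINE_TERMS):
        return {
            # Example label text Facebook cites in its announcement.
            "label": ("COVID-19 vaccines go through tests for safety and "
                      "effectiveness before they're approved."),
            "link_target": "covid19_information_center",  # placeholder destination
        }
    return None
```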

The moves follow revelations that an internal Facebook study of vaccine hesitancy — which the Washington Post reported on yesterday after it obtained documents on the large-scale internal research effort — found that a small number of US users are responsible for driving most of the vaccine-hesitant content on the platform.

“Just 10 out of the 638 population segments [Facebook’s study divided US users into] contained 50 percent of all vaccine hesitancy content on the platform,” the Post reported. “And in the population segment with the most vaccine hesitancy, just 111 users contributed half of all vaccine hesitant content.”

Last week the MIT Technology Review also published a deep-dive article probing Facebook’s approach to interrogating, via an internal ‘Responsible AI’ team, the impacts of AI-fuelled content distribution — which accused the company of prioritizing growth and engagement and neglecting the issue of toxic misinformation (and the individual and societal harms that can flow from algorithmic content choices which amplify lies and hate speech).

In the case of COVID-19, lies being spread about vaccination safety or efficacy present a clear and present danger to public health. And Facebook’s PR machine does appear to have, tardily, recognized the extent of the reputational risk it’s facing if its platform is associated with driving vaccine hesitancy.

To wit: Also today it’s announced the launch of a global COVID-19 education drive that it says it hopes will bring 50M people “closer to getting vaccinated”.

“By working closely with national and global health authorities and using our scale to reach people quickly, we’re doing our part to help people get credible information, get vaccinated and come back together safely,” Facebook writes in the Newsroom post that links directly to a Facebook post by founder Mark Zuckerberg also trailing the new measures, including the launch of a tool that will show U.S. Facebook users where they can get vaccinated and provide them with a link to make an appointment.

Facebook said it plans to expand the tool to other countries as global vaccine availability steps up.

Facebook’s vaccine appointment finder tool (Image credits: Facebook)

Facebook has further announced that the COVID-19 information portal it launched in the Facebook app in March last year which points users to “the latest information about the virus from local health ministries and the World Health Organization” is finally being brought to Instagram.

It’s not clear why Facebook hadn’t already launched the portal on Instagram. But it’s decided to double down on fighting bad speech (related to vaccines) with better speech — saying Instagram users will get new stickers they can add to their Instagram Stories “so people can inspire others to get vaccinated when it becomes available to them”.

In other moves being trailed in Facebook’s crisis PR blitz today it has touted “new data and insights” on vaccine attitudes being made available to public officials via COVID-19 dashboards and maps it was already offering (the data is collected by Facebook’s Data for Good partners for the effort at Carnegie Mellon University and University of Maryland as part of the COVID-19 Symptom Survey).

Albeit, it doesn’t specify what new information is being added there (or why now).

Also today it said it’s “making it easy to track how COVID-19 vaccine information is being spread on social media through CrowdTangle’s COVID-19 Live Displays“.

“Publishers, global aid organizations, journalists and others can access real-time, global streams of vaccine-related posts on Facebook, Instagram and Reddit in 34 languages. CrowdTangle also offers Live Displays for 104 countries and all 50 states in the US to help aid organizations and journalists track posts and trends at a regional level as well,” Facebook added, again without offering any context on why it hadn’t made it easier to use this tool to track vaccine information spread before.

Its blog post also touts “new” partnerships with health authorities and governments on vaccine registration — while trumpeting the ~3BN messages it says have already been sent “by governments, nonprofits and international organizations to citizens through official WhatsApp chatbots on COVID-19”. (As WhatsApp is end-to-end encrypted there is no simple way to quantify how many vaccine misinformation messages have been sent via the same platform.)

Per Facebook, it’s now “working directly with health authorities and governments to get people registered for vaccinations” (such as in the city and province of Buenos Aires, Argentina, which is using WhatsApp as the official channel to send notifications to citizens when it’s their turn to receive the vaccine).

“Since the beginning of the COVID-19 pandemic, we have partnered with ministries of health and health-focused organizations in more than 170 countries by providing free ads, enabling partners to share their own public health guidance on COVID-19 and information about the COVID-19 vaccine,” Facebook’s PR adds in a section of the post which it’s titled “amplifying credible health information and resources from experts”.

#artificial-intelligence, #coronavirus, #covid-19, #facebook, #health, #instagram, #misinformation, #social, #vaccine, #whatsapp

Big Tech companies cannot be trusted to self-regulate: We need Congress to act

It’s been two months since Donald Trump was kicked off of social media following the violent insurrection on Capitol Hill in January. While the constant barrage of hate-fueled commentary and disinformation from the former president has come to a halt, we must stay vigilant.

Now is the time to think about how to prevent Trump, his allies and other bad actors from fomenting extremism in the future. It’s time to figure out how we as a society address the misinformation, conspiracy theories and lies that threaten our democracy by destroying our information infrastructure.

As vice president at Color Of Change, my team and I have had countless meetings with leaders of multi-billion-dollar tech companies like Facebook, Twitter and Google, where we had to consistently flag hateful, racist content and disinformation on their platforms. We’ve also raised demands supported by millions of our members to adequately address these systemic issues — calls that are too often met with a lack of urgency and sense of responsibility to keep users and Black communities safe.

The violent insurrection by white nationalists and far-right extremists in our nation’s capital was absolutely fueled and enabled by tech companies who had years to address hate speech and disinformation that proliferated on their social media platforms. Many social media companies relinquished their platforms to far-right extremists, white supremacists and domestic terrorists long ago, and it will take more than an attempted coup to hold them fully accountable for their complicity in the erosion of our democracy — and to ensure it can’t happen again.

To restore our systems of knowledge-sharing and eliminate white nationalist organizing online, Big Tech must move beyond its typical reactive and shallow approach to addressing the harm they cause to our communities and our democracy. But it’s more clear than ever that the federal government must step in to ensure tech giants act.

After six years leading corporate accountability campaigns and engaging with Big Tech leaders, I can definitively say it’s evident that social media companies do have the power, resources and tools to enforce policies that protect our democracy and our communities. However, leaders at these tech giants have demonstrated time and time again that they will choose not to implement and enforce adequate measures to stem the dangerous misinformation, targeted hate and white nationalist organizing on their platforms if it means sacrificing maximum profit and growth.

And they use their massive PR teams to create an illusion that they’re sufficiently addressing these issues. For example, social media companies like Facebook continue to follow a reactive formula of announcing disparate policy changes in response to whatever public relations disaster they’re fending off at the moment. Before the insurrection, the company’s leaders failed to heed the warnings of advocates like Color Of Change about the dangers of white supremacists, far-right conspiracists and racist militias using their platforms to organize, recruit and incite violence. They did not ban Trump, implement stronger content moderation policies or change algorithms to stop the spread of misinformation-superspreader Facebook groups — as we had been recommending for years.

These threats were apparent long before the attack on Capitol Hill. They were obvious as Color Of Change and our allies propelled the #StopHateForProfit campaign last summer, when over 1,000 advertisers pulled millions in ad revenues from the platform. They were obvious when Facebook finally agreed to conduct a civil rights audit in 2018 after pressure from our organization and our members. They were obvious even before the deadly white nationalist demonstration in Charlottesville in 2017.

Only after significant damage had already been done did social media companies take action and concede to some of our most pressing demands, including the call to ban Trump’s accounts, implement disclaimers on voter fraud claims, and move aggressively to remove COVID misinformation as well as posts inciting violence at the polls amid the 2020 election. But even now, these companies continue to shirk full responsibility by, for example, using self-created entities like the Facebook Oversight Board — an illegitimate substitute for adequate policy enforcement — as PR cover while the fate of recent decisions, such as the suspension of Trump’s account, hangs in the balance.

Facebook, Twitter, YouTube and many other Big Tech companies kick into action when their profits, self-interests and reputation are threatened, but always after the damage has been done because their business models are built solely around maximizing engagement. The more polarized content is, the more engagement it gets; the more comments it elicits or times it’s shared, the more of our attention they command and can sell to advertisers. Big Tech leaders have demonstrated they neither have the willpower nor the ability to proactively and successfully self-regulate, and that’s why Congress must immediately intervene.

Congress should enact and enforce federal regulations to rein in the outsized power of Big Tech behemoths, and our lawmakers must create policies that translate to real-life changes in our everyday lives — policies that protect Black and other marginalized communities both online and offline.

We need stronger antitrust enforcement laws to break up big tech monopolies that evade corporate accountability and impact Black businesses and workers; comprehensive privacy and algorithmic discrimination legislation to ensure that profits from our data aren’t being used to fuel our exploitation; expanded broadband access to close the digital divide for Black and low-income communities; restored net neutrality so that internet service providers can’t charge differently based on content or equipment; and action on disinformation and content moderation that makes clear Section 230 does not exempt platforms from complying with civil rights laws.

We’ve already seen some progress following pressure from activists and advocacy groups including Color Of Change. Last year alone, Big Tech companies like Zoom hired chief diversity experts; Google took action to block the Proud Boys website and online store; and major social media platforms like TikTok adopted better, stronger policies on banning hateful content.

But we’re not going to applaud billion-dollar tech companies for doing what they should and could have already done to address the years of misinformation, hate and violence fueled by social media platforms. We’re not going to wait for the next PR stunt or blanket statement to come out or until Facebook decides whether or not to reinstate Trump’s accounts — and we’re not going to stand idly by until more lives are lost.

The federal government and regulatory powers need to hold Big Tech accountable to their commitments by immediately enacting policy change. Our nation’s leaders have a responsibility to protect us from the harms Big Tech is enabling on our democracy and our communities — to regulate social media platforms and change the dangerous incentives in the digital economy. Without federal intervention, tech companies are on pace to repeat history.

#column, #congress, #disinformation, #misinformation, #opinion, #policy, #section-230, #social, #social-media, #social-media-platforms, #tc

Twitter rolls out vaccine misinformation warning labels and a strike-based system for violations

Twitter announced Monday that it would begin injecting new labels into users’ timelines to push back against misinformation that could disrupt the rollout of COVID-19 vaccines. The labels, which will also appear as pop-up messages in the retweet window, are the company’s latest product experiment designed to shape behavior on the platform for the better.

The company will attach notices to tweeted misinformation warning users that the content “may be misleading” and linking out to vetted public health information. These initial vaccine misinformation sweeps, which begin today, will be conducted by human moderators at Twitter and not automated moderation systems.

Twitter says the goal is to use these initial determinations to train its AI systems so that, down the road, a blend of human and automated efforts will scan the site for vaccine misinformation. The latest misinformation measure will target English-language tweets before expanding to other languages.

Twitter also introduced a new strike system for violations of its pandemic-related rules. The new system is modeled after a set of consequences it implemented for voter suppression and voting-related misinformation. Within that framework, a user with two or three “strikes” faces a 12-hour account lockout. With four violations, they lose account access for one week, with permanent suspension looming after five strikes.
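
To make that ladder concrete, here is a minimal sketch of a strike-to-penalty mapping consistent with the thresholds described above. The function and penalty names are ours, not Twitter’s, and the behavior for a single strike (a label with no lockout) is an assumption the announcement doesn’t spell out.

```python
# Hypothetical sketch of a strike-to-penalty ladder matching the thresholds
# reported above. Names are illustrative, not Twitter's actual API; the
# single-strike behavior (label only, no lockout) is an assumption.
def penalty_for_strikes(strikes: int) -> str:
    if strikes <= 1:
        return "label_only"            # assumed: first strike gets a label, no lockout
    if strikes <= 3:
        return "12_hour_lockout"       # two or three strikes
    if strikes == 4:
        return "7_day_lockout"         # four strikes
    return "permanent_suspension"      # five or more strikes

assert penalty_for_strikes(3) == "12_hour_lockout"
assert penalty_for_strikes(5) == "permanent_suspension"
```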

Twitter introduced its first pandemic-specific policies a year ago, banning tweets promoting false treatment or prevention claims along with any content that could put people at higher risk of spreading COVID-19. In December, Twitter added new rules focused on popular vaccine conspiracy theories and announced that warning labels were on the way.

#covid-19, #misinformation, #public-health, #social, #tc, #twitter

Reducing the spread of misinformation on social media: What would a do-over look like?

The news is awash with stories of platforms clamping down on misinformation and the angst involved in banning prominent members. But these are Band-Aids over a deeper issue — namely, that the problem of misinformation is one of our own design. Some of the core elements of how we’ve built social media platforms may inadvertently increase polarization and spread misinformation.

If we could teleport back in time to relaunch social media platforms like Facebook, Twitter and TikTok with the goal of minimizing the spread of misinformation and conspiracy theories from the outset … what would they look like?

This is not an academic exercise. Understanding these root causes can help us develop better prevention measures for current and future platforms.


As one of the Valley’s leading behavioral science firms, we’ve helped brands like Google, Lyft and others understand human decision-making as it relates to product design. We recently collaborated with TikTok to design a new series of prompts (launched this week) to help stop the spread of potential misinformation on its platform.

The intervention reduced shares of flagged content by 24%. While TikTok is unique among platforms, the lessons we learned there have helped shape our ideas about what a social media do-over could look like.

Create opt-outs

Labels and prompts are a start, but we can take much bigger swings at reducing views of unsubstantiated content.

In the experiment we launched together with TikTok, people saw an average of 1.5 flagged videos over a two-week period. Yet in our qualitative research, many users said they were on TikTok for fun; they didn’t want to see any flagged videos at all. In a recent earnings call, Mark Zuckerberg also spoke of Facebook users tiring of hyperpartisan content.

We suggest giving people an “opt-out of flagged content” option — remove this content from their feeds entirely. To make this a true choice, this opt-out needs to be prominent, not buried somewhere users must seek it out. We suggest putting it directly in the sign-up flow for new users and adding an in-app prompt for existing users.
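
As a rough illustration of what that choice could look like under the hood, here is a minimal sketch, assuming posts carry a platform-assigned flag and each user stores an opt-out preference; all names here are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    flagged: bool  # hypothetical flag set by the platform's misinformation labeling

def build_feed(posts: list[Post], opted_out_of_flagged: bool) -> list[Post]:
    """Drop flagged posts entirely for users who opted out; otherwise
    return the feed unchanged (flagged posts would still carry labels)."""
    if opted_out_of_flagged:
        return [post for post in posts if not post.flagged]
    return posts
```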

Shift the business model

There’s a reason false news spreads six times faster on social media than real news: Information that’s controversial, dramatic or polarizing is far more likely to grab our attention. And when algorithms are designed to maximize engagement and time spent on an app, this kind of content is heavily favored over more thoughtful, deliberative content.

The ad-based business model is at the core of the problem; it’s why making progress on misinformation and polarization is so hard. One internal Facebook team tasked with looking into the issue found that “our algorithms exploit the human brain’s attraction to divisiveness.” But the project and the proposed work to address the issue were nixed by senior executives.

Essentially, this is a classic incentives problem. If business metrics that define “success” are no longer dependent on maximizing engagement/time on site, everything will change. Polarizing content will no longer need to be favored and more thoughtful discourse will be able to rise to the surface.
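
To make the incentive shift concrete, here is an illustrative sketch contrasting two ranking objectives; the signal names and weights are invented for the example and do not describe any platform’s actual formula.

```python
# Invented signals and weights, purely to illustrate the incentive shift;
# this is not any platform's real ranking formula.
def engagement_score(p_click: float, p_share: float, p_comment: float) -> float:
    # Optimizes raw attention, which tends to favor polarizing content.
    return 1.0 * p_click + 2.0 * p_share + 1.5 * p_comment

def quality_adjusted_score(p_click: float, p_share: float, p_comment: float,
                           predicted_quality: float, misinfo_penalty: float) -> float:
    # Same engagement signals, but discounted by predicted quality and
    # penalized when content is flagged, so attention alone no longer wins.
    return engagement_score(p_click, p_share, p_comment) * predicted_quality - misinfo_penalty
```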

Design for connection

A primary driver of the spread of misinformation is the feeling of being marginalized and alone. Humans are fundamentally social creatures who look to be part of an in-group, and partisan groups frequently provide that sense of acceptance and validation.

We must therefore make it easier for people to find their authentic tribes and communities in other ways (versus those that bond over conspiracy theories).

Mark Zuckerberg has said his ultimate goal with Facebook is to connect people. To be fair, in many ways Facebook has done that, at least on a surface level. But we should go deeper. Here are some ways:

We can design for more active one-on-one communication, which has been shown to increase well-being. We can also nudge offline connection. Imagine two friends are chatting on Facebook messenger or via comments on a post. How about a prompt to meet in person, when they live in the same city (post-COVID, of course)? Or if they’re not in the same city, a nudge to hop on a call or video.

In the scenario where they’re not friends and the interaction is more contentious, platforms can play a role in highlighting not only the humanity of the other person, but things one shares in common with the other. Imagine a prompt that showed, as you’re “shouting” online with someone, everything you have in common with that person.

Platforms should also disallow anonymous accounts, or at minimum encourage the use of real names. Clubhouse has good norm-setting on this: In the onboarding flow they say, “We use real names here.” Connection is based on the idea that we’re interacting with a real human. Anonymity obfuscates that.

Finally, help people reset

We should make it easy for people to get out of an algorithmic rabbit hole. YouTube has been under fire for its rabbit holes, but all social media platforms have this challenge. Once you click a video, you’re shown videos like it. This may help sometimes (getting to that perfect “how to” video sometimes requires a search), but for misinformation, this is a death march. One video on flat earth leads to another, as well as other conspiracy theories. We need to help people eject from their algorithmic destiny.
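
One hedged sketch of such an “eject” control, assuming the recommender keeps a per-topic interest profile (a structure we’re inventing for illustration): let users wipe or heavily dampen that profile in one tap.

```python
# Hypothetical interest profile: topic -> accumulated weight. A reset either
# wipes it (keep_fraction=0.0) or dampens it, so one burst of conspiracy
# clicks stops dominating future recommendations.
def reset_interest_profile(profile: dict[str, float],
                           keep_fraction: float = 0.0) -> dict[str, float]:
    return {topic: weight * keep_fraction for topic, weight in profile.items()}

# Example: a profile skewed toward "flat earth" after a rabbit hole.
profile = {"flat earth": 9.2, "cooking": 1.1}
print(reset_interest_profile(profile, keep_fraction=0.2))
# roughly {'flat earth': 1.84, 'cooking': 0.22}
```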

With great power comes great responsibility

More and more people now get their news from social media, and those who do are less likely to be correctly informed about important issues. It’s likely that this trend of relying on social media as an information source will continue.

Social media companies are thus in a unique position of power and have a responsibility to think deeply about the role they play in reducing the spread of misinformation. They should absolutely continue to experiment and run tests with research-informed solutions, as we did together with the TikTok team.

This work isn’t easy. We knew that going in, but we have an even deeper appreciation for this fact after working with the TikTok team. There are many smart, well-intentioned people who want to solve for the greater good. We’re deeply hopeful about our collective opportunity here to think bigger and more creatively about how to reduce misinformation, inspire connection and strengthen our collective humanity all at the same time.

#column, #facebook, #internet-culture, #mark-zuckerberg, #misinformation, #opinion, #qanon, #social, #software, #tiktok

Instagram bans top anti-vaxxer Robert F. Kennedy Jr. over COVID falsehoods

Robert Kennedy Jr. heads up to a meeting at Trump Tower on January 10, 2017 in New York City.

Enlarge / Robert Kennedy Jr. heads up to a meeting at Trump Tower on January 10, 2017 in New York City. (credit: Spencer Platt/Getty Images)

Instagram has permanently banned the account of Robert F. Kennedy Jr., an infamous and prolific peddler of dangerous anti-vaccine and COVID-19 misinformation.

The move will likely be cheered by public health advocates who have struggled to combat such harmful bunkum online during the devastating pandemic. However, Kennedy’s account on Facebook—which owns Instagram—remained active Thursday, listing over 300,000 followers.

In an email to Ars, a Facebook spokesperson said Kennedy’s Instagram account was removed “for repeatedly sharing debunked claims about the coronavirus or vaccines.” The account had over 800,000 followers prior to its removal, according to The Wall Street Journal.

Read 6 remaining paragraphs | Comments

#anti-vaccine, #covid-19, #facebook, #instagram, #misinformation, #robert-f-kennedy-jr, #science