Twitch sues two users for harassing streamers with hate raids

Twitch filed a lawsuit late last week against two users of its own platform for running automated hate and harassment campaigns.

The harassment, often targeted at Black and LGBTQ streamers, takes the form of a uniquely Twitch phenomenon known as a “hate raid.” On Twitch, creators regularly point viewers toward another friendly account after their stream concludes to boost that account’s audience, a practice known as a “raid.” Hate raids invert that formula, sending swarms of bots to harass streamers who have inadequate tools at their disposal to block the influx of abuse.

The hate raids leverage Twitch’s new tagging system, which many transgender users had requested to make it easier to build community and to discover content that resonates. In May, Twitch added more than 350 new tags to help viewers sort streams by “gender, sexual orientation, race, nationality, ability, mental health, and more.” Accounts spreading abuse now use those tags to direct racist, sexist, transphobic and homophobic harassment at streamers, another unfortunate misuse of a tool explicitly designed to give creators a boost.

In the suit, Twitch described hate raiders as “highly motivated” malicious individuals who improvise new ways to circumvent the platform’s terms of service. Twitch named two users, “CruzzControl” and “CreatineOverdose,” in the suit but the company was unable to obtain their legal names. The users are based in the Netherlands and Austria, respectively, and their activity began in August of this year. Twitch alleges that CruzzControl alone has been linked to 3,000 bot accounts involved in hate raids.

While it’s possible that Twitch won’t be able to identify the real identities of individuals behind the recent harassment campaigns, the lawsuit could act as a deterrent for other accounts directing waves of abuse on the streaming platform.

“While we have identified and banned thousands of accounts over the past weeks, these actors continue to work hard on creative ways to circumvent our improvements, and show no intention of stopping,” the lawsuit reads. “We hope this Complaint will shed light on the identity of the individuals behind these attacks and the tools that they exploit, dissuade them from taking similar behaviors to other services, and help put an end to these vile attacks against members of our community.”

“This Complaint is by no means the only action we’ve taken to address targeted attacks, nor will it be the last,” a Twitch spokesperson told TechCrunch. “Our teams have been working around the clock to update our proactive detection systems, address new behaviors as they emerge, and finalize new proactive, channel-level safety tools that we’ve been developing for months.”

Prior to Twitch’s legal action, some Twitch creators organized #ADayOffTwitch to protest the company’s failure to offer solutions for users targeted by hate raids. People participating in the protest demanded that Twitch take decisive action to protect streamers from hate raids, including letting creators deny incoming raids and screen out chat participants with newly made accounts. They also drew attention to Twitch policies that allow unlimited accounts to be linked to a single email address, a loophole that makes it easy to create and deploy armies of bot accounts.
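
To make one of those demands concrete, here is a minimal sketch of account-age screening for chat participants, written in Python. It is an illustration only: the field names, the seven-day threshold and the choice to hold rather than block messages are assumptions for the example, not Twitch’s implementation.

```python
from datetime import datetime, timedelta, timezone
from typing import Optional

# Illustrative threshold: treat accounts younger than a week as "newly made."
MIN_ACCOUNT_AGE = timedelta(days=7)

def should_hold_message(account_created_at: datetime, now: Optional[datetime] = None) -> bool:
    """Return True if the sender's account is newer than MIN_ACCOUNT_AGE,
    meaning the chat message should be held for review instead of shown."""
    now = now or datetime.now(timezone.utc)
    return (now - account_created_at) < MIN_ACCOUNT_AGE

# Example: a message from an account created two days ago gets held.
created = datetime.now(timezone.utc) - timedelta(days=2)
print(should_hold_message(created))  # True
```

The point of the protest, though, was that streamers should not have to build screening like this themselves; they wanted controls like these enforced at the platform level.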

#content-creators, #content-moderation, #online-harassment, #social, #streaming-video, #tc, #twitch

Discord buys Sentropy, which makes AI moderation software to fight online hate and abuse

The online chat platform Discord is buying Sentropy, a company that makes AI-powered software to detect and remove online harassment and hate.

Discord currently uses a “multilevel” approach to moderation, relying on an in-house human moderation team as well as volunteer mods and admins to create ground rules for individual servers. A Trust and Safety team dedicated to protecting users and shaping content moderation policies comprised 15% of Discord’s workforce as of May 2020.

Discord plans to integrate Sentropy’s products into its existing toolkit, and it will also bring the smaller company’s leadership team aboard. The terms of the deal were not disclosed, but the acquisition is a sign that taking toxic content and harassment seriously isn’t just the right thing to do — it’s good business too.

“T&S tech and processes should not be used as a competitive advantage,” Sentropy CEO John Redgrave said in a blog post on the announcement. “We all deserve digital and physical safety, and moderators deserve better tooling to help them do one of the hardest jobs online more effectively and with fewer harmful impacts.”

Discord hasn’t always had a reputation for taking dangerous content seriously. Far-right groups with ties to real-world violence previously thrived on the platform. Discord cracked down on hate and extremism following the Unite the Right rally in Charlottesville, which left anti-racist protester Heather Heyer dead.

By February of 2018, the company was purging white supremacist and neo-Nazi groups, cleaning up the platform on its journey to transcend its gaming roots and grow into a mainstream social network. Now, Discord boasts 150 million monthly active users and is positioning itself as a comfy home for all kinds of communities while holding onto its core user base of gamers.

In a blog post, Redgrave elaborated on the company’s natural connection with Discord:

“Discord represents the next generation of social companies — a generation where users are not the product to be sold, but the engine of connectivity, creativity, and growth. In this model, user privacy and user safety are essential product features, not an afterthought. The success of this model depends upon building next-generation Trust and Safety into every product. We don’t take this responsibility lightly and are humbled to work at the scale of Discord and with Discord’s resources to increase the depth of our impact.”

Sentropy launched out of stealth last summer with an AI system designed to detect, track and cleanse platforms of online harassment and abuse. The company emerged then with $13 million in funding from notable backers including Reddit co-founder Alexis Ohanian and his VC firm Initialized Capital, King River Capital, Horizons Ventures and Playground Global.

Sentropy will continue offering service to existing enterprise customers of its Detect and Defend software products through the end of September. The company shut down its free consumer dashboard, Sentropy Protect, earlier this month.

Sentropy’s products were conceived as social network-agnostic tools rather than as platform-specific solutions. It sounds like even under Discord’s wing, the team plans to share insights on building safer online spaces with the internet at large.

“We are excited to help Discord decide how we can most effectively share with the rest of the Internet the best practices, technology, and tools that we’ve developed to protect our own communities,” Redgrave said.

Discord’s future is looking bright. The company walked away from a possible acquisition by Microsoft earlier this year that reportedly valued it at around $10 billion. Discord looks content to remain independent for now and could chart a path toward an IPO in the not-too-distant future.

#alexis-ohanian, #artificial-intelligence, #discord, #horizons-ventures, #initialized-capital, #internet-culture, #john-redgrave, #king-river-capital, #online-harassment, #playground-global, #reddit, #sentropy, #social, #software, #tc

Twitch expands its rules against hate and abuse to include behavior off the platform

Twitch will start holding its streamers to a higher standard. The company just expanded its hate and harassment policy, specifying more kinds of bad behavior that break its rules and could result in a ban from the streaming service.

The news comes as Twitch continues to grapple with reports of abusive behavior and sexual harassment, both on the platform and within the company itself. In December, Twitch released an updated set of rules designed to take harassment and abuse more seriously, admitting that women, people of color and the LGBTQ community were impacted by a “disproportionate” amount of that toxic behavior on the platform.

Twitch’s policies now include serious offenses that could pose a safety threat, even when they happen entirely away from the streaming service. Those threats include violent extremism, terrorism, threats of mass violence, sexual assault and ties to known hate groups.

The company will also continue to evaluate off-platform behavior in cases that happen on Twitch, like an on-stream situation that leads to harassment on Twitter or Facebook.

“While this policy is new, we have taken action historically against serious, clear misconduct that took place off service, but until now, we didn’t have an approach that scaled,” the company wrote in a blog post, adding that investigating off-platform behavior requires additional resources to address the complexity inherent in those cases.

To handle reports under its broadened rules, Twitch created a dedicated email address (OSIT@twitch.tv) for off-service behavior. The company says it has partnered with a third-party investigative law firm to vet the reports it receives.

Twitch cites its actions against former President Donald Trump as the most high profile instance of off-platform behavior resulting in enforcement. The company disabled Trump’s account following the attack on the U.S. Capitol and later suspended him indefinitely, citing fears that he could use the service to incite violence.

It’s hard to have a higher profile than the president, but Trump isn’t the only big-time banned Twitch user. Last June, Twitch kicked one of its biggest streamers off the platform without providing an explanation for the decision.

Going on a year later, no one seems to know why Dr. Disrespect got the boot from Twitch, though the company’s insistence that it only acts in cases with a “preponderance of evidence” suggests his violations were serious and well-corroborated.


#gaming, #online-harassment, #platform-policy, #tc, #twitch

Twitter bots and memorialized users will become ‘new account types’ in 2021

After a period of public feedback, Twitter adjusted some of its plans for a new verification process, set to roll out next year. The company suspended public verification applications in 2017 and appears to have since rethought a few aspects of what information the platform should signal to its users, blue checks and beyond.

One big verification-adjacent change around the corner: Twitter plans to add a way of distinguishing bots and other automated accounts.

“… It can be confusing to people if it’s not clear that these accounts are automated,” the company wrote in a blog post. “In 2021, we’re planning to build a new account type to distinguish automated accounts from human-run accounts to make it easier for people to know what’s a bot and what’s not.”

Of course, not all bots are good bots, but automated accounts have flourished on the platform since its early days and bots remain some of the most useful, whimsical and otherwise beloved sources of tweets.


The company is also working on a better way to handle accounts for users who have died, and plans to introduce a memorialization process in 2021. Twitter says that memorialized accounts, like bots, will become “a new account type” making them distinct from normal users. The idea grew out of the same spirit as Twitter’s labels for political figures, which sought to provide contextual info about users that can be seen at a glance.

Taking more than 22,000 pieces of feedback on the new verification process into account, Twitter will no longer require a profile bio or header picture to verify users, calling its former thinking “too restrictive.” It has also redefined a few of its eligible verification categories, expanding “sports” to include esports and adding language covering digital content creators to the entertainment category.

Twitter also apparently received a lot of suggestions calling for additional verification categories for scientists, academics and religious figures. Until it spins out more categories, those users can seek verification under the “activists, organizers, and other influential individuals” catch-all category.

Verification applicants will need to apply under a particular category and provide links or other information supporting their application. The new “self-serve” verification process will be available through account settings on both mobile and desktop.

Twitter will implement the new account verification policy on January 20, 2021, three years after freezing the process. The company did not specify when public verification applications will be accepted again, but it sounds like the wait won’t be too long and the company plans to share more soon. Starting on the 20th, Twitter will begin sweeping out inactive verified accounts and others that don’t meet its new bar for a “complete account.”

In the adjusted policy, a complete account — and one eligible for verification — must have a verified email or phone number, a profile image and a display name. Anyone who’s verified but doesn’t meet those criteria will receive notifications of the required changes, which must be made before January 20.
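
As a rough illustration of that checklist, the Python snippet below encodes the “complete account” criteria described above. The Account fields are hypothetical stand-ins for whatever Twitter checks internally, not its actual API.

```python
from dataclasses import dataclass

@dataclass
class Account:
    display_name: str = ""
    has_profile_image: bool = False
    email_confirmed: bool = False
    phone_confirmed: bool = False

def is_complete(account: Account) -> bool:
    """A complete account needs a display name, a profile image,
    and a verified email address or phone number."""
    return (
        bool(account.display_name)
        and account.has_profile_image
        and (account.email_confirmed or account.phone_confirmed)
    )

print(is_complete(Account("Example User", True, True, False)))  # True
print(is_complete(Account("", True, True, False)))              # False: no display name
```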

Twitter’s new policy also lays out the company’s right to revoke verification for accounts in “severe or repeated violation” of the platform’s rules. It sounds like the new policy could give the company a clearer path to act against verified users who break the rules, though that will ultimately come down to enforcement rather than written policy.

“We will continue to evaluate such accounts on a case-by-case basis, and will make improvements in 2021 on the relationship between enforcement of our rules and verification,” Twitter wrote in the post.

Twitter paused the verification process in November 2017 following a public outcry over its decision to verify Jason Kessler. Kessler infamously organized the Unite the Right event in Charlottesville, Virginia, which gathered neo-Nazis and white supremacists and ultimately left one peaceful counter-protester dead. The pause was extended the next year as the company decided to direct more resources toward election integrity.

With the midterms and the general U.S. election behind it, Twitter has returned to its effort to rethink the verification process and what it symbolizes for users on the platform. The company is also experimenting with new features that could dial down harassment, toxicity and misinformation.

Twitter recently added friction to the retweet process in an effort to slow the spread of misinformation, though it rolled the change back after the election. Twitter’s latest test: A new pop-up that displays shared interests and a profile bio when a user goes to reply to someone they don’t follow.

#harassment, #identity-verification, #online-harassment, #social, #social-media, #tc, #twitter

Decrypted: DEA spying on protesters, DDoS attacks, Signal downloads spike

This week saw protests spread across the world, sparked by the murder of George Floyd, an unarmed Black man killed by a white police officer in Minneapolis last month.

The U.S. hasn’t seen protests like this in a generation, with millions taking to the streets each day to lend their voice and support. But they were met with heavily armored police, drones watching from above, and “covert” surveillance by the federal government.

That’s exactly why cybersecurity and privacy are more important than ever, not least to protect law-abiding protesters demonstrating against police brutality and institutionalized, systemic racism. It’s also prompted those working in cybersecurity — many of whom are former law enforcement themselves — to check their own privilege, confront the racism within their own ranks and lend their knowledge to their fellow citizens.


THE BIG PICTURE

DEA allowed ‘covert surveillance’ of protesters

The Justice Department has granted the Drug Enforcement Administration, typically tasked with enforcing federal drug-related laws, the authority to conduct “covert surveillance” on protesters across the U.S., effectively turning the civilian law enforcement division into a domestic intelligence agency.

The DEA is one of the most tech-savvy agencies in the federal government, with access to “stingray” cell site simulators to track and locate phones, a secret program that gives the agency access to billions of domestic phone records, and facial recognition technology.

Lawmakers decried the Justice Department’s move to allow the DEA to spy on protesters, calling on the government to “immediately rescind” the order and describing it as “antithetical” to Americans’ right to peaceful assembly.

#ceo, #cloudflare, #computer-security, #cybercrime, #cyberwarfare, #decrypted, #department-of-justice, #extra-crunch, #federal-government, #george-floyd, #google, #government, #information-technology, #inky, #insight-partners, #internet-security, #iphone, #israel, #lastline, #law-enforcement, #market-analysis, #matthew, #matthew-prince, #minneapolis, #moxie-marlinspike, #national-security, #online-harassment, #police-brutality, #prevention, #privacy, #security, #series-b, #startups, #surveillance, #team8, #techcrunch, #united-states, #vmware

Twitter runs a test prompting users to revise ‘harmful’ replies

In its latest effort to deal with rampant harassment on its platform, Twitter will look into giving users a second chance before they tweet. In a new feature the company is testing, users whose replies contain “harmful” language will see a prompt suggesting that they self-edit before posting.

The framing here is a bit disingenuous — harassment on Twitter certainly doesn’t just happen in the “heat of the moment” by otherwise well-meaning individuals — but anything that can reduce toxicity on the platform is probably better than what we’ve got now.

Last year at F8, Instagram rolled out a similar test for its users that would “nudge” them with a warning before they post a potentially offensive comment. In December, the company offered an update on its efforts. “Results have been promising, and we’ve found that these types of nudges can encourage people to reconsider their words when given a chance,” the company wrote in a blog post.
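
To make the mechanism concrete, here is a minimal sketch of how such a pre-posting nudge could gate a reply. The score_toxicity function is a crude keyword placeholder rather than Twitter’s or Instagram’s actual classifier, and the 0.8 threshold is an assumption for the example.

```python
# Hypothetical nudge flow: score the draft reply, and if it looks hostile,
# ask the user to reconsider instead of posting immediately.
NUDGE_THRESHOLD = 0.8

def score_toxicity(text: str) -> float:
    """Placeholder classifier: a real system would use a trained model."""
    hostile = {"idiot", "trash", "loser"}
    words = text.lower().split()
    hits = sum(word.strip(".,!?") in hostile for word in words)
    return min(1.0, 5 * hits / max(len(words), 1))

def submit_reply(text: str) -> str:
    if score_toxicity(text) >= NUDGE_THRESHOLD:
        return "nudge: want to revise this reply before posting?"
    return "posted"

print(submit_reply("you are an idiot"))       # triggers the nudge
print(submit_reply("thanks for the update"))  # posts normally
```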

This kind of thing is particularly relevant right now, as companies conduct moderation across their massive platforms with relative skeleton crews. All of the major social networks have announced an increased reliance on AI detection as the pandemic keeps tech workers away from the office. In Facebook’s case, content moderators are among the employees the company would like to bring back first.

We’ve reached out to Twitter for more information about the kind of language that triggers the new test feature and if the company will also consider the prompt for regular tweets that aren’t replies. We’ll update the story if we receive additional info about what this experiment will look like.

#harassment, #online-harassment, #social, #tc, #twitter