Hydra, the world’s biggest cybercrime forum, shut down in police sting

Laundering of stolen cryptocurrency was a key service offered by Hydra. (credit: Getty Images)

Hydra, the world’s biggest cybercrime forum, is no more. Authorities in Germany have seized servers and other infrastructure used by the sprawling, billion-dollar enterprise along with a stash of about $25 million in bitcoin.

Hydra had been operating since at least 2015 and had seen a meteoric rise in that time. In 2020, it had annual revenue of more than $1.37 billion, according to a 2021 report jointly published by security firm Flashpoint and blockchain analysis company Chainalysis. In 2016, the companies said, Hydra had revenue of just $9.4 million. German authorities said the site had 17 million registered customer accounts and more than 19,000 registered seller accounts.

Cybercrime bazaar

Available exclusively through the Tor network, Hydra was a bazaar that brokered sales of narcotics, fake documents, cryptocurrency-laundering services, and other digital goods. Flashpoint and Chainalysis identified 11 core operators but said the marketplace was so big that it likely was staffed by “several dozen people, with clearly delineated responsibilities.”


#biz-it, #cybercrime, #hydra, #law-enforcement

It’s not easy to control police use of tech—even with a law

(credit: Roy Rochlin | Getty Images)

In 2018, Oakland enacted an innovative law giving citizens a voice in police use of surveillance technology. The Electronic Frontier Foundation called it “the new gold standard in community control of police surveillance.” Since then, about 20 other cities have adopted similar laws.

Now, Brian Hofer, one of the architects of Oakland’s law, says it’s not working. Earlier this month, Hofer filed suit against the city and the police department, saying they had repeatedly violated the law.

“We ignored human nature,” Hofer says in an interview. “Police don’t like to be transparent. Surveillance technology use is by design secretive, and no self-interested party is going to voluntarily highlight anything negative about their own proposal.” A spokesperson for the Oakland Police Department says it doesn’t comment on ongoing legal matters.


#law-enforcement, #license-plate-readers, #oakland, #police, #policy, #privacy

UK offers cash for CSAM detection tech targeted at e2e encryption

The UK government is preparing to spend over half a million dollars to encourage the development of detection technologies for child sexual exploitation material (CSAM) that can be bolted on to end-to-end encrypted messaging platforms to scan for the illegal material, as part of its ongoing policy push around Internet and child safety.

In a joint initiative today, the Home Office and the Department for Digital, Culture, Media and Sport (DCMS) announced a “Tech Safety Challenge Fund” — which will distribute up to £425,000 (~$584k) to five organizations (£85k/$117k each) to develop “innovative technology to keep children safe in environments such as online messaging platforms with end-to-end encryption”.

A Challenge statement for applicants to the program adds that the focus is on solutions that can be deployed within e2e encrypted environments “without compromising user privacy”.

“The problem that we’re trying to fix is essentially the blindfolding of law enforcement agencies,” a Home Office spokeswoman told us, arguing that if tech platforms go ahead with their “full end-to-end encryption plans, as they currently are… we will be completely hindered in being able to protect our children online”.

While the announcement does not name any specific platforms of concern, Home Secretary Priti Patel has previously attacked Facebook’s plans to expand its use of e2e encryption — warning in April that the move could jeopardize law enforcement’s ability to investigate child abuse crime.

Facebook-owned WhatsApp already uses e2e encryption, so that platform is a clear target for whatever ‘safety’ technologies might result from this taxpayer-funded challenge.

Apple’s iMessage and FaceTime are among other existing mainstream messaging tools which use e2e encryption.

So there is potential for very widespread application of any ‘child safety tech’ developed through this government-backed challenge. (Per the Home Office, technologies submitted to the Challenge will be evaluated by “independent academic experts”. The department was unable to provide details of who exactly will assess the projects.)

Patel, meanwhile, is continuing to apply high level pressure on the tech sector on this issue — including aiming to drum up support from G7 counterparts.

Writing in a paywalled op-ed in the Tory-friendly newspaper The Telegraph, she trails a meeting she’ll be chairing today, where she says she’ll push the G7 to collectively pressure social media companies to do more to address “harmful content on their platforms”.

“The introduction of end-to-end encryption must not open the door to even greater levels of child sexual abuse. Hyperbolic accusations from some quarters that this is really about governments wanting to snoop and spy on innocent citizens are simply untrue. It is about keeping the most vulnerable among us safe and preventing truly evil crimes,” she adds.

“I am calling on our international partners to back the UK’s approach of holding technology companies to account. They must not let harmful content continue to be posted on their platforms or neglect public safety when designing their products. We believe there are alternative solutions, and I know our law enforcement colleagues agree with us.”

In the op-ed, the Home Secretary singles out Apple’s recent move to add a CSAM detection tool to iOS and macOS to scan content on users’ devices before it’s uploaded to iCloud — welcoming the development as a “first step”.

“Apple state their child sexual abuse filtering technology has a false positive rate of 1 in a trillion, meaning the privacy of legitimate users is protected whilst those building huge collections of extreme child sexual abuse material are caught out. They need to see th[r]ough that project,” she writes, urging Apple to press ahead with the (currently delayed) rollout.

Last week the iPhone maker said it would delay implementing the CSAM detection system — following a backlash led by security experts and privacy advocates who raised concerns about vulnerabilities in its approach, as well as the contradiction of a ‘privacy-focused’ company carrying out on-device scanning of customer data. They also flagged the wider risk of the scanning infrastructure being seized upon by governments and states who might order Apple to scan for other types of content, not just CSAM.

Patel’s description of Apple’s move as just a “first step” is unlikely to do anything to assuage concerns that once such scanning infrastructure is baked into e2e encrypted systems it will become a target for governments to widen the scope of what commercial platforms must legally scan for.

However the Home Office’s spokeswoman told us that Patel’s comments on Apple’s CSAM tech were only intended to welcome its decision to take action in the area of child safety — rather than being an endorsement of any specific technology or approach. (And Patel does also write: “But that is just one solution, by one company. Greater investment is essential.”)

The Home Office spokeswoman wouldn’t comment on which types of technologies the government is aiming to support via the Challenge fund, either, saying only that they’re looking for a range of solutions.

She told us the overarching goal is to support ‘middleground’ solutions — denying the government is trying to encourage technologists to come up with ways to backdoor e2e encryption.

In recent years in the UK GCHQ has also floated the controversial idea of a so-called ‘ghost protocol’ — that would allow for state intelligence or law enforcement agencies to be invisibly CC’d by service providers into encrypted communications on a targeted basis. That proposal was met with widespread criticism, including from the tech industry, which warned it would undermine trust and security and threaten fundamental rights.

It’s not clear if the government has such an approach — albeit with a CSAM focus — in mind here now as it tries to encourage the development of ‘middleground’ technologies that are able to scan e2e encrypted content for specifically illegal stuff.

In another concerning development, earlier this summer, guidance put out by DCMS for messaging platforms recommended that they “prevent” the use of e2e encryption for child accounts altogether.

Asked about that, the Home Office spokeswoman told us the tech fund is “not too different” and “is trying to find the solution in between”.

“Working together and bringing academics and NGOs into the field so that we can find a solution that works for both what social media companies want to achieve and also make sure that we’re able to protect children,” she said, adding: “We need everybody to come together and look at what they can do.”

There is not much more clarity in the Home Office guidance to suppliers applying for the chance to bag a tranche of funding.

There it writes that proposals must “make innovative use of technology to enable more effective detection and/or prevention of sexually explicit images or videos of children”.

“Within scope are tools which can identify, block or report either new or previously known child sexual abuse material, based on AI, hash-based detection or other techniques,” it goes on, further noting that proposals need to address “the specific challenges posed by e2ee environments, considering the opportunities to respond at different levels of the technical stack (including client-side and server-side).”
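
The guidance’s mention of “hash-based detection” boils down to comparing a fingerprint of a file against a database of fingerprints of previously identified material. Here is a minimal, generic sketch of that idea in Python; the blocklist contents and directory name are placeholders, and this is not the Challenge’s or any vendor’s actual implementation (real systems typically use perceptual rather than plain cryptographic hashes so that re-encoded copies still match).

```python
# Generic sketch of hash-based detection: compare each file's SHA-256 digest
# against a blocklist of known hashes. Illustration only; the blocklist entry
# and directory below are placeholders, not a real child-safety hash database.
import hashlib
from pathlib import Path

KNOWN_BAD_HASHES = {
    "00" * 32,  # placeholder entry; real hashes would come from a child-safety organization
}

def sha256_of(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def scan_directory(directory: Path) -> list[Path]:
    """Return files whose hash appears in the blocklist."""
    return [p for p in directory.rglob("*")
            if p.is_file() and sha256_of(p) in KNOWN_BAD_HASHES]

if __name__ == "__main__":
    for match in scan_directory(Path("uploads")):
        print(f"match: {match}")  # a deployed system would block or report here
```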

General information about the Challenge — which is open to applicants based anywhere, not just in the UK — can be found on the Safety Tech Network website.

The deadline for applications is October 6.

Selected applicants will have five months, between November 2021 and March 2022, to deliver their projects.

When exactly any of the tech might be pushed at the commercial sector isn’t clear — but the government may be hoping that, by keeping up the pressure on the tech sector, platform giants will develop this stuff themselves, as Apple has been doing.

The Challenge is just the latest UK government initiative to bring platforms in line with its policy priorities — back in 2017, for example, it was pushing them to build tools to block terrorist content — and you could argue it’s a form of progress that ministers are not simply calling for e2e encryption to be outlawed, as they frequently have in the past.

That said, talk of ‘preventing’ the use of e2e encryption — or even fuzzy suggestions of “in between” solutions — may not end up being so very different.

What is different is the sustained focus on child safety as the political cudgel to make platforms comply. That seems to be getting results.

Wider government plans to regulate platforms — set out in a draft Online Safety bill, published earlier this year — have yet to go through parliamentary scrutiny. But in one already baked in change, the country’s data protection watchdog is now enforcing a children’s design code which stipulates that platforms need to prioritize kids’ privacy by default, among other recommended standards.

The Age Appropriate Design Code was appended to the UK’s data protection bill as an amendment — meaning it sits under wider legislation that transposed Europe’s General Data Protection Regulation (GDPR) into law, which brought in supersized penalties for violations like data breaches. And in recent months a number of social media giants have announced changes to how they handle children’s accounts and data — which the ICO has credited to the code.

So the government may be feeling confident that it has finally found a blueprint for bringing tech giants to heel.

#apple, #csam, #csam-detection, #e2e-encryption, #encrypted-communications, #encryption, #end-to-end-encryption, #europe, #facebook, #g7, #general-data-protection-regulation, #home-office, #law-enforcement, #policy, #privacy, #social-media, #tc, #uk-government, #united-kingdom, #whatsapp

FTC bans spyware maker SpyFone, and orders it to notify hacked victims

The Federal Trade Commission has unanimously voted to ban the spyware maker SpyFone and its chief executive Scott Zuckerman from the surveillance industry, the first order of its kind, after the agency accused the company of harvesting mobile data on thousands of people and leaving it on the open internet.

The agency said SpyFone “secretly harvested and shared data on people’s physical movements, phone use, and online activities through a hidden device hack,” allowing the spyware purchaser to “see the device’s live location and view the device user’s emails and video chats.”

SpyFone is one of many so-called “stalkerware” apps that are marketed under the guise of parental control but are often used by spouses to spy on their partners. The spyware works by being surreptitiously installed on someone’s phone, often without their permission, to steal their messages, photos, web browsing history, and real-time location data. The FTC also charged that the spyware maker exposed victims to additional security risks because the spyware runs at the “root” level of the phone, which allows the spyware to access off-limits parts of the device’s operating system. A premium version of the app included a keylogger and “live screen viewing,” the FTC says.

But the FTC said that SpyFone’s “lack of basic security” exposed those victims’ data, because of an unsecured Amazon cloud storage server that was spilling the data its spyware was collecting from more than 2,000 victims’ phones. SpyFone said it partnered with a cybersecurity firm and law enforcement to investigate, but the FTC says it never did.

Practically, the ban means SpyFone and its CEO Zuckerman are banned from “offering, promoting, selling, or advertising any surveillance app, service, or business,” making it harder for the company to operate. But FTC Commissioner Rohit Chopra said in a separate statement that stalkerware makers should also face criminal sanctions under U.S. computer hacking and wiretap laws.

The FTC has also ordered the company to delete all the data it “illegally” collected, and, also for the first time, notify victims that the app had been secretly installed on their devices.

In a statement, the FTC’s consumer protection chief Samuel Levine said: “This case is an important reminder that surveillance-based businesses pose a significant threat to our safety and security.”

The EFF, which launched the Coalition Against Stalkerware two years ago, a coalition of companies that detects, combats and raises awareness of stalkerware, praised the FTC’s order. “With the FTC now turning its focus to this industry, victims of stalkerware can begin to find solace in the fact that regulators are beginning to take their concerns seriously,” said EFF’s Eva Galperin and Bill Budington in a blog post.

This is the FTC’s second order against a stalkerware maker. In 2019, the FTC settled with Retina-X after the company was hacked several times and eventually shut down.

Over the years, several other stalkerware makers were either hacked or inadvertently exposed their own systems, including mSpy, Mobistealth, and Flexispy. Another stalkerware maker, ClevGuard, left thousands of hacked victims’ phone data on an exposed cloud server.

If you or someone you know needs help, the National Domestic Violence Hotline (1-800-799-7233) provides 24/7 free, confidential support to victims of domestic abuse and violence. If you are in an emergency situation, call 911.

Did you receive a notification and want to tell your story? You can contact this reporter on Signal and WhatsApp at +1 646-755-8849 or zack.whittaker@techcrunch.com by email.

#cybercrime, #espionage, #law-enforcement, #mobile-applications, #privacy, #security, #stalkerware, #stalking

Google says geofence warrants make up one-quarter of all US demands

For the first time, Google has published the number of geofence warrants it’s historically received from U.S. authorities, providing a rare glimpse into how frequently these controversial warrants are issued.

The figures, published Thursday, reveal that Google has received thousands of geofence warrants each quarter since 2018, and that at times they have accounted for about one-quarter of all U.S. warrants Google receives. The data shows that the vast majority of geofence warrants are obtained by local and state authorities, with federal law enforcement accounting for just 4% of all geofence warrants served on the technology giant.

According to the data, Google received 982 geofence warrants in 2018, 8,396 in 2019, and 11,554 in 2020. But the figures provide only a small glimpse into the volume of warrants received, and do not break down how often Google pushes back on overly broad requests. A spokesperson for Google would not comment on the record.

Albert Fox Cahn, executive director of the Surveillance Technology Oversight Project (STOP), which led efforts by dozens of civil rights groups to lobby for the release of these numbers, commended Google for releasing the numbers.

“Geofence warrants are unconstitutionally broad and invasive, and we look forward to the day they are outlawed completely,” said Cahn.

Geofence warrants are also known as “reverse-location” warrants, since they seek to identify people of interest who were in the near-vicinity at the time a crime was committed. Police do this by asking a court to order Google, which stores vast amounts of location data to drive its advertising business, to turn over details of who was in a geographic area, such as a radius of a few hundred feet at a certain point in time, to help identify potential suspects.
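
To make the mechanics concrete, here is a small sketch of the kind of query a reverse-location demand implies: filter stored location records to those within a given radius of a point during a time window, and return the associated device identifiers. The data model is invented for illustration; it is not Google’s actual system or schema.

```python
# Illustrative sketch of a reverse-location ("geofence") query: find device IDs
# seen within radius_m meters of a point during a time window. The record
# schema is hypothetical, not Google's actual data model.
from dataclasses import dataclass
from datetime import datetime
from math import asin, cos, radians, sin, sqrt

@dataclass
class LocationRecord:
    device_id: str
    lat: float
    lon: float
    timestamp: datetime

def haversine_m(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance in meters between two latitude/longitude points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 6_371_000 * 2 * asin(sqrt(a))

def devices_in_geofence(records, center_lat, center_lon, radius_m, start, end):
    """Device IDs with at least one record inside the fence during [start, end]."""
    return {
        r.device_id
        for r in records
        if start <= r.timestamp <= end
        and haversine_m(r.lat, r.lon, center_lat, center_lon) <= radius_m
    }
```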

Google has long shied away from providing these figures, in part because geofence warrants are largely thought to be unique to Google. Law enforcement has long known that Google stores vast troves of location data on its users in a database called Sensorvault, first revealed by The New York Times in 2019.

Sensorvault is said to have the detailed location data on “at least hundreds of millions of devices worldwide,” collected from users’ phones when they use an Android device with location data switched on, or Google services like Google Maps and Google Photos, and even Google search results. In 2018, the Associated Press reported that Google could still collect users’ locations even when their location history is “paused.”

But critics have argued that geofence warrants are unconstitutional because the authorities compel Google to turn over data on everyone else who was in the same geographic area.

Worse, these warrants have been known to ensnare entirely innocent people.

TechCrunch reported earlier this year that Minneapolis police used a geofence warrant to identify individuals accused of sparking violence in the wake of the police killing of George Floyd last year. One person on the ground who was filming and documenting the protests had his location data requested by police simply because he was close to the violence. NBC News reported last year on a Gainesville, Fla. resident whose information was handed to police by Google during a burglary investigation; he was able to prove his innocence thanks to an app on his phone that tracked his fitness activity.

Although the courts have yet to deliberate widely on the legality of geofence warrants, some states are drafting laws to push back against them. New York lawmakers proposed a bill last year that would ban geofence warrants in the state, amid fears that police could use these warrants to target protesters — as happened in Minneapolis.

Cahn, who helped introduce the New York bill last year, said the newly released data will “help spur lawmakers to outlaw the technology.”

“Let’s be clear, the number of geofence warrants should be zero,” he said.

#android, #articles, #computing, #databases, #florida, #george-floyd, #google, #google-maps, #law-enforcement, #minneapolis, #new-york, #privacy, #security, #spokesperson, #technology, #the-new-york-times, #united-states, #warrant

Apple’s CSAM detection tech is under fire — again

Apple has encountered monumental backlash to a new child sexual abuse material (CSAM) detection technology it announced earlier this month. The system, which Apple calls NeuralHash, has yet to be activated for its billion-plus users, but the technology is already facing heat from security researchers who say the algorithm is producing flawed results.

NeuralHash is designed to identify known CSAM on a user’s device without having to possess the image or know its contents. Because a user’s photos stored in iCloud are end-to-end encrypted so that even Apple can’t access the data, NeuralHash instead scans for known CSAM on a user’s device, which Apple claims is more privacy-friendly, since it limits the scanning to photos rather than all of a user’s files, as other companies’ cloud scanning does.

Apple does this by looking for images on a user’s device whose hashes — strings of letters and numbers that can uniquely identify an image — match hashes provided by child protection organizations like NCMEC. If NeuralHash finds 30 or more matching hashes, the images are flagged to Apple for a manual review before the account owner is reported to law enforcement. Apple says the chance of an account being falsely flagged is about one in one trillion.
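
For intuition only, here is a sketch of that match-and-threshold logic using the open-source imagehash library’s perceptual hash as a stand-in. Apple’s NeuralHash is a different, neural-network-based hash, and its real pipeline blinds the hash database and performs the matching cryptographically; the known-hash value, distance cutoff, and photo directory below are all placeholders.

```python
# Sketch of hash-match-plus-threshold logic, using a generic perceptual hash
# (imagehash.phash) as a stand-in for Apple's NeuralHash. Illustration only;
# the known hash, distance cutoff, and directory are placeholder values.
from pathlib import Path
from PIL import Image
import imagehash  # pip install imagehash pillow

KNOWN_HASHES = {imagehash.hex_to_hash("d1d1d1d1d1d1d1d1")}  # placeholder entry

MATCH_THRESHOLD = 30      # only flag once this many matches accumulate
MAX_HAMMING_DISTANCE = 4  # perceptual hashes match approximately, not exactly

def count_matches(photo_dir: Path) -> int:
    """Count photos whose perceptual hash is close to any known hash."""
    matches = 0
    for path in photo_dir.glob("*.jpg"):
        h = imagehash.phash(Image.open(path))
        if any(h - known <= MAX_HAMMING_DISTANCE for known in KNOWN_HASHES):
            matches += 1
    return matches

if __name__ == "__main__":
    if count_matches(Path("photos")) >= MATCH_THRESHOLD:
        print("threshold exceeded; a real system would now trigger manual review")
```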

But security experts and privacy advocates have expressed concern that the system could be abused by highly-resourced actors, like governments, to implicate innocent victims or to manipulate the system to detect other materials that authoritarian nation states find objectionable. NCMEC called critics the “screeching voices of the minority,” according to a leaked memo distributed internally to Apple staff.

Last night, Asuhariet Ygvar reverse-engineered Apple’s NeuralHash into a Python script and published the code to GitHub, allowing anyone to test the technology regardless of whether they have an Apple device. In a Reddit post, Ygvar said NeuralHash “already exists” in iOS 14.3 as obfuscated code, and that he was able to reconstruct the technology to help other security researchers understand the algorithm better before it’s rolled out to iOS and macOS devices later this year.

It didn’t take long before others tinkered with the published code and soon came the first reported case of a “hash collision,” which in NeuralHash’s case is where two entirely different images produce the same hash. Cory Cornelius, a well-known research scientist at Intel Labs, discovered the hash collision. Ygvar confirmed the collision a short time later.

Hash collisions can be a death knell to systems that rely on cryptography to keep them secure, such as encryption. Over the years, several well-known cryptographic hash functions, like MD5 and SHA-1, were retired after collision attacks rendered them ineffective.

Kenneth White, a cryptography expert and founder of the Open Crypto Audit Project, said in a tweet: “I think some people aren’t grasping that the time between the iOS NeuralHash code being found and [the] first collision was not months or days, but a couple of hours.”

When reached, an Apple spokesperson declined to comment on the record. But in a background call where reporters were not allowed to quote executives directly or by name, Apple downplayed the hash collision and argued that the protections it puts in place — such as a manual review of photos before they are reported to law enforcement — are designed to prevent abuses. Apple also said that the version of NeuralHash that was reverse-engineered is a generic version, and not the complete version that will roll out later this year.

It’s not just civil liberties groups and security experts that are expressing concern about the technology. A senior lawmaker in the German parliament sent a letter to Apple chief executive Tim Cook this week saying that the company is walking down a “dangerous path” and urged Apple not to implement the system.

#algorithms, #apple, #apple-inc, #cryptography, #encryption, #github, #hash, #icloud, #law-enforcement, #password, #privacy, #python, #security, #sha-1, #spokesperson, #tim-cook

Interview: Apple’s Head of Privacy details child abuse detection and Messages safety features

Last week, Apple announced a series of new features targeted at child safety on its devices. Though not live yet, the features will arrive later this year for users. Though the goals of these features are universally accepted to be good ones — the protection of minors and limiting the spread of Child Sexual Abuse Material (CSAM) — there have been some questions about the methods Apple is using.

I spoke to Erik Neuenschwander, Head of Privacy at Apple, about the new features launching for its devices. He shared detailed answers to many of the concerns that people have about the features and talked at length about some of the tactical and strategic issues that could come up once this system rolls out.

I also asked about the rollout of the features, which come closely intertwined but are really completely separate systems that have similar goals. To be specific, Apple is announcing three different things here, some of which are being confused with one another in coverage and in the minds of the public. 

CSAM detection in iCloud Photos – A detection system called NeuralHash creates identifiers it can compare with IDs from the National Center for Missing and Exploited Children and other entities to detect known CSAM content in iCloud Photo libraries. Most cloud providers already scan user libraries for this information — Apple’s system is different in that it does the matching on device rather than in the cloud.

Communication Safety in Messages – A feature that a parent opts to turn on for a minor on their iCloud Family account. It will alert children when an image they are going to view has been detected to be explicit and it tells them that it will also alert the parent.

Interventions in Siri and search – A feature that will intervene when a user tries to search for CSAM-related terms through Siri and search and will inform the user of the intervention and offer resources.

For more on all of these features you can read our articles linked above or Apple’s new FAQ that it posted this weekend.

From personal experience, I know that there are people who don’t understand the difference between those first two systems, or who assume there is some possibility they may come under scrutiny for innocent pictures of their own children that trigger some filter. It’s led to confusion in what is already a complex rollout of announcements. These two systems are completely separate, of course, with CSAM detection looking for precise matches with content that is already known to organizations to be abuse imagery. Communication Safety in Messages takes place entirely on the device and reports nothing externally — it’s just there to flag to a child that they are viewing, or may be about to view, explicit images. This feature is opt-in by the parent, and it is transparent to both parent and child that it is enabled.

Apple’s Communication Safety in Messages feature. Image Credits: Apple

There have also been questions about the on-device hashing of photos to create identifiers that can be compared with the database. Though NeuralHash is a technology that can be used for other kinds of features like faster search in photos, it’s not currently used for anything else on iPhone aside from CSAM detection. When iCloud Photos is disabled, the feature stops working completely. This offers an opt-out for people but at an admittedly steep cost given the convenience and integration of iCloud Photos with Apple’s operating systems.

Though this interview won’t answer every possible question related to these new features, this is the most extensive on-the-record discussion by Apple’s senior privacy member. It seems clear from Apple’s willingness to provide access and its ongoing FAQs and press briefings (there have been at least three so far, and likely more to come) that it feels it has a good solution here.

Despite the concerns and resistance, it seems as if it is willing to take as much time as is necessary to convince everyone of that. 

This interview has been lightly edited for clarity.

TC: Most other cloud providers have been scanning for CSAM for some time now. Apple has not. Obviously there are no current regulations that say that you must seek it out on your servers, but there is some roiling regulation in the EU and other countries. Is that the impetus for this? Basically, why now?

Erik Neuenschwander: Why now comes down to the fact that we’ve now got the technology that can balance strong child safety and user privacy. This is an area we’ve been looking at for some time, including current state-of-the-art techniques, which mostly involve scanning through the entire contents of users’ libraries on cloud services — which, as you point out, isn’t something that we’ve ever done; to look through users’ iCloud Photos. This system doesn’t change that either; it neither looks through data on the device, nor does it look through all photos in iCloud Photos. Instead what it does is give us a new ability to identify accounts which are starting collections of known CSAM.

So the development of this new CSAM detection technology is the watershed that makes now the time to launch this. And Apple feels that it can do it in a way that it feels comfortable with and that is ‘good’ for your users?

That’s exactly right. We have two co-equal goals here. One is to improve child safety on the platform and the second is to preserve user privacy. And what we’ve been able to do across all three of the features is bring together technologies that let us deliver on both of those goals.

Announcing the Communications safety in Messages features and the CSAM detection in iCloud Photos system at the same time seems to have created confusion about their capabilities and goals. Was it a good idea to announce them concurrently? And why were they announced concurrently, if they are separate systems?

Well, while they are [two] systems, they are also of a piece along with our increased interventions that will be coming in Siri and search. As important as it is to identify collections of known CSAM where they are stored in Apple’s iCloud Photos service, it’s also important to try to get upstream of that already horrible situation. So CSAM detection means that there’s already known CSAM that has been through the reporting process, and is being shared widely, re-victimizing children on top of the abuse that had to happen to create that material in the first place. And so to do that, I think, is an important step, but it is also important to do things to intervene earlier on, when people are beginning to enter into this problematic and harmful area, or if there are already abusers trying to groom or to bring children into situations where abuse can take place, and Communication Safety in Messages and our interventions in Siri and search actually strike at those parts of the process. So we’re really trying to disrupt the cycles that lead to CSAM that then ultimately might get detected by our system.

The process of Apple’s CSAM detection in iCloud Photos system. Image Credits: Apple

Governments and agencies worldwide are constantly pressuring all large organizations that have any sort of end-to-end or even partial encryption enabled for their users. They often lean on CSAM and possible terrorism activities as rationale to argue for backdoors or encryption defeat measures. Is launching the feature and this capability with on-device hash matching an effort to stave off those requests and say, look, we can provide you with the information that you require to track down and prevent CSAM activity — but without compromising a user’s privacy?

So, first, you talked about the device matching so I just want to underscore that the system as designed doesn’t reveal — in the way that people might traditionally think of a match — the result of the match to the device or, even if you consider the vouchers that the device creates, to Apple. Apple is unable to process individual vouchers; instead, all the properties of our system mean that it’s only once an account has accumulated a collection of vouchers associated with illegal, known CSAM images that we are able to learn anything about the user’s account. 

Now, why to do it is because, as you said, this is something that will provide that detection capability while preserving user privacy. We’re motivated by the need to do more for child safety across the digital ecosystem, and all three of our features, I think, take very positive steps in that direction. At the same time we’re going to leave privacy undisturbed for everyone not engaged in the illegal activity.

Does this, creating a framework to allow scanning and matching of on-device content, create a framework for outside law enforcement to counter with, ‘we can give you a list, we don’t want to look at all of the user’s data but we can give you a list of content that we’d like you to match’. And if you can match it with this content you can match it with other content we want to search for. How does it not undermine Apple’s current position of ‘hey, we can’t decrypt the user’s device, it’s encrypted, we don’t hold the key?’

It doesn’t change that one iota. The device is still encrypted, we still don’t hold the key, and the system is designed to function on on-device data. What we’ve designed has a device side component — and it has the device side component by the way, for privacy improvements. The alternative of just processing by going through and trying to evaluate users data on a server is actually more amenable to changes [without user knowledge], and less protective of user privacy.

Our system involves both an on-device component where the voucher is created, but nothing is learned, and a server-side component, which is where that voucher is sent along with data coming to Apple service and processed across the account to learn if there are collections of illegal CSAM. That means that it is a service feature. I understand that it’s a complex attribute that a feature of the service has a portion where the voucher is generated on the device, but again, nothing’s learned about the content on the device. The voucher generation is actually exactly what enables us not to have to begin processing all users’ content on our servers which we’ve never done for iCloud Photos. It’s those sorts of systems that I think are more troubling when it comes to the privacy properties — or how they could be changed without any user insight or knowledge to do things other than what they were designed to do.

One of the bigger queries about this system is that Apple has said that it will just refuse action if it is asked by a government or other agency to compromise by adding things that are not CSAM to the database to check for them on-device. There are some examples where Apple has had to comply with local law at the highest levels if it wants to operate there, China being an example. So how do we trust that Apple is going to hew to this rejection of interference if pressured or asked by a government to compromise the system?

Well first, that is launching only for US iCloud accounts, and so the hypotheticals seem to bring up generic countries or other countries that aren’t the US when they speak in that way, and therefore it seems to be the case that people agree US law doesn’t offer these kinds of capabilities to our government.

But even in the case where we’re talking about some attempt to change the system, it has a number of protections built in that make it not very useful for trying to identify individuals holding specifically objectionable images. The hash list is built into the operating system; we have one global operating system and don’t have the ability to target updates to individual users, and so hash lists will be shared by all users when the system is enabled. And secondly, the system requires the threshold of images to be exceeded, so trying to seek out even a single image from a person’s device or set of people’s devices won’t work, because the system simply does not provide any knowledge to Apple for single photos stored in our service. And then, thirdly, the system has built into it a stage of manual review where, if an account is flagged with a collection of illegal CSAM material, an Apple team will review that to make sure that it is a correct match of illegal CSAM material prior to making any referral to any external entity. And so the hypothetical requires jumping over a lot of hoops, including having Apple change its internal process to refer material that is not illegal, like known CSAM is, and we don’t believe that there’s a basis on which people will be able to make that request in the US. And the last point that I would just add is that it does still preserve user choice: if a user does not like this kind of functionality, they can choose not to use iCloud Photos, and if iCloud Photos is not enabled, no part of the system is functional.

So if iCloud Photos is disabled, the system does not work, which is the public language in the FAQ. I just wanted to ask specifically, when you disable iCloud Photos, does this system continue to create hashes of your photos on device, or is it completely inactive at that point?

If users are not using iCloud Photos, NeuralHash will not run and will not generate any vouchers. CSAM detection is a neural hash being compared against a database of the known CSAM hashes that are part of the operating system image. None of that piece, nor any of the additional parts including the creation of the safety vouchers or the uploading of vouchers to iCloud Photos is functioning if you’re not using iCloud Photos. 

In recent years, Apple has often leaned into the fact that on-device processing preserves user privacy. And in nearly every previous case I can think of, that’s true. Scanning photos to identify their content and allow me to search them, for instance: I’d rather that be done locally and never sent to a server. However, in this case, it seems like there may actually be a sort of anti-effect in that you’re scanning locally, but for external use cases, rather than scanning for personal use — creating a ‘less trust’ scenario in the minds of some users. Add to this that every other cloud provider scans it on their servers, and the question becomes: why should this implementation, being different from most others, engender more trust in the user rather than less?

I think we’re raising the bar, compared to the industry-standard way to do this. Any sort of server-side algorithm that’s processing all users’ photos is putting that data at more risk of disclosure and is, by definition, less transparent in terms of what it’s doing on top of the user’s library. So, by building this into our operating system, we gain the same properties that the integrity of the operating system provides already across so many other features, the one global operating system that’s the same for all users who download it and install it, and so one property is that it is much more challenging even to target it to an individual user. On the server side that’s actually quite easy — trivial. Being able to have some of those properties, building it into the device, and ensuring it’s the same for all users with the feature enabled gives a strong privacy property.

Secondly, you point out how use of on-device technology is privacy-preserving, and in this case, that’s a representation that I would make to you again: that it’s really the alternative, where users’ libraries have to be processed on a server, that is less private.

The thing that we can say with this system is that it leaves privacy completely undisturbed for every other user who’s not engaged in this illegal behavior; Apple gains no additional knowledge about any user’s cloud library. No user’s iCloud library has to be processed as a result of this feature. Instead what we’re able to do is to create these cryptographic safety vouchers. They have mathematical properties that say Apple will only be able to decrypt the contents or learn anything about the images and users specifically that collect photos that match illegal, known CSAM hashes, and that’s just not something anyone can say about a cloud processing scanning service, where every single image has to be processed in a clear decrypted form and run by a routine to determine who knows what. At that point it’s very easy to determine anything you want [about a user’s images], versus our system, where the only thing determined is whether images match a set of known CSAM hashes that came directly from NCMEC and other child safety organizations.

Can this CSAM detection feature stay holistic when the device is physically compromised? Sometimes cryptography gets bypassed locally, somebody has the device in hand — are there any additional layers there?

I think it’s important to underscore how very challenging and expensive and rare this is. It’s not a practical concern for most users, though it’s one we take very seriously, because the protection of data on the device is paramount for us. And so if we engage in the hypothetical where we say that there has been an attack on someone’s device: that is such a powerful attack that there are many things that attacker could attempt to do to that user. There’s a lot of a user’s data that they could potentially get access to. And the idea that the most valuable thing an attacker — who’s undergone such an extremely difficult action as breaching someone’s device — would want to do is trigger a manual review of an account doesn’t make much sense.

Because, let’s remember, even if the threshold is met and we have some vouchers that are decrypted by Apple, the next stage is a manual review to determine if that account should be referred to NCMEC or not, and that is something that we want to only occur in cases where it’s a legitimate high-value report. We’ve designed the system in that way, but if we consider the attack scenario you brought up, I think that’s not a very compelling outcome to an attacker.

Why is there a threshold of images for reporting, isn’t one piece of CSAM content too many?

We want to ensure that the reports that we make to NCMEC are high-value and actionable, and one of the notions of all systems is that there’s some uncertainty built in to whether or not that image matched. And so the threshold allows us to reach the point where we expect a false reporting rate for review of one in 1 trillion accounts per year. So, in keeping with the idea that we do not have any interest in looking through users’ photo libraries outside those that are holding collections of known CSAM, the threshold allows us to have high confidence that those accounts that we review are ones that, when we refer to NCMEC, law enforcement will be able to take up and effectively investigate, prosecute and convict.
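
To see how a match threshold can drive the account-level false-report rate down, here is a back-of-the-envelope calculation. The per-image false-match probability used below is an assumed placeholder (Apple has not published that figure), so the numbers only illustrate the shape of the argument, not Apple’s actual analysis.

```python
# Back-of-the-envelope: probability that an account with n innocent images
# accumulates at least `threshold` false matches, assuming each image has an
# independent false-match probability p. The value of p is an assumed
# placeholder, not a figure published by Apple.
from math import exp, lgamma, log

def log_binom_pmf(k: int, n: int, p: float) -> float:
    """log of the binomial probability of exactly k successes in n trials."""
    return (lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
            + k * log(p) + (n - k) * log(1 - p))

def false_report_prob(n: int, p: float, threshold: int, terms: int = 200) -> float:
    """P(at least `threshold` false matches); tail terms beyond `terms` are negligible."""
    upper = min(threshold + terms, n)
    return sum(exp(log_binom_pmf(k, n, p)) for k in range(threshold, upper + 1))

# With an assumed per-image false-match rate of 1e-6 and a 10,000-photo library,
# requiring 30 matches pushes the account-level rate far below one in a trillion.
print(false_report_prob(n=10_000, p=1e-6, threshold=30))
```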

#apple, #apple-inc, #apple-photos, #china, #cloud-applications, #cloud-computing, #cloud-services, #computing, #cryptography, #encryption, #european-union, #head, #icloud, #ios, #iphone, #law-enforcement, #operating-system, #operating-systems, #privacy, #private, #siri, #software, #united-states, #webmail

This Week in Apps: In-app events hit the App Store, TikTok tries Stories, Apple reveals new child safety plan

Welcome back to This Week in Apps, the weekly TechCrunch series that recaps the latest in mobile OS news, mobile applications and the overall app economy.

The app industry continues to grow, with a record 218 billion downloads and $143 billion in global consumer spend in 2020. Consumers last year also spent 3.5 trillion minutes using apps on Android devices alone. And in the U.S., app usage surged ahead of the time spent watching live TV. Currently, the average American watches 3.7 hours of live TV per day, but now spends four hours per day on their mobile devices.

Apps aren’t just a way to pass idle hours — they’re also a big business. In 2019, mobile-first companies had a combined $544 billion valuation, 6.5x higher than those without a mobile focus. In 2020, investors poured $73 billion in capital into mobile companies — a figure that’s up 27% year-over-year.

This Week in Apps offers a way to keep up with this fast-moving industry in one place, with the latest from the world of apps, including news, updates, startup fundings, mergers and acquisitions, and suggestions about new apps and games to try, too.

Do you want This Week in Apps in your inbox every Saturday? Sign up here: techcrunch.com/newsletters

Top Stories

Apple to scan for CSAM imagery

Apple announced a major initiative to scan devices for CSAM imagery. The company on Thursday announced a new set of features, arriving later this year, that will detect child sexual abuse material (CSAM) in its cloud and report it to law enforcement. Companies like Dropbox, Google and Microsoft already scan for CSAM in their cloud services, but Apple had allowed users to encrypt their data before it reached iCloud. Now, Apple’s new technology, NeuralHash, will run on users’ devices to detect when a user uploads known CSAM imagery — without having to first decrypt the images. It can even detect the imagery if it’s been cropped or edited in an attempt to avoid detection.

Meanwhile, on iPhone and iPad, the company will roll out protections to Messages app users that will filter images and alert children and parents if sexually explicit photos are sent to or from a child’s account. Children will not be shown the images, but will instead see a grayed-out image. If they try to view the image anyway through the link, they’ll be shown interruptive screens that explain why the material may be harmful and will be warned that their parents will be notified.

Some privacy advocates pushed back at the idea of such a system, believing it could expand to end-to-end encrypted photos, lead to false positives, or set the stage for more on-device government surveillance in the future. But many cryptology experts believe the system Apple developed provides a good balance between privacy and utility, and have offered their endorsement of the technology. In addition, Apple said reports are manually reviewed before being sent to the National Center for Missing and Exploited Children (NCMEC).

The changes may also benefit iOS developers who deal in user photos and uploads, as predators will no longer store CSAM imagery on iOS devices in the first place, given the new risk of detection.

In-App Events appear on the App Store

Image Credits: Apple

Though not yet publicly available to all users, those testing the new iOS 15 mobile operating system got their first glimpse of a new App Store discovery feature this week: “in-app events.” First announced at this year’s WWDC, the feature will allow developers and Apple editors alike to showcase directly on the App Store upcoming events taking place inside apps.

The events can appear on the App Store homepage, on the app’s product pages or can be discovered through personalized recommendations and search. In some cases, editors will curate events to feature on the App Store. But developers will also be provided tools to submit their own in-app events. TikTok’s “Summer Camp” for creators was one of the first in-app events to be featured, where it received a top spot on the iPadOS 15 App Store.

Weekly News

Platforms: Apple

Apple expands support for student IDs on iPhone and Apple Watch ahead of the fall semester. Tens of thousands more U.S. and Canadian college students will now be able to use mobile student IDs in the Apple Wallet app, at schools including Auburn University, Northern Arizona University, University of Maine, New Mexico State University and others.

Apple was accused of promoting scam apps in the App Store’s featured section. The company’s failure to properly police its store is one thing, but to curate an editorial list that actually includes the scams is quite another. One of the games rounded up under “Slime Relaxations,” an already iffy category to say the least, was a subscription-based slime simulator that locked users into a $13 AUD per week subscription. One of the apps on the curated list didn’t even function, implying that Apple’s editors hadn’t tested the apps they recommend.

Tax changes hit the App Store. Apple announced tax and price changes for apps and IAPs in South Africa, the U.K. and all territories using the Euro currency, all of which will see decreases. Increases will occur in Georgia and Tajikistan, due to new tax changes. Proceeds on the App Store in Italy will be increased to reflect a change to the Digital Services Tax effective rate.

Game Center changes, too. Apple said that on August 4, a new certificate for server-based Game Center verification will be available via the publicKeyUrl.

Fintech

Robinhood stock jumped more than 24% to $46.80 on Tuesday, after falling 8% on its first day of trading last week and continuing to trade below its opening price of $38 since then.

Square’s Cash app nearly doubled its gross profit to $546 million in Q2, but also reported a $45 million impairment loss on its bitcoin holdings.

Coinbase’s app now lets you buy your cryptocurrency using Apple Pay. The company previously made its Coinbase Card compatible with Apple Pay in June.

Social

An anonymous app called Sendit, which relies on Snap Kit to function, is climbing the charts of the U.S. App Store after Snap suspended similar apps, YOLO and LMK. Snap was sued by the parent of a child who was bullied through those apps, which led to his suicide. Sendit also allows for anonymity, and reviews compare it to YOLO. But some reviews also complained about bullying. This isn’t the first time Snap has been involved in a lawsuit connected to a young person’s death related to its app. The company was also sued over its irresponsible “speed filter,” which critics said encouraged unsafe driving. Three young men died using the filter, which captured them doing 123 mph.

TikTok is testing Stories. As Twitter’s own Stories integration, Fleets, shuts down, TikTok confirmed it’s testing its own Stories product. The TikTok Stories appear in a left-hand sidebar and allow users to post ephemeral images or videos that disappear in 24 hours. Users can also comment on Stories, which are visible to the creator and their mutual friends. Stories on TikTok may make more sense than they did on Twitter, as TikTok is already known as a creative platform, and it gives the app a more familiar place to integrate its effects toolset and, eventually, advertisements.

Facebook has again re-arranged its privacy settings. The company continually moves around where its privacy features are located, ostensibly to make them easier to find. But users then have to re-learn where to go to find the tools they need, after they had finally memorized the location. This time, the settings have been grouped into six top-level categories, but “privacy” settings have been unbundled from one location to be scattered among the other categories.

A VICE report details ban-as-a-service operations that allow anyone to harass or censor online creators on Instagram. Assuming you can find it, one operation charged $60 per ban, the listing says.

TikTok merged personal accounts with creator accounts. The change means now all non-business accounts on TikTok will have access to the creator tools under Settings, including Analytics, Creator Portal, Promote and Q&A. TikTok shared the news directly with subscribers of its TikTok Creators newsletter in August, and all users will get a push notification alerting them to the change, the company told us.

Discord now lets users customize their profile on its apps. The company added new features to its iOS and Android apps that let you add a description, links and emojis and select a profile color. Paid subscribers can also choose an image or GIF as their banner.

Twitter Spaces added a co-hosting option that allows up to two co-hosts to be added to the live audio chat rooms. Now Spaces can have one main host, two co-hosts and up to 10 speakers. Co-hosts have all the same moderation abilities as hosts, but can’t add or remove others as co-hosts.

Messaging

Tencent reopened new user sign-ups for its WeChat messaging app, after having suspended registrations last week for unspecified “technical upgrades.” The company, like many other Chinese tech giants, had to address new regulations from Beijing impacting the tech industry. New rules address how companies handle user data collection and storage, antitrust behavior and other checks on capitalist “excess.” The gaming industry is now worried it’s next to be impacted, with regulations that would restrict gaming for minors to fight addiction.

WhatsApp is adding a new feature that will allow users to send photos and videos that disappear after a single viewing. The Snapchat-inspired feature, however, doesn’t alert you if the other person takes a screenshot — as Snap’s app does. So it may not be ideal for sharing your most sensitive content.

Telegram’s update expands group video calls to support up to 1,000 viewers. It also announced video messages can be recorded in higher quality and can be expanded, regular videos can be watched at 0.5 or 2x speed, screen sharing with sound is available for all video calls, including 1-on-1 calls, and more.

Streaming & Entertainment

American Airlines added free access to TikTok aboard its Viasat-equipped aircraft. Passengers will be able to watch the app’s videos for up to 30 minutes for free and can even download the app if it’s not already installed. After the free time, they can opt to pay for Wi-Fi to keep watching. Considering how easy it is to fall into multi-hour TikTok viewing sessions without knowing it, the addition of the addictive app could make long plane rides feel shorter. Or at least less painful.

Chinese TikTok rival Kuaishou saw stocks fall by more than 15% in Hong Kong, the most since its February IPO. The company is another victim of an ongoing market selloff triggered by increasing investor uncertainty related to China’s recent crackdown on tech companies. Beijing’s campaign to rein in tech has also impacted Tencent, Alibaba, Jack Ma’s Ant Group, food delivery company Meituan and ride-hailing company Didi. Also related, Kuaishou shut down its controversial app Zynn, which had been paying users to watch its short-form videos, including those stolen from other apps.

Twitch overtook YouTube in consumer spending per user in April 2021, and now sees $6.20 per download as of June compared with YouTube’s $5.60, Sensor Tower found.

Image Credits: Sensor Tower

Spotify confirmed tests of a new ad-supported tier called Spotify Plus, which is only $0.99 per month and offers unlimited skips (like free users get on the desktop) and the ability to play the songs you want, instead of only being forced to use shuffle mode.

The company also noted in a forum posting that it’s no longer working on AirPlay2 support, due to “audio driver compatibility” issues.

Mark Cuban-backed audio app Fireside asked its users to invest in the company via an email sent to creators which didn’t share deal terms. The app has yet to launch.

YouTube kicks off its $100 million Shorts Fund aimed at taking on TikTok by providing creators with cash incentives for top videos. Creators will get bonuses of $100 to $10,000 based on their videos’ performance.

Dating

Match Group announced during its Q2 earnings that it plans to add audio and video chat, including group live video and other livestreaming technologies, to several of the company’s brands over the next 12 to 24 months. The developments will be powered by innovations from Hyperconnect, the social networking company that this year became Match’s biggest acquisition to date when it bought the Korean app maker for a sizable $1.73 billion. Since then, Match was spotted testing group live video on Tinder, but says that particular product is not launching in the near term. At least two brands will see Hyperconnect-powered integrations in 2021.

Photos

The Photo & Video category on U.S. app stores saw strong growth in the first half of the year, a Sensor Tower report found. Consumer spending among the top 100 apps grew 34% year-over-year to $457 million in Q2 2021, with the majority of the revenue (83%) generated on iOS.

Image Credits: Sensor Tower

Gaming

Epic Games revealed that Ariana Grande will host its in-app Rift Tour event, which runs August 6-8.

Pokémon GO influencers threatened to boycott the game after Niantic removed the COVID safety measures that had allowed people to more easily play while social distancing. Niantic’s move seemed ill-timed, given the Delta variant is causing a new wave of COVID cases globally.

Health & Fitness

Apple kicked out an app called Unjected from the App Store. The new social app billed itself as a community for the unvaccinated, allowing like-minded users to connect for dating and friendships. Apple said the app violated its policies for COVID-19 content.

Google Pay expanded support for vaccine cards. In Australia, Google’s payments app now allows users to add their COVID-19 digital certification to their device for easy access. The option is available through Google’s newly updated Passes API which lets government agencies distribute digital versions of vaccine cards.

COVID Tech Connect, a U.S. nonprofit initially dedicated to collecting devices like phones and tablets for COVID ICU patients, has now launched its own app. The app, TeleHome, is a device-agnostic, HIPAA-compliant way for patients to place a video call for free at a time when the Delta variant is again filling ICU wards, this time with the unvaccinated — a condition that sometimes overlaps with being low-income. Some among the working poor have been hesitant to get the shot because they can’t miss a day of work and are worried about side effects, which is why the Biden administration offered a tax credit to SMBs that offered paid time off to staff to get vaccinated and recover.

Popular journaling app Day One, which was recently acquired by WordPress.com owner Automattic, rolled out a new “Concealed Journals” feature that lets users hide content from others’ view. By tapping the eye icon, content can be easily concealed on a journal-by-journal basis, which can be useful for those who write in their journal in public places, like coffee shops or on public transportation.

Edtech

Recently IPO’d language learning app Duolingo is developing a math app for kids. The company says it’s still “very early” in the development process, but will announce more details at its annual conference, Duocon, later this month.

Educational publisher Pearson launched an app that offers U.S. students access to its 1,500 titles for a monthly subscription of $14.99. The Pearson+ mobile app (ack, another +) also offers the option of paying $9.99 per month for access to a single textbook for a minimum of four months.

News & Reading

Quora jumped into the subscription economy. Still not profitable from ads alone, the company announced two new products that allow its expert creators to monetize their content on its service. With Quora+ ($5/month or $50/year), subscribers can pay for any content that a creator paywalls. Creators can choose to enable an adaptive paywall that uses an algorithm to determine when to show it. Another product, Spaces, lets creators write paywalled publications on Quora, similar to Substack, but Quora takes only a 5% cut, versus Substack’s 10%.

Utilities

Google Maps on iOS added a new live location-sharing feature for iMessage users, allowing them to more easily share their ETA with friends and even show how much battery life they have left. The feature competes with iMessage’s built-in location sharing and supports sharing periods from one hour up to three days. The app also gained a dark mode.

Security & Privacy

Controversial crime app Citizen launched a $20 per month “Protect” service that includes live agent support (who can refer calls to 911 if need be). The agents can gather your precise location, alert your designated emergency contacts, help you navigate to a safe location and monitor the situation until you feel safe. The system of live agent support is similar to in-car or in-home security and safety systems, like those from ADT or OnStar, but works with users out in the real world. The controversial part, however, is the company behind the product: Citizen has been making headlines for launching private security fleets outside law enforcement, and recently offered a reward in a manhunt for an innocent person based on unsubstantiated tips.

Funding and M&A

Square announced its acquisition of “buy now, pay later” giant Afterpay in a $29 billion deal that values the Australian firm at more than 30% above the stock’s last closing price of AUS$96.66. Afterpay has served over 16 million customers and nearly 100,000 merchants globally to date, and the deal comes at a time when the BNPL space is heating up. Apple has also gotten into the market recently with an Affirm partnership in Canada.

Gaming giant Zynga acquired Chinese game developer StarLark, the team behind the mobile golf game Golf Rival, from Betta Games for $525 million in both cash and stock. Golf Rival is the second-largest mobile golf game behind Playdemic’s Golf Clash, and EA is in the process of buying that studio for $1.4 billion.

U.K.-based Humanity raised an additional $2.5 million for its app that claims to help slow down aging, bringing its total raised to date to $5 million. Backers include Calm’s co-founders, MyFitnessPal’s co-founder and others in the health space. The app works by benchmarking health advice against real-world data to help users put better health practices into action.

YELA, a Cameo-like app for the Middle East and South Asia, raised $2 million led by U.S. investors that include Tinder co-founder Justin Mateen and Sean Rad, general partner of RAD Fund. The app is focusing on signing celebrities in the regions it serves, where smartphone penetration is high and over 6% of the population is under 35.

London-based health and wellness app maker Palta raised a $100 million Series B led by VNV Global. The company’s products include Flo.Health, Simple Fasting, Zing Fitness Coach and others, which reach a combined 2.4 million active, paid subscribers. The funds will be used to create more mobile subscription products.

Emoji database and Wikipedia-like site Emojipedia was acquired by Zedge, the makers of a phone personalization app offering wallpapers, ringtones and more to 35 million MAUs. Deal terms weren’t disclosed. Emojipedia says the deal provides it with more stability and the opportunity for future growth. For Zedge, the deal provides… um, a popular web resource it thinks it can better monetize, we suspect.

Mental health app Revery raised $2 million led by Sequoia Capital India’s Surge program for its app that combines cognitive behavioral therapy for insomnia with mobile gaming concepts. The company will focus on other mental health issues in the future.

London-based fintech startup Kuda, which operates a mobile-first challenger bank in Nigeria, raised a $55 million Series B at a $500 million valuation. The inside round was co-led by Valar Ventures and Target Global.

Vietnamese payments provider VNLife raised $250 million in a round led by U.S.-based General Atlantic and Dragoneer Investment Group. PayPal Ventures and others also participated. The round values the business at over $1 billion.

Downloads

Mastodon for iPhone

Fans of decentralized social media efforts now have a new app. The nonprofit behind the open source decentralized social network Mastodon released an official iPhone app, aimed at making the network more accessible to newcomers. The app allows you to find and follow people and topics; post text, images, GIFs, polls, and videos; and get notified of new replies and reblogs, much like Twitter.

Xingtu


TikTok users are teaching each other how to switch over to the Chinese App Store in order to get ahold of the Xingtu app for iOS. (An Android version is also available.) The app offers advanced editing tools that let users edit their face and body, like FaceTune, apply makeup, add filters and more. While image-editing apps can be controversial for how they can impact body acceptance, Xingtu offers a variety of artistic filters which is what’s primarily driving the demand. It’s interesting to see the lengths people will go to just to get a few new filters for their photos — perhaps making a case for Instagram to finally update its Post filters instead of pretending no one cares about their static photos anymore.

Tweets

The embedded tweets this week covered Facebook still dominating the top charts (though not the No. 1 spot), a not-so-cool move from Apple, a notable user acquisition strategy, and a suggestion that maybe Stories don’t work everywhere.

#adt, #afterpay, #alibaba, #android, #ant-group, #api, #app-maker, #app-store, #apple, #apps, #australia, #automattic, #beijing, #biden-administration, #canada, #china, #cloud-services, #coinbase, #coinbase-card, #computing, #day-one, #dragoneer-investment-group, #driver, #dropbox, #duolingo, #emojipedia, #eta, #facebook, #fintech-startup, #food-delivery, #game-center, #game-developer, #general-atlantic, #general-partner, #georgia, #gif, #google, #hyperconnect, #instagram, #ios, #ios-devices, #ipad, #iphone, #italy, #itunes, #jam-fund, #justin-mateen, #kuaishou, #kuda, #law-enforcement, #london, #ma, #maine, #meituan, #microsoft, #middle-east, #mobile, #mobile-app, #mobile-applications, #mobile-devices, #online-creators, #onstar, #operating-system, #palta, #playdemic, #quora, #sean-rad, #sensor-tower, #sequoia-capital, #smartphone, #snap, #snapchat, #social-network, #social-networking, #software, #south-africa, #south-asia, #spotify, #stories, #target-global, #tc, #this-week-in-apps, #tiktok, #twitch, #united-kingdom, #united-states, #valar-ventures, #viasat, #vnv-global, #wi-fi, #wordpress-com, #zedge, #zynga

Apple says it will begin scanning iCloud Photos for child abuse images

Later this year, Apple will roll out a technology that will allow the company to detect and report known child sexual abuse material to law enforcement in a way it says will preserve user privacy.

Apple told TechCrunch that the detection of child sexual abuse material (CSAM) is one of several new features aimed at better protecting the children who use its services from online harm, including filters to block potentially sexually explicit photos sent and received through a child’s iMessage account. Another feature will intervene when a user tries to search for CSAM-related terms through Siri and Search.

Most cloud services — Dropbox, Google, and Microsoft to name a few — already scan user files for content that might violate their terms of service or be potentially illegal, like CSAM. But Apple has long resisted scanning users’ files in the cloud by giving users the option to encrypt their data before it ever reaches Apple’s iCloud servers.

Apple said its new CSAM detection technology — NeuralHash — instead works on a user’s device, and can identify if a user uploads known child abuse imagery to iCloud without decrypting the images until a threshold is met and a sequence of checks to verify the content is cleared.

News of Apple’s effort leaked Wednesday when Matthew Green, a cryptography professor at Johns Hopkins University, revealed the existence of the new technology in a series of tweets. The news was met with resistance from some security experts and privacy advocates, but also from users accustomed to Apple’s approach to security and privacy, one that most other companies don’t match.

Apple is trying to calm fears by baking in privacy through multiple layers of encryption, fashioned in a way that requires multiple steps before any content ever reaches Apple’s final manual review.

NeuralHash will land in iOS 15 and macOS Monterey, slated to be released in the next month or two, and works by converting the photos on a user’s iPhone or Mac into a unique string of letters and numbers, known as a hash. With a typical hash, modifying an image even slightly changes the hash, which can prevent matching. Apple says NeuralHash instead tries to ensure that identical and visually similar images — such as cropped or edited images — result in the same hash.

Before an image is uploaded to iCloud Photos, those hashes are matched on the device against a database of known hashes of child abuse imagery, provided by child protection organizations like the National Center for Missing & Exploited Children (NCMEC) and others. NeuralHash uses a cryptographic technique called private set intersection to detect a hash match without revealing what the image is or alerting the user.
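To make that device-side matching step concrete, here is a minimal, purely illustrative Python sketch. It uses a toy “average hash” as a stand-in for Apple’s proprietary NeuralHash and a plain comparison loop in place of the private set intersection protocol; every function and value in it is hypothetical, not Apple’s implementation.

```python
# Illustrative sketch only: a toy "average hash" stands in for Apple's
# proprietary NeuralHash, and a plain comparison loop stands in for the
# private set intersection step. All names and values are hypothetical.

def toy_perceptual_hash(pixels):
    """Hash an 8x8 grayscale image (a list of 64 ints, 0-255) into 64 bits.

    Each bit records whether a pixel is brighter than the image's mean, so
    small edits that preserve the overall structure yield the same hash --
    the property described above for visually similar images.
    """
    mean = sum(pixels) / len(pixels)
    bits = 0
    for value in pixels:
        bits = (bits << 1) | (1 if value > mean else 0)
    return bits


def matches_known_hash(image_pixels, known_hashes, max_hamming_distance=0):
    """Return True if the image's hash (nearly) equals any known hash."""
    h = toy_perceptual_hash(image_pixels)
    return any(bin(h ^ known).count("1") <= max_hamming_distance
               for known in known_hashes)


# A slightly brightened copy of an image still matches the original's hash.
original = [10] * 32 + [200] * 32   # dark half, bright half
edited = [15] * 32 + [210] * 32     # same structure, different pixel values
known_hashes = {toy_perceptual_hash(original)}
print(matches_known_hash(edited, known_hashes))  # True
```

In Apple’s actual protocol, that comparison is additionally wrapped in private set intersection, so a match can be detected without revealing what the image is or alerting the user, as described above.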

The results are uploaded to Apple but cannot be read on their own. Apple uses another cryptographic principle called threshold secret sharing that allows it only to decrypt the contents if a user crosses a threshold of known child abuse imagery in their iCloud Photos. Apple would not say what that threshold was, but said — for example — that if a secret is split into a thousand pieces and the threshold is ten images of child abuse content, the secret can be reconstructed from any of those ten images.
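The threshold behavior Apple describes corresponds to a standard cryptographic primitive, threshold (Shamir) secret sharing. Below is a minimal, generic Python sketch of that primitive; it is a textbook construction for illustration only, not Apple’s implementation, and the secret, threshold and share count are arbitrary.

```python
# Generic Shamir threshold secret sharing, for illustration only (not
# Apple's code). A secret split into many shares can be reconstructed
# from any `threshold` of them, while fewer shares reveal nothing useful.
import random

PRIME = 2**127 - 1  # field modulus; any prime larger than the secret works


def make_shares(secret, threshold, num_shares):
    """Split `secret` into points on a random degree-(threshold-1) polynomial."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]

    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME

    return [(x, f(x)) for x in range(1, num_shares + 1)]


def reconstruct(shares):
    """Lagrange-interpolate the shared polynomial at x=0 to recover the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        # pow(den, -1, PRIME) is the modular inverse (Python 3.8+).
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret


shares = make_shares(secret=123456789, threshold=10, num_shares=1000)
print(reconstruct(shares[:10]) == 123456789)  # True: threshold reached
print(reconstruct(shares[:9]) == 123456789)   # False: one share short
```

In Apple’s description, each matching image effectively contributes a piece of the secret, so the flagged content only becomes decryptable once the number of matches crosses the threshold.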


It’s at that point that Apple can decrypt the matching images, manually verify the contents, disable a user’s account and report the imagery to NCMEC, which then passes it on to law enforcement. Apple says this process is more privacy-mindful than scanning files in the cloud, as NeuralHash only searches for known, not new, child abuse imagery. Apple said that there is a one in one trillion chance of a false positive, but there is an appeals process in place in the event an account is mistakenly flagged.

Apple has published technical details on its website about how NeuralHash works, which have been reviewed by cryptography experts.

But despite the wide support of efforts to combat child sexual abuse, there is still a component of surveillance that many would feel uncomfortable handing over to an algorithm, and some security experts are calling for more public discussion before Apple rolls the technology out to users.

A big question is why now and not sooner. Apple said its privacy-preserving CSAM detection did not exist until now. But companies like Apple have also faced considerable pressure from the U.S. government and its allies to weaken or backdoor the encryption used to protect their users’ data to allow law enforcement to investigate serious crime.

Tech giants have refused efforts to backdoor their systems, but have faced resistance against efforts to further shut out government access. Although data stored in iCloud is encrypted in a way that even Apple cannot access it, Reuters reported last year that Apple dropped a plan for encrypting users’ full phone backups to iCloud after the FBI complained that it would harm investigations.

The news about Apple’s new CSAM detection tool, announced without public discussion, also sparked concerns that the technology could be abused to flood victims with child abuse imagery that could result in their accounts getting flagged and shuttered, but Apple downplayed the concerns and said a manual review would examine the evidence for possible misuse.

Apple said NeuralHash will roll out in the U.S. at first, but would not say if, or when, it would be rolled out internationally. Until recently, companies like Facebook were forced to switch off their child abuse detection tools across the European Union after the practice was inadvertently banned. Apple said the feature is technically optional in that you don’t have to use iCloud Photos, but will be a requirement if users do. After all, your device belongs to you but Apple’s cloud does not.

#apple, #apple-inc, #cloud-applications, #cloud-services, #computing, #cryptography, #encryption, #facebook, #federal-bureau-of-investigation, #icloud, #ios, #iphone, #johns-hopkins-university, #law-enforcement, #macos, #privacy, #security, #technology, #u-s-government, #united-states, #webmail

Amazon will pay you $10 in credit for your palm print biometrics

How much is your palm print worth? If you ask Amazon, it’s about $10 in promotional credit if you enroll your palm prints in its checkout-free stores and link it to your Amazon account.

Last year, Amazon introduced its new biometric palm print scanners, Amazon One, so customers can pay for goods in some stores by waving their palm prints over one of these scanners. By February, the company had expanded its palm scanners to other Amazon grocery, book and 4-star stores across Seattle.

Amazon has since expanded its biometric scanning technology to its stores across the U.S., including New York, New Jersey, Maryland, and Texas.

The retail and cloud giant says its palm scanning hardware “captures the minute characteristics of your palm — both surface-area details like lines and ridges as well as subcutaneous features such as vein patterns — to create your palm signature,” which is then stored in the cloud and used to confirm your identity when you’re in one of its stores.

Amazon’s latest promotion: $10 promotional credit in exchange for your palm print. (Image: Amazon)

What’s Amazon doing with this data exactly? Your palm print on its own might not do much — though Amazon says it uses an unspecified “subset” of anonymous palm data to improve the technology. But by linking it to your Amazon account, Amazon can use the data it collects, like shopping history, to target ads, offers, and recommendations to you over time.

Amazon also says it stores palm data indefinitely, unless you choose to delete the data once there are no outstanding transactions left, or if you don’t use the feature for two years.

While the idea of contactlessly scanning your palm print to pay for goods during a pandemic might seem like a novel idea, it’s one to be met with caution and skepticism given Amazon’s past efforts in developing biometric technology. Amazon’s controversial facial recognition technology, which it historically sold to police and law enforcement, was the subject of lawsuits that allege the company violated state laws that bar the use of personal biometric data without permission.

“The dystopian future of science fiction is now. It’s horrifying that Amazon is asking people to sell their bodies, but it’s even worse that people are doing it for such a low price,” said Albert Fox Cahn, the executive director of the New York-based Surveillance Technology Oversight Project, in an email to TechCrunch.

“Biometric data is one of the only ways that companies and governments can track us permanently. You can change your name, you can change your Social Security number, but you can’t change your palm print. The more we normalize these tactics, the harder they will be to escape. If we don’t [draw a] line in the sand here, I am very fearful what our future will look like,” said Cahn.

When reached, an Amazon spokesperson declined to comment.

#amazon, #amazon-music, #biometrics, #computing, #law-enforcement, #maryland, #new-jersey, #new-york, #palm, #privacy, #retail, #seattle, #security, #technology, #texas, #united-states

Maine’s facial recognition law shows bipartisan support for protecting privacy

Maine has joined a growing number of cities, counties and states that are rejecting dangerously biased surveillance technologies like facial recognition.

The new law, which is the strongest statewide facial recognition law in the country, not only received broad, bipartisan support, but it passed unanimously in both chambers of the state legislature. Lawmakers and advocates spanning the political spectrum — from the progressive lawmaker who sponsored the bill to the Republican members who voted it out of committee, from the ACLU of Maine to state law enforcement agencies — came together to secure this major victory for Mainers and anyone who cares about their right to privacy.

Maine is just the latest success story in the nationwide movement to ban or tightly regulate the use of facial recognition technology, an effort led by grassroots activists and organizations like the ACLU. From the Pine Tree State to the Golden State, national efforts to regulate facial recognition demonstrate a broad recognition that we can’t let technology determine the boundaries of our freedoms in the digital 21st century.

Facial recognition technology poses a profound threat to civil rights and civil liberties. Without democratic oversight, governments can use the technology as a tool for dragnet surveillance, threatening our freedoms of speech and association, due process rights, and right to be left alone. Democracy itself is at stake if this technology remains unregulated.


We know the burdens of facial recognition are not borne equally, as Black and brown communities — especially Muslim and immigrant communities — are already targets of discriminatory government surveillance. Making matters worse, face surveillance algorithms tend to have more difficulty accurately analyzing the faces of darker-skinned people, women, the elderly and children. Simply put: The technology is dangerous when it works — and when it doesn’t.

But not all approaches to regulating this technology are created equal. Maine is among the first in the nation to pass comprehensive statewide regulations. Washington was the first, passing a weak law in the face of strong opposition from civil rights, community and religious liberty organizations. The law passed in large part because of strong backing from Washington-based megacorporation Microsoft. Washington’s facial recognition law would still allow tech companies to sell their technology, worth millions of dollars, to every conceivable government agency.

In contrast, Maine’s law strikes a different path, putting the interests of ordinary Mainers above the profit motives of private companies.

Maine’s new law prohibits the use of facial recognition technology in most areas of government, including in public schools and for surveillance purposes. It creates carefully carved-out exceptions for law enforcement to use facial recognition, setting standards for its use and avoiding the potential for abuse we’ve seen in other parts of the country. Importantly, it prohibits the use of facial recognition technology to conduct surveillance of people as they go about their business in Maine, attending political meetings and protests, visiting friends and family, and seeking out healthcare.

In Maine, law enforcement must now — among other limitations — meet a probable cause standard before making a facial recognition request, and they cannot use a facial recognition match as the sole basis to arrest or search someone. Nor can local police departments buy, possess or use their own facial recognition software, ensuring shady technologies like Clearview AI will not be used by Maine’s government officials behind closed doors, as has happened in other states.

Maine’s law and others like it are crucial to preventing communities from being harmed by new, untested surveillance technologies like facial recognition. But we need a federal approach, not only a piecemeal local approach, to effectively protect Americans’ privacy from facial surveillance. That’s why it’s crucial for Americans to support the Facial Recognition and Biometric Technology Moratorium Act, a bill introduced by members of both houses of Congress last month.

The ACLU supports this federal legislation that would protect all people in the United States from invasive surveillance. We urge all Americans to ask their members of Congress to join the movement to halt facial recognition technology and support it, too.

#artificial-intelligence, #biometrics, #clearview-ai, #column, #facial-recognition, #facial-recognition-software, #government, #law-enforcement, #maine, #opinion, #privacy, #surveillance-technologies, #tc

Opioid addiction treatment apps found sharing sensitive data with third parties

Several widely used opioid treatment recovery apps are accessing and sharing sensitive user data with third parties, a new investigation has found.

As a result of the COVID-19 pandemic and efforts to reduce transmission in the U.S., telehealth services and apps offering opioid addiction treatment have surged in popularity. This rise of app-based services comes as addiction treatment facilities face budget cuts and closures, which has seen both investor and government interest turn to telehealth as a tool to combat the growing addiction crisis.

While people accessing these services may have a reasonable expectation of privacy of their healthcare data, a new report from ExpressVPN’s Digital Security Lab, compiled in conjunction with the Opioid Policy Institute and the Defensive Lab Agency, found that some of these apps collect and share sensitive information with third parties, raising questions about their privacy and security practices.

The report studied 10 opioid treatment apps available on Android: Bicycle Health, Boulder Care, Confidant Health, DynamiCare Health, Kaden Health, Loosid, Pear Reset-O, PursueCare, Sober Grid, and Workit Health. These apps have been installed at least 180,000 times, and have received more than $300 million in funding from investment groups and the federal government.

Despite the vast reach and sensitive nature of these services, the research found that the majority of the apps accessed unique identifiers about the user’s device and, in some cases, shared that data with third parties.

Of the 10 apps studied, seven access the Android Advertising ID (AAID), a user-resettable identifier that can be linked to other information to provide insights into identifiable individuals. Five of the apps also access the device’s phone number; three access the device’s unique IMEI and IMSI numbers, which can also be used to uniquely identify a person’s device; and two access a user’s list of installed apps, which the researchers say can be used to build a “fingerprint” of a user to track their activities.
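As a purely hypothetical illustration of why those identifiers matter, the short Python sketch below shows how a few stable values can be hashed into a single “fingerprint” that stays constant across apps and sessions. None of it is taken from the apps in the report; the field names and values are invented.

```python
# Hypothetical illustration of device fingerprinting from stable identifiers.
# Nothing here comes from the apps studied; the values and field names are
# invented, and real trackers combine many more signals than this.
import hashlib
import json


def device_fingerprint(identifiers):
    """Hash a canonical encoding of the identifiers into one stable string."""
    canonical = json.dumps(identifiers, sort_keys=True)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()


profile = {
    "aaid": "38400000-8cf0-11bd-b23e-10b96e40000d",  # advertising ID
    "imei": "356938035643809",                       # hardware identifier
    "installed_apps": ["recovery_app", "banking_app"],
}

# The same inputs always produce the same fingerprint, so any party that
# sees these identifiers can link one person's activity across apps,
# sessions and data sets.
print(device_fingerprint(profile))
```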

Many of the apps examined also obtain location information in some form, which, when correlated with these unique identifiers, strengthens the capability for surveilling an individual person, as well as their daily habits, behaviors and who they interact with. One way the apps do this is through Bluetooth; seven of the apps request permission to make Bluetooth connections, which the researchers say is particularly worrying because Bluetooth can be used to track users in real-world locations.

“Bluetooth can do what I call proximity tracking, so if you’re in the grocery store, it knows how long you’re in a certain aisle, or how close you are to someone else,” Sean O’Brien, principal researcher at ExpressVPN’s Digital Security Lab who led the investigation, told TechCrunch. “Bluetooth is an area that I’m pretty concerned about.”

Another major area of concern is the use of tracker SDKs in these apps, which O’Brien previously warned about in a recent investigation that revealed that hundreds of Android apps were sending granular user location data to X-Mode, a data broker known to sell location data to U.S. military contractors, and now banned from both Apple and Google’s app stores. SDKs, or software development kits, are bundles of code that are included with apps to make them work properly, such as collecting location data. Often, SDKs are provided for free in exchange for sending back the data that the apps collect.


While the researchers are keen to point out that they do not categorize all usage of trackers as malicious, particularly as many developers may not even be aware of their existence within their apps, they discovered a high prevalence of tracker SDKs in seven out of the 10 apps that revealed potential data-sharing activity. Some SDKs are designed specifically to collect and aggregate user data; this is true even where the SDK’s core functionality is concerned.

But the researchers explain that an app, which provides navigation to a recovery center, for example, may also be tracking a user’s movements throughout the day and sending that data back to the app’s developers and third parties.

In the case of Kaden Health, Stripe — which is used for payment services within the app — can read the list of installed apps on a user’s phone, their location, phone number, and carrier name, as well as their AAID, IP address, IMEI, IMSI, and SIM serial number.

“An entity as large as Stripe having an app share that information directly is pretty alarming. It’s worrisome to me because I know that information could be very useful for law enforcement,” O’Brien tells TechCrunch. “I also worry that people having information about who has been in treatment will eventually make its way into decisions about health insurance and people getting jobs.”

The data-sharing practices of these apps are likely a consequence of these services being developed in an environment of unclear U.S. federal guidance regarding the handling and disclosure of patient information, the researchers say, though O’Brien tells TechCrunch that the actions could be in breach of 42 CFR Part 2, a law that outlines strong controls over disclosure of patient information related to treatment for addiction.

Jacqueline Seitz, a senior staff attorney for health privacy at Legal Action Center, however, said this 40-year-old law hasn’t yet been updated to recognize apps.

“Confidentiality continues to be one of the major concerns that people cite for not entering treatment,” Seitz told TechCrunch. “While 42 CFR Part 2 recognizes the very sensitive nature of substance use disorder treatment, it doesn’t mention apps at all. Existing privacy laws are totally not up to speed.

“It would be great to see some leadership from the tech community to establish some basic standards and recognize that they’re collecting super-sensitive information so that patients aren’t left in the middle of a health crisis trying to navigate privacy policies,” said Seitz.

Another likely reason for these practices is a lack of security and data privacy staff, according to Jonathan Stoltman, director at Opioid Policy Institute, which contributed to the research. “If you look at a hospital’s website, you’ll see a chief information officer, a chief privacy officer, or a chief security officer that’s in charge of physical security and data security,” he tells TechCrunch. “None of these startups have that.”

“There’s no way you’re thinking about privacy if you’re collecting the AAID, and almost all of these apps are doing that from the get-go,” Stoltman added.

Google is aware of ExpressVPN’s findings but has yet to comment. However, the report has been released as the tech giant prepares to start limiting developer access to the Android Advertising ID, mirroring Apple’s recent efforts to enable users to opt out of ad tracking.

While ExpressVPN is keen to make patients aware that these apps may violate expectations of privacy, it also stresses the central role that addiction treatment and recovery apps may play in the lives of those with opioid addiction. It recommends that if you or a family member used one of these services and find the disclosure of this data to be problematic, contact the Office of Civil Rights through Health and Human Services to file a formal complaint.

“The bottom line is this is a general problem with the app economy, and we’re watching telehealth become part of that, so we need to be very careful and cautious,” said O’Brien. “There needs to be disclosure, users need to be aware, and they need to demand better.”

Recovery from addiction is possible. For help, please call the free and confidential treatment referral hotline (1-800-662-HELP) or visit findtreatment.gov.


#android, #app-developers, #app-store, #apple, #apps, #artificial-intelligence, #bluetooth, #broker, #computing, #director, #federal-government, #google, #google-play, #governor, #health, #health-insurance, #healthcare-data, #imessage, #law-enforcement, #mobile-app, #operating-systems, #privacy, #read, #security, #software, #stripe, #terms-of-service, #united-states

UK tells messaging apps not to use e2e encryption for kids’ accounts

For a glimpse of the security and privacy dystopia the UK government has in store for its highly regulated ‘British Internet’, look no further than guidance put out by the Department for Digital, Culture, Media and Sport (DCMS) yesterday — aimed at social media platforms and private messaging services — which includes the suggestion that the latter should “prevent” the use of end-to-end encryption on “child accounts”.

That’s right, the UK government is saying: ‘No end-to-end encryption for our kids please, they’re British’.

And while this is merely guidance for now, the chill is real — because legislation is already on the table.

The UK’s Online Safety Bill was published back in May, with Boris Johnson’s government setting out a sweeping plan to force platforms to regulate user generated content by imposing a legal duty to protect users from illegal (or merely just “harmful”) content.

The bill controversially bundles up requirements to report illegal stuff like child sexual exploitation content to law enforcement with far fuzzier mandates that platforms take action against a range of much-harder-to-define ‘harms’ (from cyber bullying to romance scams).

The end result looks like a sledgehammer to crack a nut. Except the ‘nut’ that could get smashed to pieces in this ministerial vice is UK Internet users’ digital security and privacy. (Not to mention any UK startups and digital businesses that aren’t on board with mass-surveillance-as-a-service.)

That’s the danger if the government follows through on its wonky idea that — on the Internet — ‘safety’ means security must be replaced with blanket surveillance in order to ‘keep kids safe’.

The Online Safety Bill is not the first wonky tech policy plan the UK has come up with. An earlier bid to force adult content providers to age verify users was dropped in 2019, having been widely criticized as unworkable as well as a massive privacy intrusion and security risk.

However, at the time, the government said it was only abandoning the ‘porn blocks’ measure because it was planning to bring forward “the most comprehensive approach possible to protecting children”. Hence the Online Safety Bill now stepping forward to push platforms to remove robust encryption in the name of ‘protecting children’.

Age verification technologies — and all sorts of content monitoring solutions (surveillance tech, doubtless badged as ‘safety’ tech) — also look likely to proliferate as a consequence of this approach.

Pushing platforms to proactively police speech and surveil usage in the hopes of preventing an ill-defined grab-bag of ‘harms’ — or, from the platforms’ perspective, to avoid the risk of eye-watering fines from the regulator if it decides they’ve failed in this ‘duty of care’ — also obviously conjures up a nightmare scenario for online freedom of expression.

Aka: ‘Watch what you type, even in the privacy of your private messaging app, because the UK Internet safety thought police are watching/might block you…’

Privacy rights for UK minors appear to be first on the chopping block, via what DCMS’ guidance refers to as “practical steps to manage the risk of online harm if your online platform allows people to interact, and to share text and other content”.

So, pretty much, if your online platform has any kind of communication layer at all then.

Letting kids have their own safe spaces to express themselves is apparently incompatible with ministers’ populist desire to brand the UK ‘the safest place to go online in the world’, as they like to spin it.

How exactly the UK will achieve safety online if government zealots force service providers to strip away robust security (e2e encryption) — torching the standard of data protection and privacy wrapping Brits’ personal information — is quite the burning question.

Albeit, it’s not one the UK government seems to have considered for even a split second.

“We’ve known for a long time that one of government’s goals for the Online Safety Bill is the restriction, if not the outright criminalisation, of the use of end-to-end encryption,” said Heather Burns, a policy manager for the digital rights organization Open Rights Group (ORG), one of many vocal critics of the government’s approach — discussing the wider implications of the policy push with TechCrunch.

“Recent messaging strategies promoted by government and the media have openly sought to associate end-to-end encryption with child abuse, and to imply that companies which use it are aiding and abetting child exploitation. So DCMS’s newly-published guidance advising the voluntary removal of encryption from children’s accounts is a precursor to it becoming a likely legal requirement.

“It’s also part of government’s drive, again as part of the Online Safety Bill, to require all services to implement mandatory age verification on all users, for all content or applications, in order to identify child users, in order to withhold encryption from them, thanks to aggressive lobbying from the age verification industry.”

That ministerial rhetoric around the Online Safety Bill is heavy on tub-thumping emotional appeals (to ‘protect our children from online nasties’) and low on sequential logic or technological coherence is not a surprise: Successive Conservative governments have, after all, had a massive bee in their bonnets about e2e encryption — dating back to the David Cameron years.

Back then ministers were typically taking aim at strong encryption on counter-terrorism grounds, arguing the tech is bad because it prevents law enforcement from catching terrorists. (And they went on to pass beefed up surveillance laws which also include powers to limit the use of robust encryption.)

However, under more recent PMs Theresa May and Boris Johnson, the child protection rhetoric has stepped up too — to the point where messaging channels are now being actively encouraged not to use e2e encryption altogether.

Next stop: State-sanctioned commercial mass surveillance. And massive risks for all UK Internet users subject to this anti-security, anti-privacy ‘safety’ regime.

“Despite government’s claim that the Bill will make the UK ‘the safest place in the world to be online’, restricting or criminalising encryption will actually make the UK an unsafe place for any company to do business,” warned Burns. “We will all need to resort to VPNs and foreign services, as happens in places like China, in order to keep our data safe. It’s likely that many essential services will block UK customers, or leave the UK altogether, rather than be compelled to act as a privatised nanny state over insecure data flows.”

In a section of the DCMS guidance entitled “protect children by limiting functionality”, the government department literally suggests that “private channels” (i.e. services like messaging apps) “prevent end-to-end encryption for child accounts”. And since accurately age identifying online users remains a challenge it follows that in-scope services may simply decide it’s less legally risky if they don’t use e2e at all.

DCMS’s guidance also follows up with an entirely bolded paragraph — in which the government then makes a point of highlighting e2e encryption as a “risk” to users, generally — and, therefore by implication, to future compliance with the forthcoming Online Safety legislation…

“End-to-end encryption makes it more difficult for you to identify illegal and harmful content occurring on private channels. You should consider the risks this might pose to your users,” the UK government writes, emphasis its own.

Whether anything can stop this self-destructive policy train now it’s left the Downing Street station is unclear. Johnson has a whopping majority in parliament — and years left before he has to call a general election.

The only thing that could derail the most harmful elements of the Online Safety Bill is if the UK public wakes up to the dangers it poses to everyone’s security and privacy — and if enough MPs take notice and push for amendments.

Earlier this month the ORG, along with some 30 other digital and human rights groups, called on MPs to do just that and “help keep constituents’ data safe by protecting e2e encryption from legislative threats” — warning that this “basic and essential” security protocol is at risk from clauses in the bill that introduce requirements for companies to scan private and personal messages for evidence of criminal wrongdoing.

Zero access encryption is seen by the UK government as a blocker to such scanning.

“In order to do this, the use of end-to-end encryption is likely to be defined as a violation of the law,” the ORG also warned. “And companies operating in the UK who want to continue to defend user privacy through end-to-end encryption could, under the draft Bill, be threatened with partial shutdowns, being blocked from the UK, or even personal arrests.”

“We call on Parliament to ensure that end-to-end encryption must not be threatened or undermined by the Online Safety Bill, and that services utilising strong encryption are left out of the Bill’s content monitoring and filtering requirements,” it added in the online appeal.

DCMS has been contacted with questions on the logic of the government’s policy toward e2e encryption.

In a statement yesterday, the digital minister Caroline Dinenage said: “We’re helping businesses get their safety standards up to scratch before our new online harms laws are introduced and also making sure they are protecting children and users right now.

“We want businesses of all sizes to step up to a gold standard of safety online and this advice will help them to do so.”

#boris-johnson, #computer-security, #cryptography, #data-protection, #data-security, #e2e-encryption, #encryption, #end-to-end-encryption, #europe, #human-rights, #law-enforcement, #online-freedom, #online-safety-bill, #open-rights-group, #policy, #privacy, #security, #social-media-platforms, #telecommunications, #uk-government, #united-kingdom

Clop ransomware gang doxes two new victims days after police raids

The notorious Clop ransomware operation appears to be back in business, just days after Ukrainian police arrested six alleged members of the gang.

Last week, a law enforcement operation conducted by the National Police of Ukraine along with officials from South Korea and the U.S. saw the arrest of multiple suspects believed to be linked to the Clop ransomware gang. It’s believed to be the first time a national law enforcement group carried out mass arrests involving a ransomware group.

The Ukrainian police also claimed at the time to have successfully shut down the server infrastructure used by the gang. But it doesn’t seem the operation was completely successful.

While the Clop operation fell silent following the arrests, the gang has this week published a fresh batch of confidential data which it claims to have stolen from two new victims — a farm equipment retailer and an architect’s office — on its dark web site, seen by TechCrunch.

If true — and neither of the alleged victims responded to TechCrunch’s request for comment — this would suggest that the ransomware gang remains active, despite last week’s first-of-its-kind law enforcement sting. This is likely because the suspects cuffed included only those who played a lesser role in the Clop operation. Cybersecurity firm Intel 471 said it believes that last week’s arrests targeted the money laundering portion of the operation, with core members of the gang not apprehended.

“We do not believe that any core actors behind Clop were apprehended,” the security company said. “The overall impact to Clop is expected to be minor although this law enforcement attention may result in the Clop brand getting abandoned as we’ve recently seen with other ransomware groups like DarkSide and Babuk.”

Clop appears to still be in business, but it remains to be seen how long the group will remain operational. Not only have law enforcement operations dealt numerous blows to ransomware groups this year, such as U.S. investigators’ recent recovery of millions in cryptocurrency they claim was paid in ransom to the Colonial Pipeline hackers, but Russia has this week confirmed it will begin to work with the U.S. to locate cybercriminals.

Russia has until now taken a hands-off approach when it comes to dealing with hackers. Reuters reported Wednesday that Alexander Bortnikov, head of the country’s Federal Security Service (FSB), said it will cooperate with U.S. authorities on future cybersecurity operations.

Intel 471 previously said that it does not believe the key members of Clop were arrested in last week’s operation because “they are probably living in Russia,” which has long provided safe harbor to cybercriminals by refusing to take action.

The Clop ransomware gang was first spotted in early 2019, and the group has since been linked to a number of high-profile attacks. These include the breach of U.S. pharmaceutical giant ExecuPharm in April 2020 and the recent data breach at Accellion, which saw hackers exploit flaws in the IT provider’s software to steal data from dozens of its customers including the University of Colorado and cloud security vendor Qualys.

#accellion, #chief, #colorado, #computer-security, #crime, #cyberattack, #cybercrime, #head, #intel, #law-enforcement, #moscow, #qualys, #ransomware, #russia, #security, #security-breaches, #south-korea, #united-states

Mitiga raises $25M Series A to help organizations respond to cyberattacks

Israeli cloud security startup Mitiga has raised $25 million in a Series A round of funding as it moves to “completely change” the traditional incident response market.

Mitiga, unlike other companies in the cybersecurity space, isn’t looking to prevent cyberattacks, which the startup claims are inevitable no matter how much protection is in place. Rather, it’s looking to help organizations manage their incident response, particularly as they transition to hybrid and multi-cloud environments. 

The early-stage startup, which raised $7 million in seed funding in July last year, says its incident readiness and response tech stack accelerates post-incident bounce back from days down to hours. Its subscription-based offering automatically detects when a network is breached and quickly investigates, collects case data, and translates it into remediation steps for all relevant divisions within an organization so they can quickly and efficiently respond. Mitiga also documents each event, allowing organizations to fix the cause in order to prevent future attacks.

Mitiga’s Series A was led by ClearSky Security, Atlantic Bridge, and DNX, and the startup tells TechCrunch that it will use the funds to “continue to disrupt how incident readiness and response is delivered,” as well as “significantly” increasing its cybersecurity, engineering, sales, and marketing staff.

The company added that the funding comes amid a “changing mindset” for enterprise organizations when it comes to incident readiness and response. The pandemic has accelerated cloud adoption, and it’s predicted that spending on cloud services will surpass $332 billion this year alone. This acceleration, naturally, has provided a lucrative target for hackers, with cyberattacks on cloud services increasing 630% in the first four months of 2020, according to McAfee. 

“The cloud represents new challenges for incident readiness and response and we’re bringing the industry’s first incident response solution in the cloud, for the cloud,” said Tal Mozes, co-founder and CEO of Mitiga. 

“This funding will allow us to further our engagements with heads of enterprise security who are looking to recover from an incident in real-time, attract even more of the most innovative cybersecurity minds in the industry, and expand our partner network. I couldn’t be more excited about what Mitiga is going to do for cloud-first organizations who understand the importance of cybersecurity readiness and response.”

Mitiga was founded in 2019 by Mozes, Ariel Parnes and Ofer Maor, and the team of 42 currently works in Tel Aviv with offices in London and New York. It has customers in multiple sectors, including financial service institutions, banks, e-commerce, law enforcement and government agencies, and Mitiga also provides emergency response to active network security incidents such as ransomware and data breaches for non-subscription customers.


#artificial-intelligence, #atlantic-bridge, #claroty, #cloud-services, #computer-security, #cyberattack, #cybercrime, #cyberwarfare, #data-security, #e-commerce, #funding, #law-enforcement, #london, #malware, #new-york, #security, #series-a, #techcrunch, #tel-aviv

A week after arrests, Cl0p ransomware group dumps new tranche of stolen data

A week after arrests, Cl0p ransomware group dumps new tranche of stolen data

Enlarge (credit: Getty Images)

A week after Ukrainian police arrested criminals affiliated with the notorious Cl0p ransomware gang, Cl0p has published a fresh batch of what’s purported to be confidential data stolen in a hack of a previously unknown victim. Ars won’t be identifying the possibly victimized company until there is confirmation that the data and the hack are genuine.

If genuine, the dump shows that Cl0p remains intact and able to carry out its nefarious actions despite the arrests. That suggests that the suspects don’t include the core leaders but rather affiliates or others who play a lesser role in the operations.

The data purports to be employee records, including verification of employment for loan applications and documents pertaining to workers whose wages have been garnished. I was unable to confirm that the information is genuine and that it was, in fact, taken during a hack on the company, although web searches showed that names listed in the documents matched names of people who work for the company.

Read 8 remaining paragraphs | Comments

#biz-it, #cl0p, #law-enforcement, #ransomware, #security, #tech

EU puts out final guidance on data transfers to third countries

The European Data Protection Board (EDPB) published its final recommendations yesterday, setting out guidance on how to make transfers of personal data to third countries comply with EU data protection rules in light of last summer’s landmark CJEU ruling (aka Schrems II).

The long and short of these recommendations — which are fairly long, running to 48 pages — is that some data transfers to third countries will simply not be possible to (legally) carry out, despite the continued existence of legal mechanisms that can, in theory, be used to make such transfers (like Standard Contractual Clauses, a transfer tool that was recently updated by the Commission).

However it’s up to the data controller to assess the viability of each transfer, on a case by case basis, to determine whether data can legally flow in that particular case. (Which may mean, for example, a business making complex assessments about foreign government surveillance regimes and how they impinge upon its specific operations.)

Companies that routinely take EU users’ data outside the bloc for processing in third countries (like the US), which do not have data adequacy arrangements with the EU, face substantial cost and challenge in attaining compliance — in a best case scenario.

Those that can’t apply viable ‘special measures’ to ensure transferred data is safe are duty bound to suspend data flows — with the risk, should they fail to do that, of being ordered to by a data protection authority (which could also apply additional sanctions).

One alternative option could be for such a firm to store and process EU users’ data locally — within the EU. But clearly that won’t be viable for every company.

Law firms are likely to be very happy with this outcome since there will be increased demand for legal advice as companies grapple with how to structure their data flows and adapt to a post-Schrems II world.

In some EU jurisdictions (such as Germany) data protection agencies are now actively carrying out compliance checks — so orders to suspend transfers are bound to follow.

While the European Data Protection Supervisor is busy scrutinizing EU institutions’ own use of US cloud services giants to see whether high level arrangements with tech giants like AWS and Microsoft pass muster or not.

Last summer the CJEU struck down the EU-US Privacy Shield — only a few years after the flagship adequacy arrangement was inked. The same core legal issues did for its predecessor, ‘Safe Harbor‘, though that had stood for some fifteen years. And since the demise of Privacy Shield the Commission has repeatedly warned there will be no quick fix replacement this time; nothing short of major reform of US surveillance law is likely to be required.

US and EU lawmakers remain in negotiations over a replacement EU-US data flows deal, but a viable outcome that can stand up to legal challenge, as the prior two agreements could not, may well require years of work, not months.

And that means EU-US data flows are facing legal uncertainty for the foreseeable future.

The UK, meanwhile, has just squeezed a data adequacy agreement out of the Commission — despite some loudly enunciated post-Brexit plans for regulatory divergence in the area of data protection.

If the UK follows through in ripping up key tenets of its inherited EU legal framework there’s a high chance it will also lose adequacy status in the coming years — meaning it too could face crippling barriers to EU data flows. (But for now it seems to have dodged that bullet.)

Data flows to other third countries that also lack an EU adequacy agreement — such as China and India — face the same ongoing legal uncertainty.

The backstory to the EU international data flows issues originates with a complaint — in the wake of NSA whistleblower Edward Snowden’s revelations about government mass surveillance programs, so more than seven years ago — made by the eponymous Max Schrems over what he argued were unsafe EU-US data flows.

Although his complaint was specifically targeted at Facebook’s business and called on the Irish Data Protection Commission (DPC) to use its enforcement powers and suspend Facebook’s EU-US data flows.

A regulatory dance of indecision followed which finally saw legal questions referred to Europe’s top court and — ultimately — the demise of the EU-US Privacy Shield. The CJEU ruling also put it beyond legal doubt that Member States’ DPAs must step in and act when they suspect data is flowing to a location where the information is at risk.

Following the Schrems II ruling, the DPC (finally) sent Facebook a preliminary order to suspend its EU-US data flows last fall. Facebook immediately challenged the order in the Irish courts — seeking to block the move. But that challenge failed. And Facebook’s EU-US data flows are now very much operating on borrowed time.

As one of the platforms subject to Section 702 of the US’ FISA law, its options for applying ‘special measures’ to supplement its EU data transfers look, well, limited to say the least.

It can’t — for example — encrypt the data in a way that ensures it has no access to it (zero access encryption) since that’s not how Facebook’s advertising empire functions. And Schrems has previously suggested Facebook will have to federate its service — and store EU users’ information inside the EU — to fix its data transfer problem.

Safe to say, the costs and complexity of compliance for certain businesses like Facebook look massive.

But there will be compliance costs and complexity for thousands of businesses in the wake of the CJEU ruling.

Commenting on the EDPB’s adoption of final recommendations, chair Andrea Jelinek said: “The impact of Schrems II cannot be underestimated: Already international data flows are subject to much closer scrutiny from the supervisory authorities who are conducting investigations at their respective levels. The goal of the EDPB Recommendations is to guide exporters in lawfully transferring personal data to third countries while guaranteeing that the data transferred is afforded a level of protection essentially equivalent to that guaranteed within the European Economic Area.

“By clarifying some doubts expressed by stakeholders, and in particular the importance of examining the practices of public authorities in third countries, we want to make it easier for data exporters to know how to assess their transfers to third countries and to identify and implement effective supplementary measures where they are needed. The EDPB will continue considering the effects of the Schrems II ruling and the comments received from stakeholders in its future guidance.”

The EDPB put out earlier guidance on Schrems II compliance last year.

It said the main modifications between that earlier advice and its final recommendations include: “The emphasis on the importance of examining the practices of third country public authorities in the exporters’ legal assessment to determine whether the legislation and/or practices of the third country impinge — in practice — on the effectiveness of the Art. 46 GDPR transfer tool; the possibility that the exporter considers in its assessment the practical experience of the importer, among other elements and with certain caveats; and the clarification that the legislation of the third country of destination allowing its authorities to access the data transferred, even without the importer’s intervention, may also impinge on the effectiveness of the transfer tool”.

Commenting on the EDPB’s recommendations in a statement, law firm Linklaters dubbed the guidance “strict” — warning over the looming impact on businesses.

“There is little evidence of a pragmatic approach to these transfers and the EDPB seems entirely content if the conclusion is that the data must remain in the EU,” said Peter Church, a Counsel at the global law firm. “For example, before transferring personal data to third country (without adequate data protection laws) businesses must consider not only its law but how its law enforcement and national security agencies operate in practice. Given these activities are typically secretive and opaque, this type of analysis is likely to cost tens of thousands of euros and take time. It appears this analysis is needed even for relatively innocuous transfers.”

“It is not clear how SMEs can be expected to comply with these requirements,” he added. “Given we now operate in a globalised society the EDPB, like King Canute, should consider the practical limitations on its power. The guidance will not turn back the tides of data washing back and forth across the world, but many businesses will really struggle to comply with these new requirements.”

 

#andrea-jelinek, #china, #data-controller, #data-protection, #data-security, #edpb, #edward-snowden, #eu-us-privacy-shield, #europe, #european-data-protection-board, #european-union, #facebook, #general-data-protection-regulation, #germany, #india, #law-enforcement, #law-firms, #linklaters, #max-schrems, #policy, #privacy, #schrems-ii, #surveillance-law, #united-kingdom, #united-states

Ban biometric surveillance in public to safeguard rights, urge EU bodies

There have been further calls from EU institutions to outlaw biometric surveillance in public.

In a joint opinion published today, the European Data Protection Board (EDPB) and the European Data Protection Supervisor (EDPS), Wojciech Wiewiórowski, have called for draft EU regulations on the use of artificial intelligence technologies to go further than the Commission’s proposal in April — urging that the planned legislation should be beefed up to include a “general ban on any use of AI for automated recognition of human features in publicly accessible spaces, such as recognition of faces, gait, fingerprints, DNA, voice, keystrokes and other biometric or behavioural signals, in any context”.

The argument is that such technologies are simply too harmful to EU citizens’ fundamental rights and freedoms, like privacy and equal treatment under the law, to permit their use.

The EDPB is responsible for ensuring the harmonized application of the EU’s privacy rules, while the EDPS oversees EU institutions’ own compliance with data protection law and also provides legislative guidance to the Commission.

EU lawmakers’ draft proposal on regulating applications of AI contained restrictions on law enforcement’s use of biometric surveillance in public places — but with very wide-ranging exemptions which quickly attracted major criticism from digital rights and civil society groups, as well as a number of MEPs.

The EDPS himself also quickly urged a rethink. Now he’s gone further, with the EDPB joining in the criticism.

The EDPB and the EDPS have jointly fleshed out a number of concerns with the EU’s AI proposal — while welcoming the overall “risk-based approach” taken by EU lawmakers — saying, for example, that legislators must be careful to ensure alignment with the bloc’s existing data protection framework to avoid rights risks.

“The EDPB and the EDPS strongly welcome the aim of addressing the use of AI systems within the European Union, including the use of AI systems by EU institutions, bodies or agencies. At the same time, the EDPB and EDPS are concerned by the exclusion of international law enforcement cooperation from the scope of the Proposal,” they write.

“The EDPB and EDPS also stress the need to explicitly clarify that existing EU data protection legislation (GDPR, the EUDPR and the LED) applies to any processing of personal data falling under the scope of the draft AI Regulation.”

As well as calling for the use of biometric surveillance to be banned in public, the pair have urged a total ban on AI systems using biometrics to categorize individuals into “clusters based on ethnicity, gender, political or sexual orientation, or other grounds on which discrimination is prohibited under Article 21 of the Charter of Fundamental Rights”.

That’s an interesting concern in light of Google’s push, in the adtech realm, to replace behavioral microtargeting of individuals with ads that address cohorts (or groups) of users, based on their interests — with such clusters of web users set to be defined by Google’s AI algorithms.

(It’s interesting to speculate, therefore, whether FLoC risks creating legal discrimination issues, based on how individual web users are grouped together for ad-targeting purposes. Certainly, concerns have been raised over the potential for FLoC to scale bias and predatory advertising. And it’s also notable that Google avoided running early tests in Europe, likely owing to the EU’s data protection regime.)
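As background on how such cohorts are computed: Google publicly described the FLoC origin trial as deriving a cohort ID from a SimHash-style locality-sensitive hash over the domains in a user’s browsing history, so similar histories land in similar cohorts. The toy Python sketch below illustrates the general idea only; it is not Google’s implementation, and toy_cohort_id is a hypothetical name.

```python
import hashlib

def toy_cohort_id(visited_domains, bits=8):
    # Toy SimHash: each visited domain casts a +1/-1 vote on every output bit,
    # so overlapping browsing histories tend to yield cohort IDs that differ
    # in only a few bits.
    votes = [0] * bits
    for domain in visited_domains:
        h = int(hashlib.sha256(domain.encode("utf-8")).hexdigest(), 16)
        for i in range(bits):
            votes[i] += 1 if (h >> i) & 1 else -1
    return sum(1 << i for i in range(bits) if votes[i] > 0)

# The discrimination worry: a cohort built this way can end up correlating
# with a protected trait if the underlying browsing habits do.
print(toy_cohort_id(["news.example", "cycling.example", "recipes.example"]))
print(toy_cohort_id(["news.example", "cycling.example", "gardening.example"]))
```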

In another recommendation today, the EDPB and the EDPS also express a view that the use of AI to infer emotions of a natural person is “highly undesirable and should be prohibited” —  except for what they describe as “very specified cases, such as some health purposes, where the patient emotion recognition is important”.

“The use of AI for any type of social scoring should be prohibited,” they go on — touching on one use-case that the Commission’s draft proposal does suggest should be entirely prohibited, with EU lawmakers evidently keen to avoid any China-style social credit system taking hold in the region.

However, by failing to include a prohibition on biometric surveillance in public in the proposed regulation, the Commission is arguably risking just such a system being developed on the sly — i.e. by not banning private actors from deploying technology that could be used to track and profile people’s behavior remotely and en masse.

Commenting in a statement, the EDPB’s chair Andrea Jelinek and the EDPS Wiewiórowski argue as much, writing:

“Deploying remote biometric identification in publicly accessible spaces means the end of anonymity in those places. Applications such as live facial recognition interfere with fundamental rights and freedoms to such an extent that they may call into question the essence of these rights and freedoms. This calls for an immediate application of the precautionary approach. A general ban on the use of facial recognition in publicly accessible areas is the necessary starting point if we want to preserve our freedoms and create a human-centric legal framework for AI. The proposed regulation should also prohibit any type of use of AI for social scoring, as it is against the EU fundamental values and can lead to discrimination.”

In their joint opinion they also express concerns about the Commission’s proposed enforcement structure for the AI regulation, arguing that Member States’ data protection authorities should be designated as the national supervisory authorities ("pursuant to Article 59 of the [AI] Proposal"). They point out that EU DPAs are already enforcing the GDPR (General Data Protection Regulation) and the LED (Law Enforcement Directive) on AI systems involving personal data, and argue it would therefore be "a more harmonized regulatory approach, and contribute to the consistent interpretation of data processing provisions across the EU" if they were given competence for supervising the AI Regulation too.

They are also not happy with the Commission’s plan to give itself a predominant role in the planned European Artificial Intelligence Board (EAIB) — arguing that this “would conflict with the need for an AI European body independent from any political influence”. To ensure the Board’s independence the proposal should give it more autonomy and “ensure it can act on its own initiative”, they add.

The Commission has been contacted for comment.

The AI Regulation is one of a number of digital proposals unveiled by EU lawmakers in recent months. Negotiations between the different EU institutions — and lobbying from industry and civil society — continue as the bloc works toward adopting new digital rules.

In another recent and related development, the UK’s information commissioner warned last week over the threat posed by big data surveillance systems that are able to make use of technologies like live facial recognition — although she claimed it’s not her place to endorse or ban a technology.

But her opinion makes it clear that many applications of biometric surveillance may be incompatible with the UK’s privacy and data protection framework.

#andrea-jelinek, #artificial-intelligence, #biometrics, #data-protection, #data-protection-law, #edpb, #edps, #europe, #european-data-protection-board, #european-union, #facial-recognition, #general-data-protection-regulation, #law-enforcement, #privacy, #surveillance, #united-kingdom, #wojciech-wiewiorowski