WhatsApp will finally let users encrypt their chat backups in the cloud

WhatsApp said on Friday it will give its two billion users the option to encrypt their chat backups to the cloud, taking a significant step to put a lid on one of the tricky ways private communication between individuals on the app can be compromised.

The Facebook-owned service has end-to-end encrypted chats between users for years. But until now, users have had no option but to store their chat backups in the cloud — iCloud on iPhones and Google Drive on Android — without that same end-to-end protection.

Tapping these unprotected WhatsApp chat backups on Google's and Apple's servers is one of the widely known ways law enforcement agencies around the globe have been able to access the WhatsApp chats of suspects for years.

Now WhatsApp says it is patching this weak link in the system.

“WhatsApp is the first global messaging service at this scale to offer end-to-end encrypted messaging and backups, and getting there was a really hard technical challenge that required an entirely new framework for key storage and cloud storage across operating systems,” Facebook chief executive Mark Zuckerberg said in a post announcing the new feature.

Store your own encryption keys

The company said it has devised a system to enable WhatsApp users on Android and iOS to lock their chat backups with encryption keys. WhatsApp says it will offer users two ways to encrypt their cloud backups, and the feature is optional.

In the “coming weeks,” users on WhatsApp will see an option to generate a 64-digit encryption key to lock their chat backups in the cloud. Users can store the encryption key offline or in a password manager of their choice, or they can create a password that backs up their encryption key in a cloud-based “backup key vault” that WhatsApp has developed. The cloud-stored encryption key can’t be used without the user’s password, which isn’t known by WhatsApp.
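To picture the two options, here is a minimal sketch in Python; the key-derivation function, cipher, and parameters are assumptions for illustration, not WhatsApp's published design.

```python
# Illustrative sketch only: the KDF, cipher, and parameters here are assumptions,
# not WhatsApp's actual construction (which also involves a cloud "backup key vault").
import os
import base64
import hashlib
import secrets
from cryptography.fernet import Fernet

def generate_64_digit_key() -> str:
    """Option 1: a random 64-digit key the user stores offline or in a password manager."""
    return "".join(str(secrets.randbelow(10)) for _ in range(64))

def derive_key(secret: str, salt: bytes) -> bytes:
    """Stretch a password (or the 64-digit key) into a key Fernet can use."""
    raw = hashlib.pbkdf2_hmac("sha256", secret.encode(), salt, 600_000)
    return base64.urlsafe_b64encode(raw)  # Fernet expects a base64-encoded 32-byte key

def encrypt_backup(backup: bytes, secret: str) -> tuple[bytes, bytes]:
    """Encrypt the backup locally; only the salt and ciphertext ever reach the cloud."""
    salt = os.urandom(16)
    return salt, Fernet(derive_key(secret, salt)).encrypt(backup)

# Either the 64-digit key or a memorable password can serve as the secret.
salt, ciphertext = encrypt_backup(b"chat history bytes", generate_64_digit_key())
```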


“We know that some will prefer the 64-digit encryption key whereas others want something they can easily remember, so we will be including both options. Once a user sets their backup password, it is not known to us. They can reset it on their original device if they forget it,” WhatsApp said.

“For the 64-digit key, we will notify users multiple times when they sign up for end-to-end encrypted backups that if they lose their 64-digit key, we will not be able to restore their backup and that they should write it down. Before the setup is complete, we’ll ask users to affirm that they’ve saved their password or 64-digit encryption key.”

A WhatsApp spokesperson told TechCrunch that once an encrypted backup is created, previous copies of the backup will be deleted. “This will happen automatically and there is no action that a user will need to take,” the spokesperson added.

Potential regulatory pushback?

The move to introduce this added layer of privacy is significant and one that could have far-reaching implications.

End-to-end encryption remains a thorny topic of discussion as governments continue to lobby for backdoors. Apple was reportedly pressured to not add encryption to iCloud Backups after the FBI complained, and while Google has offered users the ability to encrypt their data stored in Google Drive, the company allegedly didn’t tell governments before it rolled out the feature.

When asked by TechCrunch whether WhatsApp, or its parent firm Facebook, had consulted with government bodies — or if it had received their support — during the development process of this feature, the company declined to discuss any such conversations.

“People’s messages are deeply personal and as we live more of our lives online, we believe companies should enhance the security they provide their users. By releasing this feature, we are providing our users with the option to add this additional layer of security for their backups if they’d like to, and we’re excited to give our users a meaningful advancement in the safety of their personal messages,” the company told TechCrunch.

WhatsApp also confirmed that it will roll out this optional feature in every market where its app operates. It's not uncommon for companies to withhold privacy features for legal and regulatory reasons. Apple's upcoming encrypted browsing feature, for instance, won't be made available to users in certain countries, including China, Belarus, Egypt, Kazakhstan, Saudi Arabia, Turkmenistan, Uganda, and the Philippines.

At any rate, Friday’s announcement comes days after ProPublica reported that private end-to-end encrypted conversations between two users can be read by human contractors when messages are reported by users.

“Making backups fully encrypted is really hard and it’s particularly hard to make it reliable and simple enough for people to use. No other messaging service at this scale has done this and provided this level of security for people’s messages,” Uzma Barlaskar, product lead for privacy at WhatsApp, told TechCrunch.

“We’ve been working on this problem for many years, and to build this, we had to develop an entirely new framework for key storage and cloud storage that can be used across the world’s largest operating systems and that took time.”


Apple’s dangerous path

Hello friends, and welcome back to Week in Review.

Last week, we dove into the truly bizarre machinations of the NFT market. This week, we’re talking about something that’s a little bit more impactful on the current state of the web — Apple’s NeuralHash kerfuffle.

If you’re reading this on the TechCrunch site, you can get this in your inbox from the newsletter page, and follow my tweets @lucasmtny


the big thing

In the past month, Apple did something it generally has done an exceptional job avoiding — the company made what seemed to be an entirely unforced error.

In early August — seemingly out of nowhere** — the company announced that by the end of the year it would roll out a technology called NeuralHash that would actively scan the libraries of all iCloud Photos users, seeking out image hashes that matched known images of child sexual abuse material (CSAM). For obvious reasons, the on-device scanning could not be opted out of.

This announcement was not coordinated with other major consumer tech giants; Apple pushed forward alone.

Researchers and advocacy groups had almost uniformly negative feedback for the effort, raising concerns that it could create new channels of abuse for actors like governments to detect on-device information that they regarded as objectionable. As my colleague Zach noted in a recent story, “The Electronic Frontier Foundation said this week it had amassed more than 25,000 signatures from consumers. On top of that, close to 100 policy and rights groups, including the American Civil Liberties Union, also called on Apple to abandon plans to roll out the technology.”

(The announcement also reportedly generated some controversy inside of Apple.)

The issue — of course — wasn’t that Apple was looking for ways to prevent the proliferation of CSAM while making as few device security concessions as possible. The issue was that Apple was unilaterally making a massive choice that would affect billions of customers (while likely pushing competitors towards similar solutions), and was doing so without external public input about possible ramifications or necessary safeguards.

Long story short, over the past month researchers discovered Apple’s NeuralHash wasn’t as airtight as hoped, and the company announced Friday that it was delaying the rollout “to take additional time over the coming months to collect input and make improvements before releasing these critically important child safety features.”

Having spent several years in the tech media, I will say that the only reason to release news on a Friday morning ahead of a long weekend is to ensure that the announcement is read and seen by as few people as possible, and it’s clear why they’d want that. It’s a major embarrassment for Apple, and as with any delayed rollout like this, it’s a sign that their internal teams weren’t adequately prepared and lacked the ideological diversity to gauge the scope of the issue they were tackling. This isn’t really a dig at Apple’s team building this so much as it’s a dig at Apple trying to solve a problem like this inside the Apple Park vacuum while adhering to its annual iOS release schedule.


Apple is increasingly looking to make privacy a key selling point for the iOS ecosystem, and as a result of this productization, has pushed development of privacy-centric features towards the same secrecy its surface-level design changes command. In June, Apple announced iCloud+ and raised some eyebrows when they shared that certain new privacy-centric features would only be available to iPhone users who paid for additional subscription services.

You obviously can’t tap public opinion for every product update, but perhaps wide-ranging and trail-blazing security and privacy features should be treated a bit differently than the average product update. Apple’s lack of engagement with research and advocacy groups on NeuralHash was pretty egregious and certainly raises some questions about whether the company fully respects how the choices they make for iOS affect the broader internet.

Delaying the feature’s rollout is a good thing, but let’s all hope they take that time to reflect more broadly as well.

** Though the announcement was a surprise to many, Apple’s development of this feature wasn’t coming completely out of nowhere. Those at the top of Apple likely felt that the winds of global tech regulation might be shifting towards outright bans of some methods of encryption in some of its biggest markets.

Back in October of 2020, then-United States Attorney General Bill Barr joined representatives from the UK, New Zealand, Australia, Canada, India and Japan in signing a letter raising major concerns about how implementations of encryption tech posed “significant challenges to public safety, including to highly vulnerable members of our societies like sexually exploited children.” The letter effectively called on tech companies to get creative in how they tackled this problem.


other things

Here are the TechCrunch news stories that especially caught my eye this week:

LinkedIn kills Stories
You may be shocked to hear that LinkedIn even had a Stories-like product on their platform, but if you did already know that they were testing Stories, you likely won’t be so surprised to hear that the test didn’t pan out too well. The company announced this week that they’ll be suspending the feature at the end of the month. RIP.

FAA grounds Virgin Galactic over questions about Branson flight
While all appeared to go swimmingly for Richard Branson’s trip to space last month, the FAA has some questions regarding why the flight seemed to unexpectedly veer so far off the cleared route. The FAA is preventing the company from further launches until they find out what the deal is.

Apple buys a classical music streaming service
While Spotify makes news every month or two for spending a massive amount acquiring a popular podcast, Apple seems to have eyes on a different market for Apple Music, announcing this week that they’re bringing the classical music streaming service Primephonic onto the Apple Music team.

TikTok parent company buys a VR startup
It isn’t a huge secret that ByteDance and Facebook have been trying to copy each other’s success at times, but many probably weren’t expecting TikTok’s parent company to wander into the virtual reality game. The Chinese company bought the startup Pico, which makes consumer VR headsets for China and enterprise VR products for North American customers.

Twitter tests an anti-abuse ‘Safety Mode’
The same features that make Twitter an incredibly cool product for some users can also make the experience awful for others, a realization that Twitter has seemingly been very slow to make. Its latest solution is more individual user controls, which Twitter is testing out with a new “Safety Mode” that pairs algorithmic intelligence with new user inputs.


extra things

Some of my favorite reads from our Extra Crunch subscription service this week:

Our favorite startups from YC’s Demo Day, Part 1 
“Y Combinator kicked off its fourth-ever virtual Demo Day today, revealing the first half of its nearly 400-company batch. The presentation, YC’s biggest yet, offers a snapshot into where innovation is heading, from not-so-simple seaweed to a Clearco for creators….”

…Part 2
“…Yesterday, the TechCrunch team covered the first half of this batch, as well as the startups with one-minute pitches that stood out to us. We even podcasted about it! Today, we’re doing it all over again. Here’s our full list of all startups that presented on the record today, and below, you’ll find our votes for the best Y Combinator pitches of Day Two. The ones that, as people who sift through a few hundred pitches a day, made us go ‘oh wait, what’s this?’”

All the reasons why you should launch a credit card
“… if your company somehow hasn’t yet found its way to launch a debit or credit card, we have good news: It’s easier than ever to do so and there’s actual money to be made. Just know that if you do, you’ve got plenty of competition and that actual customer usage will probably depend on how sticky your service is and how valuable the rewards are that you offer to your most active users….”


Thanks for reading, and again, if you’re reading this on the TechCrunch site, you can get this in your inbox from the newsletter page, and follow my tweets @lucasmtny

Lucas Matney


Nude hunt: LA phisherman accessed 4,700 iCloud accounts, 620K photos

The Internet is unfortunately packed full of criminals seeking to steal sexual (or sexualizable) images from privately held cloud backup accounts.


The LA Times reported this week that Los Angeles man Hao Kuo “David” Chi pled guilty to four federal felonies related to his efforts to steal and share online nude images of young women. Chi collected more than 620,000 private photos and 9,000 videos from an undetermined number of victims across the US, most of whom were young and female.

“At least 306” victims

Chi’s plea agreement with federal prosecutors in Tampa, Florida, acknowledged “at least 306” victims. This number may be considerably smaller than the true total, since the FBI found that about 4,700 out of 500,000 emails in two of Chi’s Gmail accounts—backupagenticloud and applebackupicloud at Gmail—contained iCloud credentials that Chi tricked his victims into providing.

According to Chi, he selected roughly 200 of these victims based on online requests. Chi marketed his iCloud break-in “services” under the nom de guerre icloudripper4you. His “customers” would identify an iCloud account for attack, after which Chi would use his sketchily named Gmail accounts to contact the victim, impersonating an Apple service representative.



Apple photo-scanning plan faces global backlash from 90 rights groups


More than 90 policy groups from the US and around the world signed an open letter urging Apple to drop its plan to have Apple devices scan photos for child sexual abuse material (CSAM).

“The undersigned organizations committed to civil rights, human rights, and digital rights around the world are writing to urge Apple to abandon the plans it announced on 5 August 2021 to build surveillance capabilities into iPhones, iPads, and other Apple products,” the letter to Apple CEO Tim Cook said today. “Though these capabilities are intended to protect children and to reduce the spread of child sexual abuse material (CSAM), we are concerned that they will be used to censor protected speech, threaten the privacy and security of people around the world, and have disastrous consequences for many children.”

The Center for Democracy & Technology (CDT) announced the letter, with CDT Security & Surveillance Project Co-Director Sharon Bradford Franklin saying, “We can expect governments will take advantage of the surveillance capability Apple is building into iPhones, iPads, and computers. They will demand that Apple scan for and block images of human rights abuses, political protests, and other content that should be protected as free expression, which forms the backbone of a free and democratic society.”



Apple’s CSAM detection tech is under fire — again

Apple has encountered monumental backlash to a new child sexual abuse material (CSAM) detection technology it announced earlier this month. The system, which Apple calls NeuralHash, has yet to be activated for its billion-plus users, but the technology is already facing heat from security researchers who say the algorithm is producing flawed results.

NeuralHash is designed to identify known CSAM on a user’s device without Apple having to possess the image or know its contents. Because a user’s photos stored in iCloud are encrypted so that even Apple can’t readily access the data, NeuralHash instead scans for known CSAM on the user’s device, which Apple claims is more privacy-friendly, as it limits the scanning to photos rather than all of a user’s files, as other companies do.

Apple does this by looking for images on a user’s device that have the same hash — a string of letters and numbers that can uniquely identify an image — as hashes provided by child protection organizations like the National Center for Missing & Exploited Children (NCMEC). If NeuralHash finds 30 or more matching hashes, the images are flagged to Apple for a manual review before the account owner is reported to law enforcement. Apple says the chance of a false positive is about one in one trillion accounts.
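To make that flow concrete, here is a minimal, hypothetical sketch of threshold-based matching. The hash function and database below are stand-ins: NeuralHash is a perceptual hash, and the real system uses blinded matching and safety vouchers, none of which is modeled here.

```python
# Hypothetical sketch of the threshold matching described above. SHA-256 is a
# stand-in: NeuralHash is a perceptual hash meant to match edited or cropped
# copies of an image, which a cryptographic hash would not.
import hashlib

KNOWN_CSAM_HASHES = {"placeholder-hash-1", "placeholder-hash-2"}  # stand-in database
MATCH_THRESHOLD = 30  # per the description above

def image_hash(image_bytes: bytes) -> str:
    return hashlib.sha256(image_bytes).hexdigest()

def should_flag_for_manual_review(photo_library: list[bytes]) -> bool:
    matches = sum(1 for img in photo_library if image_hash(img) in KNOWN_CSAM_HASHES)
    return matches >= MATCH_THRESHOLD
```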

But security experts and privacy advocates have expressed concern that the system could be abused by highly-resourced actors, like governments, to implicate innocent victims or to manipulate the system to detect other materials that authoritarian nation states find objectionable. NCMEC called critics the “screeching voices of the minority,” according to a leaked memo distributed internally to Apple staff.

Last night, Asuhariet Ygvar reverse-engineered Apple’s NeuralHash into a Python script and published the code to GitHub, allowing anyone to test the technology regardless of whether they have an Apple device. In a Reddit post, Ygvar said NeuralHash “already exists” in iOS 14.3 as obfuscated code, and that he was able to reconstruct the technology to help other security researchers understand the algorithm better before it’s rolled out to iOS and macOS devices later this year.

It didn’t take long before others tinkered with the published code and soon came the first reported case of a “hash collision,” which in NeuralHash’s case is where two entirely different images produce the same hash. Cory Cornelius, a well-known research scientist at Intel Labs, discovered the hash collision. Ygvar confirmed the collision a short time later.

Hash collisions can be a death knell for systems that rely on hashing to keep them secure. Over the years, several well-known cryptographic hashing algorithms, like MD5 and SHA-1, were retired after collision attacks rendered them ineffective.

Kenneth White, a cryptography expert and founder of the Open Crypto Audit Project, said in a tweet: “I think some people aren’t grasping that the time between the iOS NeuralHash code being found and [the] first collision was not months or days, but a couple of hours.”

When reached, an Apple spokesperson declined to comment on the record. But in a background call where reporters were not allowed to quote executives directly or by name, Apple downplayed the hash collision and argued that the protections it puts in place — such as a manual review of photos before they are reported to law enforcement — are designed to prevent abuses. Apple also said that the version of NeuralHash that was reverse-engineered is a generic version, and not the complete version that will roll out later this year.

It’s not just civil liberties groups and security experts that are expressing concern about the technology. A senior lawmaker in the German parliament sent a letter to Apple chief executive Tim Cook this week saying that the company is walking down a “dangerous path” and urged Apple not to implement the system.


Are Apple’s Tools Against Child Abuse Bad for Your Privacy?

The backlash to Apple’s efforts to fight child sexual abuse shows that in the debate between privacy and security, there are few easy answers.


Updated app from Apple brings iCloud Passwords to Windows

The updated iCloud for Windows and new iCloud Passwords app. (We've blacked out some private information here, obviously.)


Apple has released a new version of iCloud for Windows, numbered 12.5. The update adds the ability to access and manage passwords saved in iCloud from a Windows machine, a feature users have long requested.

Apple has been gradually adding more support for iCloud passwords on non-Apple platforms with mixed results. The company released a Chrome extension that synced iCloud passwords with Chrome. But like this new iCloud Passwords app, it did the bare minimum and not much else.

Still, this addition is welcome for users who primarily live in the Apple ecosystem (and thus use Apple’s iCloud password locker) but who sometimes have to use Windows. For example, some folks use an iPhone or a Mac most of the time but have a Windows PC that is only used to play games that can’t be played on the Mac.



Interview: Apple’s Head of Privacy details child abuse detection and Messages safety features

Last week, Apple announced a series of new features targeted at child safety on its devices. Though not live yet, the features will arrive later this year for users. Though the goals of these features are universally accepted to be good ones — the protection of minors and limiting the spread of Child Sexual Abuse Material (CSAM) — there have been some questions about the methods Apple is using.

I spoke to Erik Neuenschwander, Head of Privacy at Apple, about the new features launching for its devices. He shared detailed answers to many of the concerns that people have about the features and talked at length about some of the tactical and strategic issues that could come up once this system rolls out.

I also asked about the rollout of the features, which come closely intertwined but are really completely separate systems that have similar goals. To be specific, Apple is announcing three different things here, some of which are being confused with one another in coverage and in the minds of the public. 

CSAM detection in iCloud Photos – A detection system called NeuralHash creates identifiers it can compare with IDs from the National Center for Missing and Exploited Children and other entities to detect known CSAM content in iCloud Photo libraries. Most cloud providers already scan user libraries for this information — Apple’s system is different in that it does the matching on device rather than in the cloud.

Communication Safety in Messages – A feature that a parent opts to turn on for a minor on their iCloud Family account. It will alert children when an image they are going to view has been detected to be explicit and it tells them that it will also alert the parent.

Interventions in Siri and search – A feature that will intervene when a user tries to search for CSAM-related terms through Siri and search and will inform the user of the intervention and offer resources.

For more on all of these features you can read our articles linked above or Apple’s new FAQ that it posted this weekend.

From personal experience, I know that there are people who don’t understand the difference between those first two systems, or assume that there will be some possibility that they may come under scrutiny for innocent pictures of their own children that may trigger some filter. It’s led to confusion in what is already a complex rollout of announcements. These two systems are completely separate, of course, with CSAM detection looking for precise matches with content that is already known to organizations to be abuse imagery. Communication Safety in Messages takes place entirely on the device and reports nothing externally — it’s just there to flag to a child that they are or could be about to be viewing explicit images. This feature is opt-in by the parent and transparent to both parent and child that it is enabled.

Apple’s Communication Safety in Messages feature. Image Credits: Apple

There have also been questions about the on-device hashing of photos to create identifiers that can be compared with the database. Though NeuralHash is a technology that can be used for other kinds of features like faster search in photos, it’s not currently used for anything else on iPhone aside from CSAM detection. When iCloud Photos is disabled, the feature stops working completely. This offers an opt-out for people but at an admittedly steep cost given the convenience and integration of iCloud Photos with Apple’s operating systems.

Though this interview won’t answer every possible question related to these new features, this is the most extensive on-the-record discussion by Apple’s senior privacy member. It seems clear from Apple’s willingness to provide access and its ongoing FAQs and press briefings (there have been at least three so far and likely many more to come) that it feels that it has a good solution here.

Despite the concerns and resistance, it seems as if it is willing to take as much time as is necessary to convince everyone of that. 

This interview has been lightly edited for clarity.

TC: Most other cloud providers have been scanning for CSAM for some time now. Apple has not. Obviously there are no current regulations that say that you must seek it out on your servers, but there is some roiling regulation in the EU and other countries. Is that the impetus for this? Basically, why now?

Erik Neuenschwander: Why now comes down to the fact that we’ve now got the technology that can balance strong child safety and user privacy. This is an area we’ve been looking at for some time, including current state-of-the-art techniques, which mostly involve scanning through the entire contents of users’ libraries on cloud services. That, as you point out, isn’t something that we’ve ever done: to look through users’ iCloud Photos. This system doesn’t change that either; it neither looks through data on the device, nor does it look through all photos in iCloud Photos. Instead, what it does is give us a new ability to identify accounts which are starting collections of known CSAM.

So the development of this new CSAM detection technology is the watershed that makes now the time to launch this. And Apple feels that it can do it in a way that it feels comfortable with and that is ‘good’ for your users?

That’s exactly right. We have two co-equal goals here. One is to improve child safety on the platform and the second is to preserve user privacy. And what we’ve been able to do across all three of the features is bring together technologies that let us deliver on both of those goals.

Announcing the Communication Safety in Messages feature and the CSAM detection in iCloud Photos system at the same time seems to have created confusion about their capabilities and goals. Was it a good idea to announce them concurrently? And why were they announced concurrently, if they are separate systems?

Well, while they are [two] systems, they are also of a piece along with our increased interventions that will be coming in Siri and search. As important as it is to identify collections of known CSAM where they are stored in Apple’s iCloud Photos service, it’s also important to try to get upstream of that already horrible situation. So CSAM detection means that there’s already known CSAM that has been through the reporting process, and is being shared widely, re-victimizing children on top of the abuse that had to happen to create that material in the first place. And so to do that, I think, is an important step, but it is also important to do things to intervene earlier on when people are beginning to enter into this problematic and harmful area, or if there are already abusers trying to groom or to bring children into situations where abuse can take place, and Communication Safety in Messages and our interventions in Siri and search actually strike at those parts of the process. So we’re really trying to disrupt the cycles that lead to CSAM that then ultimately might get detected by our system.

The process of Apple’s CSAM detection in iCloud Photos system. Image Credits: Apple

Governments and agencies worldwide are constantly pressuring all large organizations that have any sort of end-to-end or even partial encryption enabled for their users. They often lean on CSAM and possible terrorism activities as rationale to argue for backdoors or encryption defeat measures. Is launching the feature and this capability with on-device hash matching an effort to stave off those requests and say, look, we can provide you with the information that you require to track down and prevent CSAM activity — but without compromising a user’s privacy?

So, first, you talked about the device matching so I just want to underscore that the system as designed doesn’t reveal — in the way that people might traditionally think of a match — the result of the match to the device or, even if you consider the vouchers that the device creates, to Apple. Apple is unable to process individual vouchers; instead, all the properties of our system mean that it’s only once an account has accumulated a collection of vouchers associated with illegal, known CSAM images that we are able to learn anything about the user’s account. 

Now, why to do it is because, as you said, this is something that will provide that detection capability while preserving user privacy. We’re motivated by the need to do more for child safety across the digital ecosystem, and all three of our features, I think, take very positive steps in that direction. At the same time we’re going to leave privacy undisturbed for everyone not engaged in the illegal activity.

Does this, creating a framework to allow scanning and matching of on-device content, create a framework for outside law enforcement to counter with, ‘we can give you a list, we don’t want to look at all of the user’s data but we can give you a list of content that we’d like you to match’. And if you can match it with this content you can match it with other content we want to search for. How does it not undermine Apple’s current position of ‘hey, we can’t decrypt the user’s device, it’s encrypted, we don’t hold the key?’

It doesn’t change that one iota. The device is still encrypted, we still don’t hold the key, and the system is designed to function on on-device data. What we’ve designed has a device side component — and it has the device side component by the way, for privacy improvements. The alternative of just processing by going through and trying to evaluate users data on a server is actually more amenable to changes [without user knowledge], and less protective of user privacy.

Our system involves both an on-device component where the voucher is created, but nothing is learned, and a server-side component, which is where that voucher is sent along with data coming to Apple service and processed across the account to learn if there are collections of illegal CSAM. That means that it is a service feature. I understand that it’s a complex attribute that a feature of the service has a portion where the voucher is generated on the device, but again, nothing’s learned about the content on the device. The voucher generation is actually exactly what enables us not to have to begin processing all users’ content on our servers which we’ve never done for iCloud Photos. It’s those sorts of systems that I think are more troubling when it comes to the privacy properties — or how they could be changed without any user insight or knowledge to do things other than what they were designed to do.

One of the bigger queries about this system is that Apple has said that it will just refuse action if it is asked by a government or other agency to compromise by adding things that are not CSAM to the database to check for them on-device. There are some examples where Apple has had to comply with local law at the highest levels if it wants to operate there, China being an example. So how do we trust that Apple is going to hew to this rejection of interference if pressured or asked by a government to compromise the system?

Well first, that is launching only for US iCloud accounts, and so the hypotheticals seem to bring up generic countries or other countries that aren’t the US when they speak in that way, and therefore it seems to be the case that people agree US law doesn’t offer these kinds of capabilities to our government.

But even in the case where we’re talking about some attempt to change the system, it has a number of protections built in that make it not very useful for trying to identify individuals holding specifically objectionable images. The hash list is built into the operating system; we have one global operating system and don’t have the ability to target updates to individual users, and so hash lists will be shared by all users when the system is enabled. And secondly, the system requires the threshold of images to be exceeded, so trying to seek out even a single image from a person’s device or set of people’s devices won’t work because the system simply does not provide any knowledge to Apple for single photos stored in our service. And then, thirdly, the system has built into it a stage of manual review where, if an account is flagged with a collection of illegal CSAM material, an Apple team will review that to make sure that it is a correct match of illegal CSAM material prior to making any referral to any external entity. And so the hypothetical requires jumping through a lot of hoops, including having Apple change its internal process to refer material that is not illegal, like known CSAM is, and we don’t believe that there’s a basis on which people will be able to make that request in the US. And the last point that I would just add is that it does still preserve user choice: if a user does not like this kind of functionality, they can choose not to use iCloud Photos, and if iCloud Photos is not enabled, no part of the system is functional.

So if iCloud Photos is disabled, the system does not work, which is the public language in the FAQ. I just wanted to ask specifically, when you disable iCloud Photos, does this system continue to create hashes of your photos on device, or is it completely inactive at that point?

If users are not using iCloud Photos, NeuralHash will not run and will not generate any vouchers. CSAM detection is a neural hash being compared against a database of the known CSAM hashes that are part of the operating system image. None of that piece, nor any of the additional parts including the creation of the safety vouchers or the uploading of vouchers to iCloud Photos is functioning if you’re not using iCloud Photos. 

In recent years, Apple has often leaned into the fact that on-device processing preserves user privacy. And in nearly every previous case I can think of, that’s true. Scanning photos to identify their content and allow me to search them, for instance. I’d rather that be done locally and never sent to a server. However, in this case, it seems like there may actually be a sort of anti-effect in that you’re scanning locally, but for external use cases, rather than scanning for personal use — creating a ‘less trust’ scenario in the minds of some users. Add to this that every other cloud provider scans it on their servers, and the question becomes why should this implementation, being different from most others, engender more trust in the user rather than less?

I think we’re raising the bar, compared to the industry-standard way to do this. Any sort of server-side algorithm that’s processing all users’ photos is putting that data at more risk of disclosure and is, by definition, less transparent in terms of what it’s doing on top of the user’s library. So, by building this into our operating system, we gain the same properties that the integrity of the operating system provides already across so many other features, with one global operating system that’s the same for all users who download and install it, and so in that one property it is much more challenging even to target an individual user. On the server side that’s actually quite easy — trivial. To be able to have some of those properties by building it into the device and ensuring it’s the same for all users with the feature enabled gives a strong privacy property.

Secondly, you point out how use of on device technology is privacy preserving, and in this case, that’s a representation that I would make to you, again. That it’s really the alternative to where users’ libraries have to be processed on a server that is less private.

The thing that we can say with this system is that it leaves privacy completely undisturbed for every other user who’s not engaged in this illegal behavior. Apple gains no additional knowledge about any user’s cloud library. No user’s iCloud library has to be processed as a result of this feature. Instead what we’re able to do is to create these cryptographic safety vouchers. They have mathematical properties that say, Apple will only be able to decrypt the contents or learn anything about the images and users specifically that collect photos that match illegal, known CSAM hashes, and that’s just not something anyone can say about a cloud processing scanning service, where every single image has to be processed in a clear decrypted form and run by a routine to determine who knows what. At that point it’s very easy to determine anything you want [about a user’s images], versus our system, where the only thing that can be determined is whether there are images that match a set of known CSAM hashes that came directly from NCMEC and other child safety organizations.

Can this CSAM detection feature stay holistic when the device is physically compromised? Sometimes cryptography gets bypassed locally, somebody has the device in hand — are there any additional layers there?

I think it’s important to underscore how very challenging and expensive and rare this is. It’s not a practical concern for most users though it’s one we take very seriously, because the protection of data on the device is paramount for us. And so if we engage in the hypothetical where we say that there has been an attack on someone’s device: that is such a powerful attack that there are many things that that attacker could attempt to do to that user. There’s a lot of a user’s data that they could potentially get access to. And the idea that the most valuable thing that an attacker — who’s undergone such an extremely difficult action as breaching someone’s device — was that they would want to trigger a manual review of an account doesn’t make much sense. 

Because, let’s remember, even if the threshold is met and we have some vouchers that are decrypted by Apple, the next stage is a manual review to determine if that account should be referred to NCMEC or not, and that is something that we want to only occur in cases where it’s a legitimate high-value report. We’ve designed the system in that way, but if we consider the attack scenario you brought up, I think that’s not a very compelling outcome to an attacker.

Why is there a threshold of images for reporting, isn’t one piece of CSAM content too many?

We want to ensure that the reports that we make to NCMEC are high value and actionable, and one of the notions of all systems is that there’s some uncertainty built in to whether or not that image matched. And so the threshold allows us to reach that point where we expect a false reporting rate for review of one in one trillion accounts per year. So, working against the idea that we do not have any interest in looking through users’ photo libraries outside those that are holding collections of known CSAM, the threshold allows us to have high confidence that those accounts that we review are ones that, when we refer to NCMEC, law enforcement will be able to take up and effectively investigate, prosecute and convict.


New Apple technology will warn parents and children about sexually explicit photos in Messages

Apple later this year will roll out new tools that will warn children and parents if the child sends or receives sexually explicit photos through the Messages app. The feature is part of a handful of new technologies Apple is introducing that aim to limit the spread of Child Sexual Abuse Material (CSAM) across Apple’s platforms and services.

As part of these developments, Apple will be able to detect known CSAM images on its mobile devices, like iPhone and iPad, and in photos uploaded to iCloud, while still respecting consumer privacy.

The new Messages feature, meanwhile, is meant to enable parents to play a more active and informed role when it comes to helping their children learn to navigate online communication. Through a software update rolling out later this year, Messages will be able to use on-device machine learning to analyze image attachments and determine if a photo being shared is sexually explicit. This technology does not require Apple to access or read the child’s private communications, as all the processing happens on the device. Nothing is passed back to Apple’s servers in the cloud.

If a sensitive photo is discovered in a message thread, the image will be blocked and a label will appear below the photo that states, “this may be sensitive” with a link to click to view the photo. If the child chooses to view the photo, another screen appears with more information. Here, a message informs the child that sensitive photos and videos “show the private body parts that you cover with bathing suits” and “it’s not your fault, but sensitive photos and videos can be used to harm you.”

It also suggests that the person in the photo or video may not want it to be seen and it could have been shared without their knowing.


These warnings aim to help guide the child to make the right decision by choosing not to view the content.

However, if the child clicks through to view the photo anyway, they’ll then be shown an additional screen that informs them that if they choose to view the photo, their parents will be notified. The screen also explains that their parents want them to be safe and suggests that the child talk to someone if they feel pressured. It offers a link to more resources for getting help, as well.

There’s still an option at the bottom of the screen to view the photo, but again, it’s not the default choice. Instead, the screen is designed in a way where the option to not view the photo is highlighted.

These types of features could help protect children from sexual predators, not only by introducing technology that interrupts the communications and offers advice and resources, but also because the system will alert parents. In many cases where a child is hurt by a predator, parents didn’t even realize the child had begun to talk to that person online or by phone. This is because child predators are very manipulative and will attempt to gain the child’s trust, then isolate the child from their parents so they’ll keep the communications a secret. In other cases, the predators have groomed the parents, too.

Apple’s technology could help in both cases by intervening, identifying and alerting to explicit materials being shared.

However, a growing amount of CSAM material is what’s known as self-generated CSAM, or imagery that is taken by the child, which may be then shared consensually with the child’s partner or peers. In other words, sexting or sharing “nudes.” According to a 2019 survey from Thorn, a company developing technology to fight the sexual exploitation of children, this practice has become so common that 1 in 5 girls ages 13 to 17 said they have shared their own nudes, and 1 in 10 boys have done the same. But the child may not fully understand how sharing that imagery puts them at risk of sexual abuse and exploitation.

The new Messages feature will offer a similar set of protections here, too. In this case, if a child attempts to send an explicit photo, they’ll be warned before the photo is sent. Parents can also receive a message if the child chooses to send the photo anyway.

Apple says the new technology will arrive as part of a software update later this year to accounts set up as families in iCloud for iOS 15, iPadOS 15, and macOS Monterey in the U.S.

This update will also include updates to Siri and Search that will offer expanded guidance and resources to help children and parents stay safe online and get help in unsafe situations. For example, users will be able to ask Siri how to report CSAM or child exploitation. Siri and Search will also intervene when users search for queries related to CSAM to explain that the topic is harmful and provide resources to get help.


Apple says it will begin scanning iCloud Photos for child abuse images

Later this year, Apple will roll out a technology that will allow the company to detect and report known child sexual abuse material to law enforcement in a way it says will preserve user privacy.

Apple told TechCrunch that the detection of child sexual abuse material (CSAM) is one of several new features aimed at better protecting the children who use its services from online harm, including filters to block potentially sexually explicit photos sent and received through a child’s iMessage account. Another feature will intervene when a user tries to search for CSAM-related terms through Siri and Search.

Most cloud services — Dropbox, Google, and Microsoft to name a few — already scan user files for content that might violate their terms of service or be potentially illegal, like CSAM. But Apple has long resisted scanning users’ files in the cloud by giving users the option to encrypt their data before it ever reaches Apple’s iCloud servers.

Apple said its new CSAM detection technology — NeuralHash — instead works on a user’s device, and can identify if a user uploads known child abuse imagery to iCloud without decrypting the images until a threshold is met and a sequence of checks to verify the content are cleared.

News of Apple’s effort leaked Wednesday when Matthew Green, a cryptography professor at Johns Hopkins University, revealed the existence of the new technology in a series of tweets. The news was met with resistance from some security experts and privacy advocates, and also from some users accustomed to Apple’s approach to security and privacy that most other companies don’t have.

Apple is trying to calm fears by baking in privacy through multiple layers of encryption, fashioned in a way that requires multiple steps before it ever makes it into the hands of Apple’s final manual review.

NeuralHash will land in iOS 15 and macOS Monterey, slated to be released in the next month or two, and works by converting the photos on a user’s iPhone or Mac into a unique string of letters and numbers, known as a hash. Any time you modify an image slightly, it changes the hash and can prevent matching. Apple says NeuralHash tries to ensure that identical and visually similar images — such as cropped or edited images — result in the same hash.

Before an image is uploaded to iCloud Photos, those hashes are matched on the device against a database of known hashes of child abuse imagery, provided by child protection organizations like the National Center for Missing & Exploited Children (NCMEC) and others. NeuralHash uses a cryptographic technique called private set intersection to detect a hash match without revealing what the image is or alerting the user.

The results are uploaded to Apple but cannot be read on their own. Apple uses another cryptographic principle called threshold secret sharing that allows it only to decrypt the contents if a user crosses a threshold of known child abuse imagery in their iCloud Photos. Apple would not say what that threshold was, but said — for example — that if a secret is split into a thousand pieces and the threshold is ten images of child abuse content, the secret can be reconstructed from any ten of those pieces.
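Threshold secret sharing is a well-known cryptographic building block. As a rough illustration of the idea only (Shamir's classic scheme over a prime field, with made-up parameters, not Apple's actual construction), any ten of a thousand shares recover the secret, while nine reveal essentially nothing:

```python
# Illustration only: Shamir's threshold secret sharing over a prime field,
# with made-up parameters. Apple's actual construction is not public in this form.
import secrets

PRIME = 2**127 - 1  # a Mersenne prime, large enough for a 128-bit-ish secret

def make_shares(secret: int, threshold: int, total: int):
    """Split `secret` into `total` shares; any `threshold` of them recover it."""
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(threshold - 1)]
    def poly(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, poly(x)) for x in range(1, total + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret from enough shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

secret = secrets.randbelow(PRIME)
shares = make_shares(secret, threshold=10, total=1000)
assert reconstruct(shares[:10]) == secret   # any ten shares are enough
assert reconstruct(shares[:9]) != secret    # nine recover nothing useful
```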


It’s at that point Apple can decrypt the matching images, manually verify the contents, disable a user’s account and report the imagery to NCMEC, which is then passed to law enforcement. Apple says this process is more privacy mindful than scanning files in the cloud as NeuralHash only searches for known and not new child abuse imagery. Apple said that there is a one in one trillion chance of a false positive, but there is an appeals process in place in the event an account is mistakenly flagged.
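To see why a match threshold pushes the account-level error rate down so sharply, here is a back-of-envelope binomial estimate; the per-image false-match probability, library size, and threshold below are assumptions for illustration, since Apple has not published its actual parameters.

```python
# Back-of-envelope only: p, n, and the threshold are made-up numbers, not
# Apple's (unpublished) parameters.
from math import comb

p = 1e-6        # assumed chance a single innocent photo falsely matches
n = 20_000      # assumed number of photos in a library
threshold = 30  # assumed number of matches required before any human review

# P(at least `threshold` false matches) under an independent binomial model.
prob = sum(comb(n, k) * p**k * (1 - p)**(n - k)
           for k in range(threshold, threshold + 40))
print(f"chance an innocent library crosses the threshold: {prob:.2e}")
```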

Apple has published technical details on its website about how NeuralHash works, which was reviewed by cryptography experts.

But despite the wide support of efforts to combat child sexual abuse, there is still a component of surveillance that many would feel uncomfortable handing over to an algorithm, and some security experts are calling for more public discussion before Apple rolls the technology out to users.

A big question is why now and not sooner. Apple said its privacy-preserving CSAM detection did not exist until now. But companies like Apple have also faced considerable pressure from the U.S. government and its allies to weaken or backdoor the encryption used to protect their users’ data to allow law enforcement to investigate serious crime.

Tech giants have refused efforts to backdoor their systems, but have faced pushback over efforts to further shut out government access. Although data stored in iCloud is encrypted in a way that even Apple cannot access it, Reuters reported last year that Apple dropped a plan for encrypting users’ full phone backups to iCloud after the FBI complained that it would harm investigations.

The news about Apple’s new CSAM detection tool, announced without public discussion, also sparked concerns that the technology could be abused to flood victims with child abuse imagery that could result in their accounts getting flagged and shuttered, but Apple downplayed the concerns and said a manual review would examine the evidence for possible misuse.

Apple said NeuralHash will roll out in the U.S. at first, but would not say if, or when, it would be rolled out internationally. Until recently, companies like Facebook were forced to switch off their child abuse detection tools across the European Union after the practice was inadvertently banned. Apple said the feature is technically optional in that you don’t have to use iCloud Photos, but will be a requirement if users do. After all, your device belongs to you, but Apple’s cloud does not.


7 new security features Apple quietly announced at WWDC

Apple went big on privacy during its Worldwide Developers Conference (WWDC) keynote this week, showcasing features from on-device Siri audio processing to a new privacy dashboard for iOS that makes it easier than ever to see which apps are collecting your data and when.

While typically vocal about security during the Memoji-filled, two-hour-long(!) keynote, the company also quietly introduced several new security and privacy-focused features during its WWDC developer sessions. We’ve rounded up some of the most interesting — and important.

Passwordless login with iCloud Keychain

Apple is the latest tech company taking steps to ditch the password. During its “Move beyond passwords” developer session, it previewed Passkeys in iCloud Keychain, a method of passwordless authentication powered by WebAuthn, and Face ID and Touch ID.

The feature, which will ultimately be available in both iOS 15 and macOS Monterey, means you no longer have to set a password when creating an account on a website or app. Instead, you’ll simply pick a username, and then use Face ID or Touch ID to confirm it’s you. The passkey is then stored in your keychain and synced across your Apple devices using iCloud — so you don’t have to remember it, nor do you have to carry around a hardware authenticator key.

“Because it’s just a single tap to sign in, it’s simultaneously easier, faster and more secure than almost all common forms of authentication today,” said Garrett Davidson, an Apple authentication experience engineer. 

While it’s unlikely to be available on your iPhone or Mac any time soon — Apple says the feature is still in its ‘early stages’ and it’s currently disabled by default — the move is another sign of the growing momentum behind eliminating passwords, which are prone to being forgotten, reused across multiple services, and — ultimately — phishing attacks. Microsoft previously announced plans to make Windows 10 password-free, and Google recently confirmed that it’s working towards “creating a future where one day you won’t need a password at all”.
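Under the hood, the idea is standard public-key authentication: the site stores only a public key, the private key never leaves your devices, and sign-in is a signed challenge. Here is a conceptual sketch of that flow, not the actual WebAuthn/FIDO2 protocol or Apple's passkey APIs.

```python
# Conceptual sketch of passwordless, public-key sign-in. This is not the
# WebAuthn/FIDO2 wire protocol or Apple's passkey APIs, just the core idea.
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Registration: the device generates a keypair; the site stores only the public key.
device_private_key = Ed25519PrivateKey.generate()
site_stored_public_key = device_private_key.public_key()

# Sign-in: the site sends a random challenge; after the user confirms with
# Face ID or Touch ID, the device signs it and the site verifies the signature.
challenge = os.urandom(32)
signature = device_private_key.sign(challenge)
site_stored_public_key.verify(signature, challenge)  # raises InvalidSignature if forged
```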

Microphone indicator in macOS

macOS has a new indicator to tell you when the microphone is on. (Image: Apple)

Since the introduction of iOS 14, iPhone users have been able to keep an eye on which apps are accessing their microphone via a green or orange dot in the status bar. Now it’s coming to the desktop too.

In macOS Monterey, users will be able to see which apps are accessing their Mac’s microphone in Control Center, MacRumors reports, which will complement the existing hardware-based green light that appears next to a Mac’s webcam when the camera is in use.

Secure paste

iOS 15, which will include a bunch of privacy-bolstering tools from Mail Privacy Protection to App Privacy Reports, is also getting a feature called Secure Paste that will help to shield your clipboard data from other apps.

This feature will enable users to paste content from one app to another without the second app being able to access the information on the clipboard until the user actually pastes it. This is a significant improvement over iOS 14, which notifies users when an app reads data from the clipboard but does nothing to prevent it from happening.

“With secure paste, developers can let users paste from a different app without having access to what was copied until the user takes action to paste it into their app,” Apple explains. “When developers use secure paste, users will be able to paste without being alerted via the [clipboard] transparency notification, helping give them peace of mind.”

While this feature sounds somewhat insignificant, it’s being introduced following a major privacy issue that came to light last year. In March 2020, security researchers revealed that dozens of popular iOS apps — including TikTok — were “snooping” on users’ clipboard without their consent, potentially accessing highly sensitive data.
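For a flavor of privacy-conscious clipboard handling in code, here is a minimal sketch. It assumes UIPasteboard’s pattern-detection API from iOS 14 (a related tool, not the Secure Paste mechanism itself), which lets an app ask whether the clipboard probably holds a web URL without ever reading its contents, and so without triggering the transparency banner.

import UIKit

// Minimal sketch, assuming UIPasteboard.detectPatterns from iOS 14:
// ask whether the clipboard likely contains a URL without reading it.
func clipboardProbablyHasURL(completion: @escaping (Bool) -> Void) {
    UIPasteboard.general.detectPatterns(for: [.probableWebURL]) { result in
        switch result {
        case .success(let patterns):
            // True if the clipboard's contents look like a web URL.
            completion(patterns.contains(.probableWebURL))
        case .failure:
            completion(false)
        }
    }
}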

Advanced Fraud Protection for Apple Card

Payments fraud is more prevalent than ever as a result of the pandemic, and Apple is looking to do something about it. As first reported by 9to5Mac, the company has previewed Advanced Fraud Protection, a feature that will let Apple Card users generate new card numbers in the Wallet app.

While details remain thin — the feature isn’t live in the first iOS 15 developer beta — Apple’s explanation suggests that Advanced Fraud Protection will make it possible to generate new security codes (the three-digit number you enter at checkout) when making online purchases.

“With Advanced Fraud Protection, Apple Card users can have a security code that changes regularly to make online Card Number transactions even more secure,” the brief explainer reads. We’ve asked Apple for some more information. 

‘Unlock with Apple Watch’ for Siri requests

As a result of the widespread mask-wearing necessitated by the pandemic, Apple introduced an ‘Unlock with Apple Watch’ feature in iOS 14.5 that let users unlock their iPhone and authenticate Apple Pay payments using an Apple Watch instead of Face ID.

The scope of this feature is expanding with iOS 15, as the company has confirmed that users will soon be able to use this alternative authentication method for Siri requests, such as adjusting phone settings or reading messages. Currently, users have to enter a PIN, password or use Face ID to do so.

“Use the secure connection to your Apple Watch for Siri requests or to unlock your iPhone when an obstruction, like a mask, prevents Face ID from recognizing your face,” Apple explains. “Your watch must be passcode protected, unlocked, and on your wrist close by.”

Standalone security patches

To ensure iPhone users who don’t want to upgrade to iOS 15 straight away are up to date with security updates, Apple is going to start decoupling patches from feature updates. When iOS 15 lands later this year, users will be given the option to update to the latest version of iOS or to stick with iOS 14 and simply install the latest security fixes. 

“iOS now offers a choice between two software update versions in the Settings app,” Apple explains (via MacRumors). “You can update to the latest version of iOS 15 as soon as it’s released for the latest features and most complete set of security updates. Or continue on iOS 14 and still get important security updates until you’re ready to upgrade to the next major version.”

This feature sees Apple following in the footsteps of Google, which has long rolled out monthly security patches to Android users.

‘Erase all contents and settings’ for Mac

Wiping a Mac has been a laborious task that has required you to erase your device completely and then reinstall macOS. Thankfully, that’s going to change. Apple is bringing the “erase all contents and settings” option that’s been on iPhones and iPads for years to macOS Monterey.

The option will let you factory reset your MacBook with just a click. “System Preferences now offers an option to erase all user data and user-installed apps from the system, while maintaining the operating system currently installed,” Apple says. “Because storage is always encrypted on Mac systems with Apple Silicon or the T2 chip, the system is instantly and securely ‘erased’ by destroying the encryption keys.”
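That last line is the key idea: when every byte on disk is encrypted, destroying the key is equivalent to wiping the data. The snippet below is a conceptual sketch of that principle using CryptoKit, not Apple’s implementation; once the key is discarded, the ciphertext that would sit on storage is unrecoverable.

import CryptoKit
import Foundation

// Conceptual sketch of a cryptographic erase, assuming CryptoKit's AES-GCM.
// Illustrative only — not how macOS implements "erase all contents and settings".
func cryptoEraseDemo() throws {
    var key: SymmetricKey? = SymmetricKey(size: .bits256)
    let secret = Data("user data".utf8)

    // Encrypt with the key (a stand-in for always-on disk encryption).
    let sealedBox = try AES.GCM.seal(secret, using: key!)
    let onDisk = sealedBox.combined!   // the bytes that would persist on storage

    // "Erase" by destroying the key rather than overwriting the data.
    key = nil

    // With the key gone, the ciphertext is computationally unrecoverable;
    // any attempt to decrypt `onDisk` would need the destroyed key.
    _ = onDisk
}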

#android, #apple, #apple-inc, #clipboard, #computing, #control-center, #encryption, #face-id, #google, #icloud, #ios, #ios-14, #ipads, #iphone, #keychain, #microsoft, #microsoft-windows, #online-purchases, #operating-system, #operating-systems, #privacy, #security, #siri, #software

Apple announces iCloud+ with privacy-focused features

Apple is rolling out some updates to iCloud under the name iCloud+. The company is announcing those features at its developer conference. Existing paid iCloud users are going to get those iCloud+ features for the same monthly subscription price.

In Safari, Apple is going to launch a new privacy feature called Private Relay. It sounds a bit like the DNS technology Apple has been developing with Cloudflare, originally named Oblivious DNS-over-HTTPS. Private Relay could be a better name for something quite simple — a combination of DNS-over-HTTPS with proxy servers.

When Private Relay is turned on, nobody can track your browsing history — not your internet service provider, nor anyone standing in the middle of your request between your device and the server you’re requesting information from. We’ll have to wait a bit to learn more about how it works exactly.
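For the DNS-over-HTTPS half of that description, Apple platforms already expose an encrypted-DNS API in the NetworkExtension framework. The sketch below assumes that API (NEDNSSettingsManager with NEDNSOverHTTPSSettings, which also requires Apple’s DNS-settings entitlement) and uses placeholder resolver addresses; it illustrates system-wide DoH generally, not how Private Relay itself works.

import NetworkExtension

// Minimal sketch of system-wide DNS-over-HTTPS, assuming the
// NetworkExtension encrypted-DNS API. Resolver address and URL are placeholders.
func enableDNSOverHTTPS() {
    let manager = NEDNSSettingsManager.shared()
    manager.loadFromPreferences { loadError in
        guard loadError == nil else { return }

        // Point system DNS at an encrypted (HTTPS) resolver.
        let doh = NEDNSOverHTTPSSettings(servers: ["203.0.113.53"])          // placeholder IP
        doh.serverURL = URL(string: "https://dns.example.com/dns-query")     // placeholder URL
        manager.dnsSettings = doh

        manager.saveToPreferences { saveError in
            if let saveError = saveError {
                print("Failed to save DNS settings: \(saveError)")
            }
        }
    }
}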

The second iCloud+ feature is ‘Hide my email’. It lets you generate random email addresses when you sign up to a newsletter or when you create an account on a website. If you’ve used ‘Sign in with Apple’, you know that Apple offers you the option to use fake iCloud email addresses. This works similarly, but for any app.

Finally, Apple is overhauling HomeKit Secure Video. With the name iCloud+, Apple is separating free iCloud users from paid iCloud users. Basically, you used to pay for more storage. Now, you pay for more storage and more features. Subscriptions start at $0.99 per month for 50GB (and iCloud+ features).

More generally, Apple is adding two much-needed features to iCloud accounts. First, you can add a friend as an account recovery contact. This way, you can request access to your data through your friend if you ever get locked out. But that doesn’t mean that your friend can access your iCloud data — it’s just a way to recover your account.

The second much-needed update is a legacy feature. You’ll soon be able to add one or several legacy contacts so that your data can be passed along when you pass away. This matters because many photo libraries become inaccessible when someone close to you dies.


#apple, #apps, #gadgets, #icloud, #mobile, #wwdc, #wwdc-2021

Apple’s Compromises in China: 5 Takeaways

To stay on the good side of the Chinese authorities, the company has made decisions that contradict its carefully curated image.

#apple-inc, #censorship, #cloud-computing, #communist-party-of-china, #computer-security, #computers-and-the-internet, #data-centers, #guizhou-china, #guizhou-cloud-big-data-industry-co-ltd, #guo-wengui, #icloud

Censorship, Surveillance and Profits: A Hard Bargain for Apple in China

Apple built the world’s most valuable business on top of China. Now it has to answer to the Chinese government.

#apple-inc, #censorship, #cloud-computing, #computer-security, #computers-and-the-internet, #cook-timothy-d, #cue-eddy, #data-storage, #guizhou-cloud-big-data-industry-co-ltd, #guo-wengui, #icloud, #inner-mongolia, #iphone, #mobile-applications, #privacy, #software, #suits-and-litigation-civil, #surveillance-of-citizens-by-government

First findings with Apple’s new AirTag location devices

I’ve been playing around with Apple’s new AirTag location devices for a few hours now and they seem to work pretty much as advertised. The setup flow is simple and clean, taking clear inspiration from the one Apple developed for AirPods. The precision finding feature enabled by the U1 chip works as a solid example of utility-driven augmented reality, popping up a virtual arrow and other visual identifiers on the screen to make finding a tag quicker.

The basic way that AirTags work, if you’re not familiar, is that they use Bluetooth beaconing technology to announce their presence to any nearby devices running iOS 14.5 and above. These quiet pings are encrypted and (usually) invisible to any passerby, especially if the tags are with their owners. This means that no one ever knows which device actually ‘located’ your AirTag, not even Apple.

“With you,” by the way, means in relative proximity to a device signed in to the iCloud account that the AirTags are registered to. Bluetooth range is typically around 40 feet, depending on local conditions and signal bounce.

In my very limited testing so far, AirTag location range fits in with that basic Bluetooth expectation, which means it can be foiled by a lot of obstructions, walls or an unflattering signal bounce. It often took 30 seconds or more to get an initial location from an AirTag in another room, for instance. Once the location was received, however, the instructions to locate the device seemed to update quickly and were extremely accurate down to a few inches.

The AirTags run for a year on a standard, user-replaceable CR2032 battery. They offer some water resistance, including submersion for a limited time. There is a host of accessories that seem nicely designed, like leather straps for bags, luggage tags and key rings.

So far so good. More testing to come. 

Some protections

As with anything to do with location, security and privacy are top of mind for AirTags, and Apple has some protections in place.

You cannot share AirTags — they are meant to be owned by one person. The only special privilege offered to people in your iCloud Family Sharing group is that they can silence the ‘unknown AirTag nearby’ alerts indefinitely, which makes AirTags useful for things like shared sets of keys or maybe even a family pet. AirTags will not show up in your family’s Find My section the way other iOS devices might; there is now a separate section within the app just for ‘Items’, including those with Find My functionality built in.

The other privacy features include a ‘warning’ that will trigger after some period during which a tag is in your proximity and NOT in the proximity of its owner (i.e., traveling with you, perhaps in a bag or car). Your choices are then to make the tag play a sound to locate it, look at its information (including serial number), or disable it by removing its battery.

Any AirTag that has been away from its owner for a while — this time is variable and Apple will tweak it over time as it observes how AirTags work — will start playing a sound whenever it is moved. This will alert people to its presence. 

You can, of course, also place an AirTag into Lost Mode, offering a choice to share personal information with anyone who locates it as it plays an alert sound. Anyone with any smart device with NFC, Android included, can tap the device to see a webpage with information that you choose to share. Or just a serial number if you do not choose to do so. 

This scenario addresses what happens if you don’t have an iOS device to alert you to a foreign AirTag in your presence, as it will eventually play a sound even if it is not in lost mode and the owner has no control over that.

It’s clear that Apple has thought through many of the edge cases, but some could still crop up as the product rolls out; we’ll have to see.

Apple has some distinct market advantages here:

  • Nearly a billion devices out in the world that can help to locate an AirTag.
  • A built-in U1 wideband chip that communicates with a similar U1 chip in iPhones to enable super precise (down to inches) location.
  • A bunch of privacy features that don’t appear on competing tags.

It’s important to note that Apple has announced the development of a specification for chipset makers that lets third-party devices with Ultra Wideband radios access the U1 chip onboard iPhones ‘later this Spring’. This should approximate the Precision Finding feature’s utility in accessories that don’t have the advantage of a built-in U1 like the AirTags do. And, of course, Apple has opened up the entire Find My mesh network to third-party devices from Belkin, Chipolo and VanMoof that want to offer a similar basic finding function to AirTags. Tile has announced plans to offer a UWB version of its tracker as well, even as it testified in Congress yesterday that Apple’s advantages made its entry into this market unfair.

It will be interesting to see how these play out once AirTags are out getting lost in the wild. I have had them for under 12 hours, so I’ve not been able to test edge cases, general utility in public spaces or anything like that.

The devices go on sale on April 23rd.

#airpods, #airtag, #airtags, #android, #apple, #apple-inc, #belkin, #bluetooth, #congress, #find-my, #icloud, #ios, #ios-14, #iphone, #mesh-network, #smart-device, #tc, #technology, #telecommunications, #u1, #u1-chip, #ultra-wideband

Apple launches an iCloud Passwords extension for Chrome users on Windows

Apple has introduced an iCloud Passwords Chrome extension that will make life easier for those who use both Windows computers and other Apple devices, like a MacBook or an iPhone. The new browser extension lets you access the passwords you saved in Safari on your other Apple devices, then use them within Chrome when you’re on a Windows PC.

You can also save any new passwords you create in Chrome to your iCloud keychain, so it’s synced across your Apple devices.

Image Credits: Apple

Apple didn’t formally announce the new feature, but an iCloud Passwords extension had already been referenced in the release notes of the new iCloud for Windows 10 (version 12), which arrived at the end of January. After the update, a “Passwords” section designated by the iCloud Keychain logo appeared in the app. This directed users to download the new extension, but the link was broken, as the extension was not yet live.

That changed on Sunday, according to a report from 9to5Google, which found the new Chrome add-on had been published to the Chrome Web Store late on Sunday evening. Now, when Windows users access the new Passwords section, the dialog box that prompts the download will function properly.

Once installed, Chrome users on Windows will be able to access any passwords they saved or allowed iCloud Keychain to securely generate for them within Safari for macOS or iOS. Meanwhile, as Windows users create new credentials, these, too, will be synced to their iCloud Keychain so they can later be pulled up on Mac, iPhone, and iPad devices, when needed.

This is the first Chrome extension to support iCloud Keychain on Windows; before now, Apple had only offered an iCloud Bookmarks tool for older Windows 7 and 8 PCs, which reached over 7 million users.

Image Credits: Apple

Some users who have tried the extension are reporting problems, but it seems that’s related to their PCs not having been first updated to iCloud for Windows 12.0, which is a prerequisite for the new extension to work.

Though Apple typically locks users into its own platforms, it has slowly expanded some of its services to Windows and even Android, where it makes sense. Today, Apple offers its entertainment apps like Apple Music and Apple TV on other platforms, including Android, and has launched Apple TV on its media player rival, Amazon Fire TV, among others. And 9to5Mac notes that Apple appears to be working to bring Music and Podcasts to the Microsoft Store in the future, as well.

#apple, #browser, #chrome, #icloud, #icloud-keychain, #microsoft, #passwords, #safari, #security, #windows

Apple One services subscription bundles start launching tomorrow

Apple is launching its Apple One services bundle tomorrow, though the company’s workout service Fitness+ isn’t quite ready yet.

On an earnings call today, CEO Tim Cook revealed tomorrow’s rollout and called the service the “easiest way for users to enjoy Apple services.” In a conversation with Bloomberg, Apple CFO Luca Maestri revealed the launch timing for Fitness+ as well. The company also detailed that it has 585 million total paid services subscriptions and expects to reach 600 million before the end of the 2020 calendar year.

The subscription bundle is designed around bringing more users into more Apple Services. It’s a big play to get subscribers to switch from Spotify to Apple Music as that is likely the crown jewel of the offering.

The company’s $14.99 per month individual plan includes Apple Music, Apple TV+, Apple Arcade and 50GB of iCloud storage. Apple also sells $19.99 family plans that bump up the storage to 200GB and is planning to debut a “premiere” plan for $29.99 that includes Fitness+ and Apple News+.

Apple’s Services division is growing in importance to the company’s bottom line, with the group reaching an all-time high in revenue and surpassing half of the quarter’s iPhone revenue. You can read more on the company’s earnings release below.

#apple, #apple-arcade, #apple-inc, #apple-music, #apple-news, #apple-one, #apple-services, #apple-tv, #ceo, #cfo, #computing, #e-commerce, #icloud, #iphone, #luca-maestri, #spotify, #tc, #tim-cook

Apple pays $288,000 to white-hat hackers who had run of company’s network

(Image credit: Nick Wright. Used by permission.)

For months, Apple’s corporate network was at risk of hacks that could have stolen sensitive data from potentially millions of its customers and executed malicious code on their phones and computers, a security researcher said on Thursday.

Sam Curry, a 20-year-old researcher who specializes in website security, said that, in total, he and his team found 55 vulnerabilities. He rated 11 of them critical because they allowed him to take control of core Apple infrastructure and from there steal private emails, iCloud data, and other private information.


#apple, #biz-it, #bug-bounties, #hacking, #icloud, #tech

Apple opens up — slightly — on Hong Kong’s national security law

After Beijing unilaterally imposed a new national security law on Hong Kong on July 1, many saw the move as an effort by Beijing to crack down on dissent and protests in the semi-autonomous region.

Soon after, a number of tech giants — including Microsoft, Twitter and Google — said they would stop processing requests for user data from Hong Kong authorities, fearing that the requested data could end up in the hands of Beijing.

But Apple was noticeably absent from the list. Instead, Apple said it was “assessing” the new law.

When reached by TechCrunch, Apple did not say how many requests for user data it had received from Hong Kong authorities since the new national security law went into effect. But the company reiterated that it doesn’t receive requests for user content directly from Hong Kong. Instead, it relies on a long-established so-called mutual legal assistance treaty, allowing U.S. authorities to first review requests from foreign governments.

Apple said it stores iCloud data for Hong Kong users in the United States, so any request by Hong Kong authorities for user content has to first be approved by the Justice Department, and a warrant has to be issued by a U.S. federal judge before the data can be handed over to Hong Kong.

The company said that it received a limited number of non-content requests from Hong Kong related to fraud or stolen devices, and that the number of requests it received from Hong Kong authorities since the introduction of the national security law will be included in an upcoming transparency report.

According to Apple’s most recent transparency report, Hong Kong authorities made 604 requests for device information, 310 requests for financial data, and 10 requests for user account data during 2019.

The report also said that Apple received 5,295 requests from U.S. authorities during the second half of last year for data related to 80,235 devices, a seven-fold increase from the previous six months.

Apple also received 4,095 requests from U.S. authorities for user data stored in iCloud on 31,780 accounts, twice the number of accounts affected during the previous six months.

Most of the requests related to ongoing return and repair fraud investigations, Apple said.

The report said Apple received 2,522 requests from U.S. authorities to preserve data on 6,741 user accounts, allowing law enforcement to obtain the right legal process to access the data.

Apple also said it received between 0 and 499 national security requests for non-content data, affecting between 15,500 and 15,999 users or accounts, an increase of 40% over the previous report.

Tech companies are only allowed to report the number of national security requests in ranges, per rules set out by the Justice Department.

The company also published two FBI national security letters, or NSLs, from 2019, which the company petitioned to make public. These letters are subpoenas issued by the FBI with no judicial oversight and often with a gag order preventing the company from disclosing their existence. Since the introduction of the USA Freedom Act in 2015, the FBI has been required to periodically review the gag orders and lift them when they are no longer deemed necessary.

Apple also said it received 54 requests from governments to remove 258 apps from its app store. China filed the vast majority of requests.

#apple, #department-of-justice, #government, #icloud, #law-enforcement, #operating-systems, #security, #transparency-report

Apple said to soon offer subscription bundles combining multiple of its services

Apple is reportedly getting ready to launch new bundles of its various subscription services, according to Bloomberg. The bundled packages, said to be potentially called ‘Apple One,’ will combine Apple services including Apple Music, Apple Arcade, Apple TV+, Apple News+ and iCloud in a number of different tiered offerings, all for one fee that would be lower than subscribing to each individually.

Bloomberg says that these could launch as early as October, which is when the new iPhone is said to be coming to market. Different package options will include one entry-level offering with Apple Music and Apple TV+, alongside an upgrade option that adds Apple Arcade, and another that also includes Apple News+. A higher-priced option will also bundle in extra iCloud storage, according to the report, though Bloomberg also claims that these arrangements and plans could still change prior to launch.

While the final pricing isn’t included in the report, it does say that the aim is to save subscribers between $2 and $5 per month depending on the tier, vs. the standard cost of subscribing to those services currently. All subscriptions would also work with Apple’s existing Family Sharing system, meaning up to six members of a single household can have access through Apple’s existing shared family digital goods infrastructure.

Apple is also said to be planning to continue its strategy of bundling free subscriptions to its services with new hardware purchases – a tactic it used last year with the introduction of Apple TV+, which it offered free for a year to customers who bought recently-released Apple hardware.

Service subscription bundling is a move that a lot of Apple observers have been calling for basically ever since Apple started investing more seriously in its service options. The strategy makes a lot of sense, especially in terms of helping Apple boost adoption of the services that aren’t necessarily as popular as some of the others. It also provides a way for the company to begin to build out a more comprehensive and potentially stable recurring revenue business similar to something like Amazon Prime, which is a regular standout success story for Amazon in terms of its fiscal performance.

#amazon, #apple, #apple-arcade, #apple-inc, #apple-music, #apple-news, #apple-tv, #apps, #cloud-applications, #computing, #icloud, #ios, #iphone, #itunes, #subscription-services, #tc, #webmail

Google One now offers free phone backups up to 15GB on Android and iOS

Google One, Google’s subscription program for buying additional storage and live support, is getting an update today that will bring free phone backups for Android and iOS devices to anybody who installs the app — even if they don’t have a paid membership. The catch: while the feature is free, the backups count against your free Google storage allowance of 15GB. If you need more, you need — you guessed it — a Google One membership to buy more storage, or you’ll have to delete data you no longer need. Paid memberships start at $1.99/month for 100GB.

Image Credits: Google

Paid members on Android already got access to this feature last year; it stores your texts, contacts, apps, photos and videos in Google’s cloud. The “free” backups are now available to all Android users, and iOS users will get access once the Google One app rolls out on iOS in the near future.

Image Credits: Google

With this update, Google is also introducing a new storage manager tool in Google One, which is available in the app and on the web, and which allows you to delete files and backups as needed. The tool works across Google properties and lets you find emails with very large attachments or large files in your Google Drive storage, for example.

With this free backup feature, Google is clearly trying to get more people onto Google One. The free 15GB storage limit is pretty easy to hit, after all (and that’s for your overall storage on Google, including Gmail and other services) and paying $1.99 for 100GB isn’t exactly a major expense, especially if you are already part of the Google ecosystem and use apps like Google Photos already.

#android, #cloud-storage, #computing, #google, #google-drive, #icloud, #ios, #ios-devices, #mobile-app, #operating-systems, #smartphones, #tc

The FBI is mad because it keeps getting into locked iPhones without Apple’s help

The debate over encryption continues to drag on without end.

In recent months, the discourse has largely swung away from encrypted smartphones to focus instead on end-to-end encrypted messaging. But a recent press conference by the heads of the Department of Justice (DOJ) and the Federal Bureau of Investigation (FBI) showed that the debate over device encryption isn’t dead, it was merely resting. And it just won’t go away.

At the presser, Attorney General William Barr and FBI Director Chris Wray announced that after months of work, FBI technicians had succeeded in unlocking the two iPhones used by the Saudi military officer who carried out a terrorist shooting at the Pensacola Naval Air Station in Florida in December 2019. The shooter died in the attack, which was quickly claimed by Al Qaeda in the Arabian Peninsula.

Early this year — a solid month after the shooting — Barr had asked Apple to help unlock the phones (one of which was damaged by a bullet), which were older iPhone 5 and 7 models. Apple provided “gigabytes of information” to investigators, including “iCloud backups, account information and transactional data for multiple accounts,” but drew the line at assisting with the devices. The situation threatened to revive the 2016 “Apple versus FBI” showdown over another locked iPhone following the San Bernardino terror attack.

After the government went to federal court to try to dragoon Apple into doing investigators’ job for them, the dispute ended anticlimactically when the government got into the phone itself after purchasing an exploit from an outside vendor the government refused to identify. The Pensacola case culminated much the same way, except that the FBI apparently used an in-house solution instead of a third party’s exploit.

You’d think the FBI’s success at a tricky task (remember, one of the phones had been shot) would be good news for the Bureau. Yet an unmistakable note of bitterness tinged the laudatory remarks at the press conference for the technicians who made it happen. Despite the Bureau’s impressive achievement, and despite the gobs of data Apple had provided, Barr and Wray devoted much of their remarks to maligning Apple, with Wray going so far as to say the government “received effectively no help” from the company.

This diversion tactic worked: in news stories covering the press conference, headline after headline after headline highlighted the FBI’s slam against Apple instead of focusing on what the press conference was nominally about: the fact that federal law enforcement agencies can get into locked iPhones without Apple’s assistance.

That should be the headline news, because it’s important. That inconvenient truth undercuts the agencies’ longstanding claim that they’re helpless in the face of Apple’s encryption and thus the company should be legally forced to weaken its device encryption for law enforcement access. No wonder Wray and Barr are so mad that their employees keep being good at their jobs.

By reviving the old blame-Apple routine, the two officials managed to evade a number of questions that their press conference left unanswered. What exactly are the FBI’s capabilities when it comes to accessing locked, encrypted smartphones? Wray claimed the technique developed by FBI technicians is “of pretty limited application” beyond the Pensacola iPhones. How limited? What other phone-cracking techniques does the FBI have, and which handset models and which mobile OS versions do those techniques reliably work on? In what kinds of cases, for what kinds of crimes, are these tools being used?

We also don’t know what’s changed internally at the Bureau since that damning 2018 Inspector General postmortem on the San Bernardino affair. Whatever happened with the FBI’s plans, announced in the IG report, to lower the barrier within the agency to using national security tools and techniques in criminal cases? Did that change come to pass, and did it play a role in the Pensacola success? Is the FBI cracking into criminal suspects’ phones using classified techniques from the national security context that might not pass muster in a court proceeding (were their use to be acknowledged at all)?

Further, how do the FBI’s in-house capabilities complement the larger ecosystem of tools and techniques for law enforcement to access locked phones? Those include third-party vendors GrayShift and Cellebrite’s devices, which, in addition to the FBI, count numerous U.S. state and local police departments and federal immigration authorities among their clients. When plugged into a locked phone, these devices can bypass the phone’s encryption to yield up its contents, and (in the case of GrayShift) can plant spyware on an iPhone to log its passcode when police trick a phone’s owner into entering it. These devices work on very recent iPhone models: Cellebrite claims it can unlock any iPhone for law enforcement, and the FBI has unlocked an iPhone 11 Pro Max using GrayShift’s GrayKey device.

In addition to Cellebrite and GrayShift, which have a well-established U.S. customer base, the ecosystem of third-party phone-hacking companies includes entities that market remote-access phone-hacking software to governments around the world. Perhaps the most notorious example is the Israel-based NSO Group, whose Pegasus software has been used by foreign governments against dissidents, journalists, lawyers and human rights activists. The company’s U.S. arm has attempted to market Pegasus domestically to American police departments under another name. Which third-party vendors are supplying phone-hacking solutions to the FBI, and at what price?

Finally, who else besides the FBI will be the beneficiary of the technique that worked on the Pensacola phones? Does the FBI share the vendor tools it purchases, or its own home-rolled ones, with other agencies (federal, state, tribal or local)? Which tools, which agencies and for what kinds of cases? Even if it doesn’t share the techniques directly, will it use them to unlock phones for other agencies, as it did for a state prosecutor soon after purchasing the exploit for the San Bernardino iPhone?

We have little idea of the answers to any of these questions, because the FBI’s capabilities are a closely held secret. What advances and breakthroughs it has achieved, and which vendors it has paid, we (who provide the taxpayer dollars to fund this work) aren’t allowed to know. And the agency refuses to answer questions about encryption’s impact on its investigations even from members of Congress, who can be privy to confidential information denied to the general public.

The only public information coming out of the FBI’s phone-hacking black box is nothingburgers like the recent press conference. At an event all about the FBI’s phone-hacking capabilities, Director Wray and AG Barr cunningly managed to deflect the press’s attention onto Apple, dodging any difficult questions, such as what the FBI’s abilities mean for Americans’ privacy, civil liberties and data security, or even basic questions like how much the Pensacola phone-cracking operation cost.

As the recent PR spectacle demonstrated, a press conference isn’t oversight. And instead of exerting its oversight power, mandating more transparency, or requiring an accounting and cost/benefit analysis of the FBI’s phone-hacking expenditures — instead of demanding a straight and conclusive answer to the eternal question of whether, in light of the agency’s continually-evolving capabilities, there’s really any need to force smartphone makers to weaken their device encryption — Congress is instead coming up with dangerous legislation such as the EARN IT Act, which risks undermining encryption right when a population forced by COVID-19 to do everything online from home can least afford it.

The best-case scenario now is that the federal agency that proved its untrustworthiness by lying to the Foreign Intelligence Surveillance Court can crack into our smartphones, but maybe not all of them; that maybe it isn’t sharing its toys with state and local police departments (which are rife with domestic abusers who’d love to get access to their victims’ phones); that unlike third-party vendor devices, maybe the FBI’s tools won’t end up on eBay where criminals can buy them; and that hopefully it hasn’t paid taxpayer money to the spyware company whose best-known government customer murdered and dismembered a journalist.

The worst-case scenario would be that, between in-house and third-party tools, pretty much any law enforcement agency can now reliably crack into everybody’s phones, and yet nevertheless this turns out to be the year they finally get their legislative victory over encryption anyway. I can’t wait to see what else 2020 has in store.

#apple, #cellebrite, #column, #encryption, #federal-bureau-of-investigation, #gadgets, #icloud, #ios, #iphone, #nso-group, #opinion, #policy, #privacy, #security, #smartphone, #surveillance, #william-barr

Apple just announced one of its biggest regional expansions for the App Store ever

The iPhone 11. (Image credit: Samuel Axon)

Apple says that Apple Music will now be available in 52 new countries, and other services including App Store, Arcade, Podcasts, and iCloud will hit 20 more countries.

These are the countries and regions that are getting access to services for the first time, according to Apple:
The App Store, Apple Arcade, Apple Music, Apple Podcasts, and iCloud are now available in the following countries and regions:
  • Africa: Cameroon, Côte d’Ivoire, Democratic Republic of the Congo, Gabon, Libya, Morocco, Rwanda, and Zambia
  • Asia-Pacific: Maldives and Myanmar
  • Europe: Bosnia and Herzegovina, Georgia, Kosovo, Montenegro, and Serbia
  • Middle East: Afghanistan (excluding Apple Music) and Iraq
  • Oceania: Nauru (excluding Apple Music), Tonga, and Vanuatu
Apple Music is also expanding to the following countries and regions:
  • Africa: Algeria, Angola, Benin, Chad, Liberia, Madagascar, Malawi, Mali, Mauritania, Mozambique, Namibia, Republic of the Congo, Senegal, Seychelles, Sierra Leone, Tanzania, and Tunisia
  • Asia-Pacific: Bhutan
  • Europe: Croatia, Iceland, and North Macedonia
  • Latin America and the Caribbean: the Bahamas, Guyana, Jamaica, Montserrat, St. Lucia, St. Vincent and the Grenadines, Suriname, Turks and Caicos, and Uruguay
  • Middle East: Kuwait, Qatar, and Yemen
  • Oceania: Solomon Islands

Users in the countries that are getting Apple Music for the first time will be offered a six-month free trial option. Also, Apple will extend its efforts to curate locally relevant playlists to these countries with titles like Africa Now, Afrobeats Hits, and Ghana Bounce.

That brings the total count for the App Store to 175 and Music to 167 out of the 193 United Nations-recognized countries in the world. If you want to see the full count, Apple has a support page that lists which “Apple Media Services” are available in which countries.

Apple made the announcement to the press via its newsroom website and to developers via its developer support portal. Apple maintains a page of resources for developers dedicated to localization efforts, and from what we’ve heard from developers, the company often makes an effort to prioritize promoting apps that are widely localized because of the global reach of Apple’s platforms and services.

And since we’re on the topic of developers, here’s a side note that also just happened: Apple yesterday sent members of its developer community invitations to an online session dedicated to implementing accessibility features in apps. The invite reads:

At Apple, we believe that technology is most powerful when it empowers everyone. Join us for an online event to learn how you can take advantage of the award-winning accessibility features that come standard on Apple devices. You’ll be able to ask questions during and after the sessions, and sign up for individual consultations.

The session will take place on Thursday and precedes the company’s developer conference—at which it normally offers many sessions like this—in June. The developer conference will be online-only due to the COVID-19 pandemic.


#app-store, #apple, #apple-app-store, #apple-music, #icloud, #tech

Apple expands App Store, Music, iCloud and other services to dozens of additional markets

Apple said today it is bringing its services, including the App Store, Apple Podcasts, iCloud and Apple Music, to dozens of additional markets in Africa, Europe, Asia-Pacific and the Middle East, in what is one of the biggest geographical expansions for one of the world’s biggest firms.

The App Store, Apple Arcade, Apple Podcasts, and iCloud are now available in 20 additional nations, whereas the iPhone-maker’s music streaming service, Apple Music, has launched in an additional 52 countries.

Apple said the music streaming service includes locally curated playlists such as Africa Now, Afrobeats Hits and Ghana Bounce in the new markets, and, as an introductory offer, it is offering a six-month free trial of Apple Music in the newly launched markets.

The App Store, Apple Arcade, Apple Music, Apple Podcasts and iCloud are now available in the following countries and regions:

  • Africa: Cameroon, Côte d’Ivoire, Democratic Republic of the Congo, Gabon, Libya, Morocco, Rwanda, and Zambia.
  • Asia-Pacific: Maldives and Myanmar.
  • Europe: Bosnia and Herzegovina, Georgia, Kosovo, Montenegro, and Serbia.
  • Middle East: Afghanistan (excluding Apple Music) and Iraq.
  • Oceania: Nauru (excluding Apple Music), Tonga, and Vanuatu.

Apple Music is expanding to the following countries and regions:

  • Africa: Algeria, Angola, Benin, Chad, Liberia, Madagascar, Malawi, Mali, Mauritania, Mozambique, Namibia, Republic of the Congo, Senegal, Seychelles, Sierra Leone, Tanzania, and Tunisia.
  • Asia-Pacific: Bhutan.
  • Europe: Croatia, Iceland, and North Macedonia.
  • Latin America and the Caribbean: the Bahamas, Guyana, Jamaica, Montserrat, St. Lucia, St. Vincent and the Grenadines, Suriname, Turks and Caicos, and Uruguay.
  • Middle East: Kuwait, Qatar, and Yemen.
  • Oceania: Solomon Islands.

“We’re delighted to bring many of Apple’s most beloved Services to users in more countries than ever before,” said Oliver Schusser, Apple’s vice president of Apple Music and International Content, in a statement.

“We hope our customers can discover their new favorite apps, games, music, and podcasts as we continue to celebrate the world’s best creators, artists, and developers,” he added.

App Store is now available in 175 countries and regions, whereas Apple Music has reached 167 markets. In comparison, music streaming service giant Spotify is available in fewer than 100 nations.

The availability of the aforementioned services in dozens of new markets should help Apple further grow sales in its services segment, which already clocks more revenue than the Mac, iPad, and wearables and accessories.

Their availability should also persuade more users to explore Apple’s products. iPhone users in the past have expressed their disappointment when they don’t have access to the wider services ecosystem.

#app-store, #apple, #apple-arcade, #apple-music, #apple-podcasts, #apps, #asia, #icloud, #ios