Apple’s dangerous path

Hello friends, and welcome back to Week in Review.

Last week, we dove into the truly bizarre machinations of the NFT market. This week, we’re talking about something that’s a little bit more impactful on the current state of the web — Apple’s NeuralHash kerfuffle.

If you’re reading this on the TechCrunch site, you can get this in your inbox from the newsletter page, and follow my tweets @lucasmtny


the big thing

In the past month, Apple did something it generally has done an exceptional job avoiding — the company made what seemed to be an entirely unforced error.

In early August — seemingly out of nowhere** — the company announced that by the end of the year it would be rolling out a technology called NeuralHash that actively scanned the libraries of all iCloud Photos users, seeking out image hashes that matched known images of child sexual abuse material (CSAM). For obvious reasons, the on-device scanning could not be opted out of, short of disabling iCloud Photos entirely.

This announcement was not coordinated with other major consumer tech giants; Apple pushed forward on it alone.

Researchers and advocacy groups had almost universally negative feedback for the effort, raising concerns that it could create new abuse channels for actors like governments to detect on-device information that they regarded as objectionable. As my colleague Zach noted in a recent story, “The Electronic Frontier Foundation said this week it had amassed more than 25,000 signatures from consumers. On top of that, close to 100 policy and rights groups, including the American Civil Liberties Union, also called on Apple to abandon plans to roll out the technology.”

(The announcement also reportedly generated some controversy inside of Apple.)

The issue — of course — wasn’t that Apple was looking to find ways to prevent the proliferation of CSAM while making as few device security concessions as possible. The issue was that Apple was unilaterally making a massive choice that would affect billions of customers (while likely pushing competitors towards similar solutions), and was doing so without external public input about possible ramifications or necessary safeguards.

Long story short, over the past month researchers discovered that Apple’s NeuralHash wasn’t as airtight as hoped, and the company announced Friday that it was delaying the rollout “to take additional time over the coming months to collect input and make improvements before releasing these critically important child safety features.”

Having spent several years in the tech media, I will say that the only reason to release news on a Friday morning ahead of a long weekend is to ensure that the announcement is read and seen by as few people as possible, and it’s clear why they’d want that. It’s a major embarrassment for Apple, and as with any delayed rollout like this, it’s a sign that their internal teams weren’t adequately prepared and lacked the ideological diversity to gauge the scope of the issue that they were tackling. This isn’t really a dig at Apple’s team building this so much as it’s a dig at Apple trying to solve a problem like this inside the Apple Park vacuum while adhering to its annual iOS release schedule.

Image Credits: Bryce Durbin / TechCrunch

Apple is increasingly looking to make privacy a key selling point of the iOS ecosystem and, as a result of this productization, has pushed development of privacy-centric features towards the same secrecy its surface-level design changes command. In June, Apple announced iCloud+ and raised some eyebrows when it shared that certain new privacy-centric features would only be available to iPhone users who paid for additional subscription services.

You obviously can’t tap public opinion for every product update, but perhaps wide-ranging and trail-blazing security and privacy features should be treated a bit differently than the average product update. Apple’s lack of engagement with research and advocacy groups on NeuralHash was pretty egregious and certainly raises some questions about whether the company fully respects how the choices it makes for iOS affect the broader internet.

Delaying the feature’s rollout is a good thing, but let’s all hope they take that time to reflect more broadly as well.

** Though the announcement was a surprise to many, Apple’s development of this feature wasn’t coming completely out of nowhere. Those at the top of Apple likely felt that the winds of global tech regulation might be shifting towards outright bans of some methods of encryption in some of its biggest markets.

Back in October of 2020, then United States AG Bill Barr joined representatives from the UK, New Zealand, Australia, Canada, India and Japan in signing a letter raising major concerns about how implementations of encryption tech posed “significant challenges to public safety, including to highly vulnerable members of our societies like sexually exploited children.” The letter effectively called on tech industry companies to get creative in how they tackled this problem.


other things

Here are the TechCrunch news stories that especially caught my eye this week:

LinkedIn kills Stories
You may be shocked to hear that LinkedIn even had a Stories-like product on their platform, but if you did already know that they were testing Stories, you likely won’t be so surprised to hear that the test didn’t pan out too well. The company announced this week that they’ll be suspending the feature at the end of the month. RIP.

FAA grounds Virgin Galactic over questions about Branson flight
While all appeared to go swimmingly for Richard Branson’s trip to space last month, the FAA has some questions regarding why the flight seemed to unexpectedly veer so far off the cleared route. The FAA is preventing the company from further launches until they find out what the deal is.

Apple buys a classical music streaming service
While Spotify makes news every month or two for spending a massive amount acquiring a popular podcast, Apple seems to have eyes on a different market for Apple Music, announcing this week that they’re bringing the classical music streaming service Primephonic onto the Apple Music team.

TikTok parent company buys a VR startup
It isn’t a huge secret that ByteDance and Facebook have been trying to copy each other’s success at times, but many probably weren’t expecting TikTok’s parent company to wander into the virtual reality game. The Chinese company bought the startup Pico, which makes consumer VR headsets for China and enterprise VR products for North American customers.

Twitter tests an anti-abuse ‘Safety Mode’
The same features that make Twitter an incredibly cool product for some users can also make the experience awful for others, a realization that Twitter has seemingly been very slow to make. Its latest solution is more individual user controls, which Twitter is testing out with a new “Safety Mode” that pairs algorithmic intelligence with new user inputs.


extra things

Some of my favorite reads from our Extra Crunch subscription service this week:

Our favorite startups from YC’s Demo Day, Part 1 
“Y Combinator kicked off its fourth-ever virtual Demo Day today, revealing the first half of its nearly 400-company batch. The presentation, YC’s biggest yet, offers a snapshot into where innovation is heading, from not-so-simple seaweed to a Clearco for creators….”

…Part 2
“…Yesterday, the TechCrunch team covered the first half of this batch, as well as the startups with one-minute pitches that stood out to us. We even podcasted about it! Today, we’re doing it all over again. Here’s our full list of all startups that presented on the record today, and below, you’ll find our votes for the best Y Combinator pitches of Day Two. The ones that, as people who sift through a few hundred pitches a day, made us go ‘oh wait, what’s this?’”

All the reasons why you should launch a credit card
“… if your company somehow hasn’t yet found its way to launch a debit or credit card, we have good news: It’s easier than ever to do so and there’s actual money to be made. Just know that if you do, you’ve got plenty of competition and that actual customer usage will probably depend on how sticky your service is and how valuable the rewards are that you offer to your most active users….”


Thanks for reading, and again, if you’re reading this on the TechCrunch site, you can get this in your inbox from the newsletter page, and follow my tweets @lucasmtny

Lucas Matney


Apple launches a new iOS app, ‘Siri Speech Study,’ to gather feedback for Siri improvements

Apple recently began a research study designed to collect speech data from study participants. Earlier this month, the company launched a new iOS app called “Siri Speech Study” on the App Store, which allows participants who have opted in to share their voice requests and other feedback with Apple. The app is available in a number of worldwide markets but does not register on the App Store’s charts, including under the “Utilities” category where it’s published.

According to data from Sensor Tower, the iOS app first launched on August 9 and was updated to a new version on August 18. It’s currently available in the U.S., Canada, Germany, France, Hong Kong, India, Ireland, Italy, Japan, Mexico, New Zealand, and Taiwan — an indication of the study’s global reach. However, the app will not appear when searching the App Store by keyword or when browsing through the list of Apple’s published apps.

The Siri Speech Study app itself offers little information about the study’s specific goals, nor does it explain how someone could become a participant. Instead, it only provides a link to a fairly standard license agreement and a screen where a participant would enter their ID number to get started.

Reached for comment, Apple told TechCrunch the app is only being used for Siri product improvements, by offering a way for participants to share feedback directly with Apple. The company also explained people have to be invited to the study — there’s not a way for consumers to sign up to join.

Image Credits: App Store screenshot

The app is only one of many ways Apple is working to improve Siri.

In the past, Apple had tried to learn more about Siri’s mistakes by sending some small portion of consumers’ voice recordings to contractors for manual grading and review. But a whistleblower alerted media outlet The Guardian that the process had, at times, allowed those contractors to listen in on confidential details. Apple shortly thereafter made manual review an opt-in process and brought audio grading in-house. This type of consumer data collection continues, but it has a different aim than what a research study would involve.

Unlike this broader, more generalized data collection, a focus group-like study allows Apple to better understand Siri’s mistakes because it combines the collected data with human feedback. With the Siri Speech Study app, participants provide explicit feedback on a per-request basis, Apple said. For instance, if Siri misheard a question, users could explain what they were trying to ask. If Siri was triggered when the user hadn’t said “Hey Siri,” that could be noted. Or if Siri on HomePod misidentified the speaker in a multi-person household, the participant could note that, too.

Another differentiator is that none of the participants’ data is being automatically shared with Apple. Rather, users can see a list of the Siri requests they’ve made and then select which to send to Apple with their feedback. Apple also noted no user information is collected or used in the app, except the data directly provided by participants.

Image Credits: Apple WWDC 2021

Apple understands that an intelligent virtual assistant that understands you is a competitive advantage.

This year, the company scooped up ex-Google A.I. scientist Samy Bengio to help make Siri a stronger rival to Google Assistant, whose advanced capabilities are often a key selling point for Android devices. In the home, meanwhile, Alexa-powered smart speakers are dominating the U.S. market and compete with Google in the global landscape, outside China. Apple’s HomePod has a long way to go to catch up.

But despite the rapid progress in voice-based computing in recent years, virtual assistants can still have a hard time understanding certain types of speech. Earlier this year, for example, Apple said it would use a bank of audio clips from podcasts where users had stuttered to help it improve its understanding of this kind of speech pattern. Assistants can also stumble when there are multiple devices in a home that are listening for voice commands from across several rooms. And assistants can mess up when trying to differentiate between different family members’ voices or when trying to understand a child’s voice.

In other words, there are still many avenues a speech study could pursue over time, even if these aren’t its current focus.

That Apple is running a Siri speech study isn’t necessarily new. The company has historically run evaluations and studies like this in some form. But it’s less common to find Apple’s studies published directly on the App Store.

Though Apple could have published the app through the enterprise distribution process to keep it more under wraps, it chose to use its public marketplace. This more closely follows the App Store’s rules, as the research study is not an internally-facing app meant only for Apple employees.

Still, it’s not likely consumers will stumble across the app and be confused — the Siri Speech Study app is hidden from discovery. You have to have the app’s direct link to find it. (Good thing we’re nosy!)


Interview: Apple’s Head of Privacy details child abuse detection and Messages safety features

Last week, Apple announced a series of new features targeted at child safety on its devices. Though not live yet, the features will arrive later this year for users. Though the goals of these features — the protection of minors and limiting the spread of Child Sexual Abuse Material (CSAM) — are universally accepted to be good ones, there have been some questions about the methods Apple is using.

I spoke to Erik Neuenschwander, Head of Privacy at Apple, about the new features launching for its devices. He shared detailed answers to many of the concerns that people have about the features and talked at length about some of the tactical and strategic issues that could come up once this system rolls out.

I also asked about the rollout of the features, which arrive closely intertwined but are really completely separate systems with similar goals. To be specific, Apple is announcing three different things here, some of which are being confused with one another in coverage and in the minds of the public.

CSAM detection in iCloud Photos – A detection system called NeuralHash creates identifiers it can compare with IDs from the National Center for Missing & Exploited Children (NCMEC) and other entities to detect known CSAM content in iCloud Photo libraries. Most cloud providers already scan user libraries for this information — Apple’s system is different in that it does the matching on device rather than in the cloud. (A simplified sketch of what that kind of matching looks like follows this list.)

Communication Safety in Messages – A feature that a parent opts to turn on for a minor on their iCloud Family account. It will alert children when an image they are going to view has been detected to be explicit and it tells them that it will also alert the parent.

Interventions in Siri and search – A feature that will intervene when a user tries to search for CSAM-related terms through Siri and search and will inform the user of the intervention and offer resources.
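
To make the on-device distinction concrete, here is a deliberately simplified sketch of what hash matching against a known database can look like. This is not NeuralHash (Apple has not published that model); it uses a toy “average hash” purely to illustrate the general idea that visually similar images produce similar bit strings, which can then be compared against a list shipped with the operating system.

```swift
// Toy perceptual hash, for illustration only. This is NOT Apple's NeuralHash.
// Input: an 8x8 grayscale downsample of an image, with values in 0.0...1.0.
func averageHash(_ pixels: [[Double]]) -> UInt64 {
    let flat = pixels.flatMap { $0 }
    let mean = flat.reduce(0, +) / Double(flat.count)
    var hash: UInt64 = 0
    // Set bit i when pixel i is brighter than the image's mean brightness.
    for (i, value) in flat.enumerated() where value > mean {
        hash |= 1 << UInt64(i)
    }
    return hash
}

// Visually similar images differ in only a few bits.
func hammingDistance(_ a: UInt64, _ b: UInt64) -> Int {
    (a ^ b).nonzeroBitCount
}

// Comparison against a hypothetical database of known hashes bundled with the
// OS. In Apple's described design the device never learns this result; it is
// sealed inside an encrypted "safety voucher" instead.
func matchesKnownDatabase(_ hash: UInt64, database: [UInt64], tolerance: Int = 4) -> Bool {
    database.contains { hammingDistance(hash, $0) <= tolerance }
}
```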

For more on all of these features you can read our articles linked above or Apple’s new FAQ that it posted this weekend.

From personal experience, I know that there are people who don’t understand the difference between those first two systems, or assume that there will be some possibility that they may come under scrutiny for innocent pictures of their own children that may trigger some filter. It’s led to confusion in what is already a complex rollout of announcements. These two systems are completely separate, of course, with CSAM detection looking for precise matches with content that is already known to organizations to be abuse imagery. Communication Safety in Messages takes place entirely on the device and reports nothing externally — it’s just there to flag to a child that they are, or could be about to be, viewing explicit images. The feature is opt-in for the parent, and it is transparent to both parent and child that it is enabled.

Apple’s Communication Safety in Messages feature. Image Credits: Apple

There have also been questions about the on-device hashing of photos to create identifiers that can be compared with the database. Though NeuralHash is a technology that can be used for other kinds of features like faster search in photos, it’s not currently used for anything else on iPhone aside from CSAM detection. When iCloud Photos is disabled, the feature stops working completely. This offers an opt-out for people but at an admittedly steep cost given the convenience and integration of iCloud Photos with Apple’s operating systems.

Though this interview won’t answer every possible question related to these new features, this is the most extensive on-the-record discussion by Apple’s head of privacy to date. It seems clear from Apple’s willingness to provide access, and its ongoing FAQs and press briefings (there have been at least three so far and likely many more to come), that it feels it has a good solution here.

Despite the concerns and resistance, it seems as if it is willing to take as much time as is necessary to convince everyone of that. 

This interview has been lightly edited for clarity.

TC: Most other cloud providers have been scanning for CSAM for some time now. Apple has not. Obviously there are no current regulations that say that you must seek it out on your servers, but there is some roiling regulation in the EU and other countries. Is that the impetus for this? Basically, why now?

Erik Neuenschwander: Why now comes down to the fact that we’ve now got the technology that can balance strong child safety and user privacy. This is an area we’ve been looking at for some time, including current state-of-the-art techniques, which mostly involve scanning through the entire contents of users’ libraries on cloud services. That — as you point out — isn’t something that we’ve ever done; to look through users’ iCloud Photos. This system doesn’t change that either; it neither looks through data on the device, nor does it look through all photos in iCloud Photos. Instead what it does is give us a new ability to identify accounts which are starting collections of known CSAM.

So the development of this new CSAM detection technology is the watershed that makes now the time to launch this. And Apple feels that it can do it in a way that it feels comfortable with and that is ‘good’ for your users?

That’s exactly right. We have two co-equal goals here. One is to improve child safety on the platform and the second is to preserve user privacy. And what we’ve been able to do across all three of the features is bring together technologies that let us deliver on both of those goals.

Announcing the Communication Safety in Messages features and the CSAM detection in iCloud Photos system at the same time seems to have created confusion about their capabilities and goals. Was it a good idea to announce them concurrently? And why were they announced concurrently, if they are separate systems?

Well, while they are [two] systems, they are also of a piece along with our increased interventions that will be coming in Siri and search. As important as it is to identify collections of known CSAM where they are stored in Apple’s iCloud Photos service, it’s also important to try to get upstream of that already horrible situation. So CSAM detection means that there’s already known CSAM that has been through the reporting process, and is being shared widely, re-victimizing children on top of the abuse that had to happen to create that material in the first place. And so to do that, I think, is an important step, but it is also important to do things to intervene earlier on, when people are beginning to enter into this problematic and harmful area, or if there are already abusers trying to groom or to bring children into situations where abuse can take place. Communication Safety in Messages and our interventions in Siri and search actually strike at those parts of the process. So we’re really trying to disrupt the cycles that lead to CSAM that then ultimately might get detected by our system.

The process of Apple’s CSAM detection in iCloud Photos system. Image Credits: Apple

Governments and agencies worldwide are constantly pressuring all large organizations that have any sort of end-to-end or even partial encryption enabled for their users. They often lean on CSAM and possible terrorism activities as rationale to argue for backdoors or encryption defeat measures. Is launching the feature and this capability with on-device hash matching an effort to stave off those requests and say, look, we can provide you with the information that you require to track down and prevent CSAM activity — but without compromising a user’s privacy?

So, first, you talked about the device matching so I just want to underscore that the system as designed doesn’t reveal — in the way that people might traditionally think of a match — the result of the match to the device or, even if you consider the vouchers that the device creates, to Apple. Apple is unable to process individual vouchers; instead, all the properties of our system mean that it’s only once an account has accumulated a collection of vouchers associated with illegal, known CSAM images that we are able to learn anything about the user’s account. 

Now, why do it? Because, as you said, this is something that will provide that detection capability while preserving user privacy. We’re motivated by the need to do more for child safety across the digital ecosystem, and all three of our features, I think, take very positive steps in that direction. At the same time we’re going to leave privacy undisturbed for everyone not engaged in the illegal activity.

Does this, creating a framework to allow scanning and matching of on-device content, create a framework for outside law enforcement to counter with, ‘we can give you a list, we don’t want to look at all of the user’s data but we can give you a list of content that we’d like you to match’. And if you can match it with this content you can match it with other content we want to search for. How does it not undermine Apple’s current position of ‘hey, we can’t decrypt the user’s device, it’s encrypted, we don’t hold the key?’

It doesn’t change that one iota. The device is still encrypted, we still don’t hold the key, and the system is designed to function on on-device data. What we’ve designed has a device-side component — and it has the device-side component, by the way, for privacy improvements. The alternative of just processing by going through and trying to evaluate users’ data on a server is actually more amenable to changes [without user knowledge], and less protective of user privacy.

Our system involves both an on-device component, where the voucher is created but nothing is learned, and a server-side component, which is where that voucher is sent along with data coming to Apple’s service and processed across the account to learn if there are collections of illegal CSAM. That means that it is a service feature. I understand that it’s a complex attribute that a feature of the service has a portion where the voucher is generated on the device, but again, nothing’s learned about the content on the device. The voucher generation is actually exactly what enables us not to have to begin processing all users’ content on our servers, which we’ve never done for iCloud Photos. It’s those sorts of systems that I think are more troubling when it comes to the privacy properties — or how they could be changed without any user insight or knowledge to do things other than what they were designed to do.
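
As a rough illustration of the division of labor Neuenschwander describes, here is a toy model of the server-side threshold accounting. To be clear, this is not Apple’s cryptography: in the real design, threshold secret sharing means Apple mathematically cannot read any voucher until enough matching vouchers exist, whereas this sketch only models that control flow with a plain counter and an illustrative threshold value.

```swift
import Foundation

// Toy model of the safety-voucher threshold, for illustration only. In the
// real system the per-voucher match result is hidden by cryptography, not
// exposed as a Bool the server can read.
struct SafetyVoucher {
    let encryptedPayload: Data   // opaque to the server below the threshold
    let matchedKnownHash: Bool   // in reality, invisible per voucher
}

struct VoucherLedger {
    // Illustrative threshold; Apple's published figure is the expected
    // one-in-a-trillion false-report rate, not a specific count here.
    let threshold = 30
    private var vouchers: [String: [SafetyVoucher]] = [:]   // account ID -> vouchers

    mutating func receive(_ voucher: SafetyVoucher, forAccount account: String) {
        vouchers[account, default: []].append(voucher)
    }

    // Below the threshold the server learns nothing about the account; at or
    // above it, the matching vouchers become readable and go to human review.
    func reviewableVouchers(forAccount account: String) -> [SafetyVoucher]? {
        let matching = (vouchers[account] ?? []).filter { $0.matchedKnownHash }
        return matching.count >= threshold ? matching : nil
    }
}
```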

One of the bigger queries about this system is that Apple has said that it will just refuse action if it is asked by a government or other agency to compromise by adding things that are not CSAM to the database to check for them on-device. There are some examples where Apple has had to comply with local law at the highest levels if it wants to operate there, China being an example. So how do we trust that Apple is going to hew to this rejection of interference if pressured or asked by a government to compromise the system?

Well, first, that is launching only for US iCloud accounts, and so the hypotheticals seem to bring up generic countries or other countries that aren’t the US when they speak in that way. And therefore it seems to be the case that people agree US law doesn’t offer these kinds of capabilities to our government.

But even in the case where we’re talking about some attempt to change the system, it has a number of protections built in that make it not very useful for trying to identify individuals holding specifically objectionable images. The hash list is built into the operating system; we have one global operating system and don’t have the ability to target updates to individual users, and so hash lists will be shared by all users when the system is enabled. And secondly, the system requires the threshold of images to be exceeded, so trying to seek out even a single image from a person’s device or set of people’s devices won’t work, because the system simply does not provide any knowledge to Apple for single photos stored in our service. And then, thirdly, the system has built into it a stage of manual review where, if an account is flagged with a collection of illegal CSAM material, an Apple team will review that to make sure that it is a correct match of illegal CSAM material prior to making any referral to any external entity. And so the hypothetical requires jumping over a lot of hoops, including having Apple change its internal process to refer material that is not illegal, like known CSAM, and we don’t believe that there’s a basis on which people will be able to make that request in the US. And the last point that I would just add is that it does still preserve user choice: if a user does not like this kind of functionality, they can choose not to use iCloud Photos, and if iCloud Photos is not enabled, no part of the system is functional.

So if iCloud Photos is disabled, the system does not work, which is the public language in the FAQ. I just wanted to ask specifically, when you disable iCloud Photos, does this system continue to create hashes of your photos on device, or is it completely inactive at that point?

If users are not using iCloud Photos, NeuralHash will not run and will not generate any vouchers. CSAM detection is a neural hash being compared against a database of the known CSAM hashes that are part of the operating system image. None of that piece, nor any of the additional parts, including the creation of the safety vouchers or the uploading of vouchers to iCloud Photos, is functioning if you’re not using iCloud Photos.

In recent years, Apple has often leaned into the fact that on-device processing preserves user privacy. And in nearly every previous case I can think of, that’s true. Scanning photos to identify their content and allow me to search them, for instance. I’d rather that be done locally and never sent to a server. However, in this case, it seems like there may actually be a sort of anti-effect in that you’re scanning locally, but for external use cases, rather than scanning for personal use — creating a ‘less trust’ scenario in the minds of some users. Add to this that every other cloud provider scans it on their servers, and the question becomes: why should this implementation, being different from most others, engender more trust in the user rather than less?

I think we’re raising the bar, compared to the industry-standard way to do this. Any sort of server-side algorithm that’s processing all users’ photos is putting that data at more risk of disclosure and is, by definition, less transparent in terms of what it’s doing on top of the user’s library. So, by building this into our operating system, we gain the same properties that the integrity of the operating system provides already across so many other features — the one global operating system that’s the same for all users who download it and install it — and so in one property it is much more challenging even to target it to an individual user. On the server side that’s actually quite easy — trivial. Being able to have some of those properties, building it into the device, and ensuring it’s the same for all users with the feature enabled gives a strong privacy property.

Secondly, you point out how use of on-device technology is privacy-preserving, and in this case, that’s a representation that I would make to you, again: that it’s really the alternative to where users’ libraries have to be processed on a server that is less private.

What we can say with this system is that it leaves privacy completely undisturbed for every other user who’s not into this illegal behavior; Apple gains no additional knowledge about any user’s cloud library. No user’s iCloud Library has to be processed as a result of this feature. Instead what we’re able to do is to create these cryptographic safety vouchers. They have mathematical properties that say Apple will only be able to decrypt the contents, or learn anything about the images and users, specifically for those that collect photos that match illegal, known CSAM hashes — and that’s just not something anyone can say about a cloud processing scanning service, where every single image has to be processed in a clear decrypted form and run by a routine to determine who knows what. At that point it’s very easy to determine anything you want [about a user’s images], versus our system, where the only thing determined is those images that match a set of known CSAM hashes that came directly from NCMEC and other child safety organizations.

Can this CSAM detection feature stay holistic when the device is physically compromised? Sometimes cryptography gets bypassed locally, somebody has the device in hand — are there any additional layers there?

I think it’s important to underscore how very challenging and expensive and rare this is. It’s not a practical concern for most users, though it’s one we take very seriously, because the protection of data on the device is paramount for us. And so if we engage in the hypothetical where we say that there has been an attack on someone’s device: that is such a powerful attack that there are many things that attacker could attempt to do to that user. There’s a lot of a user’s data that they could potentially get access to. And the idea that the most valuable thing an attacker — who’s undergone such an extremely difficult action as breaching someone’s device — would want is to trigger a manual review of an account doesn’t make much sense.

Because, let’s remember, even if the threshold is met and we have some vouchers that are decrypted by Apple, the next stage is a manual review to determine if that account should be referred to NCMEC or not, and that is something that we want to only occur in cases where it’s a legitimate, high-value report. We’ve designed the system in that way, but if we consider the attack scenario you brought up, I think that’s not a very compelling outcome to an attacker.

Why is there a threshold of images for reporting, isn’t one piece of CSAM content too many?

We want to ensure that the reports that we make to NCMEC are high-value and actionable, and one of the notions of all systems is that there’s some uncertainty built in to whether or not that image matched. And so the threshold allows us to reach the point where we expect a false reporting rate for review of one in 1 trillion accounts per year. So, working with the idea that we do not have any interest in looking through users’ photo libraries outside those that are holding collections of known CSAM, the threshold allows us to have high confidence that those accounts that we review are ones that, when we refer to NCMEC, law enforcement will be able to take up and effectively investigate, prosecute and convict.
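
The arithmetic behind a threshold like that is easy to sanity-check. In the sketch below, every parameter is an assumption chosen purely for illustration (Apple has only published the one-in-a-trillion figure); it computes the binomial tail probability that an innocent library accumulates enough false matches to cross a threshold.

```swift
import Foundation

// All numbers here are illustrative assumptions, not Apple's parameters.
let p = 1e-6          // assumed per-photo false-match probability
let n = 20_000        // assumed photos in one account's library
let threshold = 30    // assumed number of matches required for review

// log C(n, k), computed iteratively to avoid overflowing Double.
func logChoose(_ n: Int, _ k: Int) -> Double {
    (1...k).reduce(0.0) { $0 + log(Double(n - k + $1)) - log(Double($1)) }
}

// P(X >= threshold) for X ~ Binomial(n, p), summed term by term in log space.
var logTerm = logChoose(n, threshold)
    + Double(threshold) * log(p)
    + Double(n - threshold) * log1p(-p)
var tail = 0.0
var k = threshold
while k < n && logTerm > -700 {   // stop once terms underflow Double
    tail += exp(logTerm)
    // Step from the k-th to the (k+1)-th term of the binomial pmf.
    logTerm += log(Double(n - k) / Double(k + 1)) + log(p) - log1p(-p)
    k += 1
}
print("P(innocent account crosses threshold) ≈ \(tail)")
```

Even with these deliberately pessimistic assumptions, the tail probability lands far below one in a trillion, which is the intuition behind requiring a collection of matches rather than acting on a single one.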


New Apple technology will warn parents and children about sexually explicit photos in Messages

Apple later this year will roll out new tools that will warn children and parents if the child sends or receives sexually explicit photos through the Messages app. The feature is part of a handful of new technologies Apple is introducing that aim to limit the spread of Child Sexual Abuse Material (CSAM) across Apple’s platforms and services.

As part of these developments, Apple will be able to detect known CSAM images on its mobile devices, like iPhone and iPad, and in photos uploaded to iCloud, while still respecting consumer privacy.

The new Messages feature, meanwhile, is meant to enable parents to play a more active and informed role when it comes to helping their children learn to navigate online communication. Through a software update rolling out later this year, Messages will be able to use on-device machine learning to analyze image attachments and determine if a photo being shared is sexually explicit. This technology does not require Apple to access or read the child’s private communications, as all the processing happens on the device. Nothing is passed back to Apple’s servers in the cloud.
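
Apple has not published the classifier Messages uses, but the shape of “all processing happens on the device” is familiar from its public frameworks. As a hedged sketch, this is what fully local image classification looks like with Vision’s built-in classifier; the sensitive-content model in Messages is Apple’s own and is not exposed through this API.

```swift
import CoreGraphics
import Vision

// Fully on-device image classification using Vision's built-in classifier.
// Illustrative only: Messages uses Apple's own private model, not this API.
func classifyLocally(_ image: CGImage) throws -> [(label: String, confidence: Float)] {
    let request = VNClassifyImageRequest()
    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try handler.perform([request])   // inference runs locally; no network involved
    let observations = request.results ?? []
    return observations.map { ($0.identifier, $0.confidence) }
}
```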

If a sensitive photo is discovered in a message thread, the image will be blocked and a label will appear below the photo that states, “this may be sensitive” with a link to click to view the photo. If the child chooses to view the photo, another screen appears with more information. Here, a message informs the child that sensitive photos and videos “show the private body parts that you cover with bathing suits” and “it’s not your fault, but sensitive photos and videos can be used to harm you.”

It also suggests that the person in the photo or video may not want it to be seen and it could have been shared without their knowing.

Image Credits: Apple

These warnings aim to help guide the child to make the right decision by choosing not to view the content.

However, if the child clicks through to view the photo anyway, they’ll then be shown an additional screen that informs them that if they choose to view the photo, their parents will be notified. The screen also explains that their parents want them to be safe and suggests that the child talk to someone if they feel pressured. It offers a link to more resources for getting help, as well.

There’s still an option at the bottom of the screen to view the photo, but again, it’s not the default choice. Instead, the screen is designed in a way where the option to not view the photo is highlighted.

These types of features could help protect children from sexual predators, not only by introducing technology that interrupts the communications and offers advice and resources, but also because the system will alert parents. In many cases where a child is hurt by a predator, parents didn’t even realize the child had begun to talk to that person online or by phone. This is because child predators are very manipulative and will attempt to gain the child’s trust, then isolate the child from their parents so they’ll keep the communications a secret. In other cases, the predators have groomed the parents, too.

Apple’s technology could help in both cases by intervening, identifying and alerting to explicit materials being shared.

However, a growing amount of CSAM is what’s known as self-generated CSAM, or imagery that is taken by the child, which may then be shared consensually with the child’s partner or peers. In other words, sexting or sharing “nudes.” According to a 2019 survey from Thorn, a company developing technology to fight the sexual exploitation of children, this practice has become so common that 1 in 5 girls ages 13 to 17 said they have shared their own nudes, and 1 in 10 boys have done the same. But the child may not fully understand how sharing that imagery puts them at risk of sexual abuse and exploitation.

The new Messages feature will offer a similar set of protections here, too. In this case, if a child attempts to send an explicit photo, they’ll be warned before the photo is sent. Parents can also receive a message if the child chooses to send the photo anyway.

Apple says the new technology will arrive as part of a software update later this year to accounts set up as families in iCloud for iOS 15, iPadOS 15, and macOS Monterey in the U.S.

This update will also include updates to Siri and Search that will offer expanded guidance and resources to help children and parents stay safe online and get help in unsafe situations. For example, users will be able to ask Siri how to report CSAM or child exploitation. Siri and Search will also intervene when users search for queries related to CSAM to explain that the topic is harmful and provide resources to get help.


Apple and Google’s AI wizardry promises privacy—at a cost

Image Credits: Getty Images

Since the dawn of the iPhone, many of the smarts in smartphones have come from elsewhere: the corporate computers known as the cloud. Mobile apps sent user data cloudward for useful tasks like transcribing speech or suggesting message replies. Now Apple and Google say smartphones are smart enough to do some crucial and sensitive machine learning tasks like those on their own.

At Apple’s WWDC event this month, the company said its virtual assistant Siri will transcribe speech without tapping the cloud in some languages on recent and future iPhones and iPads. During its own I/O developer event last month, Google said the latest version of its Android operating system has a feature dedicated to secure, on-device processing of sensitive data, called the Private Compute Core. Its initial uses include powering the version of the company’s Smart Reply feature built into its mobile keyboard that can suggest responses to incoming messages.

Apple and Google both say on-device machine learning offers more privacy and snappier apps. Not transmitting personal data cuts the risk of exposure and saves time spent waiting for data to traverse the internet. At the same time, keeping data on devices aligns with the tech giants’ long-term interest in keeping consumers bound into their ecosystems. People who hear that their data can be processed more privately might become more willing to agree to share more data.



Android announces six new features, emphasizing safety and accessibility

Android shared information today about six features that will roll out this summer. Some of these are just quality of life upgrades, like starring text messages to easily find them later, or getting contextual Emoji Kitchen suggestions depending on what you’re typing. But other aspects of this update emphasize security, safety, and accessibility.

Last summer, Google added a feature on Android that basically uses your phone as a seismometer to create “the world’s largest earthquake detection network.” The system is free, and since testing in California, it’s also launched in New Zealand and Greece. Now, Google will introduce this feature in Turkey, the Philippines, Kazakhstan, Kyrgyz Republic, Tajikistan, Turkmenistan and Uzbekistan. The company says that they’ll continue expanding the feature this year, prioritizing countries with the highest earthquake risk.

Image Credits: Google

Google is also expanding on another feature released last year, which made Google Assistant compatible with Android apps. The initial update supported apps like Spotify, Snapchat, Twitter, Walmart, Discord, Etsy, MyFitnessPal, Mint, Nike Adapt, Nike Run Club, eBay, Kroger, Postmates and Wayfair. Today’s update mentioned apps like eBay, Yahoo! Finance, Strava and Capital One. These features are comparable to Apple’s support of Siri with iOS apps, which includes the ability to open apps, perform tasks and record a custom command.

When it comes to accessibility, Google is ramping up its gaze detection feature, which is now in beta. Gaze detection allows people to ask Voice Access to only respond when they’re looking at their screen, allowing people to naturally move between talking with friends and using their phone. Now, Voice Access will also have enhanced password input — when it detects a password field, it will allow you to input letters, numbers, and symbols by saying “capital P” or “dollar sign,” for example, making it easier for users to more quickly enter this sensitive information. In October, Google Assistant became available on gaze-powered accessible devices, and in the same month, Google researchers debuted a demo that made it so people using sign language could be identified as the “active speaker” in video calls. Apple doesn’t have a comparable gaze detection feature yet that’s widely available, though they acquired SensoMotoric Instruments (SMI), an eye-tracking firm, in 2017. So, hopefully similar accessibility features will be in the works at Apple, especially as Google continues to build out theirs.

Today’s Android update also lets Android Auto users customize more of their experience. Now, you can set your launcher screen from your phone, set dark mode manually, and more easily browse content on media apps with an A-Z scroll bar and “back to top” button. Messaging apps like WhatsApp and Messages will now be compatible on the launch screen – proceed with caution and don’t drive distracted – and EV charging, parking, and navigation apps will now be available for use.


Apple’s iPadOS 15 breaks the app barrier

The announcement of new iPad software at this year’s WWDC conference had abnormally large expectations hung on it. The iPad lineup, especially the larger iPad Pro, has kept up an impressively frantic pace of hardware innovation over the past few years. In that same time frame, the software of the iPad, especially its ability to allow users to use multiple apps at once and its onramps for professional software makers, has come under scrutiny for an apparently slower pace.

This year’s announcements about iOS 15 and iPadOS 15 seemed designed to counter that narrative with the introduction of a broad number of quality-of-life improvements to multitasking, as well as a suite of system-wide features that nearly all come complete with their own developer-facing APIs to build on. I had the chance to speak to Bob Borchers, Apple’s VP of Worldwide Product Marketing, and Sebastien (Seb) Marineau-Mes, VP, Intelligent System Experience at Apple, about the release of iPadOS 15 and to discuss a variety of these improvements.

Marineau-Mes works on the team of Apple software SVP Craig Federighi and was pivotal in the development of this new version.

iPad has a bunch of new core features including SharePlay, Live Text, Focuses, Universal Control, on-device Siri processing and a new edition of Swift Playgrounds designed to be a prototyping tool. Among the most hotly anticipated for iPad Pro users, however, are improvements to Apple’s multitasking system. 

If you’ve been following along, you’ll know that the gesture-focused multitasking interface of iPadOS has had its share of critics, including me. Though it can be useful in the right circumstances, the un-discoverable gesture system and confusing hierarchy of the different kinds of combinations of apps made it a sort of floppy affair for even an adept user to utilize correctly, much less a beginner.

Since the iPad stands alone as pretty much the only successful tablet device on the market, Apple has a unique position in the industry to determine what kinds of paradigms are established as standard. It’s a rare opportunity to say: hey, this is what working on a device like this feels like; looks like; should be.

So I ask Borchers and Marineau-Mes to talk a little bit about multitasking: specifically, Apple’s philosophy in the design of multitasking on iPadOS 15 and the update from the old version, which required a lot of acrobatics of the finger and a strong sense of spatial awareness of objects hovering off the edges of the screen.

“I think you’ve got it,” Borchers says when I mention the spatial gymnastics, “but the way that we think about this is that the step forward in multitasking makes it easier to discover, easier to use and even more powerful. And while pros, I think, were the ones who were using multitasking in the past, we really want to take it more broadly because we think there’s applicability to many, many folks. And that’s why the discovery and the ease of use, I think, were critical.”

“You had a great point there when you talked about the spatial model, and one of our goals was to actually make the spatial model more explicit in the experience,” says Marineau-Mes, “where, for example, if you’ve got a split view and you’re replacing one of the windows, we kind of open the curtain and tuck the other app to the side. You can see it — it’s not a hidden mental model, it’s one that’s very explicit.

“Another great example of it is when you go into the app switcher to reconfigure your windows: you’re actually doing drag and drop as you rearrange your new split views, or you dismiss apps and so on. So it’s not a hidden model, it’s one where we really try to reinforce a spatial model with an explicit one for the user through all of the animations and all of the kinds of affordances.”

Apple’s goal this time around, he says, was to add affordances for the user to understand that multitasking was even an option — like the small series of dots at the top of every app and window that now allows you to explicitly choose an available configuration, rather than the app-and-dock-juggling method of the past. He goes on to say that consistency was a key metric for them on this version of the OS. The appearance of Slide Over apps in the same switcher view as all other apps, for instance. Or the way that you can choose configurations of apps via the button, by drag and drop in the switcher and get the same results.

In the dashboard, Marineau-Mes says, “you get an at-a-glance view of all of the apps that you’re running and a full model of how you’re navigating that through the iPad’s interface.”

This ‘at a glance’ map of the system should be very welcome to advanced users. Even for a very aggressive pro user like me, Slide Over apps became more of a nuisance than anything because I couldn’t keep track of how many were open and when to use them. The ability to combine them on the switcher itself is one of those things that Apple has wanted to get into the OS for years but is just now making its way onto iPads. Persistence of organization, really, was the critical problem to tackle.

“I think we believe strongly in building a mental model where people know where things are [on iPad],” says Marineau-Mes. “And I think you’re right when it comes to persistence. I think it also applies to, for example, the home screen. People have a very strong mental model of where things are in the home screen, as well as all of the apps that they’ve configured. And so we try to maintain that mental model well, and also allow people to reorganize again in the switcher.”

He goes on to explain the new ‘shelf’ feature that displays every instance or window that an app has open within itself. They implemented this as a per-app feature rather than a system-wide feature, he says, because the association of that shelf with a particular app fit the overall mental model that they’re trying to build. The value of this shelf may jump into higher relief when more professional apps that may have a dozen documents or windows open at once and active during a project ship later this year.

Another nod to advanced users in iPadOS 15 is the rich keyboard shortcut set offered across the system. The interface can be navigated by arrow keys now, many advanced commands are there and you can even move around on an iPad using a game controller. 

“One of the key goals this year was to make basically everything in the system navigable from the keyboard,” says Marineau-Mes, “so that if you don’t want to, you don’t have to take your hands off the keyboard. All of the new multitasking affordances and features you can do through the keyboard shortcuts. You’ve got the new keyboard shortcut menu bar where you can see all the shortcuts that are available. It’s great for discoverability. You can search them. And, you know, this is a subtle point, but we even made a very conscious effort to rationalize the shortcuts across Mac and iPadOS, so that if you’re using Universal Control, for example, you’re going to go from one environment to the other seamlessly. You want to ensure that consistency as you go across.”
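
For third-party apps, the developer-facing side of that shortcut menu is UIKit’s UIKeyCommand: commands given a title show up in the discoverability overlay. A minimal sketch, with hypothetical command names:

```swift
import UIKit

// Minimal sketch: titled key commands surface in iPadOS's shortcut menu.
// The command names and selectors here are hypothetical examples.
class CanvasViewController: UIViewController {
    override var keyCommands: [UIKeyCommand]? {
        [
            UIKeyCommand(title: "New Canvas",
                         action: #selector(newCanvas),
                         input: "n",
                         modifierFlags: .command),
            UIKeyCommand(title: "Zoom to Fit",
                         action: #selector(zoomToFit),
                         input: "0",
                         modifierFlags: .command),
        ]
    }

    @objc private func newCanvas() { /* create a fresh document */ }
    @objc private func zoomToFit() { /* adjust the viewport */ }
}
```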

The gestures, however, are staying as a nod to consistency for existing users who may be used to them.

To me, one of the more interesting and potentially powerful developments is the introduction of the Center Window and its accompanying API. A handful of Apple apps like Mail, Notes and Messages now allow items to pop out into an overlapping window.

“It was a very deliberate decision on our part,” says Marineau-Mes about adding this new element. “This really brings a new level of productivity where you can have, you know, this floating window. You can have content behind it. You can seamlessly cut and paste. And that’s something that’s just not possible with the traditional [iPadOS] model. And we also really strove to make it consistent with the rest of multitasking, where that center window can also become one of the windows in your split view, or full size, and then go back to being a center window. We think it’s a cool addition to the model and we really look forward to third parties embracing it.”
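
The accompanying API, as Apple presented it at WWDC this year, hangs off the existing multi-window scene system: an app requests a new scene session with a “prominent” presentation style. A sketch under that assumption, with a hypothetical activity type:

```swift
import UIKit

// Sketch of requesting a center window via the iPadOS 15 scene-activation API.
// "com.example.noteDetail" is a hypothetical activity type for illustration.
func openCenterWindow(forNoteID noteID: String) {
    let activity = NSUserActivity(activityType: "com.example.noteDetail")
    activity.userInfo = ["noteID": noteID]

    let options = UIWindowScene.ActivationRequestOptions()
    options.preferredPresentationStyle = .prominent   // the centered, floating style

    UIApplication.shared.requestSceneSessionActivation(
        nil,                        // nil lets the system create a new scene session
        userActivity: activity,
        options: options,
        errorHandler: nil
    )
}
```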

Early reception of the look Apple gave at iPadOS 15 still has an element of reservation about it, given that many of the most powerful creative apps are made by third parties that must adopt these technologies in order for them to be truly useful. But Apple, Borchers says, is working hard to make sure that pro apps adopt as many of these new paradigms and technologies as possible, so that come fall, the iPad will feel like a more hospitable host for the kinds of advanced work pros want to do there.

One of the nods to this multi-modal universe that the iPad exists in is Universal Control. This new feature uses Bluetooth beaconing, peer-to-peer WiFi and the iPad’s touchpad support to allow you to place your devices close to one another and — in a clever use of reading user intent — slide your mouse to the edge of a screen and onto your Mac or iPad seamlessly. 

CUPERTINO, CALIFORNIA – June 7, 2021: Apple’s senior vice president of Software Engineering Craig Federighi showcases the ease of Universal Control, as seen in this still image from the keynote video of Apple’s Worldwide Developers Conference at Apple Park. (Photo Credit: Apple Inc.)

“I think what we have seen and observed from our users, both pro and otherwise, is that we have lots of people who have Macs, and they have iPads, and they have iPhones, and we believe in making these things work together in ways that are powerful,” says Borchers. “And it just felt like a natural place to be able to go and extend our Continuity model so that you could make use of this incredible platform that is iPadOS while working with your Mac, right next to it. And I think the big challenge was: how do you do that in kind of a magical, simple way? And that’s what Seb and his team have been able to accomplish.”

“It really builds on the foundation we made with Continuity and Sidecar,” adds Marineau-Mes. “We really thought a lot about how you make the experience — the setup experience — as seamless as possible. How do you discover that you’ve got devices side by side?

“The other thing we thought about was what are the workflows that people want to have, and what capabilities would be essential for that. That’s where things like the ability to seamlessly drag content across the platforms, or cut and paste, were, we felt, really, really important. Because I think that’s really what brings the magic to the experience.”

Borchers adds that it makes all the Continuity features that much more discoverable. Continuity’s shared clipboard, for instance, is an always-on but invisible presence. Expanding that to visual and mouse-driven models made some natural sense.

“It’s just like, oh, of course, I can drag that all the way across here,” he says.

“Bob, you say, of course,” Marineau-Mes laughs. “And yet for those of us working in platforms for a long time, the ‘of course’ is technically very, very challenging. Totally non-obvious.”

Another area where iPadOS 15 is showing some promising expansionary behavior is in system-wide activities that allow you to break out of the box of in-app thinking. These include embedded recommendations that seed themselves into various apps; SharePlay, which makes an appearance wherever video calls are found; and Live Text, which turns all of your photos into indexed archives searchable with a keyboard.

Another is Quick Note, a system extension that lets you swipe from the bottom corner of your screen wherever you are in the system.

“There are, I think, a few interesting things that we did with Quick Note,” says Marineau-Mes. “One is this idea of linking. So that if I’m working in Safari or Yelp or another app, I can quickly insert a link to whatever content I’m viewing. I don’t know about you, but it’s something that I certainly do a lot when I do research.

“The old way was, like, cut and paste and maybe take a screenshot, create a note and jot down some notes. And now we’ve made that very, very seamless and fluid across the whole system. It even works the other way where, if I’m now in Safari and I have a note that refers to that page in Safari, you’ll see it revealed as a thumbnail at the bottom of the screen’s right-hand side. So, we’ve really tried to bring the notes experience to be something that just permeates the system and is easily accessible from everywhere.”

Many of the system-wide capabilities that Apple is introducing in iPadOS 15 and iOS 15 have an API that developers can tap into. That is not always the case with Apple’s newest toys, which in years past have often been left to linger in the private section of its list of frameworks rather than be offered to developers as a way to enhance their apps. Borchers says that this is an intentional move that offers a ‘broader foundation of intelligence’ across the entire system. 

This broader intelligence includes Siri moving a ton of commands to its local scope. This involved having to move a big chunk of Apple’s speech recognition to an on-device configuration in the new OS as well. The results, says Borchers, are a vastly improved day-to-day Siri experience, with many common commands executing immediately upon request — something that was a bit of a dice roll in days of Siri past. The removal of the reputational hit that Siri was taking from commands that went up to the cloud never to return could be the beginning of a turnaround for the public perception of Siri’s usefulness.
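
Siri’s own pipeline isn’t exposed to third parties, but the same on-device shift is visible in the public Speech framework. A minimal sketch, assuming a pre-recorded audio file and that speech-recognition permission has already been granted:

```swift
import Speech

// On-device transcription via the public Speech framework (not Siri's
// private pipeline). Requires prior SFSpeechRecognizer authorization.
func transcribeLocally(fileURL: URL) {
    guard let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US")),
          recognizer.supportsOnDeviceRecognition else { return }

    let request = SFSpeechURLRecognitionRequest(url: fileURL)
    request.requiresOnDeviceRecognition = true  // audio never leaves the device

    _ = recognizer.recognitionTask(with: request) { result, _ in
        if let result = result, result.isFinal {
            print(result.bestTranscription.formattedString)
        }
    }
}
```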

The on-device weaving of the intelligence provided by the Apple Neural Engine (ANE) also includes the indexing of text across photos in the entire system, past, present and in-the-moment.

“We could have done Live Text only in Camera and Photos, but we wanted it to apply anywhere we’ve got images, whether it be in Safari or Quick Look or wherever,” says Marineau-Mes. “One of my favorite demos of Live Text is actually when you’ve got that long, complicated field for a password for a Wi-Fi network. You can just actually bring it up within the keyboard and take a picture of it, get the text in it and copy and paste it into the field. It’s one of those things that’s just kind of magical.”
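
Apple hasn’t said exactly what powers Live Text under the hood, but developers have long been able to approximate the trick with the Vision framework’s text recognizer. A quick sketch:

```swift
import Vision

// Recognize text in an image, roughly what a Live Text-style feature needs.
func recognizeText(in cgImage: CGImage) {
    let request = VNRecognizeTextRequest { request, _ in
        let observations = request.results as? [VNRecognizedTextObservation] ?? []
        for observation in observations {
            if let candidate = observation.topCandidates(1).first {
                print(candidate.string)  // e.g. that long Wi-Fi password
            }
        }
    }
    request.recognitionLevel = .accurate
    try? VNImageRequestHandler(cgImage: cgImage, options: [:]).perform([request])
}
```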

On the developer service front of iPadOS 15, I ask specifically about Swift Playgrounds, which adds the ability to write, compile and ship apps to the App Store for the first time completely on iPad. It’s not the native Xcode some developers were hoping for, but, Borchers says, Playgrounds has moved beyond just ‘teaching people how to code’ and become a real part of many developer pipelines.

“I think one of the big insights here was that we also saw a number of kind of pro developers using it as a prototyping platform, and a way to be able to be on the bus, or in the park, or wherever. If you wanted to get in and give something a try, this was a super accessible and easy way to get there, and could be a nice adjunct to ‘hey, I want to learn to code.’”

“If you’re a developer,” adds Marineau-Mes, “it’s actually more productive to be able to run that app on the device that you’re working on, because you really get great fidelity. And with the open project format, you can go back and forth between Xcode and Playgrounds. So, as Bob said, we can really envision people using this for a lot of rapid prototyping on the go without having to bring along the rest of their development environment, so we think it’s a really, really powerful addition to our development tools this year.”

Way back in 2018 I profiled a new team at Apple that was building out a testing apparatus to help make sure it was addressing real-world use cases and process flows involving machines like the (at the time unrevealed) new Mac Pro, iMacs, MacBooks and iPads. One of the demos that stood out at the time was a deep integration with music apps like Logic that would allow the input models of iPad to complement the core app: tapping out a rhythm on a pad, or brightening and adjusting sound more intuitively with the touch interface. More of Apple’s work these days seems to be aimed at allowing users to move seamlessly back and forth between its various computing platforms, taking advantage of the strengths of each (raw power, portability, touch, etc.) to complement a workflow. A lot of iPadOS 15 appears to be geared this way.

Whether it will be enough to turn the corner on the perception of the iPad as a work device held back by its software, I’ll reserve judgment until it ships later this year. But in the near term, I am cautiously optimistic: this set of enhancements that break out of the ‘app box’, the clearer affordances for multitasking both in and out of single apps, and the dedication to API support all point towards an expansionist mentality on the iPad software team. A good sign in general.

#api, #app-store, #apple-inc, #california, #computing, #craig-federighi, #cupertino, #game-controller, #ios, #ios-11, #ipad, #ipados, #ipads, #peer-to-peer, #portable-media-players, #safari, #sidecar, #siri, #speech-recognition, #tablet-computers, #tc, #touchscreens, #wi-fi

7 new security features Apple quietly announced at WWDC

Apple went big on privacy during its Worldwide Developer Conference (WWDC) keynote this week, showcasing features from on-device Siri audio processing to a new privacy dashboard for iOS that makes it easier than ever to see which apps are collecting your data and when.

While typically vocal about security during the Memoji-filled, two-hour-long(!) keynote, the company also quietly introduced several new security and privacy-focused features during its WWDC developer sessions. We’ve rounded up some of the most interesting — and important.

Passwordless login with iCloud Keychain

Apple is the latest tech company taking steps to ditch the password. During its “Move beyond passwords” developer session, it previewed Passkeys in iCloud Keychain, a method of passwordless authentication powered by WebAuthn, and Face ID and Touch ID.

The feature, which will ultimately be available in both iOS 15 and macOS Monterey, means you no longer have to set a password when creating an account on a website or app. Instead, you’ll simply pick a username, and then use Face ID or Touch ID to confirm it’s you. The passkey is then stored in your keychain and synced across your Apple devices using iCloud — so you don’t have to remember it, nor do you have to carry around a hardware authenticator key.

“Because it’s just a single tap to sign in, it’s simultaneously easier, faster and more secure than almost all common forms of authentication today,” said Garrett Davidson, an Apple authentication experience engineer. 

While it’s unlikely to be available on your iPhone or Mac any time soon — Apple says the feature is still in its ‘early stages’ and it’s currently disabled by default — the move is another sign of the growing momentum behind eliminating passwords, which are prone to being forgotten, reused across multiple services and — ultimately — phished. Microsoft previously announced plans to make Windows 10 password-free, and Google recently confirmed that it’s working towards “creating a future where one day you won’t need a password at all”.
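
For the developer-minded, the session demoed this flow through the AuthenticationServices framework. A rough sketch of registration, with the relying-party domain and server-supplied values as placeholders (and remembering the API was gated behind a developer setting in the betas):

```swift
import AuthenticationServices

// Passkey registration sketch. "example.com", the challenge and the user
// details would come from your own server, per the WebAuthn ceremony.
func registerPasskey(challenge: Data, userName: String, userID: Data) {
    let provider = ASAuthorizationPlatformPublicKeyCredentialProvider(
        relyingPartyIdentifier: "example.com")
    let request = provider.createCredentialRegistrationRequest(
        challenge: challenge, name: userName, userID: userID)

    let controller = ASAuthorizationController(authorizationRequests: [request])
    // Set controller.delegate to receive the attestation response, then send
    // it to the server to complete registration.
    controller.performRequests()
}
```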

Microphone indicator in macOS

macOS has a new indicator to tell you when the microphone is on. (Image: Apple)

Since the introduction of iOS 14, iPhone users have been able to keep an eye on which apps are accessing their microphone via a green or orange dot in the status bar. Now it’s coming to the desktop too.

In macOS Monterey, users will be able to see which apps are accessing their Mac’s microphone in Control Center, MacRumors reports, which will complement the existing hardware-based green light that appears next to a Mac’s webcam when the camera is in use.

Secure paste

iOS 15, which will include a bunch of privacy-bolstering tools from Mail Privacy Protection to App Privacy Reports, is also getting a feature called Secure Paste that will help to shield your clipboard data from other apps.

This feature will enable users to paste content from one app to another without the second app being able to access the information on the clipboard until you paste it. This is a significant improvement over iOS 14, which would notify you when an app read data from the clipboard but did nothing to prevent it from happening.

“With secure paste, developers can let users paste from a different app without having access to what was copied until the user takes action to paste it into their app,” Apple explains. “When developers use secure paste, users will be able to paste without being alerted via the [clipboard] transparency notification, helping give them peace of mind.”

While this feature sounds somewhat insignificant, it’s being introduced following a major privacy issue that came to light last year. In March 2020, security researchers revealed that dozens of popular iOS apps — including TikTok — were “snooping” on users’ clipboard without their consent, potentially accessing highly sensitive data.
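
A related building block has existed since iOS 14: an app can ask whether the clipboard contains a certain kind of content, such as a URL, without actually reading it. A small sketch:

```swift
import UIKit

// Check for a probable web URL on the clipboard without reading it.
// No transparency banner fires until the app actually accesses the value.
func checkClipboardForLink() {
    UIPasteboard.general.detectPatterns(for: [.probableWebURL]) { result in
        if case .success(let patterns) = result,
           patterns.contains(.probableWebURL) {
            // Offer a "paste link" button; reading UIPasteboard.general.string
            // later is what triggers the notification.
        }
    }
}
```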

Advanced Fraud Protection for Apple Card

Payments fraud is more prevalent than ever as a result of the pandemic, and Apple is looking to do something about it. As first reported by 9to5Mac, the company has previewed Advanced Fraud Protection, a feature that will let Apple Card users generate new card numbers in the Wallet app.

While details remain thin — the feature isn’t live in the first iOS 15 developer beta — Apple’s explanation suggests that Advanced Fraud Protection will make it possible to generate new security codes — the three-digit number you enter at checkout — when making online purchases.

“With Advanced Fraud Protection, Apple Card users can have a security code that changes regularly to make online Card Number transactions even more secure,” the brief explainer reads. We’ve asked Apple for some more information. 

‘Unlock with Apple Watch’ for Siri requests

As a result of the widespread mask-wearing necessitated by the pandemic, Apple introduced an ‘Unlock with Apple Watch’ feature in iOS 14.5 that enabled users to unlock their iPhone and authenticate Apple Pay payments using an Apple Watch instead of Face ID.

The scope of this feature is expanding with iOS 15, as the company has confirmed that users will soon be able to use this alternative authentication method for Siri requests, such as adjusting phone settings or reading messages. Currently, users have to enter a PIN, password or use Face ID to do so.

“Use the secure connection to your Apple Watch for Siri requests or to unlock your iPhone when an obstruction, like a mask, prevents Face ID from recognizing your face,” Apple explains. “Your watch must be passcode protected, unlocked, and on your wrist close by.”

Standalone security patches

To ensure iPhone users who don’t want to upgrade to iOS 15 straight away are up to date with security updates, Apple is going to start decoupling patches from feature updates. When iOS 15 lands later this year, users will be given the option to update to the latest version of iOS or to stick with iOS 14 and simply install the latest security fixes. 

“iOS now offers a choice between two software update versions in the Settings app,” Apple explains (via MacRumors). “You can update to the latest version of iOS 15 as soon as it’s released for the latest features and most complete set of security updates. Or continue on iOS 14 and still get important security updates until you’re ready to upgrade to the next major version.”

This feature sees Apple following in the footsteps of Google, which has long rolled out monthly security patches to Android users.

‘Erase all contents and settings’ for Mac

Wiping a Mac has been a laborious task that required you to erase your device completely and then reinstall macOS. Thankfully, that’s going to change. Apple is bringing the “erase all contents and settings” option that’s been on iPhones and iPads for years to macOS Monterey.

The option will let you factory reset your MacBook with just a click. “System Preferences now offers an option to erase all user data and user-installed apps from the system, while maintaining the operating system currently installed,” Apple says. “Because storage is always encrypted on Mac systems with Apple Silicon or the T2 chip, the system is instantly and securely ‘erased’ by destroying the encryption keys.”

#android, #apple, #apple-inc, #clipboard, #computing, #control-center, #encryption, #face-id, #google, #icloud, #ios, #ios-14, #ipads, #iphone, #keychain, #microsoft, #microsoft-windows, #online-purchases, #operating-system, #operating-systems, #privacy, #security, #siri, #software

Spotlight gets more powerful in iOS 15, even lets you install apps

With the upcoming release of iOS 15 for Apple mobile devices, Apple’s built-in search feature known as Spotlight will become a lot more functional. In what may be one of its bigger updates since it introduced Siri Suggestions, the new version of Spotlight is becoming an alternative to Google for several key queries, including web images and information about actors, musicians, TV shows and movies. It will also now be able to search across your photo library, deliver richer results for contacts, and connect you more directly with apps and the information they contain. It even allows you to install apps from the App Store without leaving Spotlight itself.

Spotlight is also more accessible than ever before.

Years ago, in iOS 7, Spotlight moved from its dedicated page to the left of the Home screen to a swipe-down gesture available in the middle of any screen, which helped grow user adoption. Now, it’s available with the same swipe-down gesture on the iPhone’s Lock Screen, too.

Apple showed off a few of Spotlight’s improvements during its keynote address at its Worldwide Developer Conference, including the search feature’s new cards for looking up information on actors, movies and shows, as well as musicians. This change alone could redirect a good portion of web searches away from Google or dedicated apps like IMDb.

For years, Google has been offering quick access to common searches through its Knowledge Graph, a knowledge base that allows it to gather information from across sources and then use that to add informational panels above and to the side of its standard search results. Panels on actors, musicians, shows and movies are available as part of that effort.

But now, iPhone users can just pull up this info on their home screen.

The new cards include more than the typical Wikipedia bio and background information you may expect — they also showcase links to where you can listen to or watch content from the artist, actor, movie or show in question. They include news articles, social media links, official websites, and even direct you to where the searched person or topic may be found inside your own apps. (E.g., a search for “Billie Eilish” may direct you to her tour tickets inside SeatGeek, or a podcast where she’s a guest.)

Image Credits: Apple

For web image searches, Spotlight also now allows you to search for people, places, animals, and more from the web — eating into another search vertical Google today provides.

Image Credits: iOS 15 screenshot

Your personal searches have been upgraded with richer results, too, in iOS 15.

When you search for a contact, you’ll be taken to a card that does more than show their name and how to reach them. You’ll also see their current status (thanks to another iOS 15 feature), as well as their location from Find My, your recent conversations in Messages, your shared photos, calendar appointments, emails, notes, and files. It’s almost like a personal CRM system.

Image Credits: Apple

Personal photo searches have also been improved. Spotlight now uses Siri intelligence to allow you to search your photos by the people, scenes, and elements in them, as well as by location. And it’s able to leverage the new Live Text feature in iOS 15 to find text in your photos and return relevant results.

This could make it easier to pull up photos where you’ve screenshot a recipe, a store receipt, or even a handwritten note, Apple said.

Image Credits: Apple

A couple of features related to Spotlight’s integration with apps weren’t mentioned during the keynote.

Spotlight will now display action buttons on the Maps results for businesses that will prompt users to engage with that business’s app. In this case, the feature is leveraging App Clips, which are small parts of a developer’s app that let you quickly perform a task even without downloading or installing the app in question. For example, from Spotlight you may be prompted to pull up a restaurant’s menu, buy tickets, make an appointment, order takeout, join a waitlist, see showtimes, pay for parking, check prices and more.

The feature will require the business to support App Clips in order to work.

Image Credits: iOS 15 screenshot
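
To make that concrete, an App Clip typically receives the invocation URL as an NSUserActivity and routes to the right screen. A minimal SwiftUI sketch, with the view and URL format invented for illustration:

```swift
import SwiftUI

// Hypothetical App Clip entry point: parse the invocation URL
// (e.g. https://example.com/menu?id=42) and show the matching menu.
@main
struct RestaurantClip: App {
    @State private var menuID: String?

    var body: some Scene {
        WindowGroup {
            RestaurantMenuView(menuID: menuID)
                .onContinueUserActivity(NSUserActivityTypeBrowsingWeb) { activity in
                    guard let url = activity.webpageURL else { return }
                    menuID = URLComponents(url: url, resolvingAgainstBaseURL: false)?
                        .queryItems?.first(where: { $0.name == "id" })?.value
                }
        }
    }
}

struct RestaurantMenuView: View {
    let menuID: String?
    var body: some View { Text("Menu \(menuID ?? "loading")") }
}
```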

Another under-the-radar change — but a significant one — is the new ability to install apps from the App Store directly from Spotlight.

This could prompt more app installs, as it reduces the steps from a search to a download, and makes querying the App Store more broadly available across the operating system.

Developers can additionally choose to insert a few lines of code into their apps to make data from the app discoverable within Spotlight and customize how it’s presented to users. This means Spotlight can work as a tool for searching content from inside apps — another way Apple is redirecting users away from traditional web searches in favor of apps.

However, unlike Google’s search engine, which relies on crawlers that browse the web to index the data it contains, Spotlight’s in-app search requires developer adoption.
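
The article doesn’t name the framework, but this kind of in-app indexing has long gone through Core Spotlight. A minimal sketch, with the station fields invented for illustration:

```swift
import CoreSpotlight
import UniformTypeIdentifiers

// Make a piece of in-app content findable from Spotlight.
func indexStation() {
    let attributes = CSSearchableItemAttributeSet(contentType: .text)
    attributes.title = "Smooth Jazz Radio"
    attributes.contentDescription = "A station inside the app"

    let item = CSSearchableItem(uniqueIdentifier: "station-42",
                                domainIdentifier: "stations",
                                attributeSet: attributes)
    CSSearchableIndex.default().indexSearchableItems([item]) { error in
        if let error = error { print("Indexing failed: \(error)") }
    }
}
```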

Still, it’s clear Apple sees Spotlight as a potential rival to web search engines, including Google’s.

“Spotlight is the universal place to start all your searches,” said Apple SVP of Software Engineering Craig Federighi during the keynote event.

Spotlight, of course, can’t handle “all” your searches just yet, but it appears to be steadily working towards that goal.


#app-store, #apple, #apps, #craig-federighi, #google, #imdb, #ios, #ios-15, #iphone, #mobile, #mobile-devices, #operating-systems, #search-engine, #search-results, #siri, #smartphones, #spotlight, #wwdc-2021

Spring Loaded: Apple announces April 20 event

Apple will host its first product unveiling event in more than five months, the company announced Tuesday. Invitations that went out this morning state that the event will take place at 10:00 am PT on Tuesday, April 20, 2021.

Many people didn’t learn about this event from the invitation that went out this morning. Rather, they learned hours before when Siri began answering the question “Hey Siri, when is Apple’s next event?” with “The special event is on Tuesday, April 20, at Apple Park in Cupertino, CA. You can get all the details at Apple.com.”

As has become the custom, the event has a tagline: “Spring Loaded.” The taglines usually harbor subtle clues about what products might be updated or how, as well as the general theme of the event.


#airtags, #apple, #apple-silicon, #imac, #ipad, #ipad-pro, #m1, #macbook, #macbook-air, #macbook-pro, #mini-led, #siri, #tech

Apple said to be developing Apple TV/HomePod combo and iPad-like smart speaker display

Apple is reportedly working on a couple of new options for a renewed entry into the smart home, including a mash-up of the Apple TV with a HomePod speaker, and an integrated camera for video chat, according to Bloomberg. It’s also said to be working on a smart speaker that basically combines a HomePod with an iPad, providing something similar to Amazon’s Echo Show or Google’s Nest Hub in functionality.

The Apple TV/HomePod hybrid would still connect to a television for outputting video, and would offer similar access to all the video and gaming services that the current Apple TV does, while the speaker component would provide sound output, music playback, and Siri integration. It would also include a built-in camera for using video conferencing apps on the TV itself, the report says.

That second device would be much more like existing smart assistant displays on the market today, with an iPad-like screen providing integrated visuals. The project could involve attaching the iPad via a “robotic arm,” according to Bloomberg, that would allow it to move to accommodate a user moving around, with the ability to keep them in frame during video chat sessions.

Bloomberg doesn’t provide any specific timelines for release of any of these potential products, and it sounds like they’re still very much in the development phase, which means Apple could easily abandon these plans depending on its evaluation of their potential. Apple just recently discontinued its original HomePod, the $300 smart speaker it debuted in 2018.

Rumors abound about a refreshed Apple TV arriving sometime this year, which should boast a faster processor and also an updated remote control. It could bring other hardware improvements, like support for a faster 120Hz refresh rate available on more modern TVs.

#apple, #apple-inc, #apple-tv, #assistant, #computing, #hardware, #homepod, #ios, #ipad, #portable-media-players, #siri, #smart-speaker, #speaker, #tablet-computers, #tc, #touchscreens, #video-conferencing

Apple adds two brand new Siri voices and will no longer default to a female or male voice in iOS

Apple is adding two new voices to Siri’s English offerings, and eliminating the default ‘female voice’ selection in the latest beta version of iOS. This means that every person setting up Siri will choose a voice for themselves, and it will no longer default to the voice assistant being female, a topic that has come up quite a bit with regard to bias in voice interfaces over the past few years.

The beta version should be live now and available to program participants.

I believe this is the first of these assistants to make the choice completely agnostic, with no default selection made. This is a positive step forward, as it allows people to choose the voice that they prefer without default bias coming into play. The two new voices also bring some much-needed variety to Siri, offering more diversity in speech sound and pattern for users picking a voice that speaks to them.

In some countries and languages Siri already defaults to a male voice. But this change makes the choice the user’s for the first time.

“We’re excited to introduce two new Siri voices for English speakers and the option for Siri users to select the voice they want when they set up their device,” a statement from Apple reads. “This is a continuation of Apple’s long-standing commitment to diversity and inclusion, and products and services that are designed to better reflect the diversity of the world we live in.”

The two new voices use source talent recordings that are run through Apple’s neural text-to-speech engine, making the voices flow more organically through phrases that are actually being generated on the fly.
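
Siri’s neural voices aren’t exposed to third-party apps, but the system’s general text-to-speech voices are, which gives a feel for how voice selection works. A quick sketch:

```swift
import AVFoundation

// List installed English voices, then speak a line with a system voice.
// (System TTS only; Siri's own voices aren't part of this API.)
let synthesizer = AVSpeechSynthesizer()  // keep a reference while speaking

func speakSample() {
    for voice in AVSpeechSynthesisVoice.speechVoices()
    where voice.language.hasPrefix("en") {
        print(voice.name, voice.identifier)
    }
    let utterance = AVSpeechUtterance(string: "Hello from a system voice.")
    utterance.voice = AVSpeechSynthesisVoice(language: "en-US")
    synthesizer.speak(utterance)
}
```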

I’ve heard the new voices and they sound pretty fantastic, with natural inflection and smooth transitions. They’ll be a welcome addition of choice to iOS users. I’ll embed some samples here after the beta drops.

This latest beta also upgrades the Siri voices in Ireland, Russia and Italy to Neural TTS, bringing the total voices using the new tech to 38. Siri now handles 25 billion requests per month on over 500M devices and supports 21 languages in 36 countries.

The new voices are available to English speaking users around the world and Siri users can select a personal preference of voice in 16 languages.

It seems very likely that these two new voices are just the first expansion of Siri’s voice selections. More diversity in voice, tone and regional dialect can only be a positive development for how inclusive smart devices feel. Over the past few years we have finally begun to see some movement from Amazon, Google and Apple to aggressively correct situations where the assistants have revealed bias in their responses to queries that use negative or abusive language. Improvements there, as well as in queries on social justice topics and in overall accessibility, are incredibly key as we continue to see an explosion of voice-first or voice-native interfaces. These kinds of choices matter, especially at a scale of hundreds of millions of people.

 

Article updated to note that in some countries and languages Siri currently defaults to a male voice. 

#amazon, #apple, #apple-inc, #artificial-intelligence, #computing, #google, #google-now, #ios, #ireland, #italy, #mach, #russia, #siri, #software, #speech-synthesis, #voice-assistant

Apple Maps adds COVID-19 travel guidance for over 300 airports worldwide

Apple has updated its native Maps app with more helpful information designed to assist with travel while mitigating the spread of COVID-19. Apple Maps on iPhone, iPad and Mac will now show COVID-19 health measure information for airports when searched via the app, either through a link to the airport’s own COVID-19 advisory page, or directly on the in-app location card itself.

The new information is made available through a partnership with the Airports Council International, and provides details on COVID-19 safety guidelines in effect at over 300 airports worldwide. The information provided includes requirements around COVID-19 testing, mask usage, screening procedures and any quarantine measures in effect, and generally aims to ease the process of traveling while the global pandemic continues, and as vaccination programs and other counter-efforts are set to prompt a global travel recovery.

Earlier this month, Apple also added COVID-19 vaccination locations within the U.S. to Apple Maps, which can be found when searching either via text, with Siri, or using the ‘Find nearby’ location-based feature. Last year, the company added testing sites in various locations around the world, and added COVID-19 information modules to cards for other types of businesses.

#apple, #apple-inc, #apple-maps, #apps, #computing, #covid-19, #ios, #ipad, #iphone, #operating-systems, #siri, #software, #tc, #united-states

Forget medicine, in the future you might get prescribed apps

Hello and welcome back to Equity, TechCrunch’s venture capital-focused podcast, where we unpack the numbers behind the headlines.

Natasha and Danny and Alex and Grace were all here to chat through the week’s biggest tech happenings. This time around we had whatever passes for a quiet week as far as news volume, but that still meant we had to cut stuff and move the rest around. Once we got done editing the notes doc down, here’s what was left over:

The show wraps with a teaser for next week that we won’t spoil here.

Equity drops every Monday at 7:00 a.m. PST and Wednesday and Friday at 6:00 a.m. PST, so subscribe to us on Apple Podcasts, Overcast, Spotify and all the casts!

#airtable, #copy-ai, #dutchie, #equity, #equity-podcast, #etoro, #fit-analytics, #gpt-3, #gumroad, #robinhood, #siri, #snap, #squarespace, #tc

Riva Health wants to turn your smartphone into a blood pressure monitor

Riva, founded by scientist Tuhin Sinha and Siri co-founder Dag Kittlaus, wants to help people measure their blood pressure in a clinically approved way. Blood pressure can help flag at-risk patients before problems become serious, showing early signs of heart disease. And while other hardware solutions on the market promise the same end goal, Riva wants to be a purely software solution that integrates with hardware it thinks its end users already have anyway: their smartphones.

The company, launching out of stealth today, has raised $15.5 million in seed funding in a round led by Menlo Ventures, with participation from True Ventures. UCHealth and the University of Colorado Innovation Fund accounted for $5 million of the round, with other angels including GoHealth’s Brandon Cruz and Madison Industries’ Larry Gies. Greg Yap of Menlo, who talked to Sinha for three years before investing, will be joining the board.

 

Kittlaus, who also founded the AI assistant Viv, says that he began thinking about how to make a difference in digital health after undergoing his own severe health issues. Kittlaus was diagnosed with pancreatic neuroendocrine cancer in 2016, the same type of cancer that the late Apple CEO Steve Jobs died from.

“I spent time researching ideas, but I was missing the thing that I’ve had in both my previous companies, which was some amazing technical innovation that could form a wedge that you can move the world with,” he said.

Kittlaus mentioned this internal conversation with his friend, who was the first investor in Siri, this past summer. The friend introduced him to Tuhin Sinha, the scientist who spent years developing the technology that is used to power Riva.

To use Riva, all a person needs to do is open the app on their phone and tap ‘Go’, which triggers the camera flash on the back of the phone. The app will then guide the user to place their finger over the right camera, and help them adjust positioning until it locks into place. After that, Riva will use the light to track blood pressure change and create a rendering of it on screen.

Riva Health planned design, subject to change.

“The well-known part of this technology is shining a light on a blood vessel and getting a wave out of it,” Sinha said. “The novelty is the shape of the wave, how it relates to blood pressure, and our secret sauce is looking at those waveshape changes and validating them in a rigorous and comprehensive way.” Sinha declined to share exactly how the validation works, but said it is key that a startup measure blood pressure in a variety of different scenarios — think standing or sitting — to see if it is effective.

Once Riva tracks five to seven heartbeats worth of data, it has a comprehensive understanding of someone’s blood pressure at that moment.
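
Riva hasn’t published its method, but the “shine a light on a blood vessel and get a wave out of it” part is classic photoplethysmography (PPG). A heavily simplified sketch of that general idea (illustrative only, not Riva’s validated approach):

```swift
import AVFoundation

// Simplified camera PPG: torch on, read frames, average the red channel.
// The resulting time series approximates a pulse waveform.
final class PulseReader: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    let session = AVCaptureSession()
    private(set) var waveform: [Double] = []

    func start() throws {
        guard let camera = AVCaptureDevice.default(.builtInWideAngleCamera,
                                                   for: .video, position: .back),
              camera.hasTorch else { return }
        try camera.lockForConfiguration()
        camera.torchMode = .on  // the "camera flash" lighting the fingertip
        camera.unlockForConfiguration()

        session.addInput(try AVCaptureDeviceInput(device: camera))
        let output = AVCaptureVideoDataOutput()
        output.videoSettings =
            [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA]
        output.setSampleBufferDelegate(self, queue: DispatchQueue(label: "ppg"))
        session.addOutput(output)
        session.startRunning()
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let buffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        CVPixelBufferLockBaseAddress(buffer, .readOnly)
        defer { CVPixelBufferUnlockBaseAddress(buffer, .readOnly) }
        guard let base = CVPixelBufferGetBaseAddress(buffer) else { return }

        let width = CVPixelBufferGetWidth(buffer)
        let height = CVPixelBufferGetHeight(buffer)
        let rowBytes = CVPixelBufferGetBytesPerRow(buffer)
        let pixels = base.assumingMemoryBound(to: UInt8.self)

        // Sample a grid of pixels; blood volume changes modulate red intensity.
        var redSum = 0.0, count = 0.0
        for y in stride(from: 0, to: height, by: 8) {
            for x in stride(from: 0, to: width, by: 8) {
                redSum += Double(pixels[y * rowBytes + x * 4 + 2]) // BGRA: red at +2
                count += 1
            }
        }
        waveform.append(redSum / count)  // one sample of the pulse waveform
    }
}
```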

The data, which is handled in a HIPAA-compliant way, can then be sent to a family physician or doctor’s office to be analyzed if a risk is present, starting with hypertension.

“Moving to a platform like the smartphone is mobilizing the measurement and management of [health and disease management],” Sinha said. Riva Health is a purely software solution.

The company is currently in the process of verifying its software with Android phones, but Kittlaus says that “any modern phone should be able to acquire the signal needed” to work.

A big hurdle for any health tech company, and especially Riva, is whether it can get FDA approval for clinical usage. The company is currently engaged in that process with the FDA, and will ask users who pilot its free app, coming out this summer, to participate in the trial and data-gathering to bolster its approval process.

Right now, Riva is using its technology to track changes in blood pressure in a clinical setting; the second half of the work will take place in a home setting. By tracking the use of its system in the real world, it can show that it works in both laboratory and home setups, helping it prove that its disease management technology is effective.

“There are a lot of gizmos and gadgets that will claim to do blood pressure reading,” Kittlaus said. “I call it blood pressure as a novelty, where it gives you a blood pressure reading but not a clinical one.” For example, a Fitbit does track blood pressure but consistently underestimates the number, a study says. Other solutions like the Apple Watch or cuffless wearables measure blood pressure for non-clinical usage, which means they aren’t super accurate and only flag obvious blood pressure problems.

Riva wants to be daily and precise enough to be relied on by doctors, which is part of its overall route to making money. While the team wouldn’t share any published research or proof about its scientific method, Dr. Richard Zane, the chief innovation officer of UCHealth, said that it is “bulletproof” technology. Over 700 companies in the past three years have tried to work with Zane’s team, and Riva is one of the few that met the bar.

“When our team tested it, it actually worked out of the box the first time, which basically never happens,” Zane said. He added that one of the biggest barriers to entry is that people have needed devices in the past, and Riva brings “a novel technology that actually works that will be embedded in something that patients already carry around with them and allow them to manage their heart disease or hypertension.”

“The core product of the company is healthcare outcomes,” Sinha said. The company is part of the wave of startups that believe outcome-based healthcare is the future, a model where doctors are paid for results instead of the number of visits they complete in a day. With this vision, Riva plans to monetize by selling outcomes to hospital systems and providers: if it can provide a tool to help doctors keep people out of surgery and indicate issues earlier than before, it can make a solid argument as to why systems should adopt it.

Only 20% of healthcare works on the value-based model, so this will be a hurdle even with the right sentiment. In the meantime, Kittlaus says that it is working with insurers to pay for its service. Riva would get reimbursed for treating and managing hypertension.

“We want to keep it free for the consumer, free for the doctor, and insurance will cover it,” he said.

If and when the FDA clears this technology, Kittlaus says that doctors and medics “will still be skeptical about it” but will ultimately be convinced of the outcomes being more accurate, and ongoing, than the cuff.

“You’re prescribing an app,” he said. “Instead of medicine.”

The app, pending FDA approval, will be available to the public late this year or early next year. The next few months will be key in determining Riva’s success and validity — and Sinha, the chief scientist, says the process will be rigorous, but fast. He has a personal tie to the company’s success.

Sinha has lost five brothers, one sister, and his father before the age of 59 to heart disease. Now, his app has the ability to track the condition that made him lose these family members in the first place.

“I feel like I have a ticking time bomb in my chest,” he said. “And if anything, I’m going to do this for myself.”

#dag-kittlaus, #digital-health, #health, #menlo, #riva, #siri, #tc

Apple Maps updated with Covid-19 vaccination locations in the U.S.

Google earlier this year announced an update to Google Maps to help people find Covid-19 vaccination sites nearby, and now Apple is doing the same. Apple device owners can either ask Siri or search within Apple Maps to find nearby Covid-19 vaccine providers within the U.S., the company says. These results will include key information, like operating hours, addresses, phone numbers and links to the provider’s website.

To access this information through a voice command, users can ask Siri something like “Where can I get a Covid vaccination?”, which will direct them to Maps.

In addition to Siri or searching directly within Apple Maps for vaccine info, the option “Covid-19 vaccinations” will also be available in Apple Maps’ “Find Nearby” menu.

Apple says its vaccination location data is being sourced from VaccineFinder, an initiative led by Boston Children’s Hospital. This data has also been helping to power Google Maps’ vaccine-finding capabilities, Google earlier said. Apple notes that healthcare providers, labs and other businesses can also choose to submit their information about either Covid-19 testing or vaccination locations via the Apple Business Register page. After doing so, Apple will validate the information and then display it to users who are searching for Covid-19 resources in their local area.

At launch, there’s information about over 20,000 vaccine locations being provided through Apple Maps. Apple says more sites will be added in the weeks to come.

Throughout the pandemic, Apple has integrated other Covid-related health resources into Apple Maps both in the U.S. and internationally. Last year, for example, it updated Apple Maps to display Covid-19 testing sites in Australia, Canada, France, Germany, Japan, the Netherlands, New Zealand, Portugal, Singapore, Taiwan, Thailand, and the U.S. It also added Covid-19 modules to business pages, and updated Siri with more knowledge about Covid-19, testing sites, and, now, vaccination locations.



#apple, #apple-inc, #apple-maps, #covid-19-vaccine, #health, #siri, #united-states, #vaccination

Apple pulls the plug on the HomePod

Apple has discontinued production of the pricey HomePod smart speaker that it began shipping in 2018.

The speaker will still be sold until supplies run out, but the space gray version is already gone from Apple’s online store. Only the white version remains, though you might be able to find the space gray one at a third-party retailer.

Regardless of the production change, HomePod owners will continue to get software updates and support from Apple for an unspecified amount of time.


#apple, #audio, #homepod, #homepod-mini, #siri, #smart-speaker, #speaker, #tech

Apple discontinues original HomePod, will focus on mini

After three years on the market, Apple has discontinued its original HomePod. It says that it will continue to produce and focus on the HomePod mini, introduced last year. The larger HomePod offered a beefier sound space, but the mini has been very well received and clearly accomplishes many of the duties that the larger version was tasked with. The sound is super solid (especially for the size) and it offers access to Siri, Apple’s assistant.

The original HomePod was a feat of audio engineering that Apple spent over five years developing. To support that development, the team at Apple built out a full development center near its headquarters in Cupertino, housing a dozen anechoic chambers, including one of the bigger anechoic chambers outside of academic use in the US. I visited the center before the speaker’s release, noting that Apple went the extra mile to get right the incredibly complex series of tweeters and woofer that built its soundspace:

But slathered on top of that is a bunch of typically Apple extra-mile jelly. Apple says that its largest test chamber is one of the biggest in the US, on a pad, suspended from the outside world with nothing to pollute its tests of audio purity. Beyond testing for the acoustic qualities of the speaker, these chambers allowed Apple to burrow down to account for and mitigate the issues that typically arise from having a high excursion subwoofer in such a small cabinet. Going even further, there are smaller chambers that allow them to isolate the hum from electronic components (there is a computer on board after all) and make attempts to insulate and control that noise so it doesn’t show up in the final output.

I found it to be one of the best speakers ever made for the home when I reviewed it in 2018. From the booming bass and well-shaped nature of the tweeter assembly inside, to the cloth cover that was specially shaped to avoid interfering with sound quality in any way, to the way that it sensed how audio was being shaped by walls and other obstructions and adjusted its output to compensate, it was the definition of ‘no effort spared’ in the speaker department.

The major gripe for the speaker at the time was the $349 price, which was at the top end of the home speaker market, especially those with embedded home assistants. A price drop to $299 mitigated that somewhat, but still put it at the top of the pricing umbrella for the class. Apple’s HomePod mini, launched last year, has been well received. Our Brian Heater said that it had ‘remarkably big sound’ for the $99 price.

Apple gave TechCrunch a statement about the discontinuation:

HomePod mini has been a hit since its debut last fall, offering customers amazing sound, an intelligent assistant, and smart home control all for just $99. We are focusing our efforts on HomePod mini. We are discontinuing the original HomePod, it will continue to be available while supplies last through the Apple Online Store, Apple Retail Stores, and Apple Authorized Resellers. Apple will provide HomePod customers with software updates and service and support through Apple Care.

Existing HomePods will continue to be sold while supplies last, but Apple’s website is already out of Space Gray, and Apple will continue to provide support for existing units. Apple seems to be betting on the mini going forward, which could point to a desire to fill every room with ‘good enough’ sound rather than to focus on the living room with ‘truly unbelievable’ sound. The HomePod itself never quite got to the level where it could act as a full home theater replacement, even when paired in multi-speaker configurations.

The HomePod research and production efforts will live on in some ways through Apple’s advanced audio rendering systems that led to things like Spatial Audio in AirPods. I quite enjoy the ones in my home and have yet to add any minis to the mix. Maybe a last minute hunt is in order.

#airpods, #apple, #apple-inc, #applecare, #assistant, #computing, #cupertino, #homekit, #homepod, #intelligent-assistant, #siri, #smart-speakers, #speaker, #tc, #united-states

Apple clarifies iOS default music app feature, and it’s not what people thought

Siri in iOS 14. (credit: Samuel Axon)

Over the past several weeks, there have been several reports (including one of our own) on a feature found in recent beta releases of iOS 14.5 that appeared to allow users to change the default music app on their iPhones. However, Apple just clarified to TechCrunch that the feature is not as it first seemed.

In the initial reports, users claimed that they were prompted to select a preferred music app, such as Spotify or Apple Music, when they asked Siri to play a song. They then found that Siri seemed to honor that choice on future requests.

Further, they noticed that using the usual command “Hey Siri, play [song name] on Spotify” would cause Siri to use Spotify again in the future when they spoke the same request sans the “on Spotify” part. (In the current public version of iOS, users must say “on Spotify” every single time to play songs in that app instead of Apple Music.)


#apple, #apple-music, #ios, #ios-14-5, #siri, #spotify, #tech

Apple clarifies you can’t actually set a ‘default’ music service in iOS 14.5

Apple has clarified that the iOS 14.5 beta is not actually allowing users to select a new default music service, as has been reported. Following the beta’s release back in February, a number of beta testers noticed that Siri would now ask what music service they would like to use when they asked Siri to play music. But Apple doesn’t consider this feature the equivalent of “setting a default” — an option it more recently began to allow for email and browser apps.

Instead, the feature is Siri intelligence-based, meaning it can improve and even change over time as Siri learns to better understand your listening habits.

For example, if you tell Siri to play a song, album or artist, it may ask you which service you want to use to listen to this sort of content. However, your response to Siri is not making that particular service your “default,” Apple says. In fact, Siri may ask you again at some point — a request that could confuse users if they thought their preferences had already been set.

Image Credits: iOS 14.5 screenshot

Apple also points out there’s no specific setting in iOS where users can configure a “default” music service, the way there is with email and browser apps. While many earlier reports did note this difference, they still referred to the feature as “setting a default,” which is technically incorrect. 

More broadly, the feature is an attempt to help Siri to learn the listening apps you want to use for different types of audio content — not just music. Perhaps you want to use Spotify to listen to music, but prefer to keep up with your podcasts in Apple Podcasts or some other third-party podcasts app. And you may want to listen to audiobooks in yet another app.

When Siri asks you which service you want to use for these sorts of audio requests, it will present a list of the audio apps you have installed for you to choose from.

Image Credits: iOS 14.5 screenshot

In addition to Siri’s understanding of your habits — which are based on your responses and choices — app developers can optionally use APIs to provide Siri with access to more intelligence about what people listen to in their app and why. This could allow Siri to fulfill users’ requests with more accuracy. And all this processing takes place on the device.
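
Apple doesn’t spell out which APIs it means here, but SiriKit’s media intents are the visible surface for this kind of integration. A minimal sketch of an intents-extension handler:

```swift
import Intents

// Sketch of a SiriKit media-intents handler in an Intents extension,
// letting "play X"-style requests reach a third-party audio app.
// Resolution of the media items is omitted for brevity.
final class PlayMediaHandler: NSObject, INPlayMediaIntentHandling {
    func handle(intent: INPlayMediaIntent,
                completion: @escaping (INPlayMediaIntentResponse) -> Void) {
        // .handleInApp launches the host app in the background to start playback.
        completion(INPlayMediaIntentResponse(code: .handleInApp, userActivity: nil))
    }
}
```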

The audio choice feature, of course, doesn’t prevent users from requesting a particular service by name, even if it’s not their usual preference.

For instance, you can still say something like “play smooth jazz radio on Pandora” to launch that app instead. However, if you continued to request Pandora by name for music requests — even though you had initially specified Apple Music or Spotify or some other service when Siri first prompted you — then the next time you asked Siri to play music without specifying a service, the assistant may ask you again to choose a service.

Image Credits: iOS 14.5 screenshot

Although this may seem like a minor clarification, it has a greater importance given the increased regulatory scrutiny Apple is under these days over how its App Store and app ecosystem work. Spotify, in particular, has alleged that Apple is behaving in anti-competitive ways — for instance by requiring a commission on Spotify’s in-app purchases even though Apple runs a rival music service that Spotify claims has first-party advantages.

The audio choice feature had first appeared in iOS 14.5 beta 1, but had been pulled in beta 2. It has since returned with the release of beta 3, which again drew attention and headlines — as well as Apple’s response.

Although it’s not technically allowing you to set a “default,” the Siri-powered feature could eventually feel like one for users with consistent listening behavior. The iPhone will simply become smarter about how to play what you want to hear, without necessarily forcing you to use Apple’s own apps if you don’t want to.

 

#apple, #apple-music, #apps, #beta, #default, #ios, #ios-14-5, #music, #siri, #tc

Soon, you may be able to change the default music service in iOS

It had long been an inescapable fact about Apple’s iOS operating system for iPhones that you couldn’t change your default apps away from those made by Apple itself. But only a few months after Apple changed course and allowed users to change the default email or browser apps, it now appears that same choice is coming for music-streaming services.

After Apple recently pushed out the first public beta of iOS 14.5, a Reddit user quickly discovered that the first time they asked Siri to play a song after updating, they were given a prompt to pick which streaming service to use. Subsequent prompts then obeyed that selection.

Other users on Reddit and MacRumors confirmed similar experiences. They also confirmed that it works if you specify the streaming service verbally, for example, by saying, “Hey Siri, play Heroes by David Bowie on Spotify.”


#apple, #apple-music, #beta, #homepod, #homepod-mini, #ios, #ios-14, #ios-14-5, #siri, #spotify, #tech

Augmented reality and the next century of the web

Howdy friends, this is the web version of my Week in Review newsletter. It’s here to entice you to sign up and get it in your inbox every week.

Last week, I showcased how Twitter was looking at the future of the web with a decentralized approach so that they wouldn’t be stuck unilaterally de-platforming the next world leader. This week, I scribbled some thoughts on another aspect of the future web, the ongoing battle between Facebook and Apple to own augmented reality. Releasing the hardware will only be the start of a very messy transition from smartphone-first to glasses-first mobile computing.

Again, if you so desire you can get this in your inbox from the newsletter page, and follow my tweets @lucasmtny


The Big Thing

If the last few years of new “reality” tech have telegraphed anything, it’s that tech companies won’t be able to skip past augmented reality’s awkward phase; they’re going to have to barrel through it, and it’s probably going to take a long-ass time.

The clearest reality is that in 2021, everyday users still don’t seem quite as interested in AR as the would-be next generation of platform owners, who stand to benefit from a massive transition. There’s some element of skating to where the puck is going among the soothsayers who believe AR is the inevitable platform heir etc. etc., but the battle to reinvent mobile is at its core a battle to kill the smartphone before its time has come.

A war to remake mobile in the winner’s image

It’s fitting that the primary backers of this AR future are Apple and Facebook, ambitious companies that are deeply in touch with the opportunities they could’ve capitalized on if they could do it all over again.

While Apple and Facebook both have thousands of employees toiling quietly in the background building out their AR tech moats, we’ve seen and heard much more on Facebook’s efforts. The company has already served up several iterations of their VR hardware through Oculus and has discussed publicly over the years how they view virtual reality and augmented reality hardware converging. 

Facebook’s hardware and software efforts have been experiments in plain sight, an advantage afforded to a company that didn’t sell any hardware before it started selling VR headsets. Meanwhile, Apple has offered up a developer platform and a few well-timed keynote slots for developers harnessing its tools, but the most ambitious first-party AR project it has launched publicly on iOS has been a measuring tape app. Everything else has taken place behind closed doors.

That secrecy tends to make any reporting on Apple’s plans particularly juicy. This week, a story from Bloomberg’s Mark Gurman highlights some of Apple’s next steps towards a long-rumored AR glasses product, reporting that Apple plans to release a high-end niche VR device with some AR capabilities as early as next year. It’s not the most surprising news, but it showcases how desperate today’s mobile kingpins are to ease the introduction of a technology that has the potential to turn existing tech stacks and the broader web on their heads.

Both Facebook and Apple have a handful of problems getting AR products out into the world, and they’re not exactly low-key issues:

  1. hardware isn’t ready
  2. platforms aren’t ready
  3. developers aren’t ready
  4. users don’t want it yet

This is a daunting wall, but it isn’t an uncommon one among hardware moonshots. Facebook has already worked its way through this cycle once with virtual reality over several generations of hardware, though there were some key differences, and few would call VR a mainstream success quite yet.

Nevertheless, there’s a distinct advantage to tackling VR before AR for both Facebook and Apple: they can invest in hardware that’s adjacent to the technologies their AR products will need to capitalize on, they can entice developers to build for a platform that’s more similar to what’s coming, and they can set baseline expectations for consumers for a more immersive platform. At least, this would all be the case for Apple with a mass-market VR device closer to Facebook’s $300 Quest 2, but a pricey niche device like the one Gurman’s report details doesn’t seem to fit that bill quite so cleanly.

The AR/VR content problem 

The scenario I’d imagine both Facebook and Apple are losing sleep over is that they release serviceable AR hardware into a world where they are wholly responsible for coming up with all the primary use cases.

The AR/VR world already has a hefty backlog of burnt developers who might be long-term bullish on the tech but are also tired of getting whipped around by companies that seem to view the development of content ecosystems simply as a means to ship their next device. If Apple is truly expecting the sales numbers for this device that Bloomberg suggests — similar to Valve’s early Index headset sales — then color me doubtful that there will be much developer interest at all in building for a stopgap device; I’d expect ports of Quest 2 content and a few shining stars from Apple-funded partners.

I don’t think this will be much of a shortcut for them.

True AR hardware is likely going to have different standards of input, different standards of interaction and a much different approach to use cases compared to a device built for the home or smartphone. Apple has already taken every available chance to entice mobile developers to embrace phone-based AR on iPhones through ARKit, a push they have seemed to back off from at recent developer-centric events. As someone who has kept a close eye on early projects, I’d say that most players in the space have been very underwhelmed by what existing platforms enable and what has been produced widely.

That’s really not great for Apple or Facebook and suggests that both of these companies are going to have to guide users and developers through use cases they design. I think there’s a convincing argument that early AR glasses applications will be dominated by first-party tech and may eschew full third-party native apps in favor of tightly controlled data integrations more similar to how Apple has approached developer integrations inside Siri.

But giving developers a platform built with Apple or Facebook’s own dominance in mind is going to be tough to sell, underscoring the fact that mobile and mobile AR are going to be platforms that will have to live alongside each other for quite a bit. There will be rich opportunities for developers to create experiences that play with 3D and space, but there are also plenty of reasons to expect they’ll be more resistant to move off of a mutually enriching mobile platform onto one where Facebook or Apple will have the pioneer’s pick of platform advantages. What’s in it for them?

Mobile’s OS-level winners captured plenty of value from top-of-funnel app marketplaces, but the downstream opportunities held mobile’s true prize: a vastly expanded market for digital ads. With the opportunity of a mobile do-over, expect to find pioneering tech giants pitching proprietary digital ad infrastructure for their devices. Advertising will likely be augmented reality’s greatest opportunity, allowing the digital ads market to create an infinite global canvas for geo-targeted, customized ad content. A boring future, yes, but a predictable one.

For Facebook, being a platform owner in the 2020s means getting to set their own limitations on use cases, not being confined by App Store regulations and designing hardware with social integrations closer to the silicon. For Apple, reinventing the mobile OS in the 2020s likely means an opportunity to more meaningfully dominate mobile advertising.

It’s a do-over to the tune of trillions in potential revenues.

What comes next

The AR/VR industry has been stuck in a cycle of seeking out saviors. Facebook has been the dearest friend to proponents after startup after startup has failed to find a speedy win. Apple’s long-awaited AR glasses are probably where most die-hards are currently placing their faith.

I don’t think there are any misgivings from Apple or Facebook in terms of what a wild opportunity this is to win; it’s why they each have more people working on this than on any other future-minded project. AR will probably be massive and change the web in a fundamental way, a true Web 3.0 that’s the biggest shift of the internet to date.

That doesn’t sound like something that will happen particularly smoothly.

I’m sure that these early devices will arrive later than we expect, do less than we expect, and that things will be more and less different from the smartphone era’s mobile paradigms in ways we don’t anticipate. I’m also sure that it’s going to be tough for these companies to strong-arm themselves into a more seamless transition. This is going to be very messy for tech platforms, and it is a transition that won’t happen overnight, not by a long shot.


Other things

The Loon is dead
One of tech’s stranger moonshots is dead, as Google announced this week that Loon, its internet balloon project, is being shut down. It was an ambitious attempt to bring high-speed internet to remote corners of the world, but the team says it wasn’t sustainable to provide a high-cost service at a low price. More

Facebook Oversight Board tasked with Trump removal
I talked a couple weeks ago — what feels like a lifetime ago — about how Facebook’s temporary ban of Trump was going to be a nightmare for the company. I wasn’t sure how they’d stall for more time on a banned Trump before he made Facebook and Instagram his central platform, but they made a brilliant move, purposefully tying the case up in PR-favorable bureaucracy by tossing it to their independent Oversight Board for their biggest case to date. More

Jack is Back
Alibaba’s head honcho is back in action. Alibaba shares jumped this week when the Chinese e-commerce giant’s billionaire founder Jack Ma reappeared in public more than three months after his last public appearance, an absence that stoked plenty of conspiracy theories. Where he was during all this time isn’t clear, but I sort of doubt we’ll be finding out. More

Trump pardons Anthony Levandowski
Trump is no longer president, but in one of his final acts he surprisingly opted to grant a full pardon to one Anthony Levandowski, the former Google engineer convicted of stealing trade secrets from the company’s self-driving car program. It was an unexpected end to one of the more dramatic big tech legal battles of recent years. More

Xbox raises Live prices
I’m not sure how this stacks up in importance relative to everything else listed here, but I’m personally pissed that Microsoft is hiking the price of its online multiplayer subscription, Xbox Live Gold. It’s no secret that the gaming industry is embracing a subscription economy; it will be interesting to see what the divide looks like in terms of gamer dollars going towards platform owners versus studios. More

Musk offers up $100M donation to carbon capture tech
Elon Musk, who is currently the world’s richest person, tweeted out this week that he will be donating $100 million towards a contest to build the best technology for carbon capture. TechCrunch learned that this is connected to the Xprize organization. More details


Extra Things

I’m adding a section going forward to highlight some of our Extra Crunch coverage from the week, which dives a bit deeper into the money and minds of the moneymakers.

Hot IPOs hang onto gains as investors keep betting on tech
“After setting a $35 to $39 per-share IPO price range, Poshmark sold shares in its IPO at $42 apiece. Then it opened at $97.50. Such was the exuberance of the stock market regarding the used goods marketplace’s debut.
But today it’s worth a more modest $76.30 — for this piece we’re using all Yahoo Finance data, and all current prices are those from yesterday’s close ahead of the start of today’s trading — which sparked a question: How many recent tech IPOs are also down from their opening price?” More
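A quick back-of-the-envelope check on the figures quoted above (my own illustration, not part of the quoted piece): that cited close is down roughly 22% from the opening trade, yet still up roughly 82% from the $42 IPO price, which is exactly the gap the piece is probing.

```swift
import Foundation

// Back-of-the-envelope math on the Poshmark figures quoted above.
let ipoPrice = 42.0       // price per share at IPO
let openPrice = 97.50     // first trade on debut day
let citedClose = 76.30    // the close cited in the excerpt

func percentChange(from old: Double, to new: Double) -> Double {
    (new - old) / old * 100
}

print(String(format: "vs. open: %+.1f%%", percentChange(from: openPrice, to: citedClose))) // -21.7%
print(String(format: "vs. IPO:  %+.1f%%", percentChange(from: ipoPrice, to: citedClose)))  // +81.7%
```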

How VCs invested in Asia and Europe in 2020
“Wrapping our look at how the venture capital asset class invested in 2020, today we’re taking a peek at Europe’s impressive year, and Asia’s slightly less invigorating set of results. (We’re speaking soon with folks who may have data on African VC activity in 2020; if those bear out, we’ll do a final entry in our series concerning the continent.)” More

Hello, Extra Crunch Community!
“We’re going to be trying out some new things around here with the Extra Crunch staff front and center, as well as turning your feedback into action more than ever. We quite literally work for you, the subscriber, and want to make sure you’re getting your money’s worth, as it were.” More


Until next week,
Lucas Matney


Apple HomePod Mini review: Remarkably big sound

It’s hard to shake the sense that the smart speaker market would look considerably different had the HomePod Mini arrived several years back. It’s not so much that the device is transformative on the face of it, but it’s impossible to deny that it marks a dramatically different approach to the category than the one Apple took almost three years ago with the launch of the original model.

Apple has never been a particularly budget-conscious company when it comes to hardware; terms like “Apple tax” don’t spring from nothing. But the last few years have seen the company soften that approach in an effort to appeal to users outside its traditional core of creative professionals. With the iPhone and Apple Watch, Apple has pushed more aggressively to appeal to entry-level users. It only makes sense that it would follow suit with its smart speaker.

Couple that with the fact that the Echo Dot and Google/Nest Home minis pretty consistently rate as the best-selling smart speakers for their respective companies, and the arrival of a HomePod Mini was all but inevitable, as Apple looks to take a bite out of the global smart speaker market, in which Amazon and Google each currently hold around 40%. It’s going to be an uphill battle for the HomePod, but the Mini is, simply put, Apple’s strongest push in that direction to date.

Launched in early 2018 (after delays), the HomePod was a lot of things — but no one ever claimed it was cheap (though no doubt they found a way to spin it as a good deal). The $349 price tag (since reduced to $299) was hundreds of dollars more than the most expensive models from Amazon and Google. The HomePod was a premium device, and that was precisely the point. Music has always been a cornerstone of Apple’s philosophy, and the HomePod was the company’s way of embracing the medium without cutting corners.

Image Credits: Brian Heater

As Matthew wrote in a David Foster Wallace-esque “four sentence” review, “Apple’s HomePod is easily the best sounding mainstream smart speaker ever. It’s got better separation and bass response than anything else in its size and boasts a nuance and subtlety of sound that pays off the seven years Apple has been working on it.”

He called it “incredibly over-designed and radically impressive,” while bemoaning limited Siri functionality. On the whole, the HomePod did a good job in being what it set out to be — but it was never destined to be the world’s best-selling smart speaker. Not at that price. What it did do, however, was help convince the rest of the industry that a smart speaker should be, above all, a speaker, rather than simply a smart assistant delivery device. The last several generations of Amazon and Google products have, accordingly, mostly brought sound to the forefront of product concerns.

Essentially, Amazon and Google have become more focused on sound and Apple more conscious of price. That’s not to say, however, that the companies have met somewhere in the middle. This is not, simply put, the Apple Echo Dot. The HomePod Mini is still, in many ways, a uniquely Apple product. There’s a focus on little touches that offer a comparably premium experience for its price point.

That price point being $99. That puts the device in league with the standard Amazon Echo and Google Nest, rather than their respective budget-level counterparts. Those devices run roughly half that price and are both fairly frequently — and quite deeply — discounted. In fact, those devices could nearly fall into the category of loss leaders for their respective companies — dirt-cheap ways to get their smart assistants into users’ homes. Apple doesn’t appear particularly interested in that approach. Not for the time being, at least. Apple wants to sell you a good speaker.

And you know what? The HomePod Mini is a surprisingly good speaker. Not just for its price, but also for its size. The Mini is nearly exactly the same size as the new, round Echo Dot, which is to say, roughly the size of a softball. There are, however, some key differences in their respective designs. For starters, Amazon moved the Echo’s status ring to the bottom of the device, so as not to interrupt its perfectly spherical design. Apple, on the other hand, simply lopped off the top. I was trying to figure out what it reminds me of, and this was the best I came up with.

Image Credits: Brian Heater

The design decision keeps the product more in line with the original HomePod, with an Aurora Borealis of swirling lights up top to show you when Siri is doing her thing. It also allows for the inclusion of touch-sensitive volume buttons and the ability to tap the surface to play/pause music. Rather than the fabric-style covering that has dominated the last several generations of Google and Amazon products, the Mini is covered in the same sort of audio-conductive mesh material as the full-size HomePod.

The device comes in white or space gray, and unlike other smart speakers, seems to be less about blending in than showing off. Of course, being significantly smaller than the HomePod makes it considerably more versatile. I’ve been using one of the two Minis Apple sent me on my desk at home, and it’s an ideal size. On the bottom is a hard plastic base with an Apple logo.

There’s a long, non-detachable fabric cable. It would be nice if the cord were user-detachable, so you could swap it out as needed, but no go. The cable sports a USB-C connector, however, which makes it fairly versatile on that end. There’s also a 20W power adapter in the box (admittedly, not a sure bet with Apple these days). It’s disappointing, but not surprising, that there’s no auxiliary input on board; there wasn’t one on the standard HomePod, either.

Image Credits: Brian Heater

Where Amazon switched to a front-facing speaker for the new Echo, Apple continues to focus on 360-degree sound. Your preference may depend on where you place the speaker, but this model is more versatile, especially if you’re not just seated in front of the speaker all day. I’ve used a lot of different smart speakers in my day, and honestly, I’m really impressed with the sound the company was able to get out of the 3.3-inch device.

It’s full and clear and impressively powerful for its size. Obviously that goes double if you opt for a stereo pair. Pairing is painless, out of the box. Just set up two devices for the same room of your home and it will ask you whether you want to pair them. From there, you can specify which one handles the right and left channels. If you’d like to spread out, the system will do multiroom audio by simply assigning speakers to different rooms. From there, you can just say, “Hey Siri, play music in the kitchen” or “Hey Siri, play music everywhere.” You get the picture.
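Those room-scoped commands lean on HomeKit’s room model: every accessory in a home is assigned to a room, and Siri resolves “the kitchen” against that mapping. For the developer-curious, here’s a minimal sketch of how an iOS app with the HomeKit entitlement might enumerate rooms and the accessories assigned to them (the class name is my own, and HomePods themselves aren’t necessarily visible to third-party HomeKit apps; this just illustrates the room model Siri is working from):

```swift
import HomeKit

// Minimal sketch: print each room in the primary home and the accessories
// assigned to it. Requires the HomeKit entitlement and user permission.
final class RoomLister: NSObject, HMHomeManagerDelegate {
    private let manager = HMHomeManager()

    override init() {
        super.init()
        manager.delegate = self
    }

    // HomeKit calls this once the home configuration has loaded.
    func homeManagerDidUpdateHomes(_ manager: HMHomeManager) {
        guard let home = manager.primaryHome else { return }
        for room in home.rooms {
            let names = home.accessories
                .filter { $0.room == room }
                .map { $0.name }
            print("\(room.name): \(names.joined(separator: ", "))")
        }
    }
}
```

That same room assignment is what multiroom playback targets, which is why putting speakers in the right rooms during setup matters more than it might seem.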

In fact, the whole setup process is pretty simple with an iPhone. It’s quite similar to pairing AirPods: hold the phone near the speaker and you’ll get a familiar white popup guiding you through the process of setting it up, choosing the room and enabling voice recognition.

The speakers also get pretty loud, though if you need clear sound at a serious volume, I’d strongly recommend looking at something bigger (and pricier) like the original HomePod. For the living room of my one-bedroom in Queens, however, it does the trick perfectly, and sounds great from pretty much any angle in the room.

As a smart assistant, Siri is up to most of the basic tasks. There are also some neat tricks that leverage Apple’s unique ecosystem. You can, say, ask Siri to send images to your iPhone, and it’ll oblige.