Hello friends, and welcome back to Week in Review!
Last week, we talked about some sunglasses from a company that many people do not like very much. This week, we’re talking about Apple and a company 1,600 times smaller that’s facing similar product problems.
Thanks for joining in — follow my tweets @lucasmtny for more.
the big thing
When you get deep enough into the tech industry, it’s harder to look at things with a consumer’s set of eyes. I’ve felt that way more and more after six years watching Apple events as a TechCrunch reporter, but sometimes memes from random Twitter accounts help me find the consumer truth I’m looking for.
As one such dumb little tweet indicates, Apple is charging toward a future where it’s becoming a little harder to distinguish new from old. The off-year “S” period of old is no more for the iPhone, which has seen tweaks and new size variations since 2017’s radical iPhone X redesign. Apple is stretching the periods between major upgrades for its entire product line, and it’s also taking longer to roll out those changes.
Apple debuted the current bezel-light iPad Pro design back in late 2018, and it’s taken three years for that design to work its way down to the iPad mini, while the entry-level iPad is still lying in wait. The shift to M1 Macs will likely take years, as the company has already detailed. Most of Apple’s substantial updates rely on upgrades to the chipsets they build, something that increasingly makes them look and feel like a consumer chipset company.
This isn’t a new trend, or even a new take; it’s been written about plenty of times. But it’s particularly interesting as the company bulks up the number of employees dedicated to future efforts like augmented reality, which will likely one day replace the iPhone.
It’s an evolution that’s pushing Apple into similar design territory as action camera darling GoPro, which has struggled again and again to get its core loyalists to upgrade their hardware frequently. The two are on laughably different scales, with Apple now worth some $2.41 trillion and GoPro still fighting for a $1.5 billion market cap. The situations are obviously different, and yet both companies face similar end-of-life innovation questions for categories they have mastered.
This week, GoPro debuted its HERO10 Black camera, which brings higher frame rates and a better-performing processor as the company looks to push more of its audience toward subscription services. Sound familiar? This week, Apple debuted its new flagship, the iPhone 13 Pro, with a faster processor and better frame rates (for the display, not the camera, in this case). Apple also spent a healthy amount of time pushing users to embrace new services ecosystems.
Apple’s devices are getting so good that they’re starting to reach a critical feature plateau. The company has still managed to churn out device after device, growing its audience to billions while greatly expanding its average revenue per user. Things are clearly going pretty well for the most valuable company on earth, but while the stock has nearly quadrupled since the iPhone X launch, the consumer iPhone experience feels pretty consistent. That’s clearly not a bad thing, but it is — for lack of a better term — boring.
The clear difference, among 2.4 trillion others, is that GoPro doesn’t seem to have a clear escape route from its action camera vertical.
But Apple has been pushing thousands of employees toward an escape route in augmented reality, even if the technology is clearly not ready for consumers and they’re forced to lead with what has been rumored to be a several-thousand-dollar AR/VR headset with plenty of limitations. One of the questions I’m most interested in is what the iPhone device category looks like once its unwieldy successor has reared its head. Most likely, the AR-centric devices will be shipped as wildly expensive iPhone accessories, a way to piggyback off the accessibility of the mobile category while providing access to new — and more exciting — experiences. In short, AR is the future of the iPhone until AR doesn’t need the iPhone anymore.
Here are the TechCrunch news stories that especially caught my eye this week:
Everything Apple announced this week
Was it the most exciting event Apple has ever had? Nah. Are you still going to click that link to read about their new stuff? Yah.
GoPro launches the HERO10 Black
I have a very soft spot in my heart for GoPro, which has taken a niche corner of hardware and made a device and ecosystem that’s really quite good. As I mentioned above, the company has some issues making significant updates every year, but it made a fairly sizable upgrade this year with the second generation of its custom processor and some performance bumps across the board.
Tesla will open FSD beta to drivers with good driving record
Elon Musk is pressing ahead with expanding Tesla’s “Full Self-Driving” software to more drivers, saying that users who paid for the FSD system can apply to use the beta and will be assessed by the company’s insurance calculator bot. After seven days of good driving behavior, Musk says, users will be approved.
OpenSea exec resigns after ‘insider trading’ scandal
NFTs are a curious business; there’s an intense amount of money pulsating through these markets — and little oversight. This week OpenSea, the so-called “eBay of NFTs,” detailed that its own VP of Product had been trading on insider information. He was later pushed to resign.
Apple and Google bow to the Kremlin
Apple and Google are trying to keep the governments of nearly every market in which they operate happy. That leads to some uncomfortable situations in markets like Russia, where both tech giants were forced by the Kremlin to remove a political app from the country’s major opposition party.
Some of my favorite reads from our Extra Crunch subscription service this week:
What could stop the startup boom?
“…We’ve seen record results from cities, countries and regions. There’s so much money sloshing around the venture capital and startup worlds that it’s hard to recall what they were like in leaner times. We’ve been in a bull market for tech upstarts for so long that it feels like the only possible state of affairs. It’s not…”
The value of software revenue may have finally stopped rising
“…I’ve held back from covering the value of software (SaaS, largely) revenues for a few months after spending a bit too much time on it in preceding quarters — when VCs begin to point out that you could just swap out numbers quarter to quarter and write the same post, it’s time for a break. But the value of software revenues posted a simply incredible run, and I can’t say “no” to a chart…”
Inside GitLab’s IPO filing
“…The company’s IPO has therefore been long expected. In its last primary transaction, GitLab raised $286 million at a post-money valuation of $2.75 billion, per PitchBook data. The same information source also notes that GitLab executed a secondary transaction earlier this year worth $195 million, which gave the company a $6 billion valuation…”
The TechCrunch crew is hard at work writing up the latest from Apple’s iPhone, iPad and Apple Watch event. They have good notes on the megacorp’s hardware updates. But what are the markets saying about the same array of products?
For those of us more concerned with effective S&P dividend yields than screen nit levels, events like Apple’s confab are more interesting for what they might mean for the value of the hosting company than how many GPU cores a particular smartphone model has. And, for once, Apple’s stock may have done something a little interesting during the event!
Observe the following chart:
This is a one-day chart, mind, so we’re looking at intraday changes. We’re zoomed in. And in the chart above, Apple took a bit of a dive during its event, which kicked off at 1 p.m.
Normally nothing of import happens to Apple’s shares during its presentations. Which feels weird, frankly, as Apple events detail the product mix that will generate hundreds of billions in revenue. You’d think that they would have more impact than their usual zero.
But today, we had real share price movement when the event wrapped around 2 p.m. ET. Perhaps investors were hoping for more pricey devices? Or were hoping Apple had more up its sleeve? How you rate that holiday Apple product lineup is a matter of personal preference, but investors appear to have weighed in slightly to the negative.
With Apple worth around $2.5 trillion, each 1% its stock moves is worth about $25 billion. Apple’s loss of 1.5% today — more or less; trading continues as I write this — is worth more than Mailchimp. It’s a lot of money.
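For reference, the back-of-envelope math behind that claim works out like this (a quick sketch; the market cap figure is approximate):

```swift
import Foundation

// Dollar value of a percentage move for a given market cap.
// All figures are approximate and for illustration only.
func moveValue(marketCap: Double, percentMove: Double) -> Double {
    marketCap * percentMove / 100.0
}

let appleCap = 2.5e12  // ~$2.5 trillion market cap
let onePercent = moveValue(marketCap: appleCap, percentMove: 1.0)  // ~$25 billion
let todaysDip = moveValue(marketCap: appleCap, percentMove: 1.5)   // ~$37.5 billion
```

A 1.5% dip on a $2.5 trillion base is roughly $37 billion, which is indeed comfortably more than Mailchimp’s reported ~$12 billion sale price.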
You can read the rest of our coverage from the Apple event here. Enjoy!
The announcement of new iPad software at this year’s WWDC conference carried abnormally large expectations. The iPad lineup, especially the larger iPad Pro, has kept up an impressively frantic pace of hardware innovation over the past few years. In that same time frame, the iPad’s software, especially its support for using multiple apps at once and its onramps for professional software makers, has come under scrutiny for an apparently slower pace.
This year’s announcements about iOS 15 and iPadOS 15 seemed designed to counter that narrative with a broad number of quality-of-life improvements to multitasking, as well as a suite of system-wide features that nearly all come complete with their own developer-facing APIs to build on. I had the chance to speak with Bob Borchers, Apple’s VP of Worldwide Product Marketing, and Sebastien (Seb) Marineau-Mes, VP of Intelligent System Experience at Apple, about the release of iPadOS 15 and a variety of these improvements.
Marineau-Mes works on the team of Apple software SVP Craig Federighi and was pivotal in the development of this new version.
iPad has a bunch of new core features including SharePlay, Live Text, Focuses, Universal Control, on-device Siri processing and a new edition of Swift Playgrounds designed to be a prototyping tool. Among the most hotly anticipated for iPad Pro users, however, are improvements to Apple’s multitasking system.
If you’ve been following along, you’ll know that the gesture-focused multitasking interface of iPadOS has had its share of critics, including me. Though it can be useful in the right circumstances, the undiscoverable gesture system and the confusing hierarchy of app combinations made it a floppy affair for even an adept user to use correctly, much less a beginner.
Since the iPad stands alone as pretty much the only successful tablet on the market, Apple is in a unique position to determine which paradigms become standard. It’s a rare opportunity to say: hey, this is what working on a device like this feels like, looks like and should be.
So I ask Borchers and Marineau-Mes to talk a little about multitasking: specifically, Apple’s philosophy in the design of multitasking on iPadOS 15 and the update from the old version, which required a lot of finger acrobatics and a strong spatial awareness of objects hovering off the edges of the screen.
“I think you’ve got it,” Borchers says when I mention the spatial gymnastics, “but the way that we think about this is that the step forward in multitasking makes it easier to discover, easier to use and even more powerful. And while pros, I think, were the ones who were using multitasking in the past, we really want to take it more broadly, because we think there’s applicability to many, many folks. And that’s why the discovery and the ease of use, I think, were critical.”
“You had a great point there when you talked about the spatial model, and one of our goals was to actually make the spatial model more explicit in the experience,” says Marineau-Mes, “where, for example, if you’ve got a split view and you’re replacing one of the windows, we kind of open the curtain and tuck the other app to the side. You can see it — it’s not a hidden mental model, it’s one that’s very explicit.
“Another great example of it is when you go into the app switcher to reconfigure your windows: you’re actually doing drag and drop as you rearrange your new split views, or you dismiss apps and so on. So it’s not a hidden model, it’s one where we really try to reinforce a spatial model with an explicit one for the user through all of the animations and all of the affordances.”
Apple’s goal this time around, he says, was to add affordances for the user to understand that multitasking was even an option — like the small series of dots at the top of every app and window that now allows you to explicitly choose an available configuration, rather than the app-and-dock-juggling method of the past. He goes on to say that consistency was a key metric for them on this version of the OS: the appearance of Slide Over apps in the same switcher view as all other apps, for instance, or the way that you can choose configurations of apps via the button or by drag and drop in the switcher and get the same results.
In the dashboard, Marineau-Mes says, “you get an at-a-glance view of all of the apps that you’re running and a full model of how you’re navigating that through the iPad’s interface.”
This ‘at a glance’ map of the system should be welcomed by advanced users. Even for a very aggressive Pro user like me, Slide Over apps became more of a nuisance than anything, because I couldn’t keep track of how many were open or when to use them. The ability to combine them in the switcher itself is one of those things Apple has wanted to get into the OS for years but is only now making its way onto iPads. Persistence of organization, really, was the critical problem to tackle.
“I think we believe strongly in building a mental model where people know where things are [on iPad],” says Marineau-Mes. “And I think you’re right: when it comes to persistence, I think it also applies to, for example, the home screen. People have a very strong mental model of where things are on the home screen, as well as all of the apps that they’ve configured. And so we try to maintain that mental model, and also allow people to reorganize again in the switcher.”
He goes on to explain the new ‘shelf’ feature that displays every instance or window that an app has open within itself. They implemented this as a per-app feature rather than a system-wide feature, he says, because the association of that shelf with a particular app fit the overall mental model that they’re trying to build. The value of this shelf may jump into higher relief later this year, when professional apps that may have a dozen documents or windows open and active at once during a project begin to ship.
Another nod to advanced users in iPadOS 15 is the rich keyboard shortcut set offered across the system. The interface can be navigated by arrow keys now, many advanced commands are there and you can even move around on an iPad using a game controller.
“One of the key goals this year was to make basically everything in the system navigable from the keyboard,” says Marineau-Mes, “so that if you don’t want to, you don’t have to take your hands off the keyboard. All of the new multitasking affordances and features, you can do through the keyboard shortcuts. You’ve got the new keyboard shortcut menu bar where you can see all the shortcuts that are available. It’s great for discoverability. You can search them, and — this is a subtle point — we even made a very conscious effort to rationalize the shortcuts across Mac and iPadOS, so that if you’re using Universal Control, for example, you’re going to go from one environment to the other seamlessly. You want to ensure that consistency as you go across.”
The gestures, however, are staying, as a nod to consistency for existing users who may be used to them.
To me, one of the more interesting and potentially powerful developments is the introduction of the Center Window and its accompanying API. A handful of Apple apps like Mail, Notes and Messages now allow items to pop out into an overlapping window.
“It was a very deliberate decision on our part,” says Marineau-Mes about adding this new element. “This really brings a new level of productivity where you can have, you know, this floating window. You can have content behind it. You can seamlessly cut and paste. And that’s something that’s just not possible with the traditional [iPadOS] model. And we also really strive to make it consistent with the rest of multitasking, where that center window can also become one of the windows in your split view, or full size, and then go back to being a center window. We think it’s a cool addition to the model and we really look forward to third parties embracing it.”
Early reception of the look Apple gave at iPadOS 15 still has an element of reservation about it, given that many of the most powerful creative apps are made by third parties that must adopt these technologies for them to be truly useful. But Apple, Borchers says, is working hard to make sure that pro apps adopt as many of these new paradigms and technologies as possible, so that come fall, the iPad will feel like a more hospitable host for the kinds of advanced work pros want to do there.
One of the nods to this multi-modal universe that the iPad exists in is Universal Control. This new feature uses Bluetooth beaconing, peer-to-peer Wi-Fi and the iPad’s trackpad support to allow you to place your devices close to one another and — in a clever use of reading user intent — slide your mouse off the edge of one screen and onto your Mac or iPad seamlessly.
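Conceptually, the intent-reading part boils down to a check that the pointer is actually pushing past the screen edge, not just grazing it. A toy sketch of that kind of check (the structure and thresholds here are my own guesses for illustration, not Apple’s implementation):

```swift
import Foundation

// Toy model of cursor hand-off between side-by-side devices.
// Thresholds are invented for illustration; the real feature also
// relies on Bluetooth beaconing and peer-to-peer Wi-Fi to discover
// which device sits on which side.
struct HandoffDetector {
    let screenWidth: Double   // points; right edge of the current screen
    let minVelocity: Double   // points per event of sustained rightward push

    // Hand off only when the pointer is at the edge AND still moving
    // toward it, i.e. the user means to cross, not merely overshoot.
    func shouldHandOff(pointerX: Double, velocityX: Double) -> Bool {
        pointerX >= screenWidth && velocityX >= minVelocity
    }
}
```

The design point is that position alone is ambiguous (a cursor parked at the edge shouldn’t jump devices), so some notion of sustained motion toward the neighbor disambiguates intent.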
“I think what we have seen and observed from our users, both pro and otherwise, is that we have lots of people who have Macs, and they have iPads, and they have iPhones, and we believe in making these things work together in ways that are powerful,” says Borchers. “And it just felt like a natural place to be able to go and extend our Continuity model, so that you could make use of this incredible platform that is iPadOS while working with your Mac right next to it. And I think the big challenge was: how do you do that in kind of a magical, simple way? And that’s what Seb and his team have been able to accomplish.”
“It really builds on the foundation we made with Continuity and Sidecar,” adds Marineau-Mes. “We really thought a lot about how you make the experience — the setup experience — as seamless as possible. How do you discover that you’ve got devices side by side?
“The other thing we thought about was what workflows people want to have and what capabilities will be essential for them. That’s where things like the ability to seamlessly drag content across the platforms, or cut and paste, were, we felt, really, really important. Because I think that’s really what brings the magic to the experience.”
Borchers adds that it makes all of the Continuity features that much more discoverable. Continuity’s shared clipboard, for instance, is an always-on but invisible presence. Expanding that to visual and mouse-driven models makes natural sense.
“It’s just like, oh, of course, I can drag that all the way across here,” he says.
“Bob, you say ‘of course,’” Marineau-Mes laughs. “And yet for those of us working in platforms for a long time, the ‘of course’ is technically very, very challenging. Totally non-obvious.”
Another area where iPadOS 15 is showing some promising expansionary behavior is in system-wide activities that allow you to break out of the box of in-app thinking. These include embedded recommendations that seed themselves into various apps; SharePlay, which makes an appearance wherever video calls are found; and Live Text, which turns all of your photos into indexed archives searchable with a keyboard.
Another is Quick Note, a system extension that lets you swipe from the bottom corner of your screen wherever you are in the system.
“There are, I think, a few interesting things that we did with Quick Note,” says Marineau-Mes. “One is this idea of linking, so that if I’m working in Safari or Yelp or another app, I can quickly insert a link to whatever content I’m viewing. I don’t know about you, but it’s something that I certainly do a lot when I do research.
“The old way was, like, cut and paste, and maybe take a screenshot, create a note and jot down some notes. And now we’ve made that very, very seamless and fluid across the whole system. It even works the other way, where if I’m now in Safari and I have a note that refers to that page, you’ll see it revealed as a thumbnail at the bottom right-hand side of the screen. So we’ve really tried to make the notes experience something that permeates the system and is easily accessible from everywhere.”
Many of the system-wide capabilities that Apple is introducing in iPadOS 15 and iOS 15 have an API that developers can tap into. That is not always the case with Apple’s newest toys, which in years past have often been left to linger in the private section of its list of frameworks rather than be offered to developers as a way to enhance their apps. Borchers says that this is an intentional move that offers a ‘broader foundation of intelligence’ across the entire system.
This broader intelligence includes Siri moving a ton of commands to local processing, which required moving a big chunk of Apple’s speech recognition on-device in the new OS as well. The result, says Borchers, is a vastly improved day-to-day Siri experience, with many common commands executing immediately upon request — something that was a bit of a dice roll in days of Siri past. Removing the reputational hit Siri was taking from commands that went up to the cloud, never to return, could be the beginning of a turnaround in the public perception of Siri’s usefulness.
The on-device intelligence provided by the Apple Neural Engine (ANE) also powers the indexing of text across photos in the entire system: past, present and in-the-moment.
“We could have done Live Text only in Camera and Photos, but we wanted it to apply anywhere we’ve got images, whether that’s in Safari or Quick Look or wherever,” says Marineau-Mes. “One of my favorite demos of Live Text is actually when you’ve got that long, complicated password for a Wi-Fi network. You can just bring it up within the keyboard, take a picture of it, get the text and copy and paste it into the field. It’s one of those things that’s just kind of magical.”
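The system-wide search side of that indexing can be pictured as an inverted index from recognized words back to the photos they appear in. A minimal sketch of the idea (the recognition step itself is omitted here; recognized text is simply passed in, so everything below is an illustrative assumption rather than Apple’s actual pipeline):

```swift
import Foundation

// Toy inverted index from recognized words to photo identifiers.
// In the real system, recognition happens on-device via the Apple
// Neural Engine; here the recognized text is supplied directly.
struct PhotoTextIndex {
    var index: [String: Set<String>] = [:]

    // Record every word of the recognized text against the photo,
    // lowercased so that later searches are case-insensitive.
    mutating func add(photoID: String, recognizedText: String) {
        for word in recognizedText.lowercased().split(separator: " ") {
            index[String(word), default: []].insert(photoID)
        }
    }

    // Return the set of photos whose recognized text contains the word.
    func search(_ word: String) -> Set<String> {
        index[word.lowercased()] ?? []
    }
}
```

Once text is indexed this way, a keyboard search over your photo library becomes an ordinary dictionary lookup, which is what makes “searchable archives” cheap at query time.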
On the developer service front of iPadOS 15, I ask specifically about Swift Playgrounds, which adds the ability to write, compile and ship apps to the App Store for the first time entirely on iPad. It’s not the native Xcode some developers were hoping for, but, Borchers says, Playgrounds has moved beyond just ‘teaching people how to code’ and into a real part of many developer pipelines.
“I think one of the big insights here was that we also saw a number of pro developers using it as a prototyping platform, and a way to be able to be on the bus, or in the park, or wherever, if you wanted to get in and give something a try. This was a super accessible and easy way to get there, and could be a nice adjunct to ‘hey, I want to learn to code.’”
“If you’re a developer,” adds Marineau-Mes, “it’s actually more productive to be able to run that app on the device that you’re working on, because you really get great fidelity. And with the open project format, you can go back and forth between Xcode and Playgrounds. So, as Bob said, we can really envision people using this for a lot of rapid prototyping on the go, without having to bring along the rest of their development environment. We think it’s a really, really powerful addition to our development tools this year.”
Way back in 2018, I profiled a new team at Apple that was building a testing apparatus to make sure the company was addressing real-world workflows involving machines like the (at the time unrevealed) new Mac Pro, iMacs, MacBooks and iPads. One of the demos that stood out at the time was a deep integration with music apps like Logic that would allow the iPad’s input models to complement the core app: tapping out a rhythm on a pad, or brightening and adjusting sound more intuitively with the touch interface. More of Apple’s work these days seems aimed at letting users move seamlessly back and forth between its various computing platforms, taking advantage of the strengths of each (raw power, portability, touch, etc.) to complement a workflow. A lot of iPadOS 15 appears to be geared this way.
Whether it will be enough to turn the corner on the perception of the iPad as a work device held back by its software, I’ll reserve judgment until it ships later this year. But in the near term, I am cautiously optimistic: this set of enhancements that break out of the ‘app box,’ the clearer affordances for multitasking both in and out of single apps and the dedication to API support all point toward an expansionist mentality on the iPad software team. A good sign in general.
During the WWDC conference today, Apple unveiled the new macOS 12 Monterey. A major feature in the macOS update is Universal Control, which builds upon the Continuity features first introduced in OS X Yosemite. For years, it’s been possible to open a news article on your iPhone and keep reading it on your MacBook, or to copy and paste a link from your iPad to your iMac. But Universal Control takes these features further.
With Universal Control, you can use a single mouse and keyboard to navigate across multiple Apple devices at once. This functionality works across more than two devices – in the demo video, the feature is used to seamlessly move across an iPad, MacBook, and iMac. Users can drag and drop files across multiple devices at once, making it possible, for example, to use a multi-screen setup while editing video on Final Cut Pro.
What’s possible in Universal Control isn’t necessarily new — third-party apps have made this possible before. Plus, in 2019, Apple debuted Sidecar, which allowed users to connect an iPad as a second monitor for their MacBook or iMac. But Universal Control improves upon Sidecar — and maybe renders it obsolete — by allowing users to link any Apple devices together, not just an iPad. Though this update may not be groundbreaking, it’s a useful upgrade to existing features.
Apple didn’t announce that rumored device combining the Apple TV set-top box with a HomePod speaker during its WWDC keynote, but it did announce a few features that will improve the Apple TV experience — including one that involves the HomePod Mini. Starting this fall, Apple said, you’ll be able to select the HomePod Mini as the speaker for your Apple TV 4K. It also introduced a handful of software updates for Apple TV users, including a new way to see shows everyone in the family will like, and support for co-watching shows through FaceTime.
The co-watching feature is actually a part of a larger FaceTime update, which will let users stream music, TV, and screen share through their FaceTime calls. The Apple TV app is one of those that’s supported through this new system, called SharePlay. It will now include a new “Shared with You” row that highlights the shows and movies your friends are sharing, as well.
Another feature called “For All of You” will display a collection of shows and movies based on everyone’s interests within Apple TV’s interface. This is ideal if you’re planning to watch something as a family — for movie night, for example. And you can fine-tune the suggestions based on who’s watching.
A new Apple TV widget is also being made available, which now includes iPad support.
And the new support for HomePod Mini will help deliver “rich, balanced sound” and “crystal clear dialogue” when you’re watching Apple TV with the Mini set up as your speaker, Apple said.
If you’ve lived in a few places in your life then you probably have experienced the feeling of moving into a nice new apartment or house — a blank canvas of rooms and spaces filled with possibility. Most times, unless you’ve got that money money, you then fill it up with all of your same, old dinged up furniture.
I have had this experience a bunch of times over the years. When I moved out of my parents’ house, every single piece of furniture I had was thrifted, rescued or gifted: a mish-mash of centuries and styles that was well lived-in the day I got it. Even as I got married and moved, much of that same furniture came with us.
Eventually, much of it began to feel too out of place, and we passed it on, carefully buying or making things that we felt represented our home and resonated with us. But there is still the odd piece, like that dinged-up Dutch modern coffee table, that we look at and remember the sticky cocktail clutter of karaoke nights in our 20s and the — still sticky — kids’ snacks of our 30s.
That’s where the iPad is now. It’s a beautiful new monument to the engineering and hardware teams at Apple, which continue to execute at an insane level year over year. But it’s filled with the still-functional but aging iPadOS software. And it’s getting more out of place by the day.
This writeup has to be about what Apple is currently shipping, of course — that’s what consumers get in the mail when they order one now. But Apple’s Worldwide Developers Conference is coming in just a couple of weeks, and I hope to basically review this iPad Pro all over again in a new light.
Ironically, one of the biggest hardware upgrades for this model is going to elicit a relatively muted response in me here. The M1 chip is absolutely blistering. It delivers the same performance as the M1 MacBook Pro did in benchmark tests, making Apple’s lineup more a matter of form factor and use case than raw power. But last year’s model frankly still feels just as fast in almost all of the applications that I tested and certainly in all of my daily workflows.
Yes, it’s super powerful and you’re getting the absolute latest in silicon, but upgraders won’t see any immediate difference. In some ways this is by design. In my interview last month with Apple’s John Ternus and Greg Joswiak about the iPad Pro they noted that the new unified processor strategy and the aggressively good display are about creating overhead that they hope developers will take advantage of.
The new iPad Pro cameras are pretty fantastic, both front and back are now very usable and the new front camera especially benefits from an increase in resolution and new wide-angle optics. This wide angle makes video calling a bit more relaxed and doable on iPad Pro.
That is coupled with Center Stage, Apple’s new ML-driven feature that automatically crops and centers your head and shoulders, following you around with a smooth and forgiving pan and zoom as you lean, stand and even move about a room within the view of the camera. The feature works by using machine learning frameworks to detect a person’s silhouette within the frame and then applies “camera moves” to that view as you shift in the frame. Unlike some other auto-zooming features, Center Stage feels like there is a virtual camera operator there helping you to frame yourself properly. It’s really clever and very nicely done — and it’s one of the biggest upgrades to iPad Pro usability this time around.
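Mechanically, that “virtual camera operator” feel comes from easing a crop rectangle toward the detected subject each frame rather than snapping to it. Here is a rough sketch of the idea (the smoothing approach and factor are my assumptions for illustration, not Apple’s actual implementation):

```swift
import Foundation

// Toy model of Center Stage-style framing: ease the virtual crop
// toward the detected subject instead of jumping, which reads as a
// smooth, human-feeling pan. Detection itself is out of scope here;
// the subject rectangle is supplied by the caller.
struct Rect { var x, y, width, height: Double }

struct VirtualCamera {
    var crop: Rect
    let smoothing: Double   // 0...1; lower values pan more gently (assumed)

    // Move each edge of the crop a fraction of the way toward the
    // subject on every frame, so the pan accelerates and settles
    // rather than teleporting.
    mutating func track(subject: Rect) {
        crop.x += (subject.x - crop.x) * smoothing
        crop.y += (subject.y - crop.y) * smoothing
        crop.width += (subject.width - crop.width) * smoothing
        crop.height += (subject.height - crop.height) * smoothing
    }
}
```

Because the step size is proportional to the remaining distance, the virtual camera closes most of the gap quickly and then settles gently, which is what makes the follow feel forgiving instead of jittery.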
And yes, if you’re wondering, this does a lot to mitigate the camera placement issues on iPad Pro. The camera remains in the ‘top center’ of a vertically oriented iPad Pro, which means that it is on the ‘middle left’ of a horizontally oriented keyboard-bound setup. This has led to an awkwardness in iPad video calling. Center Stage doesn’t completely delete these issues (hand placement is still a problem at times), but it goes a long way toward making it more usable. The new APIs will make this feature available to all video calling apps, by the way, and there are also some slight improvements in multitasking that let you use multi-pane setups without blanking out your video call in the middle of a Zoom.
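The “virtual camera operator” feel described above comes down to how the crop is animated: rather than snapping to the detected subject, the frame glides a fraction of the way there on each tick. Here’s a minimal sketch of that idea using simple exponential smoothing — the function names, the frame coordinates and the smoothing factor are all illustrative assumptions, not Apple’s actual implementation.

```python
# Hypothetical sketch: smoothly track a detected subject the way a
# "virtual camera operator" would, using exponential smoothing on the
# crop rectangle. Names and the smoothing factor are illustrative only.

def smooth_crop(current, target, alpha=0.15):
    """Move the crop (x, y, w, h) a fraction of the way toward the
    detected subject each frame, so pans feel gradual, not abrupt."""
    return tuple(c + alpha * (t - c) for c, t in zip(current, target))

crop = (0.0, 0.0, 1920.0, 1080.0)          # start with the full frame
subject = (600.0, 200.0, 960.0, 540.0)     # detector output: tighter framing

for _ in range(30):                         # ~1 second at 30 fps
    crop = smooth_crop(crop, subject)

# After 30 frames the crop has closed most of the gap to the subject
# without ever jumping, which is what makes the motion feel directed.
```

Tuning `alpha` is exactly the “taste aspect” Ternus describes later: too high and the camera lurches at you, too low and it lags behind the subject.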
I tested the external Thunderbolt connection with Apple’s Pro Display XDR and it works fine. The displays are very close in capability so, aside from the scaling, this arrangement could prove to be very useful for pros working in a pipeline with XDR displays for color correction purposes. The software limitation of the iPad Pro only supporting mirror mode is still in effect, alas, which makes the usability of this feature a bit suspect in most situations.
Speaking of the screen, the mini-LED driven Liquid Retina XDR display is probably the best display that’s ever shipped on a mobile computer. It’s just fantastic. The daily driving brightness is good, with a 600-nit max on average, but full screen HDR content like videos or photos allows the display to boom up to 1,000 nits average with a peak of 1,600 nits. This thing is b r i g h t. Daylight viewing of HDR content is massively improved. And it comes with all of the standard stuff like the 120Hz ProMotion features.
The 10,000 mini-LEDs in the display allow for much more precise localization of blacks (because they can turn off completely) and less bloom (though extreme tests still show it). They also provide a noticeably better edge-to-edge uniformity in brightness and improved off-axis viewing of content on the screen. It’s just better in every way. Absolute gold standard display here.
Apple’s new Magic Keyboard works essentially exactly the same as the previous version, but it now comes in white. I’m also happy to report that if you have an existing one it works completely fine on the new iPad Pro models. There was some minor uproar because Apple noted that the dimensions of the two devices were not the same, so the old keyboard may not fit perfectly. Rest assured that it’s essentially the same exact fit and the functionality is identical. The only time any of it is evident is if you close it and peer very, very carefully at the open end, where you’ll notice that there is about 1mm less clearance between the rim of the case and the edge of the iPad Pro. Apple probably felt that it was better to over-disclose than anything here, but really, it’s not an issue.
The white color is fantastic looking, especially with the silver model of iPad Pro and its white antenna window accents. It’s very 2001 (even though Kubrick’s iPads were black). I would fully, 100% expect this thing to get marred and dingy though. There’s a warning on the box that ‘color may transfer’ and I would believe it. My demo unit has not sustained any noticeable markings yet, but I would guess that it’s just a matter of time.
The overall feel of the keyboard is still great: a really nice typing experience and a great little piece of kit that should be factored into the purchase price of any iPad Pro because it’s just essential.
The other side of this coin that isn’t so shiny is the iPad’s aging software. It’s just as good as it was when I wrote my review of the iPad Pro in 2020 — at which point my conclusions were ‘you can adapt to it but it could be better’. That was a year ago. As someone who has used it as my only portable machine for the last two and a half years I can tell you that this is a very long time to wait for a big leap forward in capability.
I have a very simple ask for the iPad’s software. I want to feel the same energy, vivacity and performance-for-its-own-sake that the hardware side exhibits.
Apple’s iPad Pro hardware is performing like an athlete in peak physical condition that is out three lengths ahead. The M1 chip and mini-LED display are really untouchable – it is exhilarating to see this much excellence packed into one device.
Unfortunately the software cannot cash those checks, leaving this iPad Pro to feel like a perfect house with the same old furniture.
Apple has done an incontestably incredible job with the 2021 iPad Pro’s hardware, but it needs to level the software up. As a ‘power’ user that lives on iPad Pro much of the year, I have grown used to my kludges and tic-affordances. But there needs to be some big-time commitment to the iPad paradigm here. The pane-style interface currently has so many things to recommend it as a brutally fast, fluid way to work, but there is no follow through. There is no willingness to say ‘this is a new way of working and you will learn it.’
Too much of the iPad Pro’s current software leans into ‘affordance valley’. A place where users are still treated as if they aren’t capable of learning a touch-first way of working. Instead, these affordances actually do the opposite and stand in the way of progress.
This reminds me of the ‘reduce animations’ affordance circa the iOS 7 era. When Apple reinvented iOS, it went way overboard on animations in order to make it super clear to the user what was happening when they were tapping around the new pane-based interface. Nothing was ‘slower’ hardware-wise, but the animated affordances they put in were tuned too far up and made it feel slower. Turning those animations off made the interface feel snappier and more useful almost instantly.
Apple eventually got those animations under control by conceding that maybe people were indeed ready to be more advanced touch users.
This is where the iPad Pro is currently and it’s the disparity that irks the most. This is one of the best computing hardware devices ever made, and you know it’s capable of so much more than it is currently being let do. Apple has always had an editorial point of view when it comes to software and I can appreciate that. But currently, it feels like that stance is far too conservative when it comes to iPad Pro.
That’s why I’m waiting for WWDC with bated breath. With this much high-level execution on the hardware side, you have to imagine that the time is ripe for Apple to really take the next leap forward with iPad software. When that happens and we get a solid view of Apple’s vision for the next wave of iPad work, I’ll come back to the table with another look.
When the third minute of Apple’s first product event of 2021 ticked over and they had already made three announcements, we knew it was going to be a packed one. In a tight single hour this week, Apple launched a ton of new products, including AirTags, new Apple Card family sharing, a new Apple TV, a new set of colorful iMacs and a purple iPhone 12 shade.
Of the new devices announced, though, Apple’s new 12.9” iPad Pro is the most interesting from a market positioning perspective.
This week I got a chance to speak to Apple Senior Vice President of Worldwide Marketing Greg Joswiak and Senior Vice President of Hardware Engineering John Ternus about this latest version of the iPad Pro and its place in the working universe of computing professionals.
In many ways, this new iPad Pro is the equivalent of a sprinter being lengths ahead going into the last lap and just turning on the afterburners to put an undebatable distance between themselves and the rest of the pack. Last year’s model is still one of the best computers you can buy, with a densely packed offering of powerful computing tools, battery performance and portability. And this year gets upgrades in the M1 processor, RAM, storage speed, Thunderbolt connection, 5G radio, new ultra wide front camera and its Liquid Retina XDR display.
This is a major bump even while the 2020 iPad Pro still dominates the field. And at the center of that is the display.
Apple has essentially ported its enormously good $5,000 Pro Display XDR down to a 12.9” touch version, with some slight improvements. But the specs are flat out incredible. 1,000 nit brightness peaking at 1,600 nits in HDR with 2,500 full array local dimming zones — compared to the Pro Display XDR’s 576 at a much larger size.
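To get a feel for how lopsided that zone comparison is, it helps to normalize by screen area. The zone counts and diagonals come from the specs above; the aspect ratios (16:9 for the 32” Pro Display XDR, 4:3 for the 12.9” iPad Pro) are assumptions on my part, so treat this as back-of-envelope arithmetic rather than official figures.

```python
# Back-of-envelope: local dimming zone density of the two panels.
# Zone counts and diagonals are from the article; the aspect ratios
# are assumed (16:9 for Pro Display XDR, 4:3 for iPad Pro).
import math

def panel_area(diagonal, aspect_w, aspect_h):
    """Screen area in square inches from diagonal and aspect ratio."""
    unit = diagonal / math.hypot(aspect_w, aspect_h)
    return (aspect_w * unit) * (aspect_h * unit)

xdr_density = 576 / panel_area(32.0, 16, 9)     # ~1.3 zones per sq in
ipad_density = 2500 / panel_area(12.9, 4, 3)    # ~31 zones per sq in
```

Under those assumptions the iPad panel packs on the order of twenty-plus times as many dimming zones per square inch, which is why blooming control can improve even as the panel shrinks.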
Given that this year’s first product launch from Apple was virtual, the media again got no immediate hands-on with the new devices introduced, including iPad Pro. This means that I have not yet seen the XDR display in action. Unfortunately, these specs are so good that evaluating them without having seen the screen is akin to trying to visualize “a trillion” in your head. It’s intellectually possible but not really practical.
It’s brighter than any Mac or iOS device on the market and could be a game-shifting device for professionals working in HDR video and photography. But even still, this is a major investment to ship a mini-LED display in the millions or tens of millions of units with more density and brightness than any other display on the market.
I ask both of them why there’s a need to do this doubling down on what is already one of the best portable displays ever made — if not one of the best displays period.
“We’ve always tried to have the best display,” says Ternus. “We’re going from the best display on any device like this and making it even better, because that’s what we do and that’s why we, we love coming to work every day is to take that next big step.
“[With the] Pro Display XDR, if you remember, one thing we talked about was being able to have this display and this capability in more places in the work stream. Because traditionally there was just this one super expensive reference monitor at the end of the line. This is like the next extreme of that: now you don’t even have to be in the studio anymore, you can take it with you on the go and you can have that capability. So from a, from a creative pro standpoint we think this is going to be huge.”
In my use of the Pro Display and my conversations with professionals about it, one of the common themes that I’ve heard is the reduction in overall workload due to the multiple points in the flow where color and image can now be managed accurately to spec. The general system in place puts a reference monitor very late in the production stage, which can often lead to expensive and time-consuming re-rendering or new color passes. Adding the Liquid Retina XDR display into the mix at an extremely low price point means that a lot more plot points on the production line suddenly get a lot closer to the right curve.
One of the stronger answers on the ‘why the aggressive spec bump’ question comes later in our discussion but is worth mentioning in this context. The point, Joswiak says, is to offer headroom. Headroom for users and headroom for developers.
“One of the things that iPad Pro has done as John [Ternus] has talked about is push the envelope. And by pushing the envelope that has created this space for developers to come in and fill it. When we created the very first iPad Pro, there was no Photoshop,” Joswiak notes. “There was no creative apps that could immediately use it. But now there’s so many you can’t count. Because we created that capability, we created that performance — and, by the way sold a fairly massive number of them — which is a pretty good combination for developers to then come in and say, I can take advantage of that. There’s enough customers here and there’s enough performance. I know how to use that. And that’s the same thing we do with each generation. We create more headroom to performance that developers will figure out how to use.
“The customer is in a great spot because they know they’re buying something that’s got some headroom and developers love it.”
The iPad Pro is now powered by the M1 chip — a move away from the A-series naming. And that processor part is identical (given similar memory configurations) to the one found in the iMac announced this week and MacBooks launched earlier this year.
“It’s the same part, it’s M1,” says Ternus. “iPad Pro has always had the best Apple silicon we make.”
“How crazy is it that you can take a chip that’s in a desktop, and drop it into an iPad,” says Joswiak. “I mean it’s just incredible to have that kind of performance at such amazing power efficiency. And then have all the technologies that come with it. To have the neural engine and ISP and Thunderbolt and all these amazing things that come with it, it’s just miles beyond what anybody else is doing.”
As the M1 was rolling out and I began running my testing, the power-per-watt aspects really became the story. That really is the big differentiator for M1. For decades, laptop users have been accustomed to saving any heavy or intense workloads for the times when their machines were plugged in, due to power consumption. M1 is in the process of resetting those expectations for desktop-class processors. In fact, Apple is offering not only some of the most powerful CPUs but also the most power-efficient CPUs on the market. And it’s doing it in a $700 Mac Mini, a $1,700 iMac and a $1,100 iPad Pro at the same time. It’s a pretty ridiculous display of stunting, but it’s also the product of more than a decade of work building its own architecture and silicon.
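Performance per watt is a simple ratio, but it’s worth seeing why a chip can “lose” a raw benchmark and still win the metric that defines battery life. The numbers below are purely illustrative placeholders, not measured results from any real machine.

```python
# Performance per watt is just throughput divided by power draw.
# The scores and wattages below are illustrative, not measurements.
def perf_per_watt(score, watts):
    return score / watts

# Hypothetical laptop A: big score, big power budget.
# Hypothetical laptop B: slightly lower score at a fraction of the power.
a = perf_per_watt(score=10_000, watts=45)   # ~222 points per watt
b = perf_per_watt(score=9_000, watts=15)    # 600 points per watt

# B is 10% behind on the raw benchmark but does ~2.7x more work per
# joule, which is what shows up as hours of extra battery life.
```

That gap, rather than peak benchmark numbers, is the story the M1 reviews kept coming back to.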
“Your battery life is defined by the capacity of your battery and the efficiency of your system right? So we’re always pushing really really hard on the system efficiency and obviously with M1, the team’s done a tremendous job with that. But the display as well. We designed a new mini LED for this display, focusing on efficiency and on package size, obviously, to really to be able to make sure that it could fit into the iPad experience with the iPad experience’s good battery life.
“We weren’t going to compromise on that,” says Ternus.
One of the marquee features of the new iPad Pro is its 12MP ultra-wide camera with Center Stage, an auto-centering and cropping video feature designed to make FaceTime calling more human-centric, literally. It finds humans in the frame and centers their faces, keeping them in the frame even if they move, standing and stretching or leaning to the side. It also includes additional people in the frame automatically if they enter the range of the new ultra-wide 12MP front-facing camera. And yes, it also works with other apps like Zoom and Webex, and there will be an API for it.
I’ve gotten to see it in action a bit more and I can say with surety that this will become an industry standard implementation of this kind of subject focusing. The crop mechanic is handled with taste, taking on the characteristics of a smooth zoom pulled by a steady hand rather than an abrupt cut to a smaller, closer framing. It really is like watching a TV show directed by an invisible machine learning engine.
“This is one of the examples of some of our favorite stuff to do because of the way it marries the hardware and software, right?” Ternus says. “So, sure, it’s the camera, but it’s also the SOC and the algorithms associated with detecting the person and panning and zooming. There’s the kind of the taste aspect, right? Which is: how do we make something that feels good, doesn’t move too fast and doesn’t move too slow? That’s a lot of talented, creative people coming together and trying to find the thing that makes it Apple-like.”
It also goes a long way toward mitigating the awkward horizontal camera placement when using the iPad Pro with the Magic Keyboard. This has been a big drawback for using the iPad Pro as a portable video conferencing tool, something we’ve all been doing a lot of lately. I ask Ternus whether Center Stage was designed to mitigate this placement.
“Well, you can use iPad in any orientation right? So you’re going to have different experiences based on how you’re using it. But what’s amazing about this is that we can keep correcting the frame. What’s been really cool is that we’ve all been sitting around in these meetings all day long on video conferencing and it’s just nice to get up. This experience of just being able to stand up and kind of stretch and move around the room without walking away from the camera has been just absolutely game changing, it’s really cool.”
It’s worth noting that several other video sharing devices like the Portal and some video software like Teams already offer cropping-type follow features, but the user experience is everything when you’re shipping software like this to millions of people at once. It will be interesting to see how Center Stage stacks up against the competition when we see it live.
With the ongoing chatter about how the iPad Pro and Mac are converging from a feature-set perspective, I ask how they would characterize an iPad Pro vs. MacBook buyer. Joswiak is quick to respond to this one.
“This is my favorite question because you know, you have one camp of people who believe that the iPad and the Mac are at war with one another right it’s one or the other to the death. And then you have others who are like, no, they’re bringing them together — they’re forcing them into one single platform and there’s a grand conspiracy here,” he says.
“They are at opposite ends of a thought spectrum and the reality is that neither is correct. We pride ourselves in the fact that we work really, really, really hard to have the best products in the respective categories. The Mac is the best personal computer, it just is. Customer satisfaction would indicate that is the case, by a longshot.”
Joswiak points out that the whole PC category is growing, which he says is nice to see. But he points out that Macs are way outgrowing PCs and doing ‘quite well’. He also notes that the iPad business is still outgrowing the tablet category (while still refusing to label the iPad a tablet).
“And it’s also the case that it’s not an ‘either or’. The majority of our Mac customers have an iPad. That’s an awesome thing. They don’t have it because they’re replacing their Mac, it’s because they use the right tool at the right time.
“What’s very cool about what [Ternus] and his team have done with iPad Pro is that they’ve created something where that’s still the case for creative professionals too — the hardest to please audience. They’ve given them a tool where they can be equally at home using the Mac for their professional, making-money kind of work, and now they can pick up an iPad Pro — and they have been for multiple generations now — and do things that, again, are part of how they make money, part of their creative workflow,” says Joswiak. “And that is exciting. It isn’t one or the other; both of them have a role for these people.”
Since converting over to an iPad Pro as my only portable computer, I’ve been thinking a lot about the multimodal aspects of professional work. And, clearly, Apple has as well given its launch of a Pro Workflows team back in 2018. Workflows have changed massively over the last decade, and obviously the iPhone and an iPad, with their popularization of the direct manipulation paradigm, have had everything to do with that. In the current world we’re in, we’re way past ‘what is this new thing’, and we’re even way past ‘oh cool, this feels normal’ and we’re well into ‘this feels vital, it feels necessary.’
“Contrary to some people’s beliefs, we’re never thinking about what we should not do on an iPad because we don’t want to encroach on Mac, or vice versa,” says Ternus. “Our focus is, what is the best way? What is the best iPad we can make? What are the best Macs we can make? Some people are going to work across both of them, some people will kind of lean towards one because it better suits their needs, and that’s, that’s all good.”
If you follow along, you’ll know that Apple studiously refuses to enter into the iPad vs. Mac debate — and in fact likes to place the iPad in a special place in the market that exists unchallenged. Joswiak often says that he doesn’t even like to say the word tablet.
“There’s iPads and tablets, and tablets aren’t very good. iPads are great,” Joswiak says. “We’re always pushing the boundaries with iPad Pro, and that’s what you want leaders to do. Leaders are the ones that push the boundaries; leaders are the ones that take this further than it has ever been taken before, and the XDR display is a great example of that. Who else would you expect to do that other than us? And then once you see it, and once you use it, you won’t wonder, you’ll be glad we did.”
Apple is reportedly working on a couple of new options for a renewed entry into the smart home, including a mash-up of the Apple TV with a HomePod speaker, and an integrated camera for video chat, according to Bloomberg. It’s also said to be working on a smart speaker that basically combines a HomePod with an iPad, providing something similar to Amazon’s Echo Show or Google’s Nest Hub in functionality.
The Apple TV/HomePod hybrid would still connect to a television for outputting video, and would offer similar access to all the video and gaming services that the current Apple TV does, while the speaker component would provide sound output, music playback, and Siri integration. It would also include a built-in camera for using video conferencing apps on the TV itself, the report says.
That second device would be much more like existing smart assistant display devices on the market today, with an iPad-like screen providing integrated visuals. The project could involve attaching the iPad via a “robotic arm,” according to Bloomberg, that would allow it to move to accommodate a user moving around, with the ability to keep them in frame during video chat sessions.
Bloomberg doesn’t provide any specific timelines for release of any of these potential products, and it sounds like they’re still very much in the development phase, which means Apple could easily abandon these plans depending on its evaluation of their potential. Apple just recently discontinued its original HomePod, the $300 smart speaker it debuted in 2018.
Rumors abound about a refreshed Apple TV arriving sometime this year, which should boast a faster processor and also an updated remote control. It could bring other hardware improvements, like support for a faster 120Hz refresh rate available on more modern TVs.
Apple this morning announced a handful of education-related updates to its suite of classroom apps, as well as a new recognition for teachers called the Apple Teacher Portfolio. Teachers who complete a total of nine lessons, where they learn foundational skills on iPad and Mac to become an officially recognized Apple Teacher, will be able to submit their portfolio of lesson examples to earn the Apple Teacher Portfolio recognition. They can then also share their portfolio with their colleagues or use it to showcase their work.
Teachers can work towards acquiring the badge through the Apple Teacher Learning Center, which is Apple’s self-paced learning platform for educators. This offering is designed to help teachers learn how to incorporate Apple technologies in the classroom, including by using iPad and Mac apps for creating art, videos, animations, recordings, page layouts, podcasts, data trackers, music, and more. Across the lessons, Apple provides templates as examples which teachers can customize or combine to make their own projects that use either an iPad or Mac and Apple software like Keynote, GarageBand, iMovie, and others.
In addition, Apple today rolled out updates across its Schoolwork and Classroom apps, as well as its “Everyone Can Create” curriculum, which has historically focused on taking advantage of Apple’s creative tools like iMovie, Clips and GarageBand.
In Schoolwork, teachers will gain the option to share Schoolwork projects with colleagues by exporting their assignments, which can then be imported back into Schoolwork or other platforms. Other improvements have been made to the sidebar navigation to make it quicker to access classes, assignments and student accounts.
Classroom, meanwhile, has been updated for remote learning — a feature that would have been more useful to have rolled out in 2020, amid the height of U.S. lockdowns during the pandemic. With the update, teachers will be able to invite remote students to Classroom sessions where they’ll be able to guide them to apps, view their screen (with the student’s permission) and track their engagement. The software has also been rebuilt using Mac Catalyst, making it work across iPad and Mac, including Macs powered by Apple’s M1 chips.
The “Everyone Can Create” curriculum has had a number of smaller updates. Its Drawing guide has been updated to include motion graphics and animation in Keynote, while Photos now covers the creation of animated GIFs using Keynote, and the Camera and Photos apps. The Video guide will now explore creating short films using a green screen and other special effects, and Music adds new podcasting features using GarageBand, Apple says. Today, more than 5,000 K-12 institutions worldwide are using the curriculum.
Apple updated its Schoolwork and Classroom apps last year, adding distance learning support to Schoolwork — like managing assignments over the cloud and calling students via FaceTime. But even as the pandemic forced schools towards remote learning, Google jumped ahead of edtech rivals by aggressively giving away its software and courting teachers. Its low-cost Chromebooks were being given out across school districts, doubling demand in 2020. Google Classroom, meanwhile, doubled to more than 100 million active users by April 2020, Bloomberg Quint reported. As of Feb. 2021, Google said the service was being used by over 150 million students, teachers and admins, up from just 40 million the year before.
Apple didn’t say today how many users it has for its own educational software programs, by comparison. However, by encouraging teachers to create a portfolio which they can then share, Apple is helping to push towards greater adoption of its tools by more directly involving educators in the process.
Apple Teacher Portfolio launched today and is available in the Apple Teacher Learning Center. The “Everyone Can Create” guides are a free download on Apple Books. And new versions of both Schoolwork and Classroom are available in beta now through AppleSeed for IT.
Apple has released iOS 14.4 with security fixes for three vulnerabilities, said to be under active attack by hackers.
The technology giant said in its security update pages for iOS and iPadOS 14.4 that the three bugs affecting iPhones and iPads “may have been actively exploited.” Details of the vulnerabilities are scarce, and an Apple spokesperson declined to comment beyond what’s in the advisory.
It’s not known who is actively exploiting the vulnerabilities, or who might have fallen victim. Apple did not say if the attack was targeted against a small subset of users or if it was a wider attack. Apple granted anonymity to the individual who submitted the bug, the advisory said.
Two of the bugs were found in WebKit, the browser engine that powers the Safari browser, and the Kernel, the core of the operating system. Some successful exploits use sets of vulnerabilities chained together, rather than a single flaw. It’s not uncommon for attackers to first target vulnerabilities in a device’s browsers as a way to get access to the underlying operating system.
Apple said additional details would be available soon, but did not say when.
It’s a rare admission by Apple, which prides itself on its security image, that its customers might be under active attack by hackers.
In 2019, Google security researchers found a number of malicious websites laced with code that quietly hacked into victims’ iPhones. TechCrunch revealed that the attack was part of an operation, likely by the Chinese government, to spy on Uyghur Muslims. In response, Apple disputed some of Google’s findings in an equally rare public statement, for which Apple faced more criticism for underplaying the severity of the attack.
Last month, internet watchdog Citizen Lab found dozens of journalists had their iPhones hacked with a previously unknown vulnerability to install spyware developed by Israel-based NSO Group.
In the absence of details, iPhone and iPad users should update to iOS 14.4 as soon as possible.
Survival and strategy games are often played in stages. You have the early game where you’re learning the ropes, understanding systems. Then you have mid-game where you’re executing and gathering resources. The most fun part, for me, has always been the late mid-game where you’re in full control of your powers and skills and you’ve got resources to burn — where you execute on your master plan before the endgame gets hairy.
This is where Apple is in the game of power being played by the chip industry. And it’s about to be endgame for Intel.
Apple has introduced three machines that use its new M1 system on a chip, based on over a decade’s worth of work designing its own processing units based on the ARM instruction set. These machines are capable, assured and powerful, but their greatest advancements come in the performance-per-watt category.
I personally tested the 13” M1 MacBook Pro and after extensive testing, it’s clear that this machine eclipses some of the most powerful Mac portables ever made in performance while simultaneously delivering 2x-3x the battery life at a minimum.
These results are astounding, but they’re the product of that long early game that Apple has played with the A-series processors. Beginning in earnest in 2008 with the acquisition of PA Semiconductor, Apple has been working its way towards unraveling the features and capabilities of its devices from the product roadmaps of processor manufacturers.
The M1 MacBook Pro runs smoothly, launching apps so quickly that they’re often open before your cursor leaves your dock.
Video editing and rendering are super performant, only falling behind older machines when they leverage the GPU heavily. And even then, only with powerful dedicated cards like the 5500M or Vega II.
Compiling projects like WebKit produces better build times than nearly any machine (hell, the M1 Mac Mini beats the Mac Pro by a few seconds). And it does it while using a fraction of the power.
This thing works like an iPad. That’s the best way I can describe it succinctly. One illustration I have been using to describe what this will feel like to a user of current MacBooks is that of chronic pain. If you’ve ever dealt with ongoing pain from a condition or injury, and then had it be alleviated by medication, therapy or surgery, you know how the sudden relief feels. You’ve been carrying the load so long you didn’t know how heavy it was. That’s what moving to this M1 MacBook feels like after using other Macs.
Every click is more responsive. Every interaction is immediate. It feels like an iOS device in all the best ways.
At the chip level, it also is an iOS device. Which brings us to…
iOS on M1
The iOS experience on the M1 machines is…present. That’s the kindest thing I can say about it. Apps install from the App Store and run smoothly, without incident. Benchmarks run on iOS apps show that they perform natively with no overhead. I even ran an iOS-based graphics benchmark, which ran just fine.
That, however, is where the compliments end. The current iOS app experience on an M1 machine running Big Sur is almost comical; it’s so silly. There is no default tool-tip that explains how to replicate common iOS interactions like swipe-from-edge — instead a badly formatted cheat sheet is buried in a menu. The apps launch and run in windows only. Yes, that’s right, no full-screen iOS apps at all. It’s super cool for a second to have instant native support for iOS on the Mac, but at the end of the day this is a marketing win, not a consumer experience win.
Apple gets to say that the Mac now supports millions of iOS apps, but the fact is that the experience of using those apps on the M1 is sub-par. It will get better, I have no doubt. But the app experience on the M1 is pretty firmly in this order right now: native M1 app > Rosetta 2 app > Catalyst app > iOS app. Provided that the Catalyst ports can be bothered to build in Mac-centric behaviors and interactions, of course. But iOS, though present, is clearly not where it needs to be on M1.
There is both a lot to say and not a lot to say about Rosetta 2. I’m sure we’ll get more detailed breakdowns of how Apple achieved what it has with this new translation layer that makes x86 applications run fine on the M1 architecture. But the real nut of it is that Apple has managed to make a chip so powerful that it can take the roughly 26% hit (see the following charts) in raw power to translate apps and still run them just as fast as, if not faster than, MacBooks with Intel processors.
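To make that arithmetic concrete, here is a trivial sketch. The ~26% figure is the one suggested by the charts; the baseline scores are made-up placeholders, not real benchmark numbers:

```python
# Illustrative arithmetic only: the scores are made-up placeholders, and the
# ~26% translation penalty is the figure suggested by the charts above.
TRANSLATION_PENALTY = 0.26  # fraction of raw performance lost under Rosetta 2

def translated_score(native_score):
    """Effective score of an x86 app translated by Rosetta 2 on the M1."""
    return native_score * (1 - TRANSLATION_PENALTY)

m1_native = 1000.0    # hypothetical native M1 benchmark score
intel_native = 700.0  # hypothetical Intel MacBook score

print(translated_score(m1_native) > intel_native)  # True on these numbers
```

The point the numbers make: as long as the native chip is more than ~35% faster than the Intel part, it can absorb the whole translation penalty and still come out ahead.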
It’s pretty astounding. Apple would like us to forget the original Rosetta from the PowerPC transition as much as we would all like to forget it. And I’m happy to say that this is pretty easy to do, because I was unable to detect any real performance hit when comparing it to older, even ‘more powerful on paper’ Macs like the 16” MacBook Pro.
It’s simply not a factor in most instances. And companies like Adobe and Microsoft are already hard at work bringing native M1 apps to the Mac, so the most-needed productivity and creativity apps will essentially get a free performance bump of around 30% when they go native. But even now they’re just as fast. It’s a win-win situation.
My testing methodology was pretty straightforward. I ran a battery of tests designed to push these laptops in ways that reflected real-world performance and tasks as well as synthetic benchmarks. I ran the benchmarks with the machines plugged in and then again on battery power to estimate sustained performance as well as performance per watt. All tests were run multiple times, with cooldown periods in between, to establish a solid baseline.
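The shape of that methodology can be sketched in a few lines. This is a simplified, hypothetical harness (the real tests drove builds, renders and synthetic benchmarks, not a Python callable):

```python
import statistics
import time

def bench(task, runs=3, cooldown_s=0.0):
    """Time a workload several times, with a cooldown between runs,
    and report the median wall-clock duration in seconds."""
    durations = []
    for _ in range(runs):
        start = time.perf_counter()
        task()
        durations.append(time.perf_counter() - start)
        time.sleep(cooldown_s)  # let the machine settle back to a thermal baseline
    return statistics.median(durations)

# Example: a trivial CPU-bound stand-in for a real build or render.
median_s = bench(lambda: sum(i * i for i in range(100_000)))
print(f"median: {median_s:.4f}s")
```

Taking the median over several runs (rather than the first run) is what keeps thermal state and background noise from skewing a single measurement.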
Here are the machines I used for testing:
- 2020 13” M1 MacBook Pro 8-core 16GB
- 2019 16” MacBook Pro 8-core 2.4GHz 32GB w/5500M
- 2019 13” MacBook Pro 4-core 2.8GHz 16GB
- 2019 Mac Pro 12-Core 3.3GHz 48GB w/AMD Radeon Pro Vega II 32GB
Right up top, I’m going to start with the real ‘oh shit’ chart of this piece. I checked WebKit out from GitHub and ran a build on all of the machines with no parameters. This is the one deviation from the specs mentioned above: my 13” had issues I couldn’t figure out, so I had some internet friends help me. Thanks also to Paul Haddad of Tapbots for guidance here.
As you can see, the M1 performs admirably across all models, with the MacBook Pro and Mac Mini edging out the MacBook Air. This is a pretty straightforward way to visualize the performance difference that emerges in heavy tasks lasting over 20 minutes, where the MacBook Air’s lack of active fan cooling throttles the M1 back a bit. Even with that throttling, the MacBook Air still beats everything here except the very beefy MacBook Pro.
But the big deal here is really this second chart. After a single build of WebKit, the M1 MacBook Pro had a massive 91% of its battery left. I ran multiple tests here, and I could have easily run a full build of WebKit eight or nine times on one charge of the M1 MacBook’s battery. In comparison, I could have gotten through about three on the 16”, and the 13” model only had one go in it.
This insane performance per watt is the M1’s secret weapon. The battery performance is simply off the chart, even in processor-bound tasks. To give you an idea: throughout this build of WebKit, the P-cluster (the performance cores) hit peak pretty much every cycle, while the E-cluster (the efficiency cores) maintained a steady 2GHz. These things are going at it, but they’re super power efficient.
In addition to charting battery performance in some real-world tests, I also ran a couple of dedicated battery tests. In some cases they ran so long I thought I had left the machine plugged in by mistake. It’s that good.
I ran a mixed web browsing and web video playback script that hit a series of pages, waited 30 seconds and then moved on, to simulate browsing. The results show a pretty common sight in our tests, with the M1 outperforming the other MacBooks by just over 25%.
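The structure of that script looks roughly like the sketch below. The URLs and the injected `fetch` hook are illustrative stand-ins, not my actual test script; the real run drove a browser and logged battery level alongside:

```python
import time

def browse_loop(urls, dwell_s=30.0, passes=1, fetch=None):
    """Visit each URL in order, dwell to simulate a reader, then repeat.
    `fetch` is an injected hook (e.g. a browser-automation call); with
    None this is a dry run that only records the itinerary."""
    visited = []
    for _ in range(passes):
        for url in urls:
            if fetch is not None:
                fetch(url)       # tell the browser to load the page
            visited.append(url)
            time.sleep(dwell_s)  # simulate a user reading before moving on
    return visited

# Dry run with no dwell time:
log = browse_loop(["https://example.com/a", "https://example.com/b"],
                  dwell_s=0, passes=2)
print(len(log))  # 4 visits
```

The fixed 30-second dwell is what makes the workload repeatable across machines; without it you’re measuring page-load speed, not battery drain under realistic use.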
In fullscreen 4k/60 video playback, the M1 fares even better, clocking an easy 20 hours with fixed 50% brightness. On an earlier test, I left the auto-adjust on and it crossed the 24 hour mark easily. Yeah, a full day. That’s an iOS-like milestone.
The M1 MacBook Air does very well also, but its smaller battery means less playback time, at 16 hours. Both of them absolutely decimated the earlier models.
This was another developer-centric test that was requested. Once again CPU-bound, and the M1s blew away every other system in my test group: faster than the 8-core 16” MacBook Pro, wildly faster than the 13” MacBook Pro and, yes, twice as fast as the 2019 Mac Pro with its 3.3GHz Xeons.
For a look at the power curve, and to show that there is no throttling of the MacBook Pro over this period (I never found any throttling over longer periods, by the way), here’s the usage curve.
Unified Memory and Disk Speed
Much ado has been made of Apple including only 16GB of memory on these first M1 machines. The fact of it, however, is that I have been unable to push them hard enough yet to feel any effect of this, due to Apple’s move to a unified memory architecture. Moving RAM onto the SoC means no upgradeability — you’re stuck at 16GB forever. But it also means massively faster access.
If I was a betting man I’d say that this was an intermediate step to eliminating RAM altogether. It’s possible that a future (far future, this is the play for now) version of Apple’s M-series chips could end up supplying memory to each of the various chips from a vast pool that also serves as permanent storage. For now, though, what you’ve got is a finite, but blazing fast, pool of memory shared between the CPU cores, GPU and other SoC denizens like the Secure Enclave and Neural Engine.
While running many applications simultaneously, the M1 performed extremely well. Because this new architecture is so tightly integrated, with memory a short hop away next door rather than out over a PCIe bus, swapping between applications was a non-issue. Even while beefy, data-heavy tasks ran in the background, the rest of the system stayed fluid.
Even when the memory pressure tab of Activity Monitor showed that macOS was using swap space, as it did from time to time, I noticed no slowdown in performance.
Though I wasn’t able to trip it up, I would guess that you would have to throw a single, extremely large file at this thing to get it to show any amount of struggle.
The SSD in the M1 MacBook Pro is running on a PCIe 3.0 bus, and its read and write speeds indicate that.
The M1 MacBook Pro has two Thunderbolt controllers, one for each port. This means that you’re going to get full PCIe 4.0 speeds out of each, and it seems very likely that Apple could include up to four ports in the future without much change in architecture.
This configuration also means that you can easily power an Apple Pro Display XDR and another monitor besides. I was unable to test two Apple Pro Display XDR monitors side-by-side.
Cooling and throttling
No matter how long my tests ran, I was never able to detect any throttling of the CPU on the M1 MacBook Pro. From our testing it was evident that in longer operations (20-40 minutes on up) the MacBook Air would pull back a bit over time. Not so with the MacBook Pro.
Apple says that it has designed a new ‘cooling system’ in the M1 MacBook Pro, which holds up. There is a single fan but it is noticeably quieter than either of the other fans. In fact, I was never able to get the M1 much hotter than ‘warm’ and the fan ran at speeds that were much more similar to that of a water cooled rig than the turbo engine situation in the other MacBooks.
Even a long, intense Cinebench R23 session could not make the M1 MacBook get loud. Over the course of the benchmark, the high-performance cores regularly hit 3GHz while the efficiency cores held a steady 2GHz. Despite that, it continued to run very cool and very quiet in comparison to other MacBooks. It’s the stealth bomber at the Harrier party.
In that Cinebench test you can see that it doubles the multi-core performance of last year’s 13” MacBook and even beats out the single-core performance of the 16” MacBook Pro.
I ran a couple of Final Cut Pro tests with my test suite. First was a five-minute 4k60 timeline shot on iPhone 12 Pro, using audio, transitions, titles and color grading. The M1 MacBook performed fantastically, slightly beating out the 16” MacBook Pro.
With an 8K timeline of the same duration, the 16” MacBook Pro with its Radeon 5500M was able to really shine with FCP’s GPU acceleration. The M1 held its own though, showing 3x faster speeds than the 13” MacBook Pro with its integrated graphics.
And, most impressively, the M1 MacBook Pro used extremely little power to do it: just 17% of the battery to output an 81GB 8k render. The 13” MacBook Pro could not even finish this render on one battery charge.
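That 17% figure invites some napkin math; the only input below is the battery cost measured above, the rest is arithmetic:

```python
# Napkin math: one 8K render of this project cost 17% of the battery
# (measured above), so how many complete renders fit on a full charge?
battery_pct_per_render = 17
full_renders_per_charge = 100 // battery_pct_per_render
print(full_renders_per_charge)  # 5
```

Five complete renders of an 81GB project on a single charge, on a 13” laptop, is the kind of number that simply didn’t exist in this class before.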
As you can see in these GFXBench charts, the M1 MacBook Pro isn’t a powerhouse gaming laptop, but we still got some surprising and impressive results when a rack of Metal tests was run on its GPU. The 16″ MBP still has more raw power, but rendering games at Retina resolution is still very possible here.
The M1 is the future of CPU design
All too often over the years we’ve seen Mac releases hamstrung by the capabilities of the chips and chipsets that were being offered by Intel. Even as recently as the 16” MacBook Pro, Apple was stuck a generation or more behind. The writing was basically on the wall once the iPhone became such a massive hit that Apple began producing more chips than the entire rest of the computing industry combined.
Apple has now shipped over 2 billion chips, a scale that makes Intel’s desktop business look like a luxury manufacturer’s. I think it was politic of Apple not to mention Intel by name during last week’s announcement, but it’s also clear that Intel’s days on the Mac are numbered, and that the only saving grace for the rest of the industry is that Apple is incredibly unlikely to make chips for anyone else.
Years ago I wrote an article about the iPhone’s biggest flaw being that its performance per watt limited the new experiences that it was capable of delivering. People hated that piece but I was right. Apple has spent the last decade “fixing” its battery problem by continuing to carve out massive performance gains via its A-series chips all while maintaining essentially the same (or slightly better) battery life across the iPhone lineup. No miracle battery technology has appeared, so Apple went in the opposite direction, grinding away at the chip end of the stick.
What we’re seeing today is the result of Apple flipping the switch to bring all of that power efficiency to the Mac, a device with 5x the raw battery to work with. And those results are spectacular.
The 2020 iPad Air comes at an interesting time in Apple’s release cycle. The iPad Pro is still strong from a specs perspective but is now technically a half generation or so behind in CPU. The new pro models won’t arrive for (theoretical) months.
So what you end up with is a device that shares the design philosophy of the iPad Pro and inherits some of its best features while simultaneously leaping ahead of it in raw compute power. This makes the Air one of the better overall values in any computing device from Apple in some time. In fact, it’s become obvious that this is my top choice to recommend as a casual, portable computer from Apple’s entire lineup including the MacBooks.
The clean new design has a thin, pleasantly colored simplicity to it. It matches the new iPhone 12 aesthetic quite well. The smoothly bullnosed corners and dusty blue peened finish make this one of the better looking iPads since the original. For years, Apple moved to try to “pull” the casing of the iPads around the back, making it disappear. This new design is a nice balance between the original’s frank simplicity and the new iPad Pro direction. A bit less sharp-edged and a bit more ‘friendly’ while still crisp.
One thing that I love a lot about the Air is that it lives up to its name and clocks in at the lightest weight of any of Apple’s portables at 1.0lb flat. This plus the Magic Keyboard is just such a killer portable writing machine it’s wild.
Apple didn’t fix the camera position on this, something that still stinks about the iPad Pro because you have to use Face ID to unlock it and your hand is always in the way in landscape mode. Instead, they straight up ditched the entire True Depth camera and Face ID altogether and tucked Touch ID into the power button.
The initial scanning process to set up a finger seemed ever so slightly more reluctant to grab my fingerprint here than it used to on the home button. My guess is that it’s to do with the oblong shape of the sensor or its housing. But once it was scanned and input, I’m happy to report that it works exactly as well if not better than any iPhone home button version. I set a finger on my left hand here because I only use iPads in horizontal mode. But if you aren’t a keyboard person and are doing a lot of reading, the right hand would be appropriate.
I actually found this to be a more natural feeling activation gesture than swiping up only to remember that my hand is in the way and having to move it and look at the camera. If the camera was placed along the horizontal edge of the iPad Pro or even in the corner I might feel differently. But as a compromise so that Apple doesn’t have to ship a True Depth camera in this unit, it works plenty fine.
The surface of the Touch ID button is covered by an opaque sapphire crystal cover that blends well with the casing but allows the print to be read through it.
Once you have the iPad Air unlocked, it falls right into the ‘X’ style navigation system. Swipes to open and navigate and move around. This is great because it brings near parity of navigation across Apple’s device lineup (minus the iPhone SE).
The camera is fine. Do you shoot pictures on an iPad? Really you do? Wow, interesting, ok. Maybe buy the iPad Pro which has a full LiDAR array, a Wide and an Ultra Wide lens. Great for artists, scanning, reference work etc. On the iPad Air the camera is just fine but is really a formality. It can be used in all of those ways and the quality is on par, but it’s there because it has to be there.
Those of you that travel with an iPad and an iPhone will be happy to know that you can charge an iPhone from the USB-C port on the iPad Air. And yep, it works fine with USB-C hubs and card readers too.
The iPad Air has 4GB of RAM, where the 2020 iPad Pro has 6GB. It has a Liquid Retina display, but no ProMotion 120Hz refresh. The lack of ProMotion is unfortunate but understandable; it requires another whole layer of display technology that is quite a bit more expensive. Having gotten used to it, I would say that on a larger screen like this it’s easily the best excuse for spending the extra $150-200 to bump up to the 11” Pro model. It’s just really damn nice. If you’ve never had one, you’ll obviously be a lot less likely to miss it.
But the Air also has an A14 Bionic chip, where the 2020 iPad Pro models are still on the A12Z. Because that ‘Z’ denotes an extended number of graphics cores (8-core CPU/8-core GPU), the performance gap isn’t as big as you’d think.
Though the iPad Air edges out the iPad Pro in single-core performance, the multi-core numbers are essentially at parity. This speaks to the iPad Pro being tuned to handle multiple processes in simultaneous threads for processing images and video. If you’re running Photoshop or Premiere Rush or LumaFusion on an iPad, you want the Pro. For most other uses, you’re gonna be just fine with the Air.
I do really wish that the Air started at 128GB instead of 64GB for the base $599 price. Apple has finally gotten the iPhone to a great place for minimum storage across the lineup, and I wish that the iPad Air matched that. If a ton of space is important to you, it’s important to note that you cannot get anything over 256GB in this unit, unlike the iPad Pro that is offered up to 1TB.
The two-speaker system in the iPad Air is arranged in the much better horizontal array, but it’s half the number of speakers in the iPad Pro, and it shows. It’s a bit less loud overall, but honestly the top volume is still way more than you need for typical iPad viewing distances.
Much of what I wrote about using Apple’s iPad Pro over the course of 10,000 miles of travel applies directly here. I still find it to be a great experience that, once you’ve adjusted for workflows, is just as powerful as any laptop. The additional features that have shipped in iOS 14 since that review have only made the iPad a better platform for legitimate work.
And now you get the Gen 2 Pencil and the fantastic Magic Keyboard on an iPad outside of the Pro lineup, which honestly adds a ton of utility.
Here’s my advice: buy this if you want a portable iPad to use alongside a MacBook or desktop computer for those times you don’t want to, or can’t, carry it. If you want an iPad Pro as your only computer, get the big iPad Pro, but probably wait until they update that one in a few months.
Adobe today launched the first public version of its Illustrator vector graphics app on the iPad. That’s no surprise, given that it was already available for pre-order and as a private beta, but a lot of Illustrator users were looking forward to this day.
In addition, the company also today announced that its Fresco drawing and painting app is now available on Apple’s iPhone, too. Previously, you needed either a Windows machine or an iPad to use it.
Illustrator on the iPad supports Apple Pencil — no surprise there either — and should offer a pretty intuitive user experience for existing users. Like with Photoshop, the team adapted the user interface for a smaller screen and promises a more streamlined experience.
“While on the surface it may seem simple, more capabilities reveal themselves as you work. After a while you develop a natural rhythm where the app fades into the background, freeing you to express your creativity,” the company says.
Over time, the company plans to bring more effects, brushes and AI-powered features to Illustrator in general — including on the iPad.
As for Fresco, it’ll be interesting to see what that user experience will look like on a small screen. Since it uses Adobe’s Creative Cloud libraries, you can always start sketching on an iPhone and then move to another platform to finish your work. It’s worth noting that the iPhone version will feature the same interface, brushes and capabilities you’d expect on the other platforms.
The company also today launched version 2.0 of Fresco, with new smudge brushes, support for personalized brushes from Adobe Capture and more.
Apple’s big iPhone event is finally here – virtual, which is to be expected these days. This is already the second virtual event Apple has hosted this fall, following one in September at which it revealed the Apple Watch Series 6 and a new iPad Air. This time around, we’re going to see what the iPhone 12 looks like, as well as how many colors and sizes it comes in.
There’s also supposed to be plenty of other news, including a new smaller HomePod mini, maybe an updated Apple TV, possibly a number of different headphone products and more. Will we get our first glance at the first shipping ARM-based Mac to use Apple’s in-house processors? Probably not, but maybe!
We’re going to be following along live and offering commentary below, and you can also tune in live to the video stream right here. Everything gets underway at 10 AM PT/ 1 PM ET.
Even though Apple did not invent the mouse pointer, history has cemented Apple’s role in dragging it out of obscurity and into mainstream use. Its everyday utility, pioneered at Xerox PARC and later combined with a bit of iconic* work from Susan Kare at Apple, has made the pointer our avatar in digital space for nearly 40 years.
The arrow waits on the screen. Slightly angled, with a straight edge and a 45-degree slope leading to a sharp pixel-by-pixel point. It’s an instrument of precision, of tiny click targets on a screen feet away. The original cursor was a dot, then a line pointing straight upwards. It was demonstrated in the ‘Mother of all demos’ — a presentation roughly an hour and a half long that contained not only the world’s first look at the mouse but also hyperlinking, document collaboration, video conferencing and more.
The star of the show, though, was the small line of pixels that made up the mouse cursor. It was hominem ex machina — humanity in the machine. Unlike the text entry models of before, which placed character after character in a facsimile of a typewriter, this was a tether that connected us, embryonic, to the aleph. For the first time we saw ourselves awkwardly in a screen.
We don’t know exactly why the original ‘straight up arrow’ envisioned by Doug Engelbart took on the precise angled stance we know today. There are many self-assured conjectures about the change, but few actual answers — all we know for sure is that, like a ready athlete, the arrow pointer has been there, waiting to leap towards our goal for decades. But for the past few years, thanks to touch devices, we’ve had a new, fleshier, sprinter: our finger.
The iPhone, and later the iPad, didn’t immediately re-invent the cursor. Instead, they removed it entirely, replacing your digital ghost in the machine with your physical meatspace fingertip. Touch interactions brought with them “stickiness” — the 1:1 mating of intent and action. If you touched a thing, it did something. If you dragged your finger, the content came with it. This, finally, was human-centric computing.
Then, a few weeks ago, Apple dropped a new kind of pointer — a hybrid between these two worlds of pixels and pushes. The iPad’s cursor, I think, deserves closer examination. It’s a seminal bit of remixing from one of the most closely watched idea factories on the planet.
In order to dive a bit deeper on the brand new cursor and its interaction models, I spoke to Apple SVP Craig Federighi about its development and some of the choices by the teams at Apple that made it. First, let’s talk about some of the things that make the cursor so different from what came before…and yet strangely familiar.
The iPad cursor takes on the shape of a small circle, a normalized version of the way that the screen’s touch sensors read the tip of your finger. Already, this is different. It brings that idea of placing you inside the machine to the next level, blending the physical nature of touch with the one-step-removed trackpad experience.
Its size and shape are also a nod to the nature of iPad’s user interface. It was designed from the ground up as a touch-first experience. So much so that when an app is not properly optimized for that modality it feels awkward, clumsy. The cursor as your finger’s avatar has the same impact wherever it lands.
Honestly, the thinking could have stopped there and that would have been perfectly adequate. A rough finger facsimile as pointer. But the concept is pushed further. As you approach an interactive element, the circle reaches out, smoothly touching then embracing and encapsulating the button.
The idea of variable cursor velocity is pushed further here, too. When you’re close to an object on the screen, the cursor changes its rate of travel to get you where you want to go quicker, but it does so contextually, rather than linearly the way that macOS or Windows does.
Predictive math is applied to get you where you’re going without you having to land precisely there, then a bit of inertia is applied to keep you where you need to be without overshooting. Once you’re on an icon, small movements of your finger jiggle the icon so you know you’re still there.
The cursor even disappears when you stop moving it, much as the pressure of your finger disappears when you remove it from the screen. And in some cases the cursor possesses the element itself, becoming the button and casting a light ethereal glow around it.
This stir fry of path prediction, animation, physics and fun seasoning is all cooked into a dish that does its best to replicate the feel of something we do without thinking: reaching out and touching something directly.
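Apple hasn’t published any of this, but the feel described above (snapping to nearby targets, easing in without overshoot) can be approximated with a simple model. Everything below (the function name, the constants, the one-dimensional simplification) is my own illustrative guess, not Apple’s implementation:

```python
def step_cursor(pos, raw_target, snap_targets, snap_radius=30.0, ease=0.4):
    """One update tick for a touch-style cursor (1-D for simplicity).
    If the raw pointer position is near an interactive element, the cursor
    is pulled toward the element's center instead of the exact position,
    then eased so small jitters don't cause overshoot."""
    target = raw_target
    for center in snap_targets:
        if abs(raw_target - center) <= snap_radius:
            target = center  # 'magnetic' attraction to the nearby element
            break
    return pos + (target - pos) * ease  # exponential easing toward target

# Converge on a button centered at x=100 from x=0, aiming slightly off at x=95:
pos = 0.0
for _ in range(20):
    pos = step_cursor(pos, raw_target=95.0, snap_targets=[100.0])
print(round(pos, 2))  # settles on the button's center, not the raw pointer
```

The two knobs do the work: the snap radius decides when the cursor stops being literal about your finger, and the easing factor supplies the inertia that keeps it from shooting past.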
These are, in design parlance, affordances. They take an operation that is, at its base level, much harder to do with a touchpad than with your finger, and make it feel just as easy. All you have to do to render this point in crystal is watch a kid who uses an iPad all day try to use a mouse to accomplish the same task.
The idea that a cursor could change fluidly as needed in context isn’t exactly new. The I-Beam (the cursor type that appears when you hover over editable text) is a good example of this. There were also early experiments at Xerox PARC — the birthplace of the mouse — that made use of a transforming cursor. They even tried color changes, but never quite got to the concept of on-screen elements as interactive objects, choosing instead to emulate functions of the keyboard.
But there has never been a cursor like this one. Designed to emulate your finger, but also to spread and squish and blob and rush and rest. It’s a unique addition to the landscape.
Given how highly scrutinized Apple’s every release is, the iPad cursor not being spoiled is a minor miracle. When it was released as a software update for existing iPads — and future ones — people began testing it immediately and discovering the dramatically different ways that it behaved from its pre-cursors.*
Inside Apple, the team enjoyed watching the external speculation that Apple was going to pursue a relatively standard path, displaying a pointer on screen on the iPad, and used it as motivation to deliver something richer: a solution to be paired with the Magic Keyboard. The scuttlebutt was that Apple was going to add cursor support to iPad OS, but even down to the last minute the assumption was that we would see a traditional pointer that brought the iPad as close as possible to ‘laptop’ behavior.
Since the 2018 iPad Pro debuted with the smart connector, those of us that use the iPad Pro daily have been waiting for Apple to ship a ‘real’ keyboard for the device. I went over my experiences with the Smart Keyboard Folio in my review of the new iPad Pro here, and the Magic Keyboard here, but suffice to say that the new design is incredible for heavy typists. And, of course, it brings along a world class trackpad for the ride.
When the team set out to develop the new cursor, the spec called for something that felt like a real pointer experience, but that melded philosophically with the nature of iPad.
A couple of truths to guide the process:
- The iPad is touch first.
- iPad is the most versatile computer that Apple makes.
In some ways, the work on the new iPad OS cursor began with the Apple TV’s refreshed interface back in 2015. If you’ve noticed some similarities between the way that the cursor behaves on iPad OS and the way it works on Apple TV, you’re not alone. There is the familiar ‘jumping’ from one point of interest to another, for instance, and the slight sheen of a button as you move your finger while ‘hovering’ on it.
“There was a process to figure out exactly how various elements would work together,” Federighi says. “We knew we wanted a very touch-centric cursor that was not conveying an unnecessary level of precision. We knew we had a focus experience similar to Apple TV that we could take advantage of in a delightful way. We knew that when dealing with text we wanted to provide a greater sense of feedback.”
“Part of what I love so much about what’s happened with iPadOS is the way that we’ve drawn from so many sources. The experience draws from our work on tvOS, from years of work on the Mac, and from the origins of iPhone X and early iPad, creating something new that feels really natural for iPad.”
And the Apple TV interface didn’t just ‘inspire’ the cursor — the core design team responsible works across groups, including the Apple TV, iPad OS and other products.
But to understand the process, you have to get a wider view of the options a user has when interacting with an Apple device.
Apple’s input modalities include:
- Mouse (Mac)
- Touchpad (Mac, MacBook, iPad)
- Touch (iPhone, iPad)
- AR (iPhone, iPad, still nascent)
Each of these modalities has situational advantages or disadvantages. The finger, of course, is an imprecise instrument. The team knew that they would have to telegraph the imprecise nature of a finger to the user, but also honor contexts in which precision was needed.
Apple approached the experience going in clean. The team knew that they had the raw elements to make it happen: they had to have a touch-sensitive cursor, they knew the Apple TV cursor showed promise and they knew that more interactive feedback was important when it came to text.
Where and how to apply which element was the big hurdle.
“When we were first thinking about the cursor, we needed it to reflect the natural and easy experience of using your finger when high precision isn’t necessary, like when accessing an icon on the home screen, but it also needed to scale very naturally into high precision tasks like editing text,” says Federighi.
“So we came up with a circle that elegantly transforms to accomplish the task at hand. For example, it morphs to become the focus around a button, or to hop over to another button, or it morphs into something more precise when that makes sense, like the I-beam for text selection.“
The predictive nature of the cursor is the answer that they came up with for “How do you scale a touch analogue into high precision?”
But the team needed to figure out what situations demanded precision: interacting with one element over another close by, for example. That’s where the inertia and snapping came in. The iPad, specifically, is a multipurpose computer, so it’s far more complex than any single-input device. There are multiple modalities to service with any cursor implementation on the platform, and they have to be honored without tearing down all of the learning that you’ve put millions of users through with a primary touch interface.
“We set out to design the cursor in a way that retains the touch-first experience without fundamentally changing the UI,” Federighi says. “So customers who may never use a trackpad with their iPad won’t have to learn something new, while making it great for those who may switch back and forth between touch and trackpad.”
The team knew that it needed to imbue the cursor with the same sense of fluidity that has become a pillar of the way that iOS works. So they animated it, from dot to I-beam to blob. If you slow down the animation you can see it sprout a bezier curve and flow into its new appearance. This serves the purpose of ‘delighting’ the user — it’s just fun — but it also tells a story about where the cursor is going. This keeps the user in sync with the actions of the blob, which is always a danger any time you introduce even a small amount of autonomy in a user avatar.
Once on an icon, the cursor moves the icon in a small parallax, but this icon shift is simulated — there are no layers here like on Apple TV, though they would be fun to have.
Text editing gets an upgrade as well, with the I-beam conforming to the size of the text you’re editing, to make it abundantly clear where the cursor will insert and what size of text it will produce when you begin typing.
The web presented its own challenges. Its open standards mean that many sites have their own hover elements and behaviors. The question the team had to come to grips with was how far to push conformity to the “rules” of iPadOS and the cursor. The answer was not a one-size-fits-all application of the above elements. It had to honor the integral elements of the web.
Simply put, they knew that people were not going to rewrite the web for Apple.
“Perfecting exactly where to apply these elements was an interesting journey. For instance, websites do all manner of things – sometimes they have their own hover experiences, sometimes the clickable area of an element does not match what the user would think of as a selectable area,” he says. “So we looked carefully at where to push what kind of feedback to achieve a really high level of compatibility out the gates with the web as well as with third party apps.”
Any third-party apps that have used the standard iPadOS elements get all of this work for free, of course. It just works. And the implementation for apps that use custom elements is pretty straightforward. Not flick-a-switch simple, but not a heavy lift either.
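For a rough sense of what that custom-element path looks like, here’s a minimal sketch using UIKit’s pointer-interaction API (`UIPointerInteraction`, which shipped alongside this cursor in iPadOS 13.4). The view class and the choice of a lift effect are my own illustration, not Apple’s sample code:

```swift
import UIKit

// A hypothetical custom control that opts into cursor feedback.
class CustomButtonView: UIView, UIPointerInteractionDelegate {

    override init(frame: CGRect) {
        super.init(frame: frame)
        // Attaching a pointer interaction is all it takes to participate.
        addInteraction(UIPointerInteraction(delegate: self))
    }

    required init?(coder: NSCoder) {
        super.init(coder: coder)
        addInteraction(UIPointerInteraction(delegate: self))
    }

    // Tell the system how the cursor should morph over this view.
    func pointerInteraction(_ interaction: UIPointerInteraction,
                            styleFor region: UIPointerRegion) -> UIPointerStyle? {
        // A "lift" effect raises the view toward the cursor, the same
        // treatment standard buttons get for free.
        let preview = UITargetedPreview(view: self)
        return UIPointerStyle(effect: .lift(preview))
    }
}
```

Standard controls never need this; the delegate only comes into play when a view draws its own chrome and the system can’t infer a sensible shape on its own.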
The response to the cursor support has been hugely positive so far, and that enthusiasm creates momentum. If there’s a major suite of productivity tools that has a solid user base on iPad Pro, you can bet it will get an update. Microsoft, for instance, is working on iPad cursor support that’s expected to ship in Office for iPad this fall.
System gestures also feel fresh and responsive even on the distanced touchpad. In some ways, the flicking and swiping actually feel more effective and useful on the horizontal than they do on the screen itself. I can tell you from personal experience that context switching back and forth from the screen to the keyboard to switch between workspaces introduces a lot of cognitive wear and tear. Even the act of continuously holding your arm up and out to swipe back and forth between workspaces a foot off the table introduces a longer term fatigue issue.
When the gestures are on the trackpad, they’re more immediate, smaller in overall physical space and less tiring to execute.
“Many iPad gestures on the trackpad are analogous to those on the Mac, so you don’t have to think about them or relearn anything. However, they respond in a different, more immediate way on iPad, making everything feel connected and effortless,” says Federighi.
Remember that the first iPad multitasking gestures felt like a weird offshoot: an experiment that appeared useless at worst and an interesting curiosity at best. Now, on the home-button-free iPad Pro, the work done by the team that built the iPhone X shines brightly. It’s pretty remarkable that they built a system so usable that it even works on a trackpad — one element removed from immediate touch.
Federighi says that they thought about rethinking three-finger gestures altogether, but discovered that they work just fine as is. For anything that runs off the edge of the trackpad, you hit a limit, push beyond it again to confirm, and you get the same result.
There are still gaps in the iPad’s cursor paradigms. There is no support for cursor lock on iPad, making it a non-starter for relative mouse movement in 3D apps like first-person games. There’s more to come, no doubt, but Apple had no comment when I asked about it.
The new iPad cursor is a product of what came before, but it’s blending, rather than layering, that makes it successful in practice. The blending of the product team’s learnings across Apple TV, Mac and iPad. The blending of touch, mouse and touchpad modalities. And, of course, the blending of a desire to make something new and creative with the constraint that it also had to feel familiar and useful right out of the box. It’s a specialty that Apple, when it is at its best, continues to hold central to its development philosophy.
Shelter-in-place orders have transformed the tablet computer from a superfluous device into a must-have.
Over the past two years, I’ve typed nearly every word I’ve written while traveling on the iPad Pro’s Smart Keyboard Folio. For more on why, you can see my iPad Pro review here.
For the purposes of this look at the new Magic Keyboard, though, you should probably just know two things:
- It was reliable, incredibly durable and never once failed me.
- It kind of stunk in every other way.
The Keyboard Folio’s plastic-coated surface made it impervious to spills, but it also made the keys much less responsive. It rendered them unable to give your fingers the feedback necessary to confirm that a key had been struck, leading me to adopt a technique where I just hit every key with maximum strength at all times.
The new Magic Keyboard is as different from that device as the new MacBook Pro keyboards are from the low-profile ones that dominated headlines over the last couple of years. It’s a huge jump forward in usability for the iPad Pro — and for last year’s model too.
I am very relieved I don’t have to slam my fingers onto the plastic keyboard anymore, because over long and fast typing sessions I could feel a numbness that would begin to radiate from the tips of my fingers a bit. An enervation of sorts. It wasn’t precisely painful but it was noticeable.
The Magic Keyboard offers a lovely, backlit deck that holds its own against the 16” MacBook Pro and the new MacBook Air for the title of best portable keyboard. The key travel is excellent — in between the two laptops, in my opinion — and the feel is tight, responsive and precise. This is a first-class typing experience, full stop.
I’ve been testing the three keyboards alongside one another for the past few days and I can’t stress enough how stable the keys are. Even the MacBook Air allows a tiny bit of key shift if you touch your finger to it and gently circle it — though the MacBook Pro is better. There’s such a small amount of that here that it’s almost imperceptible.
It’s a tad spongier than the 16” MBP but firmer than the MacBook Air, which has a bit more return and travel. In my opinion, this keyboard is ‘louder’ than the 16” MBP (due to the plastic casing being more resonant than the aluminum), but about the same as the MacBook Air. The throw feels similar to the 16”, though, with the Air being slightly deeper but ‘sloppier’.
So a hybrid between those two keyboards as far as feel goes, but a clear descendant of the work that was done to turn those offerings around.
Among my biggest concerns was that Apple would get overly clever with the hinge design, making the typing an exercise in wobble. Happy to say that they took the clear path here and made it as sturdy as possible, even if that came at the cost of variability.
The hinge is a simple limit-stop design that opens far less than you’d expect and then allows a second hinge to engage, opening in an arc between 80 and 130 degrees. The 90-degree and fully open positions basically mimic the angles that were offered by the grooves of the Keyboard Folio — but now you can choose any in-between position that feels natural to you.
Apple has obviously put this hard-stop fold-out limit in place to maintain balance on tables and laps, and its clever use of opposing forces in the second hinge limits tipping and finally makes typing on a lap a completely viable thing to do. The fact that you don’t have to hammer the keyboard to type also makes this a better proposition.
For typing, these positions should be just fine for the vast majority of users. And the solid (very high friction) hinge means that the whole thing is very sturdy feeling, even with more moving parts. I have been quite comfortable grabbing the whole assembly of the 12.9” iPad Pro plus Magic Keyboard by the deck of the keyboard and carrying it around, much in the way I’d carry a laptop. No worries about accidental floppiness or detachment.
At the same time, the new design that floats the iPad in the air allows you to quickly pop it off with little effort with either your left or right hand. This lets the Magic Keyboard take on the use case of a desktop dock, something that never felt right with the Keyboard Folio.
The touchpad physically moves here, and is not a haptic pad, but it is clickable across its entire surface. It’s also a laptop-class trackpad, proving that Apple’s engineering teams still have a better idea than any other hardware team out there about how to make a trackpad that works crisply and as expected.
I do love the soft touch coating of the case itself, but I believe it will wear in a similar fashion to these kinds of surfaces on other devices. It will likely develop shiny spots on either side of the trackpad on the hand rest areas.
The responsive half-height arrow keys are extremely welcome.
Some other details, quirks and upper limits
The camera placement situation is much improved here, as you’re less likely to hold the left side of the iPad to keep it stable. The lift of the keyboard (at times about an inch and a bit) means that the eye line, while still not ideal, is improved for Zoom calls and the like. Less double-chin-up-the-nose action. Apple should still move the iPad Pro’s camera on future versions.
The keyboard’s backlight brightness is decent and adjustable in the settings pane once it’s attached to iPad Pro. The unit did use more battery in my tests, though I haven’t had it long enough to assign any numbers to it. I did notice during a recent FaceTime call that the battery was draining faster than it could charge, but that is so far anecdotal and I haven’t had the time to reproduce it in testing.
This is not the case that artists have been waiting for. It does not rotate around backwards like the Keyboard Folio, meaning that you’re going to be popping the iPad off if you’re going to draw on it at all. In some ways the ease of removal feels like an Apple concession. ‘Hey, we couldn’t fit all this in and a way to position it at a drawing angle, so we made it really easy to get it loose.’ It works, but I hope that more magic happens between now and the next iteration to find a way to serve both typing and drawing in one protected configuration.
A little quirk: when it’s tilted super far back to the full stop I sometimes nick the bottom edge of the iPad with my fingers when hitting numbers — could be my typing form or bigger hands but I thought it worth mentioning.
It’s a bit heavy. At 700g, the 12.9” keyboard more than doubles the weight of the whole package. The larger iPad Pro plus keyboard is basically the weight of a MacBook Air. Get the 11” if weight is a concern; this keyboard makes the 12.9″ package feel very chunky.
The fact that this keyboard works on the older iPad Pro (the camera just floats inside the cutout) means that this is a fantastic upgrade for existing users. It really makes the device feel like it got a huge upgrade without having to buy a new core unit, which fits with Apple’s modular approach to iPad Pro and also stands out as pretty rare in a world where the coolest new features are often hardware related and new device limited.
At $300 and $350 for each size of Magic Keyboard, the price is something you must think about up front. Given that it is now easily the best keyboard available for these devices I think you need to consider it a part of the package price of the device. If you can’t swing that, consider another option — it’s that good.