Apple’s RealityKit 2 allows developers to create 3D models for AR using iPhone photos

At its Worldwide Developer Conference, Apple announced a significant update to RealityKit, its suite of technologies that allow developers to get started building AR (augmented reality) experiences. With the launch of RealityKit 2, Apple says developers will have more visual, audio, and animation control when working on their AR experiences. But the most notable part of the update is how Apple’s new Object Capture technology will allow developers to create 3D models in minutes using only an iPhone.

Apple noted during its developer address that one of the most difficult parts of making great AR apps has been the process of creating 3D models, which could take hours and cost thousands of dollars.

With Apple’s new tools, developers will be able to take a series of pictures using just an iPhone (or an iPad or DSLR, if they prefer) to capture 2D images of an object from all angles, including the bottom.

Then, using the Object Capture API on macOS Monterey, it only takes a few lines of code to generate the 3D model, Apple explained.

Image Credits: Apple

To begin, developers would start a new photogrammetry session in RealityKit that points to the folder where they’ve captured the images. Then, they would call the process function to generate the 3D model at the desired level of detail. Object Capture allows developers to generate the USDZ files optimized for AR Quick Look — the system that lets developers add virtual, 3D objects in apps or websites on iPhone and iPad. The 3D models can also be added to AR scenes in Reality Composer in Xcode.
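Apple didn’t publish the snippet itself, but based on the workflow described above, a minimal Object Capture session in Swift might look roughly like the sketch below. The file paths and the `.reduced` detail level are illustrative assumptions, and the code requires macOS Monterey with RealityKit’s Object Capture support:

```swift
import RealityKit
import Foundation

// Folder containing the 2D photos captured around the object (hypothetical path).
let inputFolder = URL(fileURLWithPath: "/Users/me/Captures/Chair", isDirectory: true)

// Start a photogrammetry session pointing at the image folder.
let session = try PhotogrammetrySession(input: inputFolder)

// Request a USDZ model at the desired level of detail
// (.preview, .reduced, .medium, .full, or .raw).
let request = PhotogrammetrySession.Request.modelFile(
    url: URL(fileURLWithPath: "/Users/me/Captures/Chair.usdz"),
    detail: .reduced
)

// Kick off processing; progress and the finished model arrive
// asynchronously via session.outputs.
try session.process(requests: [request])
```

The resulting USDZ file can then be dropped into AR Quick Look or a Reality Composer scene as described above.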

Apple said developers including Wayfair and Etsy are using Object Capture to create 3D models of real-world objects — an indication that online shopping is about to get a big AR upgrade.

Wayfair, for example, is using Object Capture to develop tools for its manufacturers so they can create virtual representations of their merchandise. This will let Wayfair customers preview more products in AR than they can today.

Image Credits: Apple (screenshot of Wayfair tool)

In addition, Apple noted developers including Maxon and Unity are using Object Capture for creating 3D content within 3D content creation apps, such as Cinema 4D and Unity MARS.

Other updates in RealityKit 2 include custom shaders that give developers more control over the rendering pipeline to fine tune the look and feel of AR objects; dynamic loading for assets; the ability to build your own Entity Component System to organize the assets in your AR scene; and the ability to create player-controlled characters so users can jump, scale and explore AR worlds in RealityKit-based games.

One developer, Mikko Haapoja of Shopify, has been trying out the new technology (see below) and shared some real-world tests on Twitter, in which he shot objects using an iPhone 12 Max.

Developers who want to test it for themselves can leverage Apple’s sample app and install Monterey on their Mac to try it out.

Read more about Apple’s WWDC 2021 on TechCrunch.

#animation, #apple, #apple-inc, #apps, #ar, #augmented-reality, #computing, #ios, #ipad, #iphone, #macos, #mobile, #online-shopping, #realitykit, #unity, #wayfair, #wwdc-2021

Twitch introduces Animated Emotes for their 10th anniversary

Twitch announced today that it will release major updates to its Emotes this month to celebrate its 10th anniversary. The new features include Animated Emotes, Follower Emotes, and a Library for Emotes.

Since the early days of the live streaming platform for gamers, Emotes – Twitch’s version of emojis – have been a key component of Twitch culture. They’re micro memes, and images like Kappa, TriHard, and PogChamp have come to carry meaning in the greater gaming world, even off the Twitch platform.

“Emotes are a language that transcends countries,” said Ivan Santana, Senior Director of Community Product at Twitch. “Anywhere you are in the world, they mean the same thing for us.”

The Amazon-owned platform regularly adds new global Emotes, which can be used on any streamer’s channel. Individual creators can make custom Emotes for their own community, which paying subscribers can use across the platform. But the ability to add animated GIFs as Emotes is something the community has been asking for for as long as Santana can remember.

“I’ve been at Twitch for four years, and it’s something people have been asking for since before I joined,” Santana told TechCrunch. “It’s certainly been a very, very long time.” 

Streamers who lack animation skills need not worry. While the more tech-savvy among us can upload custom GIFs, Twitch will provide six templates for streamers to choose from, which can animate their existing Emotes. These animations include Shake, Rave, Roll, Spin, Slide In, and Slide Out. Viewers who are sensitive to animations will be able to turn off the feature in their Chat Settings.

Image Credits: Twitch

Twitch is also beta testing Follower Emotes, which will be available to select Partners and Affiliates. This feature creates a fun, free incentive for viewers to hit the follow button on a channel they might be checking out for the first time. When viewers follow a channel, they’ll be notified when the creator is streaming, which can lead to an eventual subscription. Twitch takes 50% of streamers’ subscription money, creating a valuable revenue stream for the company.

In Q1 of 2021, Twitch viewership hit an all-time high, growing 16.5% since the previous quarter. Twitch viewers watched 6.34 billion hours of content in Q1, making up 72.3% of the market share. That’s double the total hours watched on Twitch in Q1 of 2020. Facebook Gaming and YouTube Gaming earned 12.1% and 15.6% of viewership in the sector respectively. 

“For a long time, creators have been asking for better ways to attract and welcome new viewers into their channel,” said Santana. “The idea is generally to create a lot of excitement around that community, and more feelings ultimately of community.”

Creators with beta access will be able to upload up to five Emotes for their followers, but unlike Subscriber Emotes, followers won’t be able to use these across other channels. There’s no guarantee that Follower Emotes will be here to stay – Santana says it’s a feature Twitch is “experimenting” with – but if all goes well, the feature will roll out more widely later in the year.

Finally, the Library function will make it easier for creators to swap Emotes in and out of subscription tiers without having to delete and reupload them each time. This builds upon an upgrade that launched in January, which centralized channel-specific icons into an Emotes tab on the Creator Dashboard. As usual, new Emotes have to be approved by Twitch before they’re put into use. The Library will roll out soon to all Partners and Affiliates, staggered over a few months to account for an expected increase in volume of new Emotes.

“As Twitch has scaled, we now have millions of communities across many different cultures across the world,” Santana said. “We can hand over more of the controls of our Emote language to our community, and let them sort of evolve in a way that we never could imagine that ultimately serves them in their unique ways.”

Twitch teased that there’s more in the works to celebrate the platform’s 10th anniversary, including an official 10 Year celebration. 

#animation, #apps, #computing, #digital-media, #emotes, #entertainment, #gamers, #gaming, #livestreaming, #streaming, #twitch, #video-gaming, #video-hosting

Reface now lets users face-swap into pics and GIFs they upload

Buzzy face-swapping video app Reface is expanding its reality-shifting potential beyond selfies by letting users upload more of their own content for its AI to bring to life.

Users of its iOS and Android apps still can’t upload their own user generated video, but the latest feature — which it calls Swap Animation — lets them upload images of humanoid subjects (monuments, memes, fine art portraits, or — indeed — photos of other people) that they want animated, choosing from a selection of in-app song snippets and poems for the AI-animated version to appear to speak or sing.

Reface’s freemium app has, thus far, taken a tightly curated approach to the content users can animate, only letting you face swap a selfie into a pre-set selection of movie and music video snippets (plus memes, GIFs, red carpet celeb shots, salon hair-dos and more).

But the new feature — which similarly relies on GAN (generative adversarial network) algorithms to work its reality-bending effects — expands the expressive potential of the app by letting users supply their own source material to face swap/animate.

Some rival apps do already offer this kind of functionality — so there’s an element of Reface catching up to apps like Avatarify, Wombo and Deep Nostalgia.

But it’s also going further as users can also swap their own face into their chosen source content. So you could, for example, get to see yourself as a singing Venus de Milo, or watch your visage recite a poem from the middle of a pop-art painting like Andy Warhol’s Marilyn.

The Andreessen Horowitz-backed startup is still being cautious as it expands what users can do with its high tech face-shifting tool — saying it will be manually moderating all uploads from the new feature.

Rivals in the deepfake space are arguably pushing its hand to open up functionality faster, though, with apps like Avatarify already letting users animate their own snaps. And — notably — a Reface spokeswoman told us it’s planning to make user generated video uploads available “in the near future”.

Pro users are getting a little taster — as they can upload their own GIFs to face swap into with this latest feature release too.

“We’re really excited to see what Reface users do with swap-reenactment, which is a major technical milestone in terms of the machine learning technology inside of the app,” said CEO Dima Shvets in a statement. “Reface content creators have been clamoring for more tools for personalized content and self-expression – and this feature delivers, dramatically extending the opportunities for realizing their vision and creativity.” 

The still young app has proved popular over its short run, garnering viral buzz via social media shares as users were keen to show off their funny face swaps.

As of March 2021, Reface said it had 100 million installs, some 14 months after going live.

#andreessen-horowitz, #animation, #apps, #artificial-intelligence, #deep-learning, #deepfake, #machine-learning-technology, #mobile-applications, #mobile-software, #reface, #social, #social-media, #synthesized-media, #tc

The gaming industry needs more than just coders

While the COVID-19 pandemic has had a devastating impact on countless businesses across the globe, the $118 billion gaming industry not only survived, it thrived, with 55% of American consumers turning to gaming for entertainment, stress relief, relaxation and a connection to the outside world amid lockdowns.

This drove a 20% boost in gaming sales globally and created nearly 20,000 jobs in 2020 alone. And it’s not expected to stop any time soon: According to research company IBISWorld, the industry is set to grow again in 2021, adding to the year-over-year growth the industry has seen in the preceding half-decade.

This is great news for the growing gaming industry and especially those looking to score a job at a company developing the next blockbuster. The unemployment rate is already near zero for those with gaming development and design skills, which means there is an unprecedented opportunity to join the field.

Gamesmith, a digital community dedicated to the gaming industry, currently has more than 5,750 open jobs posted on its site, with roles in design, engineering and animation leading the way. In other words, if you never considered a career in the gaming industry — or thought that your skill set wouldn’t translate — this expanding job market needs employees from all types of backgrounds. Chances are one of your interests — besides gaming, of course — can act as a conduit into finding a career.

One career path for those with art skills — particularly with a talent for digital art tools like Autodesk Maya and Adobe Photoshop — is animation. An animator can do any number of jobs at a gaming company or studio, from building immersive landscapes and cities to modeling what a certain character will look like to designing user interfaces and navigational components. There is significant growth here, too: According to a recent study, most sectors of the animation industry are growing 2% to 3% year over year. According to Gamesmith, the average 30-year-old female artist or animator makes a salary of just under $90,000 a year.

If you’re a whiz with words and witty dialogue, then you might consider applying for a job as a writer at a studio. Writers are responsible for writing everything from the profanity-laced shouts heard in the background in the Grand Theft Auto series to the long speeches in epic adventures like The Legend of Zelda: Breath of the Wild. Employers are looking for writers with a flair for crafting stories and understanding characters, so even if you’re not a career writer, highlight the work you’ve done that fits in the mold of what game publishers are looking for.

One of the most important segments of the industry — and one of the fastest-growing — is developing and designing the gaming experiences themselves. The job market for these roles is predicted to grow by 9.3% between 2016 and 2026, according to the New England Institute of Technology, and the breadth of jobs in this wing of the industry range from level designers to lead designer and developers.

Gamesmith calculated that these jobs account for 16% of the available openings, but make up only 5% of all applications. While you will need at least a computer science degree and an understanding of the fundamentals of programming to land one of these jobs, the payoff for the time spent hitting the books is worth it. Gamesmith estimates that the average 28-year-old male engineer earns a salary north of $100,000.

Even if you don’t have any of these skills to help design a game from the ground up, there are still plenty of ways to break into the industry. No matter how good a game is, it will only be a success if people know about it, so the marketing and promotions teams at studios play a crucial role in making sure that consumers purchase the latest release and that it gets written about. If you have great communication skills and can work through people’s problems, companies always need customer service representatives.

Across all sectors of the gaming world, companies are looking to diversify their workforce and move away from an image as a job sector solely populated by white men in their early to mid-30s. Gamesmith research found that currently 74% of the industry’s workforce is male and 64% is white.

But that is changing. In 2020, only 24% of studios invested moderate resources in diversity initiatives, but of those that did, 96% reported at least moderately successful results and improvements to company culture. Progress may seem slow, but there does seem to be a recognition that the gaming industry needs a more diverse workforce, not just to bring more equity to its offices, but to make better games in the future and make the industry look more like the people who play them.

“Diversity isn’t a nicety; it’s a necessity if the industry is going to grow, thrive and truly reflect the tens of millions of people who play games every day in this country,” said Jo Twist, the CEO of the U.K.-based gaming trade association Ukie. “A diverse industry that draws on myriad cultures, lifestyles and experiences will lead to more creative and inclusive games that capture the imagination of players and drive our sector forward.”

Between the push to diversify the industry and a slew of new opportunities in the field, the key takeaway is that there are a wide range of possible careers in the industry and, even if you don’t think they do, your skills probably translate into one of the many roles that a gaming company needs to fill. Avid gamers know that you’re not going to beat a game the first time you turn on your console. So hone your skills, build up your experience and continue your quest to land a job in the industry of your dreams.

#animation, #artist, #column, #digital-media, #diversity, #diversity-and-inclusion, #engineer, #entertainment, #gamer, #opinion, #tc, #video-game, #video-gaming, #writer

With new owner Naver, Wattpad looks to supercharge its user-generated IP factory

Toronto-based Wattpad is officially part of South Korean internet giant Naver as of today, with the official close of the $600 million cash and stock acquisition deal. Under the terms of the acquisition, Wattpad will continue to be headquartered in, and operate from, Canada, with co-founder Allen Lau remaining CEO of the social storytelling company and reporting to Jun Koo Kim, CEO of Naver’s Webtoon.

I spoke to Lau about what will change, and what won’t, now that Wattpad is part of Naver and Webtoon. As mentioned, Wattpad will remain headquartered in Toronto — and in fact, the company will be growing its headcount in Canada under its new owners with significant new hiring.

“For Wattpad itself, last year was one of our fastest-growing years in terms of both revenue and company size,” Lau said. “This year will be even faster; we’re planning to hire over 100 people, primarily in Toronto and Halifax. So in terms of the number of jobs, and the number of opportunities, this puts us on another level.”

While the company is remaining in Canada, expanding its local talent pool, and maintaining its focus on delivering socially collaborative fiction, Lau says the union with Naver and Webtoon is about more than just increasing the rate at which it can grow. The two companies share unique “synergies,” he says, that can help each better capitalize on their respective opportunities.

“Naver is one of the world’s largest internet companies,” Lau told me. “But the number one reason that this merger is happening is because of Webtoon. Webtoon is the largest digital publisher in the world, and they have over 76 million monthly users. Combined with our 90 million, that adds up to 166 million total monthly users — the reach is enormous. We are now by far the leader in this space, in the storytelling space, in both comics and fiction: By far the largest one in the world.”

The other way in which the two companies complement each other is around IP. Wattpad has demonstrated its ability to take its user-generated fiction, and turn that into successful IP upon which original series and movies are based. The company has both a Books and a Studios publishing division, and has generated hits like Netflix’s The Kissing Booth out of the work of the authors on its platform. Increasingly, competing streaming services are looking around for new properties that will resonate with younger audiences, in order to win and maintain subscriptions.

“Wattpad is the IP factory for user generated content,” Lau said. “And Webtoons also have a lot of amazing IP that are proven to build audience, along with all the data and analytics and insight around those. So the combined library of the top IPs that are blockbusters literally double overnight [with the merger]. And not just the size, but the capability. Because before the acquisition, we had our online fiction, we have both publishing business, and we have TV shows and movies, as well; but with the combination, now we also have comics, we also have animation and potentially other capabilities, as well.”

The key to Wattpad’s success with developing IP in partnership with the creators on its platform isn’t just that it’s user-generated and crowd-friendly; Wattpad also has unique insight into the data behind what’s working about successful IP with its fans and readers. The company’s analytics platform can then provide collaborators in TV and movies with unparalleled, data-backed perspective into what should strike a chord with fans when translated into a new medium, and what might not be so important to include in the adaptation. This is what provides Wattpad with a unique edge when going head-to-head with legacy franchises, including those from Disney and other megawatt brands.

“Not only do we have the fan bases — it’s data driven,” Lau said. “When we adapt from the fiction on our platform to a movie, we can tell the screenwriter, ‘Keep chapter one, chapter five and chapter seven, but in seven only the first two paragraphs,’ because that’s what the 200,000 comments are telling us. That’s the insight our machine learning Story DNA technology can give you: where are they excited? This is something unprecedented.”

With Naver and Webtoon, Wattpad gains the ability to leverage its insight-gathering IP generation in a truly cross-media context, spanning basically every means a fan might choose to engage with a property. For would-be Disney competitors, that’s likely to be an in-demand value proposition.

#animation, #canada, #disney, #internet, #machine-learning, #mass-media, #naver, #netflix, #publishing, #streaming-services, #tc, #toronto, #wattpad, #webtoon

Google launches the next developer preview of Android 12

Right on schedule, Google today launched the third developer preview of Android 12, the latest version of its mobile operating system. According to Google’s roadmap, this will be the last developer preview before Android 12 goes into beta, which is typically also when you’ll likely see the first over-the-air updates for non-developers who want to try it out. For now, developers still have to flash a device image to their supported Pixel devices.

Google notes that with the beta phase coming up, now is the time for developers to start compatibility testing to make sure their apps are ready. Currently, the plan is for Android 12 to reach platform stability by August 2021. At that point, all the app-facing features will be locked in and finalized.

So what’s new in this preview? As usual, there are dozens of smaller new features, tweaks and changes, but the highlights this time around are the ability for developers to provide new haptic feedback experiences in their apps and new app launch animations.

This new app launch experience may be the most noticeable change here for both developers and users. The new animation will take the app from launch to a splash screen that shows the app’s icon and then to the app itself. “The new experience brings standard design elements to every app launch, but we’ve also made it customizable so apps can maintain their unique branding,” Google explains. Developers will get quite a bit of leeway in how they want to customize this splash screen with their own branding. The most basic launch experience is enabled by default, though.

Rich haptic feedback is also new in this release. It’s hard not to look at this and think of Apple’s now mostly abandoned Force Touch, but this is a bit different. The idea here is to provide “immersive and delightful effects for gaming, and attentional haptics for productivity.”

Other new features in this release include a new call notification template that is meant to make it easier for users to manage incoming and ongoing calls. Google says these new notifications will be more visible and scannable. There are also improvements to the Neural Networks API for machine learning workloads and new APIs to support a wider range of ultra high-resolution camera sensors.

With Android 12, Google is also deprecating its RenderScript API for running computationally intensive tasks in favor of GPU compute frameworks like Vulkan and OpenGL.

You can find a full breakdown of all of the changes in this release here.

#android, #animation, #api, #flash, #google, #google-android, #mobile-operating-system, #mobile-os, #operating-system, #operating-systems, #tc

As competition heats up, TikTok announces six new interactive music effects for creators

TikTok today is doubling down on its roots as a music-backed creation app with the launch of a half dozen new music effects for creators. The effects, which offer interactivity, visualizations, animations and more, will roll out over the next few weeks, starting with the launch of Music Visualizer. This effect is now available to TikTok’s global user base and runs real-time beat tracking to animate a retro greenscreen landscape, the company says.

The effect was added to TikTok’s Creative Effects tray yesterday and there are already over 28,000 videos created using the new feature. The effect results in videos featuring a purple sky with multiple moons (or planets?) in the background, where the grid on the ground pulses up and down with the music.

Music Visualizer works with any sounds in TikTok’s music library and has also been adopted by the electronic dance music duo AREA21, who used Music Visualizer to tease their new track “La La La.” Unfortunately, their use of the effect hides the animation behind one of their own. But several other creators showcase the effect better.

Other effects on the way include:

  • Music Machine, which offers an interactive set of tools that will allow users to control the real-time rendering of MIDI loops for different music layers. There will also be a BPM slider for real-time adjustments; five one-shot sound effects; and dynamic visual responses for the video of your recorded music.
  • Delayed Beats, which recreates the freeze-frame effect that’s already popular on TikTok while aligning transitions to the beat of the music.
  • Text Beats, which allows creators to add animated text overlays to their video that transition in sync with the beat of any sound from TikTok’s library.
  • Solid Beats, which adds visual effects that sync to the beat of any song.
  • Mirror Beats, which aligns display transitions with the beat of any song from TikTok’s library.

The launch of the new features follows the arrival of several new TikTok competitors from major social networks, including Instagram (Reels), Snapchat (Spotlight), and YouTube (Shorts). The additions help demonstrate how far behind these rivals are in terms of the product experience. While the newcomers to the short-form video space may have launched their own sets of basic creation tools, they lack the larger libraries of creative effects that make TikTok fun to use, as well as appealing to those who are more specifically interested in its music features.

All the new effects will be rolled out to the dedicated “Music” tab within TikTok’s Creative Effects tray, as they become globally available.

#animation, #apps, #bpm, #bytedance, #social, #social-networks, #special-effects, #tc, #tiktok, #visual-effects

HBO Max signs “adult” cartoon series based on Scooby-Doo’s Velma

This artist’s approximation imagines Velma from the Scooby-Doo series getting chummy with the Clone High cast. We doubt that such a crossover will happen, but animation nerds can dream, right? (credit: WarnerMedia / Aurich Lawson)

One of HBO Max’s biggest differentiators in the video-streaming scramble has been its animation lineup, which includes a glut of established “mature” cartoons from brands like Adult Swim and DC Universe. A Wednesday announcement sees WarnerMedia moving aggressively on that front with a whopping seven new series orders on top of existing series in development.

Today, the company’s Hanna-Barbera family announced one of its biggest nudge-nudge, wink-wink series ideas since off-kilter fare like Space Ghost: a series, simply named Velma, about the “origins” of Scooby-Doo mainstay Velma Dinkley.

Suggestive Scooby stuff, from Gunn to Max

WarnerMedia’s press release says the series’ first ordered season will offer “an original and humorous spin that unmasks the complex and colorful past of one of America’s most beloved mystery solvers,” then confirms Mindy Kaling (The Office) as both Velma’s voice and an executive producer of the show. The announcement doesn’t include information on other cast members or writers/directors, simply doubling down on a suggestive description: “an adult animated comedy series.”

#animation, #gaming-culture, #hbo-max, #paramount-plus

Netflix acquires the rights to all 22 Redwall books, plans film and series

The book cover for the first book in the series, Redwall. (credit: Penguin Random House)

Netflix has acquired the rights to all 22 books in Brian Jacques’ fantasy series Redwall, marking the first time that rights to the entire series have been purchased by one film or television company. Netflix made a deal for the rights with book publisher Penguin Random House Children, according to Deadline.

This is a major franchise move even for Netflix, as the books are considered classics by many and have sold more than 30 million copies. The series follows the fantasy adventures of noble and heroic talking animals. Every book in the series was written by author Brian Jacques, who passed away in 2011 shortly before the publication of the 22nd book.

The streaming network plans to create both a feature film and an event TV series. The film will be based on the series’ first book, which is simply titled Redwall. The screenplay will be written by Patrick McHale, who is best known as the creator of Cartoon Network’s critically acclaimed animated miniseries Over the Garden Wall.

#animation, #brian-jacques, #gaming-culture, #netflix, #penguin-random-house, #redwall, #streaming, #tv

SpaceX’s Starship prototype once again flies to great heights, and again explodes on landing

SpaceX has once again flown its Starship spacecraft, a still-in-development space launch vehicle it’s building in South Texas. This test was a flight of SN9, the ninth in its current series of prototype rockets. The test involved flying SN9 to an altitude of around 10 km (just over 6 miles, or nearly 33,000 feet). After reaching that apogee, the SN9 spacecraft altered its attitude to angle for re-entry (simulated, since it didn’t actually leave Earth’s atmosphere) and then descended for a controlled landing.

This is the second test along these lines, with the first happening in December using its SN8 prototype, the one before this in the current series. Today’s test saw SN9 reach its target altitude as intended, and saw a successful ‘belly flop’ maneuver, as well as the required propellant hand-off. This was also a successful test of the flaps on Starship, which control its angle as it moves through the air, and which alter their angle via on-board motors to do so. The landing portion didn’t go as smoothly – the spacecraft attempted to re-orient itself to vertical for landing but didn’t make it quite straight up-and-down, and it also carried too much speed into the touchdown, so it exploded rather spectacularly when it hit the ground.

Image Credits: SpaceX

SpaceX had a very similar test the first time around, with things going mostly smoothly up until the landing portion of the mission. During SN8’s flight, the Starship prototype appeared to be better-oriented for landing before touching down too hard, but it’s difficult to say too much about which was more or less successful without access to the data and the testing parameters.

Starship is designed to perform this crucial maneuver as part of its approach to reusability – the spacecraft is intended to be fully reusable, and will accomplish this with a powered landing that, obviously, does not include the exploding part. As the company noted, however, the rest of this test looked pretty much like what it wanted to happen.

This kind of early testing isn’t expected to go exactly to plan, and the point is primarily to collect data that will help improve further attempts and spacecraft development. Of course, you’d hope to get things exactly right upon your first attempts, but it never actually works that way in rocketry. What is unusual is how public SpaceX is with its development program at this stage of testing.

The company will be back at it with another try soon. It already has its SN10 prototype set up on its launch site at its Texas facility, which is the other spaceship you see in the early part of the animation above.

#aerospace, #animation, #florida, #outer-space, #south-florida, #space, #spacecraft, #spaceflight, #spacex, #spacex-starship, #tc, #texas


Robert Downey Jr. is launching a new ‘rolling’ venture fund to back sustainability startups

A little less than two years ago, when the actor, producer and investor Robert Downey Jr. unveiled his new sustainability-focused initiative, the FootPrint Coalition, at Amazon’s re:MARS conference, it was little more than a static website and a subscription prompt.

Jump cut to today, and the firm now has five portfolio companies and a non-profit initiative, and is launching a rolling venture fund, Footprint Coalition Ventures, at the World Economic Forum’s digital Davos event.

With the new rolling fund, managed through AngelList, Downey Jr.’s initiative sits at the intersection of two of the biggest ideas reshaping the world economy: the democratization of access to capital and investment vehicles, and the $10 trillion opportunity to decarbonize global industry.

It’s another arrow in the quiver for an institution that aims to combine storytelling, investing, and non-profit commitments to combat the world’s climate crisis.

Rolling funds and the revolution in finance

There’s a revolution happening in finance right now. Whether it’s the Redditors trying to avenge the malfeasance of short-sellers and big institutional investors through investments in stocks like Blockbuster, Nokia, GameStop and AMC, or the new crowdfunding sources and rolling funds that are allowing regular investors to finance early-stage companies, things on Wall Street are definitely changing.

And while the public market gambles are undoubtedly minting some new millionaires, opening up access to interesting startup investments is a thesis that stands in stark contrast to the cynicism of day-trading gambles.

Both could leave investors with less than zero in some cases, but with rolling funds or crowdfunding, there’s a real opportunity to build something rather than just sticking it to the man.

Unlike traditional venture funds, rolling funds raise new capital commitments on a quarterly basis and invest as they go, hence the “rolling”. Investors come on for a minimum one-year commitment, and invest at a quarterly cadence. In Downey Jr.’s fund that commitment amounts to $5,000 per quarter for up to 2,000 qualified investors (and a smaller number of accredited ones), according to a person with knowledge of the firm’s plans.
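The arithmetic implied by those figures is straightforward. This quick sketch (illustrative only; it ignores the smaller accredited-investor tier, fees and carry) computes the fund’s theoretical intake ceiling from the reported numbers:

```python
# Ceiling on what the rolling fund could collect, using the reported
# figures: $5,000 per quarter from up to 2,000 qualified investors,
# with a minimum one-year (four-quarter) commitment.

def quarterly_ceiling(per_investor: int, max_investors: int) -> int:
    """Maximum capital collected in a single quarter."""
    return per_investor * max_investors

def annual_ceiling(per_investor: int, max_investors: int) -> int:
    """Four quarterly closes over the minimum one-year commitment."""
    return 4 * quarterly_ceiling(per_investor, max_investors)

print(f"${quarterly_ceiling(5_000, 2_000):,} per quarter")  # $10,000,000
print(f"${annual_ceiling(5_000, 2_000):,} per year")        # $40,000,000
```

Actual intake would be lower, since not every seat need be filled every quarter.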

“The idea of opening [the fund] to real people, rather than the ivory tower of the institutional bigwigs… It’s a little bit more slamdance than Sundance [and] I kind of dig it,” said Downey Jr.

A guide to recognizing FootPrint Coalition Ventures

FootPrint Coalition Ventures will be split between early- and late-stage investment funds, and will look to make six investments per year in early-stage companies and four later-stage deals.

Helping Downey Jr. manage the operations are investors like Jonathan Schulthof, who previously founded LOOM Media, which leverages smart urban infrastructure for advertising, founded Motivate International, which manages bike sharing services in cities across the U.S., and served as a managing partner for Global Technology Investments. Schulthof is joined by Steve Levin, who co-founded Team Downey, Downey Jr.’s media production company and Downey Ventures, which invests in media and technology companies. 

The firm already has four companies in its portfolio through investments it made using the founders’ own capital. And while those investments were all under $1 million, the firm expects that the size of its commitments will grow as it raises additional cash. Footprint Coalition has also maintained pro-rata investment rights so that it can increase the size of its stake in businesses over time. And the investments it made to date were sized in anticipation of potential for follow-ons at much higher valuations.

A venture fund inside of a coalition

The initiative that Downey Jr. hopes to build is more than just an investment arm. Both he and his co-founders see the investment side as a single piece of a broader platform that leverages the massive social following Downey Jr. has created and the storytelling skills he and his team have mastered through decades spent working in the movie business.

That broader team includes Rachel Kropa, the former head of the CAA Foundation, who will lead scientific and philanthropic efforts and serve as the fund’s Impact Advisor and liaison to the scientific and research communities, according to a statement.


“The idea that the content that we made can be related back to the individual is very powerful,” said Kropa. “This problem is so intractable and interconnected across the world. It does matter that the fish that you eat are made using a sustainable feed.”

Kropa is referring to a piece that the FootPrint Coalition put out around sustainable aquaculture tied to the group’s recent investment in Ÿnsect, a company that makes protein from mealworms for use in animal feed and human food.

“Our content around Cellular Agriculture exemplifies the type of content we can create in the course of taking a deep dive into a particular industry. Though we have not (yet) invested in the space, we do believe there are interesting stories to tell,” said one person who works with the company.

That media is meant to activate the group’s audience, and is not something it charges for, nor something it counts toward its investment valuation. “We’ve been creating edited video segments with Robert doing voice over and overlaying animation all of which we’ve been posting to social. We do this for free to the companies, and we don’t charge / strong-arm / cajole for warrants, advisor shares, or the like in return,” the person said.

Weird science and sustainability

While Ÿnsect is one example of a company that the FootPrint Coalition has backed that’s doing something which may be a little outside of the purview of most of Downey Jr.’s following, other businesses like the bamboo toilet paper company, Cloud Paper, and the new investment in the sustainability focused financial services company, Aspiration, have definite direct consumer ties.

That balance is something that Schulthof said the firm was looking for as it pursues not just environmental and sustainability returns, but, more concretely, profit.

“We look at things that are meaningful and impactful [and] I get to be purely capitalist. The question, is this a good opportunity, is something that has to do with its margins, its scale, its risk profile, the people involved and fundamentally what are the terms… do we think the company will deliver value to investors,” said Schulthof. “We’re looking for returns.”

The opportunity for returns is enormous. As the group noted, the ESG sector – funds that focus on environmental, social and governance issues – continues to grow rapidly. Part of the broader stakeholder capitalism movement, impact investing funds have topped $250 billion, and sustainability assets have doubled in value over the past three years.

“We see two powerful trends working together to support the environment. First, engaging content and media distribution enable us to create a passionate community from Robert’s 100 million followers and to use that audience to access great investments. Second, a turnkey technology platform now enables us to manage a broad set of individual investors,” said Schulthof in a statement. “Venture funds traditionally have high minimums that exclude all but the wealthiest individuals, endowments and foundations. With much lower minimums and shorter investment periods, we can now offer access to these same companies to a much broader group. When these investors further ignite our passionate audience, we hope to set a positive feedback loop in motion with environmental technologies as the ultimate beneficiary.”

 

#advisor, #amc, #angellist, #animation, #aspiration, #cloud-paper, #corporate-finance, #crowdfunding, #davos, #economy, #entrepreneurship, #finance, #funding, #gamestop, #head, #investor, #managing-partner, #media-production, #money, #nokia, #private-equity, #sundance, #tc, #technology, #united-states, #venture-capital, #world-economic-forum, #ynsect


LottieFiles, a platform for the animation format, lands $9 million Series A led by M12, Microsoft’s venture fund

LottieFiles, a platform for JSON-based Lottie animations, has raised a Series A of $9 million. The round was led by M12, Microsoft’s venture capital arm, with participation from returning investor 500 Startups.

Based in San Francisco and Kuala Lumpur, LottieFiles was founded in 2018. The platform includes Lottie creation, editing and testing tools, and a marketplace for animations. It now claims about one million users from 65,000 companies, including Airbnb, Google, TikTok, Disney and Netflix, and 300% year-over-year growth. The new funding brings its total raised to about $10 million.

Smaller than GIF or PNG graphics, Lottie animations also have the advantage of being scalable and interactive. The format was introduced as an open-source library by Airbnb engineers in 2017 and quickly became popular with app developers because Lottie files can be used across platforms without additional coding, and edited after shipping.
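Because Lottie files are plain JSON, their skeleton is easy to generate or inspect programmatically. A minimal Python sketch follows; the top-level field names follow the open Lottie schema, but this is a bare skeleton, not a real export (an actual file from a design tool would carry populated layers and assets):

```python
import json

# A minimal Lottie document skeleton. Field names come from the open
# Lottie schema; the values here are placeholders for illustration.
lottie_doc = {
    "v": "5.7.0",   # exporter version
    "fr": 30,       # frame rate
    "ip": 0,        # first frame ("in point")
    "op": 60,       # last frame ("out point"): a 2-second clip at 30 fps
    "w": 512,       # canvas width in px
    "h": 512,       # canvas height in px
    "nm": "demo",   # human-readable name
    "layers": [],   # shape/image layers would live here
}

serialized = json.dumps(lottie_doc)
# Being plain text, the file stays tiny and is trivially editable after
# shipping, which is part of what made the format popular.
print(f"{len(serialized)} bytes")
```

A renderer such as lottie-web would play this back by interpolating the layer data between `ip` and `op` at `fr` frames per second.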

An illustration from animation startup LottieFiles

LottieFiles co-founder and chief executive officer Kshitij Minglani told TechCrunch the startup began as a community for designers and developers, before adding tools, integrations and other resources. It launched its marketplace during the COVID-19 lockdown, with 70% of earnings going directly to creators, and also has a list of animators who are available for hire.

LottieFiles’ core platform and tools are currently pre-revenue, with plans to monetize later this year. “It’s not often a revolutionary format comes about and disrupts an entire industry, saving tons of precious design and development hours,” said Minglani. “We didn’t want to stunt the adoption of Lottie by monetizing early on.”

The new funding will be used on LottieFiles’ product roadmap, expanding its infrastructure and increasing its global user base.

#animation, #asia, #design, #developers, #fundings-exits, #lottie, #lottiefiles, #malaysia, #southeast-asia, #startups, #tc


Biteable raises $7 million Series A for its template-based online video builder

Online video platform Biteable, a startup that makes it easier to create polished and professional videos using templates and a library of images and animations, has raised $7 million in Series A funding led by Cloud Apps Capital Partners. The service today competes with products from Vimeo, Canva, Adobe and others, but focuses on creating video assets that have more staying power than temporary social videos.

These sorts of videos are in greater demand than ever, as the pandemic has prompted increased use of video communications, particularly among smaller businesses, which has also helped Biteable grow, the company says.

“[The pandemic] accelerated the move toward video that was already happening,” notes Biteable CEO Brent Chudoba, who joined the company at the end of last year. “It helped, obviously, things like Zoom and products like Loom. We saw a benefit, as well. I think, actually, we’ll see an even bigger benefit over time as companies are now used to working and sharing messages remotely, and having to get more creative in how they distribute information,” he says.

The startup itself was originally co-founded in 2015 in Hobart, Australia by CTO Tommy Fotak, Simon Westlake, and James MacGregor. Fotak’s background was in software development, while Westlake had experience in animation and studio production. MacGregor, who has a software, product and marketing background and previously worked at BigCommerce, is Biteable’s Chief Product Officer.

The team had initially been working on freelance projects for people who needed advertising and explainer videos, which led them to realize the opportunity in the video creation market. They believed there was demand for a video builder product that simplified a lot of the decisions that needed to be made when making a video. That is, they wanted to do for video creation what Squarespace, Wix and others have done for website creation.

Chudoba more recently joined the company as a result of Biteable wanting to bring in a CEO with more experience growing freemium software productivity businesses. Prior to Biteable, Chudoba worked in private equity, was an early SurveyMonkey employee (CRO), and also spent time at PicMonkey (COO), Thrive Global (COO and CFO) and Calendly (Head of Business Operations).

While Chudoba admits Biteable is not without its competition, he views that as a positive thing. It means there’s opportunity in the market and it gives Biteable a chance to differentiate itself from others.

On that front, Biteable’s customers tell him the product offers a good balance between the amount of time and skill set they have and the quality of the results.

“They can produce results that they are very proud of and that exceed what they thought they could do with — not necessarily low effort — but without having the training, skills and design background,” says Chudoba.

Above: A Biteable explainer video built with Biteable

The customers also appreciate the sizable content library, which includes a combination of stock photography, stock video footage, and hundreds of animations and scenes. Biteable licenses content from Unsplash and Storyblocks, while its animation library and templates are built in-house by its own professional design team. This allows the company to release hundreds of scenes per month, to keep the library content refreshed and current.

Unlike some of the other products on the market, Biteable’s sweet spot isn’t “quick hit” social videos, like Instagram Stories. Instead, it focuses on assets that companies use and reuse: video explainers for a business, online marketing and ads, videos that appear on product pages, and more. Its videos also tend to run between 30 seconds and three and a half minutes.

Already, Biteable has found that individuals from companies like Amazon, Microsoft, Google, Disney, Salesforce, the BBC, Shopify and Samsung have used its service, though it doesn’t have any official contracts with these larger businesses.

Above: A Biteable recruiting video template

Today, the startup generates revenue via a freemium business model that includes multiple subscription plans.

A free plan for individual users offers access to the suite of tools for video creation, including the 1.8 million pictures, clips, and animations within the Biteable library. However, free videos are watermarked with Biteable’s branding. A $19 per month plan for single users removes the watermark and allows you to add your own, and offers HD 1080p resolution and other features, like commercial usage rights. Professionals pay $49 per month for shared editing and projects and use by unlimited team members, and more. Custom pricing plans are also available.
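The tiering described above can be summarized as a simple lookup. The plan names below are paraphrases for illustration, not Biteable’s official tier names, and only the features mentioned in the article are modeled:

```python
# Biteable's reported plan structure as a lookup table (names paraphrased).
PLANS = {
    "free": {"price_per_month": 0,  "watermark": True,  "hd_1080p": False},
    "solo": {"price_per_month": 19, "watermark": False, "hd_1080p": True},
    "team": {"price_per_month": 49, "watermark": False, "hd_1080p": True,
             "shared_editing": True},
}

def cheapest_without_watermark() -> str:
    """Name of the lowest-priced plan that removes Biteable's branding."""
    candidates = {k: v for k, v in PLANS.items() if not v["watermark"]}
    return min(candidates, key=lambda k: candidates[k]["price_per_month"])

print(cheapest_without_watermark())  # solo
```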

Combined, Biteable’s free and paid users total over 6 million. They create around 100,000 unique videos per month, a number that roughly doubled over the course of 2020.

Image Credits: Biteable

The new $7 million round, announced today, was led by new investor Cloud Apps Capital Partners. Existing investor Tank Stream Ventures also returned for the round and was joined by both new and existing angel investors. In total, Biteable has raised over $9 million (USD) to date.

With the additional funds, Biteable aims to hire and continue to develop the product.

Another shift attributed to the pandemic is that people have warmed up to remote work. Biteable had already been a remote-first operation whose team of 46 is geographically distributed across Australia, the U.S., Canada, and Western Europe. Now, Biteable no longer has to convince people that remote work is the future — and this helps with recruiting talent, too.

Chudoba believes Biteable can grow to become a larger company over time.

“Video is a platform concept and so I think you can build a really big standalone company,” Chudoba says. “We’re a freemium model so that’s a low customer acquisition cost. It’s a high value service and it can be very sticky when you get teams involved,” he notes.

“The power of online video has been incredible, and the events of 2020 have accelerated adoption trends that would have otherwise taken five or more years to evolve,” notes Matt Holleran, General Partner at Cloud Apps Capital Partners, in a statement about his firm’s investment. “As a firm, we look for great businesses in high growth industries with excellent teams that we can help reach the next level. In Biteable, we see all three of those elements and are incredibly excited to partner with Brent and the Biteable team on this next chapter of growth,” he says.

#animation, #australia, #biteable, #funding, #media, #online-video, #startups, #tc, #video, #video-creation, #video-marketing


NextMind’s Dev Kit for mind-controlled computing offers a rare ‘wow’ factor in tech

NextMind debuted its Dev Kit hardware at CES last year, but the hardware is now actually shipping out, and the startup shared with me the production version to take a test drive. The NextMind controller is a sensor that reads electrical signals from your brain’s visual cortex, and translates those into input signals for a connected PC. A lot of companies have developed novel input solutions that use either eye-tracking or electrical impulse input from the body, but NextMind’s is the first I’ve tried that worked instantly and wonderfully, providing a truly amazing experience of a kind that’s hard to find in the current world of relatively mature computing paradigms.

The basics

NextMind’s developer kit is just that – a product aimed at developers that’s meant to give them everything they need to get building software that works with NextMind’s hardware and APIs. It includes the NextMind sensor which works with a range of headgear including simple straps, Oculus VR headsets, and even baseball hats, along with the software and SDK required to make it work on your PC.

Image Credits: NextMind

The package that NextMind provided me included the sensor, a fabric headband, a Surface PC with the engine pre-installed, and a USB gamepad for use with one of the company’s pre-built software demos.

The sensor itself is lightweight, and can operate for up to eight hours continuously on a single charge. It can charge via USB-C, and its software is compatible with both Mac and PC, along with Oculus, HTC Vive and also Microsoft’s HoloLens.

Design and features

The NextMind sensor itself is surprisingly small and light – it fits in the palm of your hand, with two arms that extend slightly beyond that. It features an integrated clip mount that can be used to attach it to just about anything to secure it to your head. In terms of fit, you just need to ensure that the nine sets of two-pronged electrode sensors make contact with your skin; NextMind’s instructions amount to strapping the device snugly to your head and then ‘combing’ it slightly (moving it up and down to get your hair out of the way).

It wears comfortably, though you will notice the electrodes pressing into your skin, especially over longer use periods. The ability to use a standard baseball cap with the clip makes it super convenient to install and wear, and it worked with the Oculus Rift and Oculus Quest headstraps easily and instantly, too.

Image Credits: NextMind

Setup was a breeze. I was guided by NextMind’s co-creators, but the app provides clear instructions as well. There’s a calibration process during which you look at an animation displayed on the host PC, which helps the sensor identify the specific signals your occipital lobe emits when performing the target behavior that you’ll later use to actually interact with NextMind-optimized software.

Here’s where it’s worth pausing to explain how NextMind is actually ‘reading your thoughts’: the sensor learns what it looks like when your brain is engaged in what the company calls “active, visual focus.” It does this using a common signal that it overlays on controllable elements of a piece of software’s graphical user interface. That way, when you focus on a specific item, it can translate that into a ‘press’ action, a ‘hold and move,’ or any number of other potential outputs.
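NextMind’s actual SDK isn’t shown here, but the interaction model the paragraph describes (tagged UI elements, a decoder reporting which tag the user is visually focused on, and the app translating that into an input event) can be sketched as a simple dispatcher. Every name below is invented for illustration:

```python
from typing import Callable, Dict, Optional

class FocusDispatcher:
    """Maps decoder focus reports onto ordinary UI actions (illustrative)."""

    def __init__(self) -> None:
        self._actions: Dict[str, Callable[[], str]] = {}

    def register(self, tag_id: str, action: Callable[[], str]) -> None:
        # Each controllable element overlays a common visual "tag" signal;
        # here we simply associate the tag's id with an output action.
        self._actions[tag_id] = action

    def on_focus(self, tag_id: str, confidence: float) -> Optional[str]:
        # Called by the (hypothetical) decoder when it detects the user's
        # visual cortex locking onto a tagged element.
        if confidence < 0.8:  # discard weak or ambiguous readings
            return None
        action = self._actions.get(tag_id)
        return action() if action else None

dispatcher = FocusDispatcher()
dispatcher.register("play_button", lambda: "media: play")
dispatcher.register("digit_4", lambda: "pin: 4")

print(dispatcher.on_focus("play_button", 0.93))  # media: play
print(dispatcher.on_focus("digit_4", 0.41))      # None (below threshold)
```

The same pattern extends naturally to ‘hold and move’ by having the decoder stream repeated focus reports instead of a single event.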

NextMind’s system is elegantly simple in conception, which is probably why it feels so powerful and rich in use. After the calibration process, I immediately jumped into the demos and was performing a range of actions effectively with my brain. First was media playback and window management on a desktop, and from there I moved on to composing music, entering a pin on a number pad, and playing multiple games, including a platformer where my mind control was supplementing my physical input on a USB gamepad to create a whole new level of fun and complex gameplay that wouldn’t be possible otherwise.

This is a Dev Kit, so the included software is just a small sampling of what could be possible with NextMind eventually, now that developers are able to build their own. What’s amazing is that the included samples are breathtaking on their own, providing an overall experience that is mind-bending in all the best possible ways. Imagining a future where NextMind hardware is even smaller and a seamless part of an overall computing experience that also includes traditional input is tantalizing, indeed.

Bottom line

NextMind’s Dev Kit is definitely just that – a Dev Kit. It’s intended for developers who are going to use it to write their own software that will take advantage of this unique, safe and convenient form of brain-computer interface (BCI). The kit retails for 399.00 € (around $487 USD), and is now shipping. NextMind has plans to eventually consumerise the product, and to work with other OEMs as well on implementations, but for now, even in this state, it’s an awe-inspiring glimpse into what could well be the next major shift in our daily computing paradigm.

#animation, #biotech, #computing, #controller, #htc, #oculus, #oculus-vr, #science, #startups, #tc, #usb, #virtual-reality, #wearable-devices


Shinji Aoba Charged in Kyoto Animation Fire

Prosecutors accused Shinji Aoba in connection with a blaze in Kyoto last year that killed 36, making it the country’s deadliest attack in decades.

#animation, #arson, #japan, #kyoto-animation-co-ltd, #murders-attempted-murders-and-homicides, #shinji-aoba


Genies updates its software development kit and partners with Gucci, Giphy

Genies has updated its software development kit and added Giphy and Gucci as new partners, enabling their users to create personalized Genie avatars.

The company released the first version of its SDK in 2018, when it raised $10 million to directly challenge Snap and Apple for avatar dominance. Now, with the latest update, the company says it has created a new three-dimensional rendering that can be used across platforms, provided developers let Genies handle the animation.

Genies has already managed to sign up many of the biggest names in entertainment to act as their official manager through its Genies talent agency. These include celebrities like Shawn Mendes, Justin Bieber, Cardi B, and Rihanna. Genies also locked in deals with the National Football League’s players association, along with Major League Baseball and the National Basketball Association.

Now, those celebrities and athletes can monetize exclusive digital goods made by Genies on platforms like Gucci and Giphy, and the fashion house and meme generator can now give users their own digital identities to play around with.

“Over the past year, our technology has been sharpened by the exacting creative demands of celebrities. This advanced Genies’ march to be the go-to avatar globally,” said Akash Nigam, Genies CEO and co-founder, in a statement. “What was previously a celebrity exclusive experience, is now broadly available for consumers to use as their virtual portable identities. By opening up to the masses, we’ve now created an opportunity for tastemakers to forge new, unique relationships with their audiences through avatar digital goods.”

The SDK integrations are still highly curated and tailored (there’s a lot of heavy lifting that Genies needs to do with each one). For instance, Gucci users can try on the latest designs and the company will sell digital goods on its platform created by Genies. Giphy users will use their avatars as gifs on its site and through its distribution network.

“Our Avatar Agency has served as the go-to platform for thousands of artists, and with our next-gen, highly expressive and dynamic 3D Genie, we will further solidify our position as the universal digital identity,” said Izzy Pollak, Director of Avatar SDK at Genies. “For celebrities and everyday users alike, it unlocks new arenas and verticals for users to cultivate their avatars in. On top of traditional 2D environments like mobile apps and websites, Genies can now live in AR/VR platforms, games, and in use cases or SDK partner platforms that demand a 360-degree rendering of the digital goods they purchase.”

#akash-nigam, #animation, #apple, #avatar, #films, #genies, #giphy, #gucci, #justin-bieber, #major, #major-league-baseball, #national-basketball-association, #national-football-league, #player, #snap, #tc


The painstaking, hyper-granular process behind Amazon’s artful Undone

Shot by Justin Wolfson, edited by Jeremy Smolik.

Let us be the 47th outlet to say it: Nothing else on TV or streaming looks like Undone. Amazon Prime’s animated time-bending sci-fi series centers on a woman named Alma (played by Rosa Salazar, of Alita fame) who suffers an accident that changes her relationship to the world. And as Alma deals with that in-progress 180, she attempts to investigate the mysterious death of her father (played by Bob Odenkirk, Better Call Saul). The story… well, better to say less and avoid spoilers for any soon-to-be viewers.

Undone’s style, however, deserves all the words one can devote. If you heard of the show before, it’s likely because it represents the first major streaming series to be done entirely in rotoscope, an animation technique where artists paint over live actors using a variety of methods and styles. (Maybe you’ve seen the campus shooting documentary Tower or Richard Linklater’s Waking Life; that’s rotoscoping in action.) Rotoscoped work can be dreamy, museum-like, nightmarish, disjointed, or other-worldly—sometimes all at once. In other words, it might be the perfect creative visual choice for a show like Undone.

Credit for executing this vision goes to a trio of production companies behind the scenes: Tornante in Southern California, Submarine Productions in Amsterdam, and Minnow Mountain in Austin, Texas. If that kind of globe-spanning collaboration doesn’t already say it, we will: the process was complicated. But you don’t have to take it from us, since Undone director and production designer Hisko Hulsing kindly sat down for our latest entertainment episode of “War Stories” and outlined the laborious process that makes the show seem so effortlessly beautiful to all of us watching at home.


#amazon, #animation, #feature, #features, #gaming-culture, #undone, #war-stories


With 170M users, Bilibili is the nearest thing China has to YouTube

Bilibili, a Chinese video streaming website once regarded as a haven for youth subculture, has been steadily making its way into the mainstream as its users age up and its content diversifies. The NASDAQ-traded company recorded 70% year-over-year growth, reaching 172 million monthly active users in the first quarter, placing it in the same league as the video services operated by Tencent and Baidu’s iQiyi.

Daily time spent per user soared to a record of 87 minutes, which is likely linked to the extended stay-at-home order imposed on students during COVID-19.

In the same period, Tencent Video reported 112 million subscribers, while iQiyi commanded 118.9 million, almost all of whom are paying. Bilibili, by contrast, saw only about 8% of its MAU paying.
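Treating the article’s round figures as exact, the gap in paying users works out roughly as follows (a back-of-the-envelope sketch, not company-reported numbers):

```python
# Rough arithmetic behind the paying-user comparison above.
bilibili_mau = 172_000_000   # Q1 monthly active users
paying_share = 0.08          # "only about 8% of its MAU paying"

bilibili_paying = round(bilibili_mau * paying_share)
print(f"Bilibili paying users: ~{bilibili_paying / 1e6:.1f}M")  # ~13.8M

# Versus the subscription-driven rivals cited for the same period:
print("Tencent Video: 112.0M subscribers, iQiyi: 118.9M")
```

On these figures, Bilibili monetizes barely a tenth as many viewers directly, which is why games and live-broadcast sales matter so much to its model.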

Bilibili’s growth engine is fundamentally different from the two giants’, though. While Tencent Video and iQiyi bet on Netflix-style, professionally produced programs, Bilibili relies on a wide array of user-generated content in the style of YouTube. The number of monthly creators grew 146% to 1.8 million; they collectively submitted 4.9 million pieces per month. Among its top creators is, lo and behold, the Communist Youth League of China.

The site also has an unconventional way of monetizing its audience. It doubles as a mobile gaming platform — to be expected given its young user base — and earned half of its revenue from video games in Q1. Other avenues of revenue generation come from virtual item sales during live broadcasting, advertising, and sales from content creators who operate online shops via Bilibili.

Despite healthy user growth, Bilibili’s net loss widened to 538.6 million yuan (US$76.1 million) in the first quarter, a steep increase from 195.6 million yuan the year before. It cited COVID-19-related delays in merchandise deliveries through its platform.

Nonetheless, the company bolstered its cash reserve to 8 billion yuan or $1.13 billion following Sony’s outsized $400 million investment to explore synergies in animation and games between the pair. The online entertainment upstart is among a small crop of companies that have attracted financing from both Alibaba and Tencent, which are long-time archrivals.

“Our cash flow in Q1 is positive and higher than our losses. In all, the company is in a healthy financial position,” its chief financial officer Fan Xin asserted during the company earnings call.

#alibaba, #animation, #asia, #bilibili, #china, #earnings, #iqiyi, #netflix, #sony, #tencent, #video-hosting


Here’s what SpaceX and NASA’s crucial Crew Dragon mission should look like on May 27

SpaceX and NASA are planning a triumphant return to American human spaceflight on May 27, with the SpaceX Demo-2 mission for its Crew Dragon spacecraft. This is the final step required for Crew Dragon to become certified for human flight, after which it’ll enter into regular operational service ferrying people (and some cargo) to the International Space Station on behalf of the U.S. and some of its allies.

The animation above shows how SpaceX and NASA envision the mission going, from the astronauts stepping out of their ride to the launch pad (a Tesla Model X badged with NASA logos past and present), their trip across the bridge linking the launch tower to the Falcon 9 that will take them up, and their spacecraft’s separation from the rocket and subsequent docking procedure with the ISS.

SpaceX and NASA have done plenty of preparation to get to this point, including flying a full uncrewed demo mission that more or less followed this exact flow, just without any actual astronauts on board. That mission also included the undocking of the Crew Dragon capsule and its return to Earth, with a parachute-assisted splashdown in the ocean.

Since then, SpaceX has improved and rigorously tested its parachute design, and flown an escape system abort test to show what its safety measures would do in the unlikely event of a problem with the launch vehicle after it has taken off but before it reaches space.

The May 27 date is fast-approaching for this crucial milestone, which will mark the first time astronauts have lifted off for space from U.S. soil since the end of the Shuttle program in 2011.


How Apple reinvented the cursor for iPad

Even though Apple did not invent the mouse pointer, history has cemented its place in dragging it out of obscurity and into mainstream use. Its everyday utility, pioneered at Xerox PARC and later combined with a bit of iconic work from Susan Kare at Apple, has made the pointer our avatar in digital space for nearly 40 years.

The arrow waits on the screen. Slightly angled, with a straight edge and a 45-degree slope leading to a sharp pixel-by-pixel point. It’s an instrument of precision, of tiny click targets on a screen feet away. The original cursor was a dot, then a line pointing straight upwards. It was demonstrated in the ‘Mother of all demos’ — a presentation roughly an hour and a half long that contained not only the world’s first look at the mouse but also hyperlinking, document collaboration, video conferencing and more.

The star of the show, though, was the small line of pixels that made up the mouse cursor. It was hominem ex machina — humanity in the machine. Unlike the text entry models of before, which placed character after character in a facsimile of a typewriter, this was a tether that connected us, embryonic, to the aleph. For the first time we saw ourselves awkwardly in a screen.

We don’t know exactly why the original ‘straight up arrow’ envisioned by Doug Engelbart took on the precise angled stance we know today. There are many self-assured conjectures about the change, but few actual answers — all we know for sure is that, like a ready athlete, the arrow pointer has been there, waiting to leap towards our goal for decades. But for the past few years, thanks to touch devices, we’ve had a new, fleshier, sprinter: our finger.

The iPhone, and later the iPad, didn’t immediately reinvent the cursor. Instead, Apple removed it entirely, replacing your digital ghost in the machine with your physical, meatspace fingertip. Touch interactions brought with them “stickiness” — the 1:1 mating of intent and action. If you touched a thing, it did something. If you dragged your finger, the content came with it. This, finally, was human-centric computing.

Then, a few weeks ago, Apple dropped a new kind of pointer — a hybrid between these two worlds of pixels and pushes. The iPad’s cursor, I think, deserves closer examination. It’s a seminal bit of remixing from one of the most closely watched idea factories on the planet.

In order to dive a bit deeper on the brand new cursor and its interaction models, I spoke to Apple SVP Craig Federighi about its development and some of the choices by the teams at Apple that made it. First, let’s talk about some of the things that make the cursor so different from what came before…and yet strangely familiar.

—————————

The iPad cursor takes on the shape of a small circle, a normalized version of the way that the screen’s touch sensors read the tip of your finger. Already, this is different. It brings that idea of placing you inside the machine to the next level, blending the physical nature of touch with the one-step-removed trackpad experience.

Its size and shape is also a nod to the nature of iPad’s user interface. It was designed from the ground up as a touch-first experience. So much so that when an app is not properly optimized for that modality it feels awkward, clumsy. The cursor as your finger’s avatar has the same impact wherever it lands.

Honestly, the thinking could have stopped there and that would have been perfectly adequate. A rough finger facsimile as pointer. But the concept is pushed further. As you approach an interactive element, the circle reaches out, smoothly touching then embracing and encapsulating the button.

The idea of variable cursor velocity is pushed further here too. When you’re close to an object on the screen, the cursor changes its rate of travel to get you where you want to go more quickly, but it does so contextually, rather than linearly the way OS X or Windows does.

Predictive math is applied to get you to where you’re going without you having to land precisely there, then a bit of inertia is applied to keep you where you need to be without overshooting it. Once you’re on the icon, small movements of your finger jiggle the icon so you know you’re still there.
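To get a feel for how this kind of “magnetic” snapping and damped inertia could work, here’s a toy model in Python. This is an invented illustration, not Apple’s actual implementation — the snap radius, attraction factor and damping constant are all made up for the sketch:

```python
# Toy model of a "magnetic" cursor: damped inertia lets the pointer
# coast after a flick, and once it drifts near a target, an attraction
# term pulls it the rest of the way without overshooting.
# All constants are invented for illustration.

SNAP_RADIUS = 40.0   # px within which a target starts attracting
ATTRACTION = 0.35    # fraction of the remaining distance closed per frame
DAMPING = 0.8        # fraction of velocity retained each frame (inertia)

def step_cursor(pos, vel, targets, dt=1.0):
    """Advance the cursor one frame, snapping toward the nearest target."""
    x, y = pos
    vx, vy = vel
    # Coast with damped inertia.
    x, y = x + vx * dt, y + vy * dt
    vx, vy = vx * DAMPING, vy * DAMPING
    # Find the nearest target and, if close enough, pull toward it.
    nearest = min(targets, key=lambda t: (t[0] - x) ** 2 + (t[1] - y) ** 2)
    dx, dy = nearest[0] - x, nearest[1] - y
    if (dx * dx + dy * dy) ** 0.5 < SNAP_RADIUS:
        x += dx * ATTRACTION
        y += dy * ATTRACTION
    return (x, y), (vx, vy)
```

Flick the cursor toward an icon at (100, 0) and the damping bleeds off speed while the attraction term finishes the trip, settling the pointer on the target rather than past it.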

The cursor even disappears when you stop moving it, much as the pressure of your finger disappears when you remove it from the screen. And in some cases the cursor possesses the element itself, becoming the button and casting a light ethereal glow around it.

This stir fry of path prediction, animation, physics and fun seasoning is all cooked into a dish that does its best to replicate the feel of something we do without thinking: reaching out and touching something directly.

These are, in design parlance, affordances. They take an operation that is at its base level much harder to do with a touchpad than it is with your finger, and make it feel just as easy. All you have to do to render this point in crystal is watch a kid who uses an iPad all day try to use a mouse to accomplish the same task.

The idea that a cursor could change fluidly as needed in context isn’t exactly new. The I-Beam (the cursor type that appears when you hover over editable text) is a good example of this. There were also early experiments at Xerox PARC — the birthplace of the mouse — that made use of a transforming cursor. They even tried color changes, but never quite got to the concept of on-screen elements as interactive objects, choosing instead to emulate functions of the keyboard.

But there has never been a cursor like this one. Designed to emulate your finger, but also to spread and squish and blob and rush and rest. It’s a unique addition to the landscape.

—————————

Given how highly scrutinized Apple’s every release is, the iPad cursor not being spoiled is a minor miracle. When it was released as a software update for existing iPads — and future ones — people began testing it immediately and discovering the dramatically different ways that it behaved from its pre-cursors.

Inside Apple, the team enjoyed watching the external speculation that Apple was going to pursue a relatively standard path — displaying a pointer on screen on the iPad — and used it as motivation to deliver something richer, a solution to be paired with the Magic Keyboard. The scuttlebutt was that Apple was going to add cursor support to iPadOS, but even down to the last minute the assumption was that we would see a traditional pointer that brought the iPad as close as possible to ‘laptop’ behavior.

Since the 2018 iPad Pro debuted with the smart connector, those of us that use the iPad Pro daily have been waiting for Apple to ship a ‘real’ keyboard for the device. I went over my experiences with the Smart Keyboard Folio in my review of the new iPad Pro here, and the Magic Keyboard here, but suffice to say that the new design is incredible for heavy typists. And, of course, it brings along a world class trackpad for the ride.

When the team set out to develop the new cursor, the spec called for something that felt like a real pointer experience, but that melded philosophically with the nature of iPad.

A couple of truths to guide the process:

  • The iPad is touch first.
  • iPad is the most versatile computer that Apple makes.

In some ways, the work on the new iPad OS cursor began with the Apple TV’s refreshed interface back in 2015. If you’ve noticed some similarities between the way that the cursor behaves on iPad OS and the way it works on Apple TV, you’re not alone. There is the familiar ‘jumping’ from one point of interest to another, for instance, and the slight sheen of a button as you move your finger while ‘hovering’ on it.

“There was a process to figure out exactly how various elements would work together,” Federighi says. “We knew we wanted a very touch-centric cursor that was not conveying an unnecessary level of precision. We knew we had a focus experience similar to Apple TV that we could take advantage of in a delightful way. We knew that when dealing with text we wanted to provide a greater sense of feedback.”

“Part of what I love so much about what’s happened with iPadOS is the way that we’ve drawn from so many sources. The experience draws from our work on tvOS, from years of work on the Mac, and from the origins of iPhone X and early iPad, creating something new that feels really natural for iPad.”

And the Apple TV interface didn’t just ‘inspire’ the cursor — the core design team responsible works across groups, including the Apple TV, iPad OS and other products.

—————————

But to understand the process, you have to get a wider view of the options a user has when interacting with an Apple device.

Apple’s input modalities include:

  • Mouse (Mac)
  • Touchpad (Mac, MacBook, iPad)
  • Touch (iPhone, iPad)
  • AR (iPhone, iPad, still nascent)

Each of these modalities has situational advantages or disadvantages. The finger, of course, is an imprecise instrument. The team knew that they would have to telegraph the imprecise nature of a finger to the user, but also honor contexts in which precision was needed.

(Image: Jared Sinclair/Black Pixel)

Apple approached the experience going in clean. The team knew that they had the raw elements to make it happen. They had to have a touch-sensitive cursor, they knew that the Apple TV cursor showed promise and they knew that more interactive feedback was important when it came to text.

Where and how to apply which element was the big hurdle.

“When we were first thinking about the cursor, we needed it to reflect the natural and easy experience of using your finger when high precision isn’t necessary, like when accessing an icon on the home screen, but it also needed to scale very naturally into high precision tasks like editing text,” says Federighi.

“So we came up with a circle that elegantly transforms to accomplish the task at hand. For example, it morphs to become the focus around a button, or to hop over to another button, or it morphs into something more precise when that makes sense, like the I-beam for text selection.“

The predictive nature of the cursor is the answer that they came up with for “How do you scale a touch analogue into high precision?”

But the team needed to figure out what situations demanded precision. Interacting with one element over another one close by, for example. That’s where the inertia and snapping came in. The iPad, specifically, is a multipurpose computer, so it’s far more complex than any single-input device. There are multiple modalities to service with any cursor implementation on the platform. And they have to be honored without tearing down all of the learning that you’ve put millions of users through with a primary touch interface.

“We set out to design the cursor in a way that retains the touch-first experience without fundamentally changing the UI,” Federighi says. “So customers who may never use a trackpad with their iPad won’t have to learn something new, while making it great for those who may switch back and forth between touch and trackpad.”

The team knew that it needed to imbue the cursor with the same sense of fluidity that has become a pillar of the way that iOS works. So they animated it, from dot to I-beam to blob. If you slow down the animation you can see it sprout a bezier curve and flow into its new appearance. This serves the purpose of ‘delighting’ the user — it’s just fun — but it also tells a story about where the cursor is going. This keeps the user in sync with the actions of the blob, which is always a danger any time you introduce even a small amount of autonomy in a user avatar.
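For a sense of how a dot-to-I-beam morph like this can be built, here’s a minimal sketch that eases between two outlines with the same number of points. It’s an invented illustration, not Apple’s code — the outline sampling and the smoothstep easing are assumptions, and a real implementation would animate bezier control points with spring timing:

```python
# Minimal shape-morph sketch: pair up points on two outlines (a circle
# for the dot, a tall rectangle standing in for the I-beam) and blend
# between them with an eased progress value.
import math

def circle_outline(cx, cy, r, n=16):
    """Sample n points around a circle of radius r centered at (cx, cy)."""
    return [(cx + r * math.cos(2 * math.pi * i / n),
             cy + r * math.sin(2 * math.pi * i / n)) for i in range(n)]

def ibeam_outline(cx, cy, w, h, n=16):
    """Sample n points walking the perimeter of a w-by-h rectangle."""
    pts = []
    perim = 2 * (w + h)
    for i in range(n):
        d = (i / n) * perim
        if d < w:
            pts.append((cx - w / 2 + d, cy - h / 2))            # top edge
        elif d < w + h:
            pts.append((cx + w / 2, cy - h / 2 + (d - w)))      # right edge
        elif d < 2 * w + h:
            pts.append((cx + w / 2 - (d - w - h), cy + h / 2))  # bottom edge
        else:
            pts.append((cx - w / 2, cy + h / 2 - (d - 2 * w - h)))  # left edge
    return pts

def ease(t):
    # Smoothstep: slow start, slow finish -- the "story" of where
    # the cursor is going.
    return t * t * (3 - 2 * t)

def morph(a, b, t):
    """Blend outline a into outline b at progress t in [0, 1]."""
    k = ease(t)
    return [((1 - k) * ax + k * bx, (1 - k) * ay + k * by)
            for (ax, ay), (bx, by) in zip(a, b)]
```

At t=0 the morph is exactly the dot, at t=1 exactly the I-beam, and the in-between frames are the fluid transition the user actually sees.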

Once on the icon, the cursor moves the icon in a small parallax, but this icon shift is simulated — there are no layers here like on Apple TV, though they would be fun to have.

Text editing gets an upgrade as well, with the I-Beam conforming to the size of the text you’re editing, to make it abundantly clear where the cursor will insert and what size of text it will produce when you begin typing.

The web presented its own challenges. The open standard means that many sites have their own hover elements and behaviors. The question that the team had to come to grips with was how far to push conformity to the “rules” of iPad OS and the cursor. The answer was not a one-size application of the above elements. It had to honor the integral elements of the web.

Simply, they knew that people were not going to re-write the web for Apple.

“Perfecting exactly where to apply these elements was an interesting journey. For instance, websites do all manner of things — sometimes they have their own hover experiences, sometimes the clickable area of an element does not match what the user would think of as a selectable area,” he says. “So we looked carefully at where to push what kind of feedback to achieve a really high level of compatibility out the gates with the web as well as with third party apps.”

Any third-party apps that have used the standard iPad OS elements get all of this work for free, of course. It just works. And the implementation for apps that use custom elements is pretty straightforward. Not flick-a-switch simple, but not a heavy lift either.

The response to the cursor support has been hugely positive so far, and that enthusiasm creates momentum. If there’s a major suite of productivity tools that has a solid user base on iPad Pro, you can bet it will get an update. Microsoft, for instance, is working on iPad cursor support that’s expected to ship in Office for iPad this fall.

—————————

System gestures also feel fresh and responsive even on the distanced touchpad. In some ways, the flicking and swiping actually feel more effective and useful on the horizontal than they do on the screen itself. I can tell you from personal experience that context switching back and forth from the screen to the keyboard to switch between workspaces introduces a lot of cognitive wear and tear. Even the act of continuously holding your arm up and out to swipe back and forth between workspaces a foot off the table introduces a longer term fatigue issue.

When the gestures are on the trackpad, they’re more immediate, smaller in overall physical space and less tiring to execute.

“Many iPad gestures on the trackpad are analogous to those on the Mac, so you don’t have to think about them or relearn anything. However, they respond in a different, more immediate way on iPad, making everything feel connected and effortless,” says Federighi.

Remember that the first iPad multitasking gestures felt like a weird offshoot. An experiment that appeared useless at worst and an interesting curiosity at best. Now, on the home-button-free iPad Pro, the work done by the team that built the iPhone X shines brightly. It’s pretty remarkable that they built a system so usable that it even works on trackpad — one element removed from immediate touch.

Federighi says the team considered rethinking three-finger gestures altogether, but discovered that they work just fine as is. For any gesture that runs off the edge of the trackpad, you hit a limit and simply push past it again to confirm, and you get the same result.

There are still gaps in the iPad’s cursor paradigms. There is no support for cursor lock on iPad, making it a non-starter for relative mouse movement in 3D apps like first-person games. There’s more to come, no doubt, but Apple had no comment when I asked about it.

The new iPad cursor is a product of what came before, but it’s blending, rather than layering, that makes it successful in practice. The blending of the product team’s learnings across Apple TV, Mac and iPad. The blending of touch, mouse and touchpad modalities. And, of course, the blending of a desire to make something new and creative with the constraint that it also had to feel familiar and useful right out of the box. It’s a specialty that Apple, when it is at its best, continues to hold central to its development philosophy.


Max Q: Countdown to a return to U.S. astronaut launches

Last week was a fairly busy week in space news, but the dominating story was preparation for the first-ever commercial crew launch that will actually carry human astronauts to space. This is, in many ways, the culmination of years of work and billions of dollars spent by partners NASA and SpaceX on their part of the commercial crew program.

On May 27, SpaceX will launch two NASA astronauts on a demonstration mission to the International Space Station, and this week saw a flurry of activity to get ready for that milestone, including a full day of press briefings by both the agency and SpaceX.

Here’s how SpaceX’s first astronaut launch will go

This is what will happen during that historic first SpaceX astronaut launch, which has been expanded to include not just a quick trip up and back to the ISS, but also a small tour of duty for Bob Behnken and Doug Hurley as temporary space station staff. It could last anywhere from one to more than three months, depending on NASA’s needs.

NASA awarded human lander contracts

NASA announced who it would be contracting to develop and build human landers for its Artemis program, which will return humans to the surface of the Moon. They actually picked three different companies, including SpaceX, Blue Origin and Dynetics, each of which will be taking very different approaches to building human-rated landers that can transport astronauts from lunar orbit to the Moon’s surface.

Blue Origin’s winning lander bid in action

Here’s a concept animation of what Blue Origin’s winning lander will look like in practice, complete with transfer and ascent vehicles built by top-tier aerospace industry partners Northrop Grumman and Lockheed Martin. Bezos’ space company went with an all-star lineup, which has to have reassured the agency about its chances of success.

SpaceX’s Starship passes a key development test

SpaceX’s winning bid was actually for Starship, the fully reusable multi-purpose spacecraft that it’s in the process of developing and testing. So far, the full-scale Starship prototypes have not held up well to high-pressure fuel tank testing, but the latest version did ace that test, and is now getting ready for engine fire and low-altitude flight tests. There’s still a lot of work before it gets to the Moon, but now NASA is counting on SpaceX making that happen.

NASA taps small satellite maker for solar sail test

NASA wants to develop and certify solar sail technology for use in deep space missions that don’t necessarily involve transporting humans, since it’s a very cost-effective way to propel small satellites over long distances (mainly because you don’t need to take any fuel with you). The agency has now signed up NanoAvionics to build the spacecraft that will test its solar sail prototype in space.

SpaceX is using flip-up sunshades to help out Earth astronomers

SpaceX has a new, hardware-based potential solution to the phenomenon where its Starlink satellites appear very visibly in the night sky, potentially blocking out Earth-based observation of stars and other stellar bodies. The company has designed a system of hardware shades that flip out and block the sun, preventing it from reflecting off the antennas on the satellite that broadcast internet signals back to Earth. It’ll test these soon and then they could become a permanent part of Starlink’s design.

LIDAR from space could make for much better maps

Devin checks in on a project at MIT that is looking to supplement road maps with lidar in order to improve even the best machine-learning-based inferred maps of roadways and transit paths. Extra Crunch subscription required.


Here’s how Blue Origin’s human lander system will carry astronauts to the lunar surface

Blue Origin was among the companies selected by NASA to develop and build a human lander system for its Artemis missions, which include delivering the next man and first woman to the surface of the Moon in 2024. The Jeff Bezos-founded space company chose to deliver a bid that included a space industry ‘dream team’ of subcontractors, including Lockheed Martin, Northrop Grumman and Draper, and its Artemis Human Landing System will use the expertise of all three.

The Blue Origin bid was one of three that ended up winning a contract from NASA, alongside SpaceX’s Starship and a human landing system developed by Dynetics working with a range of subcontractors. Blue Origin originally debuted its vision of a human lander last year, first with the unveiling of its Blue Moon craft in May, and then with the announcement of its cross-industry ‘national team’ at IAC later in the year.

Now, the company has released an animation of how its landing system will work, including Blue Moon docking with a transfer element to bring astronauts over from the Orion capsule that will carry them to the Moon from Earth, as well as the descent stage to actually land, and the ascent stage to take-off again from the disposable lander platform and return the astronauts to their ride home.

Here’s where each company is involved and what they’re contributing to what you see above: Blue Origin is building the lander proper, which is the platform with legs you see first in the video, and which is left behind on the Moon at the end. Lockheed Martin builds the bubble-like vehicle that attaches to the lander, and which takes off from it at the end. Northrop Grumman builds the long cylinder that connects up with the lander and provides its propulsion through low lunar orbit as it readies to land, then disconnects before the actual descent. Draper is behind the senses across all of this, delivering the avionics for flight control and the landing itself.

As mentioned, Blue Origin is one of three companies selected by NASA to develop these lander systems, but its team brings a lot of combined expertise in spaceflight and spacecraft development to the table. The lander system itself will arrive separately from the astronauts on board Orion, making the trip either via a New Glenn rocket built by Blue Origin, which is still in development, or via United Launch Alliance’s Vulcan, another in-development rocket that is set to take off for the first time next year.


Using 25% lower bandwidth, Disney+ launches in UK, Ireland, 5 other European countries, France to come online April 7

Disney+, the streaming service from the Walt Disney Company, has been rapidly ramping up in the last several weeks. But while some of that expansion has seen some hiccups, other regions are basically on track. Today, as expected, Disney announced that it is officially launching across seven markets in Europe — but doing so using reduced bandwidth, given the strain on broadband networks as more people stay home because of the coronavirus pandemic. From today, it will be live in the UK, Ireland, Germany, Italy, Spain, Austria and Switzerland; Disney also reconfirmed that the delayed debut in France will come online on April 7.

Seven is the operative number here, it seems: it’s the largest multi-country launch so far for the service.

“Launching in seven markets simultaneously marks a new milestone for Disney+,“ said Kevin Mayer, Chairman of Walt Disney Direct-to-Consumer & International, in a statement. “As the streaming home for Disney, Marvel, Pixar, Star Wars, and National Geographic, Disney+ delivers high-quality, optimistic storytelling that fans expect from our brands, now available broadly, conveniently, and permanently on Disney+. We humbly hope that this service can bring some much-needed moments of respite for families during these difficult times.”

Pricing is £5.99/€6.99 per month, or £59.99/€69.99 for an annual subscription. Belgium, the Nordics and Portugal will follow in summer 2020.

The service being rolled out will feature 26 Disney+ Originals plus an “extensive collection” of titles (some 500 films, 26 exclusive original movies and series and thousands of TV episodes to start with) from Disney, Pixar, Marvel, Star Wars, National Geographic, and other content producers owned by the entertainment giant, in what has been one of the boldest moves yet from a content company to go head-to-head with OTT streaming services like Netflix, Amazon and Apple.

Caught in the crossfire of Covid-19

The expansion of Disney+ has been caught in the crossfire of world events.

The new service is launching at what has become an unprecedented time for streaming media. Because of the coronavirus pandemic, a lot of the world is being told to stay home, and many people are turning to their televisions and other screens for diversion and information.

That means huge demand for new services to entertain or distract people who are now sheltering in place. And that has put a huge strain on broadband networks. So, to be a responsible streamer (and to make sure quality is not too impacted), Disney confirmed (as it previously said it would) that it would be launching the service with “lower overall bandwidth utilization by at least 25%.”

There are now dozens of places to get an online video fix, but Disney has a lot of valuable cards in its hand, specifically in the form of a gigantic catalog of famous, premium content, and the facilities to produce significantly more at scale, dwarfing the efforts (valiant and great as they are) from the likes of Netflix, Amazon and Apple.

Titles in the mix debuting today include the live-action Star Wars series “The Mandalorian”; a live-action “Lady and the Tramp”; “High School Musical: The Musical: The Series”; the docuseries “The World According to Jeff Goldblum” from National Geographic; “Marvel’s Hero Project,” which celebrates extraordinary kids making a difference in their communities; “Encore!,” executive produced by the multi-talented Kristen Bell; “The Imagineering Story,” a six-part documentary from Emmy and Academy Award-nominated filmmaker Leslie Iwerks; and the animated short film collections “SparkShorts” and “Forky Asks A Question” from Pixar Animation Studios.

Some 600 episodes of “The Simpsons” are also included (with the latest season, 31, coming later this year).

With entire households now being told to stay together and stay inside, we’re seeing a huge amount of pressure being put on to broadband networks and a true test of the multiscreen approach that streaming services have been building over the years.

In this case, you can use all the usuals: mobile phones, streaming media players, smart TVs and gaming consoles to watch the Disney+ service (including Amazon devices, Apple devices, Google devices, LG Smart TVs with webOS, Microsoft’s Xbox One, Roku, Samsung Smart TVs and Sony / Sony Interactive Entertainment), with the ability to use four concurrent streams per subscription, or up to 10 devices with unlimited downloads. As you would expect, there is also the ability to set up parental controls and individual profiles.

Carriers with paid-TV services that are also on board so far include Deutsche Telekom, O2 in the UK, Telefonica in Spain, TIM in Italy and Canal+ in France when the country comes online. No BT in the UK, which is too bad for me (sniff). Sky and NOW TV are also on board.


Los Angeles-based Talespin nabs $15 million for its extended reality-based workforce training tools

It turns out that virtual and augmented reality companies aren’t dead — as long as they focus on the enterprise. That’s what the Los Angeles-based extended reality technology developer Talespin did — and it just raised $15 million to grow its business.

Traditional venture capitalists may have made it rain on expensive Hollywood studios that were promising virtual reality would be the future of entertainment and social networking (given coronavirus fears, it may yet be), but Talespin and others like it are focused on much more mundane goals. Specifically, making talent management, training and hiring easier for employers in certain industries.

For Talespin, the areas that were the most promising were ones that aren’t obvious to a casual observer. Insurance and virtual reality are hardly synonymous, but Talespin’s training tools have helped claims assessors do their jobs and helped train a new generation of insurance investigators in what to look for when they’re trying to determine how much their companies are going to pay out.

“Talespin’s immersive platform has transformed employee learning and proven to be an impactful addition to our training programs. We’re honored to continue to support the Talespin team through this next phase of growth and development,” said Scott Lindquist, Chief Financial Officer at Farmers Insurance, in a statement.

Farmers is an investor in Talespin, as is the corporate training and talent management software provider Cornerstone OnDemand, and the hardware manufacturer HTC. The round’s composition speaks to the emerging confidence of corporate investors and just how skeptical traditional venture firms have become of the prospects for virtual reality.

The prospects of augmented and virtual reality may be uncertain, but what’s definite is the need for new tools and technologies to transfer knowledge and train up employees as skilled, experienced workers age out of the workforce — and the development of new skills becomes critically important as technology changes the workplace.

Cornerstone, which led the Talespin Series B round, will also be partnering with the company to develop human resources training tools in virtual reality.

“We share Talespin’s vision that the workforce needs innovative solutions to stay competitive, maximize opportunity and increase employee satisfaction,” said Jason Gold, Vice President of Finance, Corporate Development and Investor Relations at Cornerstone, in a statement. “We’ve been incredibly impressed with Talespin’s technology, leadership team and vision to transform the workplace through XR. Talespin’s technology is a perfect fit in our suite of products, and we look forward to working together to deliver great solutions for our customers.”

Talespin previously raised $5 million in financing. The company initially grew its business by developing a number of one-off projects for eventual customers as it determined a product strategy. Part of the company’s success has relied on its ability to use game engines and animation instead of 360-degree video. That means assets can be reused multiple times and across different training modules.

“Creating better alignment between skills and opportunities is the key to solving the reskilling challenges organizations across the world are facing,” said Kyle Jackson, CEO and Co-Founder of Talespin, in a statement. “That’s why it’s critical companies find a way to provide accelerated, continuous learning and create better skills data. By doing so, we will open up career pathways for individuals that are better aligned to their natural abilities and learned skills, and enable companies to implement a skills-based approach to talent development, assessment, and placement. Our new funding and partnership with Cornerstone will allow us to expand our product offerings to achieve these goals, and to continue building innovative solutions that redefine what work looks like in the future.”
